Big Data, AWS Cloud, Hadoop, Spark, Scala
Posted: 23-Jan-2018
What is the experience required?
• Successful background as an architect on EDW/Data Lake projects
• 6+ years’ experience with the Hadoop ecosystem (HDFS, Spark, Sqoop, Hive, Pig, Scala); a short illustrative Spark/Scala sketch follows this list
• Experience with Cloudera Impala; certification is a big plus
• 2 to 4+ years’ experience with Amazon Web Services (S3, EC2, Athena, RDS, EMR, Redshift, etc.)
• Deep understanding of relational databases and data integration technologies.
• Prior experience with traditional ETL tools (Informatica, Talend, etc.)
• Experience with data virtualization technologies (Denodo, Presto, etc.) is a plus.
• Experience integrating disparate data sources such as flat files, databases, XML files, unstructured data, and web services
• Extensive experience with data modeling for analytical applications
• Strong Unix shell/Perl scripting skills
• Excellent communication and collaboration skills are required.
• Ability to work independently and as a key contributor in a distributed team environment.
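For illustration only, here is a minimal sketch of the kind of Spark/Scala pipeline this role involves: reading a flat file landed in S3, applying a simple transformation, and writing partitioned Parquet for downstream query engines such as Athena, Hive, or Impala. All bucket names, paths, and column names are hypothetical placeholders, not details from this posting.

// Minimal sketch of the Spark/Scala work described above.
// Bucket names, paths, and column names are hypothetical placeholders.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object OrdersIngest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("OrdersIngest")
      .getOrCreate()

    // Read a flat file landed in S3 (schema inferred for brevity).
    val orders = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("s3://example-bucket/landing/orders/")

    // Simple cleanup/aggregation step typical of a data-lake pipeline.
    val daily = orders
      .filter(col("order_total") > 0)
      .groupBy(col("order_date"))
      .agg(sum("order_total").as("daily_total"))

    // Write partitioned Parquet for downstream engines (Athena, Hive, Impala).
    daily.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3://example-bucket/curated/orders_daily/")

    spark.stop()
  }
}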