Big Data & Hadoop Developer

United Kingdom
From £35,000 to £40,000 per annum
30 Sep 2017
03 Nov 2017
Contract Type
Full Time


Job Title: Big Data & Hadoop Developer

Location: TW15 3RP, Ashford, Middlesex

Job Ref: BDHD001/2017

Salary: £35,000 to £40,000 per year

Closing date: 19 October 2017

Duties and responsibilities

- Building platforms and deploying cloud-based tools and solutions with technologies like AWS EMR, RDS and Kinesis
- Leveraging DevOps techniques and practices like Continuous Integration, Continuous Deployment, Test Automation, Build Automation and Test-Driven Development to enable the rapid delivery of end-user capabilities
- Assisting application development teams during application design and development for highly complex and critical data projects
- Developing distributed computing Big Data applications using open-source frameworks like Apache Spark, Apex, Flink, Storm, NiFi and Kafka
- Responsibility for Hadoop development and implementation, including loading from disparate data sets
- Translating complex functional and technical requirements into detailed designs
- Performing analysis of vast data stores and uncovering insights
- Managing and deploying HBase
- Maintaining security and data privacy
- Creating scalable and high-performance web services for data tracking
- Building libraries, user-defined functions, jobs and frameworks around Hadoop
- Researching, evaluating and utilising new technologies/tools/frameworks around the Hadoop ecosystem, such as Apache Spark, HDFS, Hive, HBase, Oozie etc.
- Deploying and maintaining multi-node Hadoop clusters
- Translating business requirements into logical and physical file structure designs
- Proposing best practices/standards
Skills, qualifications and experience required

- Linux/Unix, including basic commands, shell scripting and solution engineering
- Extensive experience in Hadoop development
- Hands-on experience leading delivery through Agile methodologies
- Expertise in HTML5, CSS3, AngularJS, Java REST APIs, JBoss, Apache, JUnit, Postman and FIT
- Expertise in Hive SQL, Spark and Sqoop
- Experience with NoSQL databases: HBase, Apache Cassandra, Vertica or MongoDB
- Experience working with the Big Data ecosystem, including tools such as Hadoop, MapReduce, YARN, Hive, Pig, Impala, Spark, Kafka and Storm, to name a few
- Good hands-on experience of the Hadoop ecosystem: Spark, Hive, Sqoop, Flume, MapReduce etc.
- Experience in Tableau and BIRT
- Extensive knowledge of programming languages such as Java and Scala
- Experience using third-party libraries and APIs
- Experience in OLAP, data warehousing and BI technologies, and good knowledge of data warehouse concepts
- Database management or SQL query development
- Good knowledge of any of the following technologies: SVN/Maven/Gradle/SBT/Jenkins
- Ability to build and test MapReduce code in a rapid, iterative manner
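For candidates unfamiliar with the MapReduce pattern mentioned above, a minimal pure-Python sketch of the map, shuffle and reduce phases of a word count (the canonical Hadoop example) is shown below. This is an illustration of the programming model only; on a real cluster these phases would be written against the Hadoop MapReduce or Spark APIs, and the function names here are illustrative, not part of any Hadoop library.

```python
from collections import defaultdict
from itertools import chain

def map_phase(line):
    # Map: emit a (word, 1) pair for every word in the input line.
    return [(word.lower(), 1) for word in line.split()]

def shuffle_phase(pairs):
    # Shuffle: group intermediate pairs by key, as the framework
    # does between the map and reduce stages.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(key, values):
    # Reduce: sum the counts emitted for each word.
    return key, sum(values)

def word_count(lines):
    # Run all three phases over an iterable of input lines.
    pairs = chain.from_iterable(map_phase(line) for line in lines)
    grouped = shuffle_phase(pairs)
    return dict(reduce_phase(k, v) for k, v in grouped.items())
```

For example, `word_count(["big data", "big hadoop"])` returns `{"big": 2, "data": 1, "hadoop": 1}`.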
