Big Data Engineer, Hadoop, Spark
We have an urgent requirement for a Big Data Engineer to provide expert guidance and deliver the following as part of a team:
- Integrate the necessary data from several sources in the Big Data Programme for analysis and for commercial actions;
- Build applications that make use of large volumes of data and generate outputs that enable commercial actions delivering incremental value;
- Deliver and implement core capabilities (frameworks, platform, development infrastructure, documentation, guidelines and support) to speed up Local Markets delivery in the Big Data Programme, assuring quality, performance and alignment of component releases with the Group technology blueprint for the platform;
- Support local markets and Group functions in obtaining business value from the operational data.
Key accountabilities and decision ownership
- Design and produce high-performing, stable end-to-end applications that perform complex processing of massive volumes of batch and real-time data in a multi-tenancy Hadoop environment and feed insights back to business systems according to their requirements.
- Design and implement core platform capabilities, tools, processes and ways of working under agile development to support the integration of local markets' data sourcing and use-case implementation, promoting reusability, easing delivery and ensuring standardisation across Local Markets deliverables on the platform.
- Support the distributed data engineering teams, including technical support and training in the Big Data Programme frameworks and ways of working, source code review and integration, release support and source code quality control.
- Work with the Group architecture team to define the strategy for evolving the Big Data capability, including solution architecture decisions aligned with the platform architecture.
- Define the technologies to be used on the Big Data Platform and investigate modern technologies to identify where they can bring benefits.
Core competencies, knowledge and experience
- Expert-level experience designing, building and managing applications that process large volumes of data in a Hadoop ecosystem;
- Extensive experience with performance tuning applications on Hadoop and configuring Hadoop systems to maximise performance;
- Experience building systems to perform real-time data processing using Spark Streaming and Kafka, or similar technologies;
- Experience with the common SDLC, including SCM, build tools, unit testing, TDD/BDD, continuous delivery and agile practices;
- Experience working in large-scale multi-tenancy Hadoop environments.
Must have technical / professional qualifications:
- Expert-level experience with the Hadoop ecosystem (Spark, Hive/Impala, HBase, YARN); experience with the Cloudera distribution is desirable;
- Strong software development experience in Scala and Python programming languages; other functional languages desirable;
- Experience with Unix-based systems, including Bash scripting;
- Experience with columnar data formats
- Experience with other distributed technologies such as Cassandra, Solr/Elasticsearch, Flink or Flume is also desirable.
This job was originally posted at www.jobsite.co.uk/job/959521264