Big Data Engineer - Hadoop - Spark - ETL

Experis LTD
13 Feb 2018
16 Feb 2018
Contract Type: Full Time
Our Leeds-based betting and gaming client is currently seeking experienced Big Data Engineers and Senior Big Data Engineers to join their team on an initial 6-month contract.

Key skills sought:

  • Hadoop (or strong general big data experience)
  • ETL
  • Spark


Next Gen ETL Pipelines

As an early Hadoop adopter, we built most of our data ETL pipelines in HiveQL, orchestrated by a custom Ruby DSL, with some HBase manipulation. HiveQL pipelines are especially hard to test and to apply rigorous software craftsmanship to, so we are migrating to a combination of standard ETL tooling (Talend) and streaming ETL-like tools (StreamSets), rebuilding the business logic in these tools as we go. Spark and Scala are required.
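One motivation for the migration above is that business logic embedded in HiveQL is hard to unit-test, whereas the same rule expressed as a pure Scala function is trivially testable. A minimal, hypothetical sketch (the domain fields `stake` and `payout` are illustrative, not from the posting):

```scala
// Hypothetical example: a bet-settlement margin rule that might once have
// lived inside a HiveQL expression, extracted into a pure Scala function
// so it can be unit-tested and reused from a Spark job.
case class SettledBet(customerId: String, stake: BigDecimal, payout: BigDecimal)

object MarginLogic {
  // Gross-win margin per bet: (stake - payout) / stake.
  // Free bets (zero stake) contribute zero margin rather than dividing by zero.
  def margin(bet: SettledBet): BigDecimal =
    if (bet.stake == BigDecimal(0)) BigDecimal(0)
    else (bet.stake - bet.payout) / bet.stake
}
```

Because the function is side-effect free, it can be exercised with plain assertions in a test suite and then mapped over a Spark Dataset in the production pipeline.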

Real-time Data Streaming & Processing

We already have a number of real-time event stream data flows onsite, but most are focused on per-market flows (e.g. changing bet prices in trading) rather than on customer events. We are starting to offer real-time personalised promotions to our customers, and are introducing stream-based processing using Kafka, Spark Streaming and Kafka Streams.

It is likely that we will also need to create some form of Complex Event Processing (CEP) capability.
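The per-customer promotion logic described above can be sketched independently of any particular streaming framework. The following hypothetical example models a tumbling-window "qualifying customers" rule over an in-memory event list; in production the same rule would run as a Kafka Streams or Spark Streaming windowed aggregation (event fields and thresholds are illustrative assumptions, not from the posting):

```scala
// Hypothetical sketch: the kind of per-customer windowing rule a
// Kafka Streams / Spark Streaming job would evaluate, modeled over an
// in-memory event sequence so the logic itself stays unit-testable.
case class BetEvent(customerId: String, timestampMs: Long)

object PromotionLogic {
  // Customers placing at least `threshold` bets within a single tumbling
  // window of `windowMs` milliseconds qualify for a real-time promotion.
  def qualifyingCustomers(events: Seq[BetEvent],
                          windowMs: Long,
                          threshold: Int): Set[String] =
    events
      // Key each event by (customer, tumbling-window index).
      .groupBy(e => (e.customerId, e.timestampMs / windowMs))
      // Keep only customers whose window reaches the threshold.
      .collect { case ((customer, _), es) if es.size >= threshold => customer }
      .toSet
}
```

Keeping the rule as a pure function like this means the streaming job reduces to wiring: the framework supplies the windowing and state, while the promotion decision itself stays easy to test.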

Extensive modern development experience is essential, as it is a base skill in this area, along with being a quick learner: the Hadoop ecosystem is constantly evolving. The following would be an advantage:
  • ETL tooling (Talend/Pentaho/Informatica)
  • Strong knowledge and experience of Hadoop, MapReduce, Hive, HBase and general data processing
  • Previously part of an agile delivery team

Candidates should submit their CV in the first instance.
