Big Data Engineer

Recruiter
Searchability
Location
Chester
Salary
50,000
Posted
17 May 2017
Closes
16 Jun 2017
Contract Type
Permanent
Hours
Full Time

Big Data Engineer - Big Data / Hadoop / ETL / S3 / Java / Batch Processing / Data Streaming

EXCITING BIG DATA ENGINEER POSITION FOR A HIGHLY INNOVATIVE BIG DATA SOLUTIONS COMPANY IN CHESTER!!

  • Minimum of 3 years' commercial experience
  • Excellent progression and career development available
  • Big Data / Hadoop / ETL / S3 / Java / Batch Processing / Data Streaming
  • Competitive salary
  • To apply, please call or email James.Lovick@searchability.co.uk

Based in Chester, we are a hugely ambitious Big Data Solutions company recruiting for an experienced Big Data Engineer with strong Big Data / Hadoop / ETL / S3 / Java / Batch Processing / Data Streaming experience. We have been established for over 5 years and are currently enjoying a period of sustained success, which has led to this requirement.

Sourced by: @ITJobs_NW - your 24/7 Twitter feed of the latest IT vacancies across the North West

WHO ARE WE?

We are a highly innovative Big Data Solutions company with ambitions to grow globally over the next 2 years. We have an outstanding technology team and are using cutting-edge technology that will allow us to achieve our ambition of becoming the market leader in our field.

WHAT WILL YOU BE DOING?

The Big Data Engineer is required to build and continually develop cutting-edge data solutions within our state-of-the-art data platform. The Big Data Engineer will be responsible for the design, development and maintenance of complex data processing products.

The candidate will develop our data platform for scalability, performance, information security and reliability. You will be working with senior management and stakeholders to design data capabilities that could run into the petabytes.

The Big Data Engineer will work with multiple departments to prioritise backlogs and will work with the Business Analysts to analyse requirements and produce technical specifications. Based on these specifications, you will design and implement new database structures and performance-tune existing ones.

A large part of the role will be using Java with Spark to build ETL jobs for batch processing and live streaming.
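
To give a flavour of this kind of work, below is a minimal, illustrative sketch of a Spark batch ETL job written in Java. The S3 bucket, paths and column names are placeholders invented for illustration and are not taken from this advert; a live-streaming equivalent would typically use Spark Structured Streaming (readStream/writeStream) instead.

    // Illustrative sketch only: bucket, paths and column names are placeholders.
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;
    import static org.apache.spark.sql.functions.col;

    public class BatchEtlJob {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                    .appName("batch-etl-example")
                    .getOrCreate();

            // Extract: read raw CSV data from S3 (placeholder location).
            Dataset<Row> raw = spark.read()
                    .option("header", "true")
                    .csv("s3a://example-bucket/raw/events/");

            // Transform: drop invalid rows and keep the columns we need.
            Dataset<Row> cleaned = raw
                    .filter(col("event_id").isNotNull())
                    .select("event_id", "event_type", "event_timestamp");

            // Load: write the result back to S3 as Parquet, partitioned by event type.
            cleaned.write()
                    .mode("overwrite")
                    .partitionBy("event_type")
                    .parquet("s3a://example-bucket/curated/events/");

            spark.stop();
        }
    }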

WE NEED YOU TO HAVE

  • A degree in computer science, mathematics or a related field
  • Experience of big data environments on large-scale projects
  • Hadoop / Impala / NoSQL
  • Experience using Java with Spark to build ETL jobs for batch processing and live streaming
  • Experience with frameworks and platforms such as Talend
  • Programming experience with any of the following: Python, C, C++, Perl
  • Hadoop clusters

IT'S NICE TO HAVE

  • MapReduce
  • Pig
  • Data Visualisation

TO BE CONSIDERED

Please apply by clicking online or emailing me directly to . For further information please call me on / . I can make myself available outside of normal working hours to suit, from 7am until 10pm. If I am unavailable, please leave a message and either I or one of my colleagues will respond. By applying for this role you give express consent for us to process and submit (subject to required skills) your application to our client in conjunction with this vacancy only. Also feel free to follow me on Twitter @BigDataJames or connect with me on LinkedIn; just search James Lovick in Google! I look forward to hearing from you. IND123

KEY SKILLS:

Big Data / Hadoop / ETL / S3 / Java / Batch Processing / Data Streaming