Big Data Platform Engineer

Versorium Technical Recruitment
London (Greater)
11 Jan 2018
10 Feb 2018
Contract Type
Full Time
Permanent candidates only will be considered.
We are looking for a Big Data Application Analyst to manage our Big Data Hadoop platforms, including the collection, storage, processing and analysis of massive data sets.
The primary focus will be providing subject matter expertise for the planning, building and tuning of the Big Data Hadoop environment to support our Big Data workloads. This includes diving into challenging technical problems and researching and conducting testing to improve the existing Big Data environment. This new role will report to the European Head of Infrastructure and Information Security, with particular emphasis on designing and implementing Big Data Hadoop solutions.
  • Use your technical expertise to lead projects, with the ability to design and build Big Data solutions at scale across multiple data centres and cloud providers.
  • Providing subject matter expertise on Big Data and Hadoop based technologies
  • Finding pragmatic solutions that balance business, design standards, operational support and security needs
  • Build collaborative partnerships with business and technical leads to drive key Big Data initiatives.
  • Effectively communicate project status, performance reports and service status to clients (internal and external).
  • Remain informed on trends and issues in the industry, including current and emerging technologies.
  • Advise, counsel, and educate executive and engineering teams on the relative importance of these trends and technologies.
  • Provide training and knowledge transfer to the team to widen their technical skills and understanding of new technologies.
  • Support sales and consultative activities.

Required Experience
  • Degree educated or equivalent relevant work experience
  • 8 years in Information Technology, including 3+ years' experience with Big Data technologies
  • Experience managing Hadoop and Spark clusters; Cloudera Hadoop will be a plus
  • Experience integrating data from multiple data sources
  • Ability to troubleshoot and resolve data pipeline issues
  • Experience with Hadoop technologies such as MapReduce, Hive, HDFS, Oozie, Hue, Sqoop, Flume, Kafka, ZooKeeper and Sentry
  • Experience with programming languages such as Java, Pig, Python, Scala and MapR
  • Experience with databases and BI tools such as HBase, MySQL, Oracle, Impala and Tableau
  • Experience scripting for automation and configuration management (Chef, Puppet)
  • Exposure to AWS EMR, S3, RDS, Lambda, Kinesis, SNS, SQS and CloudWatch will be a plus
  • General understanding of industry compliance and regulations such as ISO 27001, PCI-DSS and GDPR
Desired Knowledge
  • Strong customer service skills and an eagerness to excel
  • Flexible, open-minded and self-motivated
  • Driven by quality, performance and automation
  • Excellent analytical and problem-solving skills.
  • Ability to work within a global team and coordinate activities with remote colleagues
  • Ability to work independently
  • Willingness to propose improvements
  • Attention to detail and the ability to learn quickly
  • Excellent written and verbal English
