Hadoop Platform Engineer - Global eTravel Leader - Bangkok

Big Wednesday Digital
11 Jan 2018
10 Feb 2018
Contract Type
Full Time

Our client is looking for top-quality, passionate engineers to build products across their next-generation data platform.

  • Their systems scale across multiple data centres, handling a few million writes per second and managing petabytes of data. They deal with problems ranging from real-time data ingestion and replication to enrichment, storage and analytics. They're not just using Big Data technologies; they're pushing them to the edge.
  • In the competitive world of online travel agencies, finding even the slightest advantage in the data can make or break a company. That is why data systems are their top priority.
  • While they are proud of what they've built so far, there is still a long way to go to fulfil their vision for data. They are looking for people like you, who are as excited about data technology as they are, to join the fight. You can be part of designing, building, deploying (and probably debugging) products across all aspects of the core data platform.

Why Hadoop Platform Team?

The Hadoop Platform team's primary function is to run multiple Hadoop clusters across multiple data centres, serving teams across the whole company. At any one time the clusters run large numbers of concurrent jobs, processing petabytes of data for purposes ranging from machine learning and BI to cubes on Hadoop and advertising.

To do this successfully, you will adopt an infrastructure-as-code approach, automating everything from deployment to monitoring and remediation. Monitoring is a particular focus: it gives you the ability to quickly identify and fix issues, whether they are infrastructure, data or job related.
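To give a flavour of the monitoring-and-remediation idea described above, here is a minimal, purely illustrative sketch in Python. The metric names and thresholds are assumptions for the example, not details from the role:

```python
# Illustrative sketch only: given simple health metrics for a cluster node,
# decide which automated remediation action (if any) to trigger.
# Field names and thresholds are hypothetical, not from this posting.

def triage_node(metrics):
    """Return a remediation action for a node based on its health metrics."""
    if not metrics.get("heartbeat_ok", False):
        return "restart"      # node stopped reporting: restart its agent
    if metrics.get("disk_used_pct", 0) >= 90:
        return "rebalance"    # disk nearly full: move blocks elsewhere
    if metrics.get("load_avg", 0.0) > 2 * metrics.get("cores", 1):
        return "drain"        # overloaded: stop scheduling new work here
    return "ok"
```

In practice a tool like this would sit behind the team's monitoring stack (the posting mentions Sensu), turning alerts into automated fixes rather than pages.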

Day to Day:

  • You will manage, administer, troubleshoot and grow multiple Hadoop clusters
  • You will build automated tools to solve operational issues
  • You will run effective POCs on new platform products that can grow the list of services on offer

About You:

  • You'll probably hold a bachelor's degree in Computer Science, Information Systems, Engineering or a related field
  • You'll be skilled with Bash and at least one other scripting language (Python, Perl, etc.)
  • You'll have a deep understanding of data architecture principles
  • You'll be experienced in solving problems and working with a team to resolve large scale production issues
  • You'll be experienced with configuration management systems (e.g. Puppet, Chef, SaltStack) and version control (e.g. Git, SVN)
  • You'll have experience with Linux HA, clustering and load balancing solutions
  • You'll have great multi-tasking skills
  • You'll have proficient English oral and written communication skills

Nice to Haves:

  • Experience installing, administering and managing Hadoop clusters (Hadoop2 with YARN)
  • Experience with Cloudera Hadoop (CDH)
  • Experience with Hadoop query engines (Hive, Spark SQL, Impala)
  • Experience with modern languages, preferably JVM-based (Java, Scala)
  • Solid experience in JVM tuning
  • Experience working with open source products
  • Working in an agile environment using test driven methodologies

What else?

You can expect to grow rapidly as an engineer.

  • You will work with top Hadoop engineers
  • You will use and expand your experience
  • You will have a big impact on the business

Some tech you will use:

Hadoop, Spark, Hive, Impala, Scala, Avro, Parquet, Sensu, Elasticsearch, Python, Django, Postgres

If that's the kind of team you want to join, let's talk - Send us your CV today!

Our client is an equal opportunity employer and values diversity. They do not discriminate on the basis of race, religion, colour, national origin, gender, sexual orientation, age, marital status or disability status.

Please note this role is open to local and international applicants. Full visa sponsorship and relocation assistance are available.

This job was originally posted at www.totaljobs.com/job/79174743