GCP DevOps Engineer

Closing date
23 Feb 2021

Technology & New Media

Job Details

About The Arabesque Group

Welcome to the Arabesque Group, a global group of fintech companies providing a range of sustainable investment and data services from its offices around the world. Established in 2013, the Arabesque Group has a founding mission to help mainstream sustainability across capital markets. We believe economic value creation can and should be combined with environmental stewardship, social inclusion and sound governance. Through our group of companies, we combine data and AI to deliver sustainable, transparent financial solutions for our changing world.

About Arabesque S-Ray GmbH

Arabesque S-Ray GmbH is a global financial services company that focuses on advisory and data solutions by combining big data and ESG metrics to assess the performance and sustainability of publicly listed companies worldwide.

Headquartered in Frankfurt and with offices in London, Boston and Singapore, Arabesque S-Ray empowers investors, corporates and other stakeholders across the world to make more sustainable decisions. The firm's evolution is a story of partnership between leaders in finance, mathematics, data science and sustainability working together to accelerate the transition to a more sustainable future.

Role

We're looking for a DevOps Engineer with 5+ years of experience to help our Engineering team become more efficient at building and deploying highly scalable, secure and robust data pipelines and APIs. Your main responsibility will be to improve our Kubernetes platform, ensuring best practices are met through CI/CD pipelines. You'll be using industry standard tools with the goal of improving operational efficiency through scaling and automation whilst ensuring security always remains a top priority.

As our ideal candidate, you love working on stable production environments with effective monitoring, scaling and automation in place. Rather than working in isolation, you want to build new DevOps tools and techniques that our engineers will enjoy using to become more efficient. Our junior developers know they can count on you as a mentor, learning from your experience of building scalable systems that adhere to best practices.

What You'll Be Working On
  • Owning our CI/CD pipelines and identifying tools, patterns and best practices for automation and observability
  • Automating our platform setup and developing infrastructure as code to ensure consistency across our different environments
  • Designing and implementing a comprehensive monitoring, logging and tracing environment, ensuring observability and alerting is in place
  • Reviewing and setting up new security best practices to maintain platform and data integrity

Must Have
  • GCP hands-on experience (GKE, Cloud Build, Cloud SQL, Pub/Sub, Dataflow, BigTable/BigQuery, IAM, KMS, Container Registry)
  • A good understanding of Docker, containerisation and deployment of reliable, highly available and scalable APIs and ETL processes
  • Kubernetes (ideally GKE) and tooling experience (Helm, Istio, Envoy, Kong, Linkerd, Argo, Knative)
  • Knowledge of Kubernetes multi-cluster communication, service meshes, control plane and ingress controllers
  • Knowledge of automation and scaling Kubernetes operations (HPA)
  • CI/CD experience (GitHub Actions, GitLab CI/CD, Jenkins, CircleCI, GCP Cloud Build)
  • Understanding of infrastructure-as-code (Terraform, Ansible)
  • Familiar with CNCF projects
  • Monitoring tools and best practices (Prometheus, Grafana, Fluentd, ELK, StatsD, Stackdriver, Splunk)
  • Working knowledge of networking (TCP/IP, UDP, DNS, VPN, VPC)
  • Good knowledge of Scrum and RAD
  • Proficient in English; any other language a plus

Nice To Have
  • Exposure to SRE concepts
  • Knowledge of scaling microservices and inter-service communication on Kubernetes (gRPC)
  • Working with ETL processes and data pipelines (Spark)
  • Incident Management (PagerDuty, AlertManager)
  • Hands-on experience with Linux (Ubuntu, Debian)
  • Tracing tools and best practices (OpenTracing, Jaeger)
  • Good knowledge of Python, including best practices for running parallel Python code in production; knowledge of Go would be beneficial
  • Knowledge of message queues and best practices (Pub/Sub, Kafka)
  • Experience working with databases (Cloud SQL, Postgres, MongoDB, Cassandra, InfluxDB)
  • SCM tooling (GitHub, GitLab)
  • Familiarity with OSI model
  • Chaos engineering principles

Responsibilities
  • Taking ownership of our Kubernetes platform and developing a roadmap of improvements
  • Maintaining and implementing CI/CD best practices
  • Ensuring our systems are scalable, automated and secure
  • Providing guidance to and sharing best practices with junior team members
  • Working as part of a team to deliver product features and functionality
  • Helping design & architect software for a range of services and systems
  • Working with our development teams to incorporate infrastructure best practices and standardise our build and deployment process
  • Ensuring security best practices are met by our development teams
  • Creating and maintaining internal documentation

Who You Are
  • High integrity and openness combined with a commitment to excellence
  • Hands-on mentality and entrepreneurial mindset; known to roll up your sleeves to deliver alongside your team

What We Offer
  • Competitive salary
  • 30 days' annual leave