Bolt

Senior Data Engineer, Data Platform – Ingestion

10 October 2024
£84,000 – £156,000 / year

Job Description

Bolt engineering teams work on unique product challenges: complex algorithms for demand prediction, optimal real-time pricing, routing, fraud detection, distributed systems and much more. Data volumes are growing rapidly, and we are looking for an experienced engineer who is well-versed in data technologies.

Your daily adventures will include

  • Designing, building and optimizing elements of Bolt’s Data Platform. The Ingestion team owns the fundamental layers of the platform, such as ingesting internal and external data into our Data Lake and managing data storage.
  • Investigating and prototyping new services to improve different aspects of our Data Platform: data quality, monitoring, alerting, performance and cost efficiency.
  • Coding mostly in Python, Scala and/or TypeScript (previous experience in these specific languages is not required), and occasionally in other languages.
  • Proactively solving technical challenges and fixing bugs.
  • Contributing ideas and solutions to our product development roadmap.
  • At Bolt, we use a modern data stack built on a Data Mesh architecture, including Kafka, Presto, Spark, Databricks, Airflow, dbt, Looker, Fivetran and other relevant solutions, to serve thousands of internal customers and millions of external customers.
  • We are looking for language-agnostic generalists who can pick up new tools to solve the problems they face. Check out our blog to learn more about the exciting projects we are working on: https://medium.com/bolt-labs.

We are looking for

  • Experience in at least one of the modern OO languages (Python, Scala, Java, JavaScript, C++, etc.)
  • 7+ years of experience in software development
  • Excellent English and communication skills
  • Experience with micro-service and distributed systems
  • Solid understanding of algorithms and data structures
  • Experience with Terraform, Kubernetes and Docker
  • Familiarity with streaming data technologies for low-latency data processing (Apache Spark/Flink, Apache Kafka, RabbitMQ, the Hadoop ecosystem)
  • A university degree in a technical subject (Computer science, Mathematics or similar)

You will get extra credits for

  • Experience in building and designing real-time and asynchronous systems
  • Experience in building systems based on cloud service providers (AWS, Azure, Google Cloud)
  • Strong knowledge of SQL and experience with at least one popular online analytical processing (OLAP) technology (AWS Redshift, ClickHouse, Presto, Snowflake, Google BigQuery, Databricks, etc.)

#LI-Hybrid