Red Bull

Data Engineer

18 April 2024
£89000 - £150000 / year


Company Description

The Red Bull Athlete Performance Center serves as a dynamic accelerator for Red Bull athletes, propelling them towards unparalleled excellence in their respective sports. Leveraging advanced technology, cutting-edge analytics, and insights gleaned from a diverse array of sporting disciplines, we unlock the individual potential of each athlete.

As the APC undergoes rapid expansion, both in physical infrastructure and in strategic sport disciplines such as soccer, ice hockey, and Formula 1, our commitment to enhancing data infrastructure, engineering, and analytics capabilities remains unwavering. This investment underpins our mission to provide exceptional support and resources to athletes and teams striving for greatness.

In this fast-paced environment, the Data Engineer plays a pivotal role as the APC’s Data Engineering lead. They are responsible for supporting heavy data users across various departments within the APC, ensuring a reliable data pipeline that fuels decision-making and innovation. Additionally, the Data Engineer collaborates closely with sports data analysts and data scientists from the global data team, spanning the APC, Red Bull Soccer, and Red Bull team sports and clubs. Together, we harness the power of data to drive performance, unlock insights, and propel our athletes towards victory on the world stage.

Job Description

Technical responsibility over data pipelines and data platforms

Design, create, maintain, and scale batch and stream data pipelines to ingest data from various data sources into cloud-based data stores

Transform and model data in close alignment with data analysts and make data available to data consumers such as Data Scientists

Utilize the latest technologies to work with structured and unstructured data in a highly integrated landscape

Analyze and discuss business processes, data flows, and functional/technical requirements

Evaluate, propose, and select proper application solutions and vendors in alignment with HQ IT

Steer external vendors and partners

Monitor and manage corresponding IT budget

 

Manage and contribute to analytics projects

Lead and support analytics projects according to the IT Project Management methodology

Collaborate with Data Science and with the business to refine data requirements

Translate the data requirements into ETL/ELT pipelines and data models

Implement, or monitor the implementation of, data pipelines while ensuring high data quality

Train Data Scientists to integrate and use the data they need for their own use cases

Manage communication between the business, IT partners, and HQ IT

 

Service Ownership for assigned IT data & analytics applications

Act as Service Owner for selected data & analytics applications

Define service strategy and roadmap in alignment with involved stakeholders

Supervise service operations and support

Manage vendors, service level agreements and contracts

Ensure proper service definition and documentation

Cooperate with central service or platform owners

 

Qualifications

Higher education in Computer Science, Information Systems, Mathematics, Physics, or a related quantitative field, or equivalent work experience

3 or more years of work experience as a data engineer or software engineer with strong hands-on data skills

Practical experience working with ETL/ELT pipelines, cloud data warehouses (e.g., Snowflake, BigQuery, Redshift) and other cloud-native technologies, ideally in AWS or Azure

Minimum of 3 years’ experience in project management

General skills/knowledge:

Interpersonal skills

Diplomacy

Conceptual thinking

Ability to organize, prioritize, and coordinate multiple tasks

Flexibility

Analytical thinking

Problem solving

Hands-on mentality

Performance and result orientation

Positive attitude and a strong commitment to delivering high-quality work

Team player

Languages: fluent in German and English

IT related skills/knowledge:

Very good skills in SQL and in Python or another general-purpose language such as R, C++, or Java

Very good skills working with APIs, databases, data modelling and data transformation (ideally with dbt)

Solid understanding of the professional software development process following the DevOps methodology, including Git, branching workflows, CI/CD, containers, and automated testing

Architectural knowledge related to cloud platforms, databases, and ETL/ELT

IT Project Management

Bonus:

Experience with dbt and/or workflow management tools like Airflow or Prefect

Experience with Infrastructure-as-Code tools like Terraform

Experience with streaming technologies, such as Spark Structured Streaming, Kafka Streams or Apache Flink

Familiarity with best practices for data architecture, data modelling, and/or data engineering

Additional Information

For legal reasons we are obliged to disclose the minimum salary according to the collective agreement for this position. However, our attractive compensation package is based on market-oriented salaries and is therefore significantly above the stated minimum salary.

As an employer, we value diversity and support people in developing their potential and strengths, realizing their ideas and seizing opportunities. We believe passionately that employing a diverse workforce is central to our success. We welcome applications from all members of society irrespective of age, skin colour, religion, gender, sexual orientation or origin.