StarHub

Senior Data Engineer

28 March 2024
£125,000 - £195,000 / year

Job Description

Key Responsibilities

As the Senior Data Engineer, you will serve as our technical expert and work closely with cross-functional teams to design, build, and optimise data solutions that drive business insights and decision-making. You will be responsible for defining data architecture, developing data pipelines, and ensuring the reliability, scalability, and performance of our data systems.

This role is an individual contributor position, with a focus on hands-on data engineering tasks.

  • Data Architecture Design: Design and implement scalable and efficient data architecture, including data models, data warehouses, and data lakes. Understand the business requirements and design appropriate data models, data pipelines, and data warehouses.
  • Data Integration: Integrate data from various sources such as databases and APIs into a unified format for analysis. Develop ETL (Extract, Transform, Load) processes and real-time data pipelines.
  • Data Modelling: Design and implement dimensional data models to support data warehouses and analytical and reporting needs.
  • Data Pipeline Development: Develop and maintain robust ETL processes and data pipelines for ingesting, processing, and transforming large volumes of data from various sources. Ensure data quality, reliability, and consistency throughout the pipelines.
  • Performance Optimisation: Optimise the performance of data processing, visualisation, and storage systems, including database tuning, query and ETL process optimisation, and infrastructure scaling, to ensure timely and efficient data access.
  • Data Governance and Security: Establish and enforce data governance policies and procedures to ensure data integrity, privacy, and compliance with regulations and internal policies. Manage access controls, encryption, and auditing of data.
  • Tool and Technology Selection: Evaluate and select appropriate tools and technologies for data storage, processing, and visualisation.
  • Collaboration and Communication: Collaborate with cross-functional teams such as data scientists, analysts, and business stakeholders to understand their requirements and deliver data solutions that meet their needs. Communicate technical concepts effectively to non-technical audiences.
  • Documentation and Knowledge Sharing: Document data pipelines, processes, and best practices to facilitate knowledge sharing and ensure the maintainability of data solutions. Promote a culture of documentation and knowledge sharing within the team.
  • Continuous Learning and Improvement: Stay updated with the latest trends, advancements, technologies, and best practices in data engineering through continuous learning and self-improvement.

Qualifications

  • Critical and logical thinker with keen business acumen to connect the dots between data and the business.

  • Independent, proactive, and self-motivated attitude.

  • Excellent problem-solving skills and attention to detail.

  • Excellent verbal and written communication skills, with the ability to collaborate with cross-functional stakeholders and communicate technical concepts effectively.

  • Appreciate the advantages and limitations of different technical solutions in meeting analytics needs.

  • Bachelor’s or Master’s degree in Computer Science, Engineering, Information Systems, or a related field.

  • At least 5 years of relevant experience in data engineering roles with demonstrated experience in designing and building data infrastructure, pipelines, ETL processes, and data modelling.

  • Proficiency in SQL, along with experience with relational databases (e.g. PostgreSQL, MySQL), NoSQL databases, and data warehousing technologies (e.g. Snowflake, Redshift), will be advantageous.

  • Experience with cloud platforms such as AWS (Amazon Web Services), Azure, or Google Cloud Platform, including services like S3, EC2, EMR, and BigQuery, will be advantageous.

  • Strong programming skills in Python, Java, or Scala, with experience in building data processing applications and workflows using frameworks like Apache Spark or Apache Beam, will be advantageous.

  • Experience with data visualisation tools such as Power BI, Tableau, or Looker, and proficiency in data modelling and visualisation techniques will be advantageous.

  • Knowledge of data governance principles, data security best practices, and regulatory compliance requirements will be advantageous.

  • Track record of managing and delivering multiple projects concurrently.

We regret that only shortlisted candidates will be notified.