£45,000 - £84,000 per year

Job Description

FAQ:

1. How many projects have you worked on using Spark, Hadoop, or other big data technologies? What was the split between team management and hands-on implementation?

2. How much experience do you have specifically in building ETL pipelines, data modeling, and integrating with external systems to pull data?

3. How much relevant experience do you have with AWS services such as S3, Glue, EMR, Redshift, and Lambda?

4. Have you worked on SQL performance tuning? If not, the candidate should at least be able to demonstrate strong general SQL knowledge.


Responsibilities:

As an AWS Data Engineer, you will be responsible for designing, building, and maintaining data pipelines and data storage solutions on Amazon Web Services (AWS). Your expertise will help us leverage data for critical business insights and analytics.

Primary Skills:

1. Experience implementing at least 2 end-to-end data engineering projects

2. 4 to 7 years of experience in AWS data engineering

  • AWS Data Services, ETL Processes, Data Quality and Governance, Collaboration, Big Data Technologies
  • Strong knowledge of AWS services such as S3, Glue, EMR, Redshift, and Lambda.
  • Proficiency in SQL and database design.
  • Experience with big data technologies like Hadoop and Spark.
  • Minimum 3 years of experience in Databricks.

Secondary Skills:

  • Data Modeling, Data Security and Compliance, Automation
  • AWS Certified Data Analytics - Specialty or an equivalent AWS data certification
  • Excellent problem-solving and troubleshooting skills.
  • Strong communication and teamwork skills.

Desirable Skills:

  • Good ability to anticipate issues and formulate remedial actions.
  • Sound interpersonal and teamwork skills.
  • Sound knowledge of unit testing methodologies and frameworks.