Sopra Steria
Data Engineer PySpark
Company Description
About Sopra Steria
Sopra Steria, a major Tech player in Europe with 56,000 employees in nearly 30 countries, is recognized for its consulting, digital services and software development. It helps its clients drive their digital transformation and obtain tangible and sustainable benefits. The Group provides end-to-end solutions to make large companies and organizations more competitive by combining in-depth knowledge of a wide range of business sectors and innovative technologies with a fully collaborative approach. Sopra Steria places people at the heart of everything it does and is committed to putting digital to work for its clients in order to build a positive future for all. In 2023, the Group generated revenues of €5.8 billion.
The world is how we shape it.
Job Description
We are seeking a highly skilled and motivated Data Engineer to join our dynamic team. As a Data Engineer, you will collaborate closely with our Data Scientists to develop and deploy machine learning models. Proficiency in the skills listed below will be crucial for building and maintaining pipelines for training and inference datasets.
Responsibilities:
• Work in tandem with Data Scientists to design, develop, and implement machine learning pipelines.
• Utilize PySpark for data processing, transformation, and preparation for model training (a brief sketch follows this list).
• Leverage AWS EMR and S3 for scalable and efficient data storage and processing.
• Implement and manage ETL workflows using StreamSets for data ingestion and transformation.
• Design and construct pipelines to deliver high-quality training and inference datasets.
• Collaborate with cross-functional teams to ensure smooth deployment and real-time/near real-time inferencing capabilities.
• Optimize and fine-tune pipelines for performance, scalability, and reliability.
• Ensure IAM policies and permissions are appropriately configured for secure data access and management.
• Apply Spark architecture principles and optimize Spark jobs for scalable data processing.
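
For illustration, here is a minimal sketch of the kind of PySpark pipeline this role involves: reading raw data from S3, preparing features, and writing a training dataset back out. The bucket names, paths, and column names are hypothetical, and this is a sketch rather than a prescribed implementation.

# Minimal sketch of a PySpark training-dataset pipeline.
# All bucket names, paths, and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("training-dataset-pipeline").getOrCreate()

# Read raw events from S3 (assumed Parquet layout).
raw = spark.read.parquet("s3://example-bucket/raw/events/")

# Basic cleaning and feature preparation for model training.
features = (
    raw
    .filter(F.col("event_ts").isNotNull())
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("user_id", "event_date")
    .agg(
        F.count("*").alias("event_count"),
        F.avg("session_duration").alias("avg_session_duration"),
    )
)

# Write the training dataset back to S3, partitioned by date.
features.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/training/"
)

In practice, a job like this would typically run on an AWS EMR cluster (for example via spark-submit) and be orchestrated with a scheduler such as Airflow.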
Requirements:
Mandatory
• Proficiency in advanced SQL (window functions), Spark architecture, PySpark or Scala with Spark, and Hadoop (a window-function sketch follows this list).
• Proven expertise in designing and deploying data pipelines.
• Strong problem-solving skills and ability to work effectively in a collaborative team environment.
• Excellent communication skills and the ability to translate technical concepts for non-technical stakeholders.
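
As a rough illustration of what advanced SQL (window functions) means in practice, the sketch below uses PySpark's Window API to keep each user's most recent event, a common deduplication pattern. The data and column names are hypothetical.

# Illustrative window-function usage via PySpark's Window API.
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("window-function-demo").getOrCreate()

# Hypothetical events data.
events = spark.createDataFrame(
    [("u1", "2024-01-01", 10), ("u1", "2024-01-03", 30), ("u2", "2024-01-02", 20)],
    ["user_id", "event_date", "amount"],
)

# Rank each user's events by recency and keep only the latest one.
w = Window.partitionBy("user_id").orderBy(F.col("event_date").desc())
latest = (
    events
    .withColumn("rn", F.row_number().over(w))
    .filter(F.col("rn") == 1)
    .drop("rn")
)
latest.show()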
Desirable
• Hands-on experience with Airflow, S3, and StreamSets or similar ETL tools (candidates can be trained locally).
• Understanding of real-time or near real-time inferencing architectures.
• Basic knowledge of Kafka, AWS IAM, AWS EMR, and Snowflake.
Total Experience Expected: 6-8 years
Qualifications
BE (Bachelor of Engineering)
Additional Information
At our organization, we are committed to fighting against all forms of discrimination. We foster a work environment that is inclusive and respectful of all differences.
All of our positions are open to people with disabilities.