Accellor
Data Fabric Engineer Lead
Job Description
Accellor is looking for a Lead Data Engineer with extensive experience in developing ETL processes using PySpark Notebooks and Microsoft Fabric, and in supporting existing legacy SQL Server environments. The ideal candidate will possess a strong background in Spark-based development, demonstrate high proficiency in SQL, and be comfortable working independently, collaborating within a team, or leading other developers when required.
Responsibilities
- Design, develop, and maintain ETL pipelines using PySpark Notebooks and Microsoft Fabric.
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver efficient data solutions.
- Migrate and integrate data from legacy SQL Server environments into modern data platforms.
- Optimize data pipelines and workflows for scalability, efficiency, and reliability.
- Provide technical leadership and mentorship to junior developers and other team members.
- Troubleshoot and resolve complex data engineering issues related to performance, data quality, and system scalability.
- Develop, maintain, and enforce data engineering best practices, coding standards, and documentation.
- Conduct code reviews and provide constructive feedback to improve team productivity and code quality.
- Support data-driven decision-making processes by ensuring data integrity, availability, and consistency across different platforms.
Requirements
- Bachelor’s or Master’s degree in Computer Science, Data Science, Engineering, or a related field.
- Experience with Microsoft Fabric or similar cloud-based data integration platforms is a must.
- 10+ years of experience in data engineering, with a strong focus on ETL development using PySpark or other Spark-based tools.
- Proficiency in SQL with extensive experience in complex queries, performance tuning, and data modeling.
- Strong knowledge of data warehousing concepts, ETL frameworks, and big data processing.
- Familiarity with other data processing technologies (e.g., Hadoop, Hive, Kafka) is an advantage.
- Experience working with both structured and unstructured data sources.
- Excellent problem-solving skills and the ability to troubleshoot complex data engineering issues.
- Proven ability to work independently, as part of a team, and in leadership roles.
- Strong communication skills with the ability to translate complex technical concepts into business terms.
Mandatory skills
- Experience with data lakes, data warehouses, and Delta Lake.
- Experience with Azure Data Services, including Azure Data Factory, Azure Synapse, or similar tools.
- Knowledge of scripting languages (e.g., Python, Scala) for data manipulation and automation.
- Familiarity with DevOps practices, CI/CD pipelines, and containerization (Docker, Kubernetes) is a plus.
Benefits
Exciting Projects: We focus on industries such as high-tech, communications, media, healthcare, retail, and telecom. Our customer list is full of fantastic global brands and leaders who love what we build for them.
Collaborative Environment: You can expand your skills by collaborating with a diverse team of highly talented people in an open, laid-back environment, or even abroad in one of our global centers.
Work-Life Balance: Accellor prioritizes work-life balance, which is why we offer flexible work schedules, opportunities to work from home, and paid time off and holidays.
Professional Development: Our dedicated Learning & Development team regularly organizes communication skills training, stress management programs, professional certifications, and technical and soft-skill training.
Excellent Benefits: We provide our employees with competitive salaries, family medical insurance, personal accident insurance, periodic health awareness programs, extended maternity leave, annual performance bonuses, and referral bonuses.