DATAECONOMY
Python PySpark Developer
Job Description
As a Python PySpark Developer, you will design, develop, and optimize big data applications using Python and PySpark. You will work closely with data engineers, data scientists, and other stakeholders to implement data pipelines and ensure high performance across our data processing systems.
Responsibilities:
- Develop and maintain scalable data pipelines using Python and PySpark.
- Collaborate with data engineers and data scientists to understand and fulfill data processing needs.
- Optimize and troubleshoot existing PySpark applications for performance improvements.
- Write clean, efficient, and well-documented code following best practices.
- Participate in design and code reviews.
- Develop and implement ETL processes to extract, transform, and load data.
- Ensure data integrity and quality throughout the data lifecycle.
- Stay current with the latest industry trends and technologies in big data and cloud computing.
Requirements
Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Proven experience as a Python Developer with expertise in PySpark.
- Strong understanding of big data technologies and frameworks.
- Experience with distributed computing and parallel processing.
- Proficiency in SQL and experience with database systems.
- Solid understanding of data engineering concepts and best practices.
- Ability to work in a fast-paced environment and handle multiple projects simultaneously.
- Excellent problem-solving and debugging skills.
- Strong communication and collaboration abilities.
Skills:
- Python
- PySpark
- Big Data
- Distributed Computing
- ETL Processes
- SQL
- Data Engineering
- Cloud Computing (AWS, GCP, or Azure)
- Data Warehousing
- Apache Spark