WNS Global Services
Deputy Manager – RNA
Job Description
Company Description
WNS (Holdings) Limited (NYSE: WNS) is a leading Business Process Management (BPM) company. We combine our deep industry knowledge with technology and analytics expertise to co-create innovative, digital-led transformational solutions with clients across 10 industries. We enable businesses in Travel, Insurance, Banking and Financial Services, Manufacturing, Retail and Consumer Packaged Goods, Shipping and Logistics, Healthcare, and Utilities to re-imagine their digital future and transform their outcomes with operational excellence. We deliver an entire spectrum of BPM services in finance and accounting, procurement, customer interaction services, and human resources, leveraging collaborative models that are tailored to address the unique business challenges of each client. We co-create and execute the future vision of 400+ clients with the help of our 44,000+ employees.
Job Description
We are seeking a skilled Data Engineer with a minimum of 5 years of experience in data engineering and building data pipelines. The ideal candidate must possess exceptional SQL and Python skills and a strong understanding of data modelling techniques, particularly Star schemas. Experience with technologies such as Azure Data Factory, Informatica, and Talend is highly desirable. While expertise in Snowflake is preferred, proficiency with other database technologies is also valued. The successful candidate should be adept at sourcing data through APIs and modelling data using SQL/Python, and should be well versed in data governance frameworks, including data validation and testing.

Key responsibilities/skillsets will include, but are not limited to:
- Experience setting up an AWS data platform: AWS CloudFormation, development endpoints, AWS Glue, EMR, Jupyter/SageMaker notebooks, Redshift, S3, and EC2 instances
- Track record of successfully building scalable Data Lake solutions that connect to distributed data storage using multiple data connectors
- Background in data engineering; Data Warehouse development experience would be ideal
- Strong skills in SQL, Python, PySpark, and AWS
- Well versed in creating HLD and LLD documents
- Well versed in ER/dimensional modelling
- Good knowledge of ETL, in order to manage production ETL processes using our ETL tool on Snowflake
- Experience designing, developing, optimizing, and troubleshooting complex data pipelines using Spark clusters
- Ability to lead proofs of concept, then effectively transition and scale those concepts into production through engineering, deployment, and commercialization
- Serve as an expert: envision and integrate emerging data technologies, and anticipate new trends to solve complex business and technical problems
- Debug the existing ETL for any issues within the SLA
- Support and maintain the ETL/ELT stack (AWS Glue, Data Lake, S3)
- Integrate end-to-end data pipelines that take data from source systems to stage, to data repositories, and to Symantec DB, ensuring the quality and consistency of data is always maintained
- Develop new tables, implement business rules, and conduct end-to-end testing
- Unit test developed code
- Communicate design approaches
- Perform data profiling and communicate results
- Assist in integration testing
- Strong communication and collaboration skills

Core Skills:
- AWS tech stack (AWS Glue, S3, Redshift, Kinesis, DMS, app services, etc.)
- SQL
- Python / PySpark
- ETL / ELT
Qualifications
Graduate or post-graduate degree in any relevant field