FWD Insurance
Assistant Manager, Data Engineering
Job Description
About FWD Group
FWD Group is a pan-Asian life and health insurance business with more than 12 million customers across 10 markets, including some of the fastest-growing insurance markets in the world. The company was established in 2013 and is focused on changing the way people feel about insurance. FWD’s customer-led and digitally enabled approach aims to deliver innovative propositions, easy-to-understand products and a simpler insurance experience.
For more information, please visit www.fwd.com
FWD Technology and Innovation Malaysia Sdn. Bhd., known as FWD TIM, was established in late 2019. Strategically located in Kuala Lumpur, FWD TIM serves as a pivotal shared service location within FWD Group, providing services to multiple markets across the Group. FWD TIM houses a diverse and talented workforce focused on essential business and technology services such as information security, cloud operations, IT solutions delivery, digital and data, actuarial, finance, investments, and customer service, among many others. FWD TIM is dedicated to driving and delivering operational excellence and efficiency, fostering innovation and ensuring regulatory compliance across all business functions, as well as maintaining a competitive edge in the market.
PURPOSE
This role is responsible for the system design, development and implementation of regional frontend systems, and for providing maintenance and support of production systems.
KEY ACCOUNTABILITIES
- Design, develop, document and implement end-to-end data pipelines and data integration processes, both batch and real-time. This includes data analysis, data profiling, data cleansing, data lineage, data mapping, data transformation, developing ETL / ELT jobs and workflows, and deployment of data solutions.
- Monitor, recommend, develop and implement ways to improve data quality, including reliability, efficiency and cleanliness, and optimize and fine-tune ETL / ELT processes.
- Recommend, execute and deliver best practices in data management and data lifecycle processes, including modular development of ETL / ELT processes, coding and configuration standards, error handling and notification standards, auditing standards, and data archival standards.
- Prepare test data, and assist in creating and executing test plans, test cases and test scripts.
- Collaborate with Data Architect, Data Modeler, IT team members, SMEs, vendors and internal business stakeholders, to understand data needs, gather requirements and implement data solutions to deliver business goals.
- Provide BAU support for data issues and change requests; document all investigations, findings, recommendations and resolutions.
KEY PERFORMANCE INDICATORS
- Maturity of technical skills in managing and developing on big data platforms, including data model design, data transformation, big data programming and performance tuning
- Effectiveness of project contribution, from technical support, project management and team collaboration perspectives, in delivering the data lake system for assigned local markets
- Level of understanding of business requirements and ability to translate them into technical solutions
- Variety of new technologies (such as Azure and AWS big data solutions, Power BI, Tableau and Python) and techniques learnt that can be applied to solution delivery
EXTERNAL & INTERNAL CONTACTS
- Group Infrastructure, Security and Operations Teams
- Group Project Team
- Local country Project Team (IT and User)
- IT Vendors and/or Service Providers
QUALIFICATIONS / EXPERIENCE
- Bachelor’s degree in IT, Computer Science or a related engineering discipline.
- At least 5 years of relevant experience in data engineering or related field
- In-depth understanding of data structures, algorithms, databases, and software engineering.
- Experience in data management and data warehousing technologies (data marts, data lakes, lakehouses)
- Ability to write efficient and scalable code
- Experience with end-to-end data processing pipelines, ETL/ELT, and data warehousing
- Knowledge of distributed systems and experience working with distributed data processing technologies.
- 5+ years of professional experience in implementing operational data stores, data warehouses, data marts and large-scale data architectures in Unix and/or Windows environments
- 5+ years of hands-on ETL development experience, including transforming complex data structures from multiple sources.
- 5+ years of experience with big data technologies including Azure and AWS, Azure Data Factory, Databricks, Synapse, Hadoop, Hive, Storm, Presto, and real-time data transformation and processing technologies such as Confluent Kafka, ksqlDB, Kafka Streams, Apache Flink, and Apache Spark Streaming.
- 5+ years of experience in implementing data models using dimensional modeling and data vault modeling techniques.
- Experience with scripting languages such as Shell, Perl or Python
- Experience with software development methodologies.
KNOWLEDGE & TECHNICAL SKILLS
- Strong knowledge of various database technologies (RDBMS, NoSQL, and columnar).
- Ability to handle and process different types of data (structured, semi-structured, and unstructured).
- Strong proficiency in at least one programming language such as Python, Java or Scala.
- Knowledge of MDM, Data Governance tools, and Informatica technologies.
- Prior experience in the insurance industry is an advantage.
- Knowledge of data security and privacy best practices, such as encryption and data masking
- DevOps (CI/CD) and data privacy regulations knowledge.
Good to Have
- Experience in API development and integration, e.g. SOAP and REST APIs
- Ability to build and modify RESTful APIs and programs using Python or Java