Fulcrum Digital Inc.

Sr. Data Engineer

11 April 2025
£58,000 / year

Job Description

Who We Are

Fulcrum Digital is an agile, next-generation digital acceleration company providing digital transformation and technology services from ideation to implementation. These services apply across a variety of industries, including banking and financial services, insurance, retail, higher education, food, healthcare, and manufacturing.

The Role

  • Designing and building optimized data pipelines using cutting-edge technologies in a cloud environment to drive analytical insights.
  • Constructing infrastructure for efficient ETL processes from various sources and storage systems.
  • Leading the implementation of algorithms and prototypes to transform raw data into useful information.
  • Designing and maintaining data pipeline architectures, ensuring readiness for AI/ML transformations.
  • Creating innovative data validation methods and data analysis tools.
  • Ensuring compliance with data governance and security policies.
  • Interpreting data trends and patterns to establish operational alerts.
  • Developing analytical tools, programs, and reporting mechanisms.
  • Conducting complex data analysis and presenting results effectively.
  • Preparing data for prescriptive and predictive modeling.
  • Continuously exploring opportunities to enhance data quality and reliability.
  • Applying strong programming and problem-solving skills to develop scalable solutions.

Requirements

  • Experience with big data technologies (Hadoop, Spark, NiFi, Impala).
  • 5+ years of hands-on experience designing, building, deploying, testing, maintaining, monitoring, and owning scalable, resilient, and distributed data pipelines.
  • High proficiency in Scala/Java and Spark for large-scale data processing.
  • Expertise with big data technologies, including Spark, data lakes, and Hive.
  • Solid understanding of batch and streaming data processing techniques.
  • Proficient in the data lifecycle management process, including data collection, access, use, storage, transfer, and deletion.
  • Expert-level ability to write complex, optimized SQL queries across extensive data volumes.
  • Experience with HDFS, NiFi, and Kafka.
  • Experience with Apache Ozone, Delta tables, Databricks, Axon (Kafka), Spring Batch, and Oracle DB.
  • Familiarity with Agile methodologies.
  • An obsession with service observability, instrumentation, monitoring, and alerting.
  • Knowledge of or experience with architectural best practices for building data lakes.