HighLevel

Senior Data Engineer

23 December 2024
£49000 - £92000 / year

Job Description

About HighLevel: HighLevel is a cloud-based, all-in-one white-label marketing and sales platform that empowers marketing agencies, entrepreneurs, and businesses to elevate their digital presence and drive growth. With a focus on streamlining marketing efforts and providing comprehensive solutions, HighLevel helps businesses of all sizes achieve their marketing goals. We currently have ~1,200 employees across 15 countries, working remotely as well as in our headquarters in Dallas, Texas. Our goal as an employer is to maintain a strong company culture, foster creativity and collaboration, and encourage a healthy work-life balance for our employees wherever they call home.
Our Website – https://www.gohighlevel.com/
YouTube Channel – https://www.youtube.com/channel/UCXFiV4qDX5ipE-DQcsm1j4g
Blog Post – https://blog.gohighlevel.com/general-atlantic-joins-highlevel/
Our Customers: HighLevel serves a diverse customer base, including over 60K agencies & entrepreneurs and 500K businesses globally. Our customers range from small and medium-sized businesses to enterprises, spanning various industries and sectors.
Scale at HighLevel: We operate at scale, managing over 40 billion API hits and 120 billion events monthly, with more than 500 micro-services in production. Our systems handle 200+ terabytes of application data and 6 petabytes of storage.
About the Role: As a Senior Data Engineer, you will be responsible for building and leading a data team to manage our vast and complex data environment. You will oversee the integration and processing of data from multiple sources, manage a large-scale data warehouse, and ensure the data infrastructure is robust and scalable. You will also handle vendor negotiations, understand billing intricacies, and provide insights into the pros and cons of various big data technologies.

Responsibilities:

  • Build and lead a high-performing data engineering team
  • Mentor and provide technical guidance to team members
  • Design, develop, and maintain data pipelines to integrate data from multiple sources, including first-party data, MongoDB, Pendo, Firestore, Stripe, Twilio, Mailgun, Freshdesk, Google Sheets, and more
  • Ensure data quality, reliability, and efficiency throughout the data lifecycle
  • Optimise data processing workflows to handle large volumes of data
  • Manage and scale the data warehouse, ensuring it meets the needs of various departments (finance, marketing, product, etc.)
  • Implement best practices for data governance, security, and compliance
  • Utilise ETL tools and data orchestration platforms like Airflow and Fivetran
  • Negotiate contracts with data vendors and service providers
  • Understand and manage billing and cost structures associated with data services
  • Provide deep insights into the pros and cons of various big data technologies and solutions
  • Write and review code in Node.js and Python to support data integration and transformation
  • Collaborate with software engineering teams to define standards and best practices for data ingestion
  • Partner with business leaders to understand data needs and translate business use cases into technical solutions
  • Work with Product Managers and Technical Leads to ensure data initiatives align with business goals
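The integration and data-quality work above can be sketched as a minimal extract-transform-load step in Python. The source shape, field names, and in-memory "warehouse" here are hypothetical illustrations, not HighLevel's actual schemas or systems:

```python
from datetime import datetime, timezone

def extract(records):
    """Simulate pulling raw records from a source API (e.g. Stripe, Twilio)."""
    return [r for r in records if r is not None]

def transform(records):
    """Normalise records: drop rows missing required fields, add a load timestamp."""
    cleaned = []
    for r in records:
        if "id" not in r or "amount" not in r:
            continue  # basic data-quality gate: skip incomplete rows
        cleaned.append({
            "id": str(r["id"]),
            "amount_cents": int(round(float(r["amount"]) * 100)),
            "loaded_at": datetime.now(timezone.utc).isoformat(),
        })
    return cleaned

def load(records, warehouse):
    """Append rows to an in-memory stand-in for a warehouse table."""
    warehouse.extend(records)
    return len(records)

warehouse = []
raw = [{"id": 1, "amount": "12.50"}, {"id": 2}, None, {"id": 3, "amount": 7}]
n = load(transform(extract(raw)), warehouse)
print(n, "rows loaded")
```

In production this kind of step would typically run as a task inside an orchestrator such as Airflow, with the quality gate surfaced as metrics rather than silent drops.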

Requirements:

  • 7+ years of experience in data engineering or a related field
  • Proven experience in building and leading data engineering teams
  • Proven experience managing large data warehouses containing 50-100+ TB of data
  • Strong background in managing large-scale data warehouses and data infrastructure
  • Proficiency in Node.js and Python
  • Extensive experience with ETL tools (e.g., Fivetran, Airflow) and data warehousing solutions (e.g., Snowflake or similar)
  • In-depth knowledge of SQL and database management
  • Familiarity with data integration from various sources and APIs
  • Excellent problem-solving and analytical skills
  • Strong communication and interpersonal skills
  • Ability to influence and collaborate across all levels of the organisation
  • Understanding of billing structures and cost management for data services
  • Ability to negotiate and manage vendor contracts effectively
  • Commitment to data quality, security, and privacy

Key Performance Indicators:

  • Data Pipeline Reliability and Uptime: Percentage of successful data pipeline executions vs. failures. Aim for 99.9% or higher uptime for data pipelines
  • Data Processing Efficiency: Average time to process data from extraction to loading into the data warehouse. Optimise data processing times to meet or exceed defined SLAs
  • Data Quality and Accuracy: Number of data quality issues identified and resolved within a specific time frame. Minimise data quality issues and ensure quick resolution, aiming for fewer than a specified threshold of issues per month
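The uptime KPI above reduces to a simple ratio of successful runs to total runs. A sketch of how it might be tracked (the run counts are purely illustrative):

```python
def pipeline_uptime(successes: int, failures: int) -> float:
    """Percentage of successful pipeline executions out of all executions."""
    total = successes + failures
    if total == 0:
        return 100.0  # no runs scheduled: treat as fully available
    return 100.0 * successes / total

# Illustrative month: 43,200 scheduled runs, 30 failures.
uptime = pipeline_uptime(43_170, 30)
meets_target = uptime >= 99.9
print(f"{uptime:.3f}% (meets 99.9% target: {meets_target})")
```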

EEO Statement: At HighLevel, we value diversity. In fact, we understand it makes our organisation stronger. We are committed to inclusive hiring/promotion practices that evaluate skill sets, abilities, and qualifications without regard to any characteristic unrelated to performing the job at the highest level. Our objective is to foster an environment where talented employees from all walks of life can be their true and whole selves, cherished and welcomed for their differences while providing excellent service to our clients and learning from one another along the way! Reasonable accommodations may be made to enable individuals with disabilities to perform essential functions.