DeleteMe

Data Engineer

28 March 2024
£125,000 - £195,000 / year

Job Description

About DeleteMe, The Online Privacy Company
DeleteMe is the online privacy company that makes easy-to-use tools for consumers and businesses to control what personal information companies, third parties, and other people see about them online.
DeleteMe is a rapidly growing SaaS privacy business operating globally and remotely (WFH). By number of customers and revenue, we are the emerging leader in a fast-growing, nascent category of consumer and enterprise security: Privacy Services. What we do – our mission – matters because we are restoring a sense of privacy, fairness, and control over personal data in the possession of others. Easier, simpler control of personal data, underpinned by a suite of new data privacy laws being passed worldwide, will play a part in greater personal security and freedom, and in stronger democracies, in an era when data collection is at unprecedented levels. This is what our work and brand stand for, and we are building a large, sustainable, for-profit business to catalyze it. We have strong B2C and B2B businesses, with respective product offerings informed by feedback from an active customer base growing between 30% and 200% year over year.
DeleteMe is well-capitalized: profitable for the last three years with an 8-figure balance sheet and large-scale venture firms as investors.
DeleteMe is led by a passionate team, backed by premier investment firms, and supercharged by a strong mission to empower consumers with privacy.
Job Summary
The Data Engineer/Analyst will help us manage and use data to make informed decisions about what we build, how we help our customers, and how we grow our company. The ideal candidate combines the ability to operate and evolve our data pipelines with the ability to perform data analysis, and will work closely with our engineering and product teams to improve the way we collect, structure, and store data. You will work with our management, marketing, sales, and service teams to understand what they need, driving the requirements process in their domain language to get them the data they need, the way they want it, so they can make good decisions. After documenting requirements, you should be able to extract data from various source systems, ensure the data is clean, transform it according to business requirements, and provide an intuitive dashboard for business partners to view the data.
We are building our data practice from the ground up, so you will have the opportunity to work on a new, modern data engineering tech stack. We use Snowflake as our data warehouse and Fivetran as our data extraction (ETL) tool. We use dbt to transform our data and to enable CI/CD capabilities. We are big open source users and use Apache Superset as our reporting tool. Reverse ETL is performed using Hightouch and components of RudderStack. Our tech platform includes AWS, RDS, Terraform, Docker, and Kubernetes. We prioritize SQL first when transforming data via dbt, unless Python offers a simpler, more supportable solution for the given use case; Python is planned to become the go-to technology for data analysis and modeling. We are just getting started, so you will be a key driver in maturing our data engineering practice from the ground up by recommending and implementing best practices.
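As a purely illustrative sketch of what the SQL-first dbt workflow above can look like (the model and column names here are hypothetical, not DeleteMe's actual schema), a dbt model joining staged sources might be:

    -- models/marts/customer_subscriptions.sql
    -- Hypothetical dbt model: table and column names are illustrative only.
    with customers as (
        select * from {{ ref('stg_customers') }}
    ),
    subscriptions as (
        select * from {{ ref('stg_subscriptions') }}
    )
    select
        c.customer_id,
        c.signup_date,
        count(s.subscription_id) as active_subscriptions
    from customers c
    left join subscriptions s
        on s.customer_id = c.customer_id
        and s.status = 'active'
    group by c.customer_id, c.signup_date

dbt resolves each {{ ref(...) }} to the upstream model's table in Snowflake and builds a dependency graph from those references, which is what makes warehouse-wide testing and CI/CD practical.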
Responsibilities
- Maintain and evolve our data pipeline infrastructure; help identify the new/best tools and tech for the job.
- Properly document code according to department standards.
- Translate business requirements into code and applicable business rules, and pass the valuable knowledge gained on to other team members and business partners.
- Improve our data collection and storage practices to enhance the value of our internal data pipelines and data flows.
- Ensure our data is available where and when it is needed.
- Build internal user trust and confidence in the data warehouse by developing, maintaining, and monitoring tests throughout the data engineering tool landscape (see the sketch after this list).
- Interview business users, document data reporting and analysis requirements, and develop ETL, data models, and dashboards.
- Design and implement a strategy and systems for data analysis, reporting, querying, and visualization to enable internal and external partners to explore our data, answer their questions, and make good decisions.
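As one hedged example of the kind of warehouse test the testing responsibility above covers (assuming dbt's testing framework; the file, model, and column names are hypothetical), a dbt singular test is simply a query that must return zero rows to pass:

    -- tests/assert_no_future_signups.sql
    -- Hypothetical dbt singular test: fails if any staged customer row
    -- claims a signup date in the future.
    select
        customer_id,
        signup_date
    from {{ ref('stg_customers') }}
    where signup_date > current_date

Running such tests on every change is one common way to build the internal trust in the warehouse that this role is responsible for.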
Requirements
We’re looking for someone with 4+ years of proven working experience in data engineering/warehousing, data analysis, data modeling, and reporting for a business department of a SaaS company – someone who wants to be a big part of helping a team achieve big things, who believes in privacy as a mission, and who can both operate independently and help a team be more than the sum of its parts. These are some of the specific characteristics we are looking for:
- Strong technical and people skills
- Excellent written and verbal communication skills
- Experience with data engineering, data warehousing, and analytics engineering
- Experience building SQL-based data pipelines and data models
- Experience with database design and implementation
- Experience gathering and documenting reporting requirements
- Experience with data reporting and visualization tools
- Experience with appropriate programming languages and technologies
Preferred experience with existing toolsets:
- Familiarity with Snowflake and dbt (a requirement for this position)
- Any SQL database as a primary data source; NoSQL is a plus
- Fivetran, Snowflake, dbt, Hightouch, and Shipyard
- Apache Superset, HEX, Tableau, or similar interactive data visualization software
- Basic knowledge of Python, R, or another standard data analysis language/framework
- BS in Computer Science / STEM / Economics
- Ability to thrive in a fast-paced work environment, where change is constant and flexibility is key