Senior Data Engineer (Remote)

30 October 2025

Job Description

This role may require an onsite interview at our headquarters in Austin, TX. SailPoint is the leader in identity security for the cloud enterprise. Our identity security solutions secure and enable thousands of companies worldwide, giving our customers unmatched visibility into the entirety of their digital workforce, ensuring workers have the right access to do their job – no more, no less.

Built on a foundation of AI and ML, our Identity Security Cloud Platform delivers the right level of access to the right identities and resources at the right time—matching the scale, velocity, and changing needs of today’s cloud-oriented, modern enterprise.

About the role:

As a Senior Data Engineer, you will collaborate with a team of AI/ML Engineers, Data Scientists, and Product Managers to design and build scalable data solutions that power the ML and analytics products our team owns. You will immediately join the team that is building AI Agents for the SailPoint product offering. In this role, you will apply your technical skills and analytical mindset to design and build data sets that other teams and functions will consume to perform analytics on AI products.

In the longer term, you will become well-versed in building the complex feature sets that power AI/ML products.

About the team:

The AI team at SailPoint applies AI and domain expertise to create AI solutions that solve real problems in identity governance.

We believe the path to success is through meaningful customer outcomes, and we leverage traditional AI/ML as well as recent innovations in Generative AI and Graph ML to bring our solutions to SailPoint’s core product lines.

Requirements:

- B.S. in Computer Science or a related field
- 5 to 9 years of professional experience
- Experience leading projects
- Ability to design and implement data models that support business requirements and analytics needs
- Experience engineering data pipelines and orchestrating workflows
- Demonstrated system-design experience orchestrating ELT processes targeting data
- Hands-on experience with at least one streaming or batch processing framework, such as Flink or Spark
- Hands-on experience with containerization platforms such as Docker and container orchestration tools like Kubernetes
- Familiarity with software engineering best practices (e.g. source control, code reviews, unit testing)
- Can-do attitude with a focus on delivering measurable business impact
- Ability to thrive in an environment with ambiguity, demonstrating adaptability and problem-solving skills
- Strong communicator who can convey complex topics to different audiences

Preferred:

- Familiarity with machine learning concepts and tools to support data-driven decision-making
- Experience working with our team’s tech stack

The Tech Stack:

- Core programming (required): SQL, Python, Shell/Bash
- Cloud platform (preferred): AWS
- Data (preferred): Snowflake, DBT, Kafka, Airflow, Feast
- Visualization (preferred): Tableau, Qlik
- CI/CD (preferred): Cloudbees, Jenkins

Roadmap for success:

30 days:
- Understand the current data infrastructure and workflows.
- Get familiar with the team and ongoing projects.
- Start contributing to small tasks and bug fixes.

90 days:
- Take ownership of a scoped project or feature.
- Collaborate with team members to design and implement data models.
- Begin building data pipelines and orchestrating workflows.

6 months:
- Lead the design and implementation of ELT processes.
- Develop and maintain scalable data pipelines for both stream and batch processing.
- Ensure data security, quality, and compliance in your areas of ownership.
