Sinclair Inc.

Senior Data Scientist / LLM Engineer – AI Optimizer Team

20 November 2024
€72000 - €96000 / year

Job Description

Why CAST AI?

CAST AI is the leading Kubernetes automation platform for AWS, GCP and Azure customers. The company is on a mission to deliver a fully automated Kubernetes experience. What’s unique about CAST AI is that its platform goes beyond monitoring clusters and making recommendations; it utilizes advanced machine learning algorithms to analyze and automatically optimize clusters, saving customers 50% or more on their cloud spend, improving performance and reliability, and boosting DevOps and engineering productivity.

The company has raised $73M from investors, including Cota Capital, Creandum, Uncorrelated Ventures, and Vintage Investment Partners. CAST AI has nearly 200 employees globally and is headquartered in Miami, Florida.

However, this is merely the beginning. Our product roadmap is filled with exciting innovations that are yet to come. We are searching for intelligent, motivated, and self-reliant people to help us fulfill this ambitious mission.

Core values that hold us all together:

PRACTICE CUSTOMER OBSESSION. Focus on the customer journey and work backwards. Strive to deliver customer value and continuously solve customer problems. Listen to customer feedback, act, and iterate to improve customer experience.

LEAD. Take ownership and lead through action. Think and act on behalf of the entire company to build long-term value across team boundaries.

DEVELOP AND HIRE THE BEST. Strive to raise the performance bar by continuously investing in yourself and the team, and by hiring the best possible candidates for every position. Drive towards personal development and professional growth, and mentor others to raise the collective bar.

EXPECT AND ADVOCATE CHANGE. Strive to innovate and accept the inevitable change that comes with innovation. Constantly welcome new ideas and opinions. Share insights responsibly with unwavering openness, honesty, and respect. Be ready to disagree, but once a path is chosen, commit to the direction.

What does the AI Optimizer team do?

On the AI Optimizer team, our days are usually full of R&D challenges. Have you ever encountered a situation where you needed to expand your AI infrastructure so that applications could automatically pick the right large language models (LLMs), ones that are both more cost-efficient and better performing? Most of us probably have by now, or at least understand the complexity of making such decisions while keeping track of our cloud budget.

One of the team’s responsibilities is ensuring that whenever a customer makes AI-related decisions about their K8s infrastructure, those decisions are implemented automatically, without unnecessary cost or hassle. This is just one small piece of a bigger puzzle. For a more detailed perspective, ask yourself the following questions:

  • How often do you use LLMs?
  • What is the least expensive LLM you can pick for a given prompt without degrading the quality of the response?
  • How much do your applications cost per 1 million tokens, and how can you reduce that cost? (See the sketch after this list.)
  • Which API keys generate the most waste?
  • How can you improve your frequently running prompt to use fewer tokens?
  • What is fine-tuning, and how do you do it efficiently?
  • What is a transformer?

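To make the cost-per-token question above concrete, here is a minimal sketch in Go (our main language) of the arithmetic involved: given per-token prices, estimate what a workload costs on each model and pick the cheapest one that clears a quality floor. The model names, prices, token volumes, and quality scores below are hypothetical and purely illustrative; this is not part of the CAST AI platform.

    // cost_sketch.go: illustrative only; every figure below is made up.
    package main

    import "fmt"

    // modelPrice holds hypothetical per-token pricing and a placeholder quality score.
    type modelPrice struct {
        name          string
        inputPerMTok  float64 // USD per 1M input tokens (assumed)
        outputPerMTok float64 // USD per 1M output tokens (assumed)
        quality       float64 // placeholder evaluation score in [0, 1]
    }

    // workloadCost estimates the monthly cost of a workload on a given model.
    func workloadCost(m modelPrice, inputTok, outputTok float64) float64 {
        return inputTok/1e6*m.inputPerMTok + outputTok/1e6*m.outputPerMTok
    }

    func main() {
        // Hypothetical models and prices, purely for illustration.
        models := []modelPrice{
            {name: "large-model", inputPerMTok: 5.00, outputPerMTok: 15.00, quality: 0.95},
            {name: "small-model", inputPerMTok: 0.50, outputPerMTok: 1.50, quality: 0.88},
        }

        const inputTok, outputTok = 40e6, 10e6 // example monthly token volumes
        const minQuality = 0.85                // quality floor for this workload

        var best *modelPrice
        for i := range models {
            m := models[i]
            cost := workloadCost(m, inputTok, outputTok)
            fmt.Printf("%s: $%.2f/month (quality %.2f)\n", m.name, cost, m.quality)
            if m.quality < minQuality {
                continue // quality too low, skip regardless of price
            }
            if best == nil || cost < workloadCost(*best, inputTok, outputTok) {
                best = &models[i]
            }
        }
        if best != nil {
            fmt.Printf("cheapest model above the quality floor: %s\n", best.name)
        }
    }

In practice the quality signal would come from evaluation pipelines rather than a hard-coded score, and making that trade-off automatically is exactly the kind of problem this team works on.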
These are just a few of the many questions that make up the daily work of this team.

Being part of this team involves end-to-end design and decision-making while collaborating with colleagues from other teams. Because CAST AI is a technical product, we encourage you not just to code what is written in the Jira ticket, but also to propose new features and potential solutions to customers’ problems. Since the team is working on a technical greenfield project, you will have plenty of opportunities to make a positive impact.

Responsibilities for the role:

  • Evaluate and analyze LLM performance
  • Fine-tune LLMs
  • Optimize AI models for cost efficiency
  • Develop and implement data science solutions
  • Monitor and improve AI systems
  • Stay up to date with industry trends.

Here are some of the tools we use daily:

  • Go is our main language, while Python is an accepted alternative in some cases
  • ClickHouse and PostgreSQL for persistence
  • GCP Pub/Sub for messaging
  • gRPC for internal communication
  • REST for public APIs
  • Kubernetes, which our product revolves around
  • AWS, GCP, and Azure, the cloud providers currently supported by our platform
  • GitLab CI with ArgoCD as our GitOps CD engine
  • Prometheus, Grafana, Loki, and Tempo for observability.

Requirements

  • You must be physically located in a European country within GMT 0 to GMT+3
  • Strong software engineering skills
  • Minimum of 3 years of hands-on experience in Data Science and Machine Learning, with a proven track record, demonstrated through a robust portfolio of projects
  • Strong English skills
  • Strong verbal and written communication skills
  • Ability to work independently or as part of a team.

What’s in it for you?

  • Join a fast-growing, cutting-edge company that’s redefining cloud-native automation and optimization.
  • Collaborate with a global team of cloud experts and innovators, passionate about pushing the boundaries of Kubernetes technology.
  • Enjoy a flexible, remote-first work environment with opportunities to travel and engage with customers worldwide.
  • Receive a competitive compensation package, equity options, and extensive benefits.
  • Benefit from a short feedback loop: our customer-oriented approach means we ship code changes quickly and get customer feedback right away.
  • Experience focus time with a minimum of meetings, bureaucracy, and overhead.
  • Dedicate 10% of your time to self-improvement and personal projects.
  • Earn a monthly salary of €6,000 to €8,000 (gross), depending on your level of experience.