Allegro
Mid/Senior Software Engineer (Machine Learning)
Job Description
The MLOps team is part of the Machine Learning Research lab and provides tools for optimizing, scaling, and deploying advanced machine learning models. We blend artificial intelligence, software engineering, and DevOps expertise to unlock the full potential of research engineers and data scientists from other teams. We orchestrate the entire machine learning lifecycle, from data preprocessing and annotation to model deployment, using the cutting-edge infrastructure of Google Cloud and Kubernetes. We operate at a massive scale, processing several terabytes of data daily and serving thousands of predictions per second.
As a Machine Learning Engineer (Software Engineer – Machine Learning) you will support Research Engineers in building machine learning models and then deploy them to production, ensuring high availability and performance. Your work will involve:
- Developing the MLOps ecosystem that automates ML model training and deployment
- Developing a GenAI enablement platform
- Building model training pipelines (a minimal sketch follows this list)
- Managing, monitoring, and tuning models running in production
- Developing the internal feature store
- Supporting product teams in developing ML-based microservices
- Developing the internal data annotation platform
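By way of illustration, here is a minimal, hypothetical sketch of the kind of training pipeline this work involves, written with the Kubeflow Pipelines (kfp v2) SDK, which Vertex AI Pipelines accepts. The component names, logic, and paths are placeholders for illustration only, not Allegro's actual code.

```python
# Hypothetical sketch of a two-step training pipeline (kfp v2 / Vertex AI Pipelines).
from kfp import compiler, dsl


@dsl.component(base_image="python:3.11")
def preprocess(raw_data_uri: str) -> str:
    """Placeholder preprocessing step; a real component would clean and feature-engineer data."""
    return raw_data_uri + "/clean"  # illustrative only


@dsl.component(base_image="python:3.11")
def train(clean_data_uri: str) -> str:
    """Placeholder training step; a real component would fit a model and return its artifact URI."""
    return clean_data_uri + "/model"  # illustrative only


@dsl.pipeline(name="example-training-pipeline")
def training_pipeline(raw_data_uri: str):
    # Chain the steps: preprocessing output feeds the training step.
    clean = preprocess(raw_data_uri=raw_data_uri)
    train(clean_data_uri=clean.output)


if __name__ == "__main__":
    # Compile to a pipeline spec that can be submitted to Vertex AI Pipelines.
    compiler.Compiler().compile(
        pipeline_func=training_pipeline,
        package_path="training_pipeline.json",
    )
```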
Your daily toolkit will consist of the tools below (a short illustrative sketch follows the list):
- Python
- Kotlin and Java
- Docker and K8s
- Google Cloud Platform (Cloud Composer, Dataflow, Vertex AI)
- Internal Allegro libraries and toolkits
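To give a flavour of how these tools come together, here is a minimal, hypothetical sketch of deploying a trained model behind a Vertex AI endpoint with the google-cloud-aiplatform SDK. The project, region, bucket, and container image are illustrative placeholders, not Allegro's actual configuration.

```python
# Hypothetical sketch: register a model artifact and serve it from a Vertex AI endpoint.
from google.cloud import aiplatform

# Placeholder project and region.
aiplatform.init(project="example-project", location="europe-west4")

# Upload the model artifact with a prebuilt serving container (placeholder URIs).
model = aiplatform.Model.upload(
    display_name="example-model",
    artifact_uri="gs://example-bucket/models/example",
    serving_container_image_uri=(
        "europe-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest"
    ),
)

# Deploy it to an endpoint and request a prediction.
endpoint = model.deploy(machine_type="n1-standard-4", min_replica_count=1)
print(endpoint.predict(instances=[[1.0, 2.0, 3.0]]))
```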
What we offer:
- Support from experienced Machine Learning Engineers, Research Engineers, Data Engineers and Data Scientists
- A hybrid work model that you will agree on with your leader and the team. We have well-located offices (with fully equipped kitchens and bicycle parking facilities) and excellent working tools (height-adjustable desks, interactive conference rooms)
- Annual bonus of up to 10% of your annual gross salary (depending on your annual assessment and the company’s results)
- A wide selection of fringe benefits in a cafeteria plan – you choose what you like (e.g. medical, sports or lunch packages, insurance, purchase vouchers)
- English classes that we pay for, tailored to the specific nature of your job
- 16″ or 14″ MacBook Pro with an M1 processor and 32 GB RAM, or a comparable Dell with Windows (if you don’t like Macs), plus any other gadgets you may need
- Working in a team you can always count on; we have top-class specialists and experts on board
- A high degree of autonomy in terms of organizing your team’s work; we encourage you to develop continuously and try out new things
- Hackathons, team tourism, a training budget and an internal educational platform, MindUp (including training courses on work organization, communication, motivation and various technologies and subject-matter issues)
We are looking for people who:
- Know the ML model lifecycle
- Have practical experience in developing ML-based solutions
- Are familiar with any cloud platform (preferably Google Cloud Platform)
- Have practical experience in developing and maintaining Python microservices and libraries
- Can independently make decisions within a designated scope and take full responsibility for their tasks across the entire lifecycle: from requirements engineering, through implementation, to deployment and maintenance
- Know English at a B2 level or higher
Nice to have:
- Practical knowledge of ML algorithms and the common libraries used to work with models (such as scikit-learn, PyTorch, Pandas)
- SQL
- Experience with the JVM stack
- Practical knowledge of Kubernetes-based MLOps solutions (Kubeflow) and/or MLOps solutions available in the cloud (preferably Google Cloud Vertex AI)
- Familiarity with at least one ML subdomain or technique: vector search/NNS, NLP, computer vision, or others
Send in your CV and see why it is #dobrzetubyć (#goodtobehere)