Amazon.com
Software Engineer – Generative AI, AGI Inference Engine
Job Description
Key job responsibilities
As a Software Development Engineer, you will design, develop, test, and deploy high-performance inference capabilities spanning multi-modality and state-of-the-art (SOTA) model architectures, while optimizing for latency, throughput, and cost. You will collaborate closely with a team of engineers and scientists to influence our overall strategy and define the team's roadmap. You will drive system architecture, spearhead best practices, and mentor junior engineers.
A day in the life
You will read papers and consult with scientists to draw inspiration from emerging techniques and blend them into our roadmap. You will design and experiment with new algorithms, benchmarking the latency and accuracy of your implementations. Most importantly, you will build production-grade solutions and see them through deployment swiftly. You may collaborate with other science and engineering teams to get things done properly. You will hold the highest bar for operational excellence, support production systems, and continually build solutions that minimize the operational load.
About the team
Our mission is to build best-in-class, fast, accurate, and cost-efficient large language model inference solutions and infrastructure that will enable Amazon businesses to deliver more value to their customers.
We are open to hiring candidates to work out of one of the following locations:
Boston, MA, USA | New York, NY, USA
Basic Qualifications
– 3+ years of non-internship professional software development experience
– 2+ years of non-internship experience designing or architecting (design patterns, reliability, and scaling) new and existing systems
– Experience programming with at least one programming language
Preferred Qualifications
– 3+ years of experience with the full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations
– Bachelor’s degree in computer science or equivalent
– Experience with Python, PyTorch, and C++ programming and performance optimization
– Experience with Large Language Model inference
– Experience with AWS Trainium and Inferentia development
– Experience with GPU programming (e.g., TensorRT-LLM)
Amazon is committed to a diverse and inclusive workplace. Amazon is an equal opportunity employer and does not discriminate on the basis of race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or other legally protected status. For individuals with disabilities who would like to request an accommodation, please visit https://www.amazon.jobs/en/disability/us.
Our compensation reflects the cost of labor across several US geographic markets. The base pay for this position ranges from $115,000/year in our lowest geographic market up to $223,600/year in our highest geographic market. Pay is based on a number of factors including market location and may vary depending on job-related knowledge, skills, and experience. Amazon is a total compensation company. Dependent on the position offered, equity, sign-on payments, and other forms of compensation may be provided as part of a total compensation package, in addition to a full range of medical, financial, and/or other benefits. For more information, please visit https://www.aboutamazon.com/workplace/employee-benefits. This position will remain posted until filled. Applicants should apply via our internal or external career site.