Job Description
About the Team

Security is at the foundation of OpenAI’s mission to ensure that artificial general intelligence benefits all of humanity. The Security team protects OpenAI’s technology, people, and products. We are technical in what we build but operational in how we execute, and we support all of OpenAI’s research and product initiatives.
Our tenets include: prioritizing impact, enabling researchers, preparing for future transformative technologies, and fostering a robust security culture. A current security clearance is not mandatory, but eligibility for sponsorship is required.

About the Role

Lead an effort to map, characterize, and prioritize cross-layer vulnerabilities in advanced AI systems – spanning data pipelines, training/inference runtimes, and system and supply-chain components. You’ll drive offensive research, produce technical deliverables, and serve as OpenAI’s primary technical counterpart for select external partners (including potential U.S. government stakeholders).
What you’ll do:
- Build an AI Stack Threat Map across the AI lifecycle, from data to deployment.
- Deliver deep-dive reports on vulnerabilities and mitigations for training and inference, focused on systemic, cross-layer risks.
- Orchestrate inputs across research, engineering, security, and policy to produce crisp, actionable outputs.
- Engage external partners as the primary technical representative; align deliverables to technical objectives and milestones.
- Perform hands-on threat modeling, red-team design, and exploitation research across heterogeneous infrastructures (compilers, runtimes, and control planes).
- Translate complex technical issues for technical and executive audiences; brief on risk, impact, and mitigations.

You may thrive if you:
- Have led high-stakes security research programs with external sponsors (e.g., national-security or critical-infrastructure stakeholders).
- Have deep experience with cutting-edge offensive-security techniques.
- Are fluent across AI/ML infrastructure (data, training, inference, schedulers, accelerators) and can threat-model end to end.
- Operate independently, align diverse teams, and deliver on tight timelines.
- Communicate clearly and concisely with experts and decision-makers.
Goals & impact

Provide decision-makers with a common vulnerability taxonomy, early warning of systemic weaknesses, and a repeatable methodology that measurably raises the bar for adversaries. Outcomes include more resilient AI architectures, reduced exploit windows, and better-targeted security R&D investments across defense and public-sector stakeholders.

Key technical challenges
- End-to-end coverage: tracking threats across the AI lifecycle, including data, software, and system-level components.
- Cross-disciplinary integration: reconciling perspectives from owners of disjoint stack layers to capture composite attack paths.
- Stochastic inference: non-determinism from temperature/top-k/top-p decoding complicates reproducibility; validating vulnerabilities requires seeded runs, harness control, and careful methodology.

About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity.
We push the boundaries of the capabilities of AI systems and seek to deploy them safely to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or any other legally protected characteristic.
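The "Stochastic inference" challenge listed under Key technical challenges can be illustrated with a minimal, hypothetical harness sketch: pinning the RNG seed, temperature, and top-k cutoff makes a single sampling step repeatable, which is a precondition for validating a suspected vulnerability. All names and parameters here are invented for the sketch (plain-Python toy, not OpenAI tooling), and real decoders add further sources of non-determinism (batching, kernels, hardware) that a harness must also control.

```python
import math
import random

def sample_with_seed(logits, seed, temperature=0.8, top_k=40):
    """Toy seeded temperature/top-k sampling over raw logits (illustrative only)."""
    rng = random.Random(seed)  # dedicated RNG: repeated runs are reproducible
    scaled = [l / temperature for l in logits]
    # Keep only the top-k candidates; everything else gets zero probability.
    top = sorted(range(len(scaled)), key=lambda i: scaled[i], reverse=True)[:top_k]
    m = max(scaled[i] for i in top)
    weights = [math.exp(scaled[i] - m) for i in top]  # numerically stable softmax
    return rng.choices(top, weights=weights, k=1)[0]

logits = [random.Random(0).gauss(0, 1) for _ in range(1000)]  # toy vocabulary
a = sample_with_seed(logits, seed=1234)
b = sample_with_seed(logits, seed=1234)
assert a == b  # same seed and decoding parameters reproduce the same token
```

With the seed fixed, a finding can be re-triggered deterministically; sweeping seeds instead gives a rough estimate of how often a probabilistic failure actually fires.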