Anthropic AI Security Fellow
Job Description
About Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
Please apply by January 12, 2026.

AI Security at Anthropic
We believe we are at an inflection point for AI’s impact on cybersecurity. Models are now useful for cybersecurity tasks in practice: for example, Claude can now outperform human teams in some cybersecurity competitions and help us discover vulnerabilities in our own code. We are looking for researchers and engineers to help us accelerate the defensive use of AI to secure code and infrastructure.

Anthropic Fellows Program Overview
The Anthropic Fellows Program is designed to accelerate AI security and safety research and to foster research talent. We provide funding and mentorship to promising technical talent – regardless of previous experience – to research the frontier of AI security and safety for four months. Fellows will primarily use external infrastructure (e.g. open-source models, public APIs) to work on an empirical project aligned with our research priorities, with the goal of producing a public output (e.g. a paper submission). In our previous cohorts, over 80% of fellows produced papers (more below). We run multiple cohorts of Fellows each year.
This application is for our next two cohorts, starting in May and July 2026.

What to Expect
- Direct mentorship from Anthropic researchers
- Access to a shared workspace (in either Berkeley, California or London, UK)
- Connection to the broader AI safety research community
- Weekly stipend of 3,850 USD / 2,310 GBP / 4,300 CAD and access to benefits (benefits vary by country)
- Funding for compute (~$15k/month) and other research expenses

Mentors, Research Areas, & Past Projects
Fellows will undergo a project selection & mentor matching process.
Potential mentors include:
- Nicholas Carlini
- Keri Warr
- Evyatar Ben Asher
- Keane Lucas
- Newton Cheng

On our Alignment Science and Frontier Red Team blogs, you can read about some past Fellows projects, including:
- AI agents find $6M in blockchain smart contract exploits: Winnie Xiao and Cole Killian, mentored by Nicholas Carlini and Alwin Peng
- Strengthening Red Teams: A Modular Scaffold for Control Evaluations: Chloe Loughridge et al., mentored by Jon Kutasov and Joe Benton

You may be a good fit if you
- Are motivated by reducing catastrophic risks from advanced AI systems
- Are excited to transition into full-time empirical AI safety research and would be interested in a full-time role at Anthropic

Please note: We do not guarantee that we will make any full-time offers to fellows.
However, strong performance during the program may indicate that a Fellow would be a good fit here at Anthropic. In previous cohorts, over 40% of fellows received a full-time offer, and we’ve supported many more who have gone on to do great work on safety at other organizations.

- Have a strong technical background in computer science, mathematics, physics, cybersecurity, or related fields
- Thrive in fast-paced, collaborative environments
- Can implement ideas quickly and communicate clearly

Strong candidates may also have:
- Contributed to open-source projects in LLM- or security-adjacent repositories
- Demonstrated success in bringing clarity and ownership to ambiguous technical problems
- Experience with pentesting, vulnerability research, or other offensive security work
- A history demonstrating a desire to do the “dirty work” that results in high-quality outputs
- Reported CVEs, or received bug bounty awards for vulnerabilities
- Experience with empirical ML research projects
- Experience with deep learning frameworks and experiment management

Candidates must be:
- Fluent in Python programming
- Available to work full-time on the Fellows program for 4 months

We encourage you to apply even if you do not believe you meet every single qualification.
Not all strong candidates will meet every qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you’re interested in this work.

We think AI systems like the ones we’re building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.

Interview process
The interview process will include an initial application & reference check, technical assessments & interviews, and a research discussion.

Compensation
The expected base stipend for this role is 3,850 USD / 2,310 GBP / 4,300 CAD per week, with an expectation of 40 hours per week, for 4 months (with possible extension).