Docusign
Sr. AI Detect & Respond Engineer
Job Description
Company Overview
Docusign brings agreements to life. Over 5 million customers and more than a billion people in over 180 countries use Docusign solutions to accelerate the process of doing business and simplify people’s lives.
With intelligent agreement management, Docusign unleashes business-critical data that is trapped inside of documents. Until now, these were disconnected from business systems of record, costing businesses time, money, and opportunity. Using Docusign’s Intelligent Agreement Management platform, companies can create, commit, and manage agreements with solutions created by the #1 company in e-signature and contract lifecycle management (CLM).

What you’ll do
We are seeking a talented and proactive AI Security Operations Engineer to join our team.
This position is focused on defending the organization against AI-enabled threats and leveraging AI to enhance our defensive capabilities. You will act as the bridge between AI security and our operational defense teams (CSIRT, Detection & Response, and Threat Intelligence).
In this role, you will analyze how adversaries utilize AI to attack the enterprise, ranging from AI-enhanced phishing and deepfakes to automated vulnerability scanning, and design defenses to mitigate these risks. You will also work to implement AI-powered tooling that improves the speed and efficacy of our threat detection and response workflows. This position is an individual contributor role reporting to the Sr Director of AI & Data Security.
Responsibilities
- Monitor the threat landscape for emerging adversarial AI tactics, techniques, and procedures (TTPs) used by attackers against enterprises
- Collaborate with the Detection and Response teams to develop playbooks and detection logic for AI-enabled attacks, such as deepfakes, voice cloning, and AI-generated social engineering
- Conduct threat modeling and simulation exercises to test the organization’s resilience against AI-driven attacks
- Evaluate and implement AI-powered security tools to enhance security operations center (SOC) automation, anomaly detection, and incident triage
- Analyze and mitigate risks associated with Shadow AI and unauthorized use of external AI tools by employees that may introduce threat vectors
- Partner with Threat Intelligence teams to track threat actors leveraging LLMs for code generation, exploit development, or reconnaissance
- Develop countermeasures for adversarial machine learning attacks (e.g., evasion, extraction)
- Define and track measurable security outcomes related to AI threat defense and report progress to leadership
- Translate technical AI security risks into business impact and communicate recommendations to operational stakeholders

Job Designation
Hybrid: Employee divides their time between in-office and remote work. Access to an office location is required. (Frequency: Minimum 2 days per week; may vary by team but will be a weekly in-office expectation)

Positions at Docusign are assigned a job designation of either In Office, Hybrid or Remote and are specific to the role/job.
Preferred job designations are not guaranteed when changing positions within Docusign. Docusign reserves the right to change a position’s job designation depending on business needs and as permitted by local law.

What you bring
Basic
- 8+ years of experience in information security, with a focus on Incident Response, Threat Intelligence, or Security Operations (SOC)
- Experience with, or a strong understanding of, how AI/ML is used in offensive cyber operations (e.g., automated phishing, exploit generation)
- Experience with the MITRE ATLAS framework (Adversarial Threat Landscape for Artificial-Intelligence Systems) and MITRE ATT&CK
- Experience with scripting languages such as Python, Go, or PowerShell for security automation
- Experience with SIEM, SOAR, and EDR platforms, and an understanding of how to integrate AI/ML models into these workflows
- Experience with adversarial machine learning concepts (e.g., data poisoning, model inversion, evasion attacks)
- Demonstrated ability to translate technical security risks into business context and actionable recommendations

Preferred
- Excellent communication and collaboration skills, with the ability to influence technical and non-technical stakeholders
- Bachelor’s or Master’s degree in Computer Science, Information Security, or a related field
- Certifications: GCIH, GCTI, CISSP, or AI-specific security certifications
- Experience with deepfake detection technologies and media authentication standards
- Experience with Red Teaming AI systems or conducting adversarial simulations
- Knowledge of frameworks such as NIST AI RMF, ISO 42001, and NIST CSF
- Experience driving automation strategies, predictive analytics, and data-driven insights

Wage Transparency
Pay for this position is based on a number of factors including geographic location and may vary depending on job-related knowledge, skills, and experience. Based on applicable legislation, the pay ranges in the following locations are:

Washington, Maryland, New Jersey and New York (including NYC metro area): $151,200.00 – $222,450.00 base salary

This role is also eligible for the following:
Bonus: Sales personnel are eligible for variable incentive pay dependent on their achievement of pre-established sales goals. Non-Sales roles are eligible for a company bonus plan, which is calculated as a percentage of eligible wages and dependent on company performance.
Stock: This role is eligible to receive Restricted Stock Units (RSUs).