SEC595: Applied Data Science and AI/Machine Learning for Cybersecurity Professionals


NOW // AI delivers one of three immersive, hands-on executive workshops developed by SANS Institute and siberX to protect, operationalize, and govern AI systems under real-world pressure. Join us May 27 in Toronto.

These workshops combine SANS Secure AI Blueprint principles with siberX’s high-fidelity simulation environment to deliver hands-on, decision-driven training for leaders and practitioners responsible for AI systems in production. Review the three executive tracks below and choose the one that best aligns with your AI oversight and enterprise risk priorities.
Led by SANS Senior Instructor Jason Lam, Director at World Wide Technology and author of LDR520: Emerging Trends for Cyber Leaders: AI, Cloud, and Cybersecurity Strategy, the Protect AI workshop prepares leaders to secure AI systems operating in production environments.
Participants examine real-world AI failure scenarios, including prompt injection, data poisoning, over-permissive agents, and model compromise. The session emphasizes control validation, response coordination, and executive accountability when AI systems are under operational pressure.
Grounded in enterprise experience across regulated industries, this workshop challenges participants to strengthen defensive strategy while maintaining business continuity and innovation velocity.

Led by SANS instructor Foster Nethercott, drawing on themes from SEC535: Offensive AI Attack Tools and Techniques, the Utilize AI workshop prepares security practitioners to apply AI in cyber operations, penetration testing, and red teaming, with lessons that inform defense as well.
Built on operational concepts from SEC535, participants learn where AI-enabled attackers exploit speed gaps, how to safely automate cybersecurity workflows, and how to validate defenses against AI-driven tooling under real-world pressure.

Led by SANS instructor Kevin Garvey, drawing on themes from LDR516: Strategic Vulnerability and Threat Management, the Govern AI workshop prepares executives, managers, and risk leaders to govern AI systems with clear accountability and defensible decision-making.
Grounded in leadership and governance concepts from LDR516, participants translate technical AI risk into executive-level strategy, establish ownership across teams, and practice making high-impact decisions under regulatory and operational pressure. Participants also engage in gamified learning through the SANS Cyber42 leadership exercise.

What participants will be able to do after completing an AI Under Pressure workshop. These outcomes apply across the Protect AI, Utilize AI, and Govern AI workshops.
Pinpoint where AI systems introduce meaningful security, operational, and governance risk across LLMs, RAG pipelines, agentic systems, and AI-enabled workflows.
Make informed decisions by balancing technical realities, business impact, regulatory exposure, and accountability in high-stakes AI scenarios.
Apply SANS Secure AI Blueprint principles to real-world, production environments rather than theoretical models or lab-only scenarios.
Support AI deployment, pause, or shutdown decisions based on evidence, shared decision language, and operational readiness rather than assumptions or vendor claims.
What organizations gain when participants return from an AI Under Pressure workshop.
Gain a clear view of where current AI systems are exposed to meaningful risk, and which issues demand action now versus later.
Leave with a practical, prioritized plan for what to secure, automate, govern, or pause based on real operational constraints.
Establish shared decision language across executives, managers, and technical teams, reducing friction and delay during AI-related incidents.
Increase confidence in approving, deploying, or stopping AI initiatives based on evidence, accountability, and readiness rather than assumptions or vendor claims.