SEC595: Applied Data Science and AI/Machine Learning for Cybersecurity Professionals

In computer security, the CIA Triad represented the three security properties systems should have: confidentiality, integrity, and availability. Of the three, integrity has been the most elusive; and, in an AI-powered internet-of-things world, the most important.
A year ago, AI-assisted cyber operations were mostly a trouble-shooting story, threat actors trouble-shooting tasks faster. That's no longer the picture. Drawing on Anthropic's threat intelligence from nearly 600 banned actors over twelve months and going deep on a few cases studies, this talk walks through what's actually changed:
AI attack workflows run 47 times faster than human operators. Your adversary already has agentic AI. The question is whether defenders do too. Rob T. Lee wired Claude Code into the SIFT Workstation via Model Context Protocol. Two words typed. Fourteen minutes later: a complete C drive forensic analysis, timeline generation, memory analysis, malware sweeps, all via natural language.
Speaker 1: Chris Hughes—"What Got Us Here Will Get Us There"
Speaker 2: Daniel Bardenstein—"AI Security Reality Check: Stop Chasing Shiny Threats, Do the Basics"
Speaker 3: Kellep Charles—"No Current GenAI Model is Secure by Default Without Continuous Adversarial Testing"
Speaker 4: Harry Thomas—"The Blind Spot That Almost Killed Us: Why AI for ICS Needs IT/OT Integration"
Speaker 5: Josh Snavely—"Bridging the Geek—Wonk Divide: A Practical Guide to AI and Third-Party Risk Management Programs"
Speaker 6: Zakery Stufflebeam—"AI vs. AI: They're Already Inside"
Speaker 7: Yotam Perkal—"Phishing Without Phishers: Fully Automated AI Campaigns in the Wild"
Speaker 8: Ismael Valenzuela—"Vibe Detection Engineering: Accelerating Defense with Compound AI & Deception"
Speaker 9: Andre Piazza—"Predictive AI Shrinks Brand Takedown Cycles: Weeks of Manual Triage to <7 Minutes, $12M ROI"
Speaker 10: Jason Garman—Talk Title to be Announced
We all want to connect AI to everything and provide it with our data — as long as we can trust it. But how do we secure it? First, we need to understand what can go wrong: we need to identify and understand the threats.
SANS AI Gauntlet
12 challenges. Think you can handle all of them? (You can’t.)
AI can write your scripts, automate your SOC, and streamline your operations—but it can also be tricked, broken, and exploited in ways most teams aren’t ready for. SANS AI Gauntlet is a hands-on, points-based tournament where you’ll take on 12 challenge sets targeting real vulnerabilities in AI-enabled systems and IoT environments. Over 6 hours across 2 days at the SANS AI Summit, you’ll race the scoreboard, test your skills, and learn exactly how AI systems fail in the real world.
Dive into the wonderful world of using machine learning (ML) and large language models (LLM) to surface attacker activity.
AI is not replacing attackers, it is amplifying their capability, speed, and scale. This fully hands on workshop walks participants through deploying a vulnerable application locally using Docker and executing real exploitation exercises in a controlled environment.
This workshop introduces participants hands-on to the security of AI systems, using the OWASP AI Exchange (owaspai.org) as a framework. Attendees will gain insight through hacking labs how modern AI architecture operates, how it can be attacked, and how to secure them in real-world deployments. The workshop covers threats to LLMs and to conventional machine learning models, covering critical risks such as prompt injection, sensitive data leakage, model and data poisoning, supply chain threats, vector database vulnerabilities, excessive agent behavior, system prompt exposure, misinformation, and resource abuse.
Join us for an evening of networking with AI/ML experts and industry leaders at the Continental Pool Hall & Beer Garden. Exchange ideas, spark collaborations, and unwind with the brightest minds in AI.
Location: Continental Pool Lounge & Beer Garden
The rise of OpenClaw hints at the pent-up demand for a truly useful personal AI assistant. Despite all our efforts to create layered security boundaries and implore adherence to principles like Zero Trust, it seems like all that was thrown out the window as millions rushed to install OpenClaw and experience an AI personal assistant that actually does things for you.
Indirect prompt injection is not just another vulnerability to patch. It is a structural reality of how large language models operate. This session explores how the context window, or "cram hole," contributes to the success of prompt injection exploits and why that reality fundamentally reshapes how we must think about trust, control, and data boundaries in AI systems.
The industry is fixated on the model. Jailbreaking it, guarding it, aligning it. But the most consequential AI security vulnerabilities aren't in the AI. They reside in the orchestration layer: serialization boundaries, state management, credential stores, and trust boundaries between agents. Old bug classes, new topology.
Panel moderated by: Sam Sabin, Cybersecurity Reporter at Axios
INVITATION ONLY
AI Security Policy Forum, April 21, 12:30-4:30 PM
A closed, invite-only gathering of selected policy stakeholders and standardization leaders. Convened by the OWASP AI Exchange, in partnership with SANS Institute. The forum will take place on April 21 alongside the SANS AI Summit at a venue nearby.
Speaker 1: Chris Cochran—"Dancing Between Raindrops: 3 Keys to Weather the Autonomous Attack Storm"
Speaker 2: Teri Green—"AI Isn’t the Risk. We Are, and That’s Where Security Must Change"
Speaker 3: Ferhat Dikbiyik, Ph.D.—"Are We AI’ing Much? GenAI-Washing, Specialized Agents, and the Security Trade-off"
Speaker 4: Charles Everette—"AI In The SOC Without Losing The Plot"
Speaker 5: Bryant Pickford—"When AI Becomes the Attack: How Threat Actors Weaponize AI APIs to Hide Their Tracks"
Speaker 6: Marissa Morales-Rodriguez, Ph.D.—"AI Turned an Engineering Workflow into a Security Boundary"
Speaker 7: Sydney Marrone—"Designing AI-Assisted Threat Hunting That Remembers"
Speaker 8: Dr. Ugur Koc—"Operationalizing AIBOMs: Policy-Gating Models & Datasets in AI Supply Chains"
Speaker 9: Allen Westley—"When Humans Stop Thinking: The First Undetected Failure Mode of Agentic AI"
Speaker 10: Yevhen Pervushyn—"MCP Under Attack: Securing the New Trusted Control Plane"
INVITATION ONLY
SANS AI Gauntlet
12 challenges. Think you can handle all of them? (You can’t.)
AI can write your scripts, automate your SOC, and streamline your operations—but it can also be tricked, broken, and exploited in ways most teams aren’t ready for. SANS AI Gauntlet is a hands-on, points-based tournament where you’ll take on 12 challenge sets targeting real vulnerabilities in AI-enabled systems and IoT environments. Over 6 hours across 2 days at the SANS AI Summit, you’ll race the scoreboard, test your skills, and learn exactly how AI systems fail in the real world.
In this immersive, hands-on workshop, participants will use FinBot—an interactive, multi-agent Capture-the-Flag (CTF) platform—to attack and then defend a realistic agentic financial workflow:
Invoice Intake → Validation → Approval → Funds Transfer → Reconciliation
Modern cyber defenders are inundated with vast volumes of raw threat reports, advisories, technical analyses, incident summaries, and narrative threat write-ups, which are rich in context but unstructured and difficult to operationalize. In this hands-on workshop, participants will learn how to build an AI-augmented threat intelligence platform using a popular data Lakehouse, the free edition of Databricks, that transforms unstructured reports into structured, actionable intelligence and then applies Generative AI features and analytics to extract high-value insights at scale.
Please join us for an In-Person Networking Breakfast. Share stories, make connections, and learn how to make the most of your week in Arlington, VA. Complimentary coffee and breakfast items to be provided. Hope to see you there!