SEC595: Applied Data Science and AI/Machine Learning for Cybersecurity Professionals


In computer security, the CIA Triad represented the three security properties systems should have: confidentiality, integrity, and availability. Of the three, integrity has been the most elusive; and, in an AI-powered internet-of-things world, the most important. This talk explores all the facets of integrity: data, processing, storage, and contextual. Web 1.0 was all about availability; Web 2.0 about privacy. If we are ever going to build the distributed, decentralized, intelligent web of tomorrow - and trust these systems to take complex on our behalf - we are going to need to solve integrity.
Topics to Include:
"Predictive AI Shrinks Brand Takedown Cycles: Weeks of Manual Triage to <7 Minutes, $12M ROI"
"AI Security Reality Check: Stop Chasing Shiny Threats, Do the Basics"
"DPRK insiders using AI was actually there detriment!"
"Phishing Without Phishers: Fully Automated AI Campaigns in the Wild"
This workshop introduces participants hands-on to the security of AI systems, using the OWASP AI Exchange (owaspai.org) as a framework. Attendees will gain insight through hacking labs how modern AI architecture operates, how it can be attacked, and how to secure them in real-world deployments. The workshop covers threats to LLMs and to conventional machine learning models, covering critical risks such as prompt injection, sensitive data leakage, model and data poisoning, supply chain threats, vector database vulnerabilities, excessive agent behavior, system prompt exposure, misinformation, and resource abuse.
Participants will leave being able to perform both AI and ML techniques for surfacing attacker activity in logs. The participant will learn about LLM limitations and how they can be overcome with additional ML techniques that allows an analyst to analyze larger data sets.
12 AI-powered cyber threat challenges 6 hours across two days Live scoreboard + competition format Based on real-world adversary tactics
The rise of OpenClaw hints at the pent-up demand for a truly useful personal AI assistant. Despite all our efforts to create layered security boundaries and implore adherence to principles like Zero Trust, it seems like all that was thrown out the window as millions rushed to install OpenClaw and experience an AI personal assistant that actually does things for you.
Indirect prompt injection is not just another vulnerability to patch. It is a structural reality of how large language models operate. This session explores how the context window, or "cram hole," contributes to the success of prompt injection exploits and why that reality fundamentally reshapes how we must think about trust, control, and data boundaries in AI systems.
The industry is fixated on the model. Jailbreaking it, guarding it, aligning it. But the most consequential AI security vulnerabilities aren't in the AI. They reside in the orchestration layer: serialization boundaries, state management, credential stores, and trust boundaries between agents. Old bug classes, new topology.
Hosted by: OWASP AI Exchange and SANS
INVITE ONLY
Topics to Include: "MCP Under Attack: Securing the New Trusted Control Plane"
"AI Isn’t the Risk. We Are, and That’s Where Security Must Change."
"When Humans Stop Thinking: The First Undetected Failure Mode of Agentic AI"
"AI In The SOC Without Losing The Plot"
"120 Days to AI-Driven QA/QC for Power Systems: Governing Accountability and Cyber Risk"
"Operationalizing AIBOMs: Policy-Gating Models & Datasets in AI Supply Chains"
Designing AI-Assisted Threat Hunting That Remembers
Are We AI’ing Much? GenAI-Washing, Specialized Agents, and the Security Trade-off
In this immersive, hands-on workshop, participants will use FinBot—an interactive, multi-agent Capture-the-Flag (CTF) platform—to attack and then defend a realistic agentic financial workflow:
Invoice Intake → Validation → Approval → Funds Transfer → Reconciliation
Working in a pre-configured cloud environment (no setup required), attendees will reproduce three high-impact failure modes observed in real-world multi-agent systems:
ASI01 – Agent Goal Hijack
ASI02 – MCP-Driven Indirect Zero-Click (Tool Misuse & Exploitation)
ASI05 – Unexpected Remote Code Execution (RCE)
Modern cyber defenders are inundated with vast volumes of raw threat reports, advisories, technical analyses, incident summaries, and narrative threat write-ups, which are rich in context but unstructured and difficult to operationalize. In this hands-on workshop, participants will learn how to build an AI-augmented threat intelligence platform using a popular data Lakehouse, the free edition of Databricks, that transforms unstructured reports into structured, actionable intelligence and then applies Generative AI features and analytics to extract high-value insights at scale.
12 AI-powered cyber threat challenges 6 hours across two days Live scoreboard + competition format Based on real-world adversary tactics
We all want to connect AI to everything and provide it with our data — as long as we can trust it. But how do we secure it? First, we need to understand what can go wrong: we need to identify and understand the threats.
Please join us for an In-Person Networking Breakfast. Share stories, make connections, and learn how to make the most of your week in Arlington, VA. Complimentary coffee and breakfast items to be provided. Hope to see you there!