SEC595: Applied Data Science and AI/Machine Learning for Cybersecurity Professionals

Virtual
AI is transforming the enterprise at breakneck speed—and rewriting the threat landscape just as fast. As organizations adopt prompts, pipelines, agents, and autonomous systems, AI itself is becoming the newest insider threat. In this session, Nicole Carignan, SVP of Security & AI Strategy at Darktrace, exposes the emerging risks of the AI multiverse: weaponized prompts, rogue agents, Shadow AI, and identity‑drifting non‑human actors. She’ll outline a defense‑in‑depth blueprint for securing AI from prompt to pipeline, and reveal why securing AI now demands a radical shift in visibility, governance, and behavioral analytics.
*Sponsored by Darktrace
Virtual
Virtual
As AI systems become deeply embedded in business operations, security leaders may face emerging risks such as model tampering and hidden backdoors. This session introduces the latest Microsoft research on detecting backdoored language models, highlighting how subtle model poisoning techniques can create “sleeper agent” behaviors that appear normal until triggered. We’ll explore observable signatures, such as attention hijacking, output randomness collapse, and leakage of poisoning data, that can enable scalable detection of compromised models. Get equipped with practical guidance to help strengthen AI supply chain security, implement defense in depth, and improve organizational readiness against AI focused threats.
*Sponsored by Microsoft
Virtual
Virtual
AI agents are an integral part of the emerging AI Stack in organizations. OpenAI Frontier enables and powers this stack where processing payments, sensitive data, and customer interactions are handled. As capabilities scale, so does the risk surface. The question isn’t if agents will be attacked, but whether they have been rigorously tested before deployment. I will highlight Promptfoo’s Red Teaming framework, now a part of OpenAI Frontier to build and deploy safe, reliable AI systems. Using a payment-processing AI assistant, I will show how to define an agent’s risk surface, simulate adversarial behaviors, and evaluate outcomes with calibrated graders.
*Sponsored by Promptfoo
Virtual
The battlefield has changed. The enemy isn't just using manual scripts anymore; they're orchestrating complex attacks with AI frameworks and Model Context Protocol (MCP) servers, rendering traditional signature-based defenses obsolete. This session cuts through the hype to provide a technical, data-driven analysis of how FortiNDR identifies AI-driven threats hiding in legitimate, encrypted API traffic.
We’ll reveal specific findings from real-world scenarios—including malicious MCP servers exfiltrating email data and malware leveraging LLM APIs for command-and-control (C2). Discover how behavioral analytics on network metadata (TLS, DNS, and HTTP events) can uncover these invisible attack chains.
*Sponsored by Fortinet
Virtual
Agentic workflows enable a new model for securing the software supply chain: systems that continuously plan, act, verify, and remediate without waiting on human intervention. This session shows how to build an OS distribution that effectively secures and patches itself, using agentic build–scan–attest–validate pipelines. In a world where AI-generated code rapidly expands dependency risk and outpaces human review, “ship and patch” is no longer sufficient. Attendees will learn how to proactively control dependency intake, automate verification, and continuously reduce exposure—turning security from a reactive process into a built-in, self-improving system.
*Sponsored by Chainguard
Virtual
Virtual
AI has fundamentally changed the threat landscape — and it's happening now. Nation-state actors are actively deploying AI to automate vulnerability scanning, generate and execute exploits, bypass detections, and run social engineering campaigns at a speed and scale no human-operated security program can match. The evidence speaks for itself: APT31 used Gemini to automate RCE exploits, WAF bypasses, and SQL injections against U.S. targets. UNC795 uses AI-integrated code auditing and scanning tools multiple times a week to exploit CMS and PHP vulnerabilities. Coral Sleet ran an AI-powered IT worker impersonation scheme that compromised 300 companies and generated $800M in fraudulent revenue. Crimson Sandstorm leverages LLMs to disable antivirus and research malware evasion. And in 2025, the Drift supply chain campaign weaponized AI integrations to compromise 700+ organizations. The question is no longer whether AI-native attacks are coming. It's whether your defenses can keep up.
*Session Sponsored by Zafran
Virtual
Your board no longer asks whether AI is deployed. They want to know if it's working. And most security leaders can't answer - not because AI isn't delivering value, but because no one built the layer that measures it.
*Sponsored by Witness AI
Virtual
Securing Modern AI Apps: From Code to Runtime
AI adoption is driving a "Context Crisis" as AI-generated code ships 100x faster than traditional software. This rapid innovation has outpaced legacy security tools, leaving teams blind to their AI footprint and the "toxic combinations" of risk created by interconnected models, agents, and cloud data.
Join us for a demo of Wiz AI-APP, the end-to-end platform built to secure AI from development to production. You’ll learn how to unify cross-layer context across infrastructure, models, and application behavior to operationalize AI security.
*Sponsored by Wiz
Virtual
Virtual