SEC595: Applied Data Science and AI/Machine Learning for Cybersecurity Professionals

Virtual
James and his new venture, Harmonic Security, have been working with enterprises around the world to help them adopt AI with confidence. In this talk, he'll share candid insights into what is and isn't going well as organisations navigate the transformative power of AI in their workforce.
*Sponsored by Harmonic
Virtual
AI is already here, embedded in every SaaS application your employees touch, from Slack and Microsoft 365 to "shadow" tools like Claude and Grammarly. For security teams, the "Block All" era is over, but the "Allow All" era is a data-loss nightmare. To survive the AI explosion, organizations must move from reactive blocking to proactive governance.
In this session, we will explore why traditional CASB falls short and demonstrate how a browser-centric approach provides the "last mile" visibility needed to protect corporate IP.
*Sponsored by Palo Alto Networks
Virtual
AI is transforming how organizations operate—but without a security strategy that spans the entire AI lifecycle, innovation can quickly become a liability. We will talk about strategies to align your security posture with your AI ambitions before a breach forces the conversation.
*Sponsored by Orca Security
Virtual
In large, fast-moving engineering organizations shipping products at scale, threat modeling remains one of the most valuable security practices. It enables developers to identify potential security issues early in design, but traditional approaches struggle to keep pace with modern engineering velocity. Reviews are often point-in-time, expert-driven, and difficult to sustain consistently as product complexity and delivery speed increase. This session explores how AI can act as a force multiplier, shifting threat modeling from a centralized security function to a scalable, self-service capability that empowers product teams to take greater ownership of security design decisions.
*Sponsored by Adobe
Virtual
AI systems evolve faster than traditional security testing can keep up. Join F5 experts to learn how AI Red Team accelerates continuous adversarial testing across models, apps and agents—using an extensive attack database, multi-turn Agentic Resistance campaigns, and operational stress tests—to surface vulnerabilities before they’re exploited.
We’ll demo how severity and risk-scored results and Agentic Fingerprints produce audit-ready, explainable reports and show how findings can be operationalized into runtime protections via F5 AI Guardrails.
*Sponsored by F5
Virtual
As organizations rapidly adopt AI tools, the real emerging risk isn’t just AI-powered attacks—it’s unmanaged AI usage inside your own environment. Shadow AI, unclear ownership, and policy lag are quietly expanding the attack surface and introducing operational, compliance, and data exposure risks. Drawing on data from our latest State of Trust report, this session explores where organizations are falling behind and offers practical steps to implement AI governance structures and policies that reduce risk without slowing innovation.
*Sponsored by Vanta
Virtual
Autonomous AI tools are no longer a concept; they're actively shaping workflows, decisions, and operations across enterprises. But as these tools act on their own, intent becomes the new attack surface.
Industry predictions suggest that by 2030, over 50% of software will include agentic AI, driving more than $450B in revenue by 2035. Yet the reality is that once enterprises deploy AI-powered applications and agents, they face entirely new risks: prompt injection, data leakage, unsafe tool use, model manipulation, and unpredictable behavior across complex AI pipelines.
Join Elad Schulman for a practical, real-world discussion on where AI security truly matters in autonomous agents.
*Sponsored by Lasso Security
Virtual
Security leaders are confronting a new generation of AI-powered email attacks that are faster, more convincing, and more scalable than ever before. Even organizations with mature security stacks are discovering that traditional controls, secure email gateways, and manual review processes are not designed for adversaries leveraging generative AI. In this session, you'll hear a candid, first-hand account from a security leader who believed his organization had built a best-in-class security program—until a single AI-crafted email exposed a critical gap.
Through real-world incidents, including payroll fraud, sextortion, and vendor email compromise, we'll examine how attackers use AI to enhance social engineering, personalize pretexting at scale, and exploit human trust, urgency, and authority. We'll break down what changed in the threat landscape, why conventional detection models struggle against AI-generated attacks, and how security teams must rethink controls at the human layer.
Attendees will leave with practical guidance to:
- Identify indicators of AI-augmented social engineering
- Reduce human-layer risk without increasing operational friction
- Align people, process, and technology to counter evolving BEC tactics
- Build resilience against increasingly adaptive AI-driven threats
This session delivers actionable insights for security practitioners navigating the rapidly shifting intersection of AI and email-based threats.
*Sponsored by Abnormal AI
Virtual
AI-assisted coding is dramatically accelerating software development. Code that once took weeks to design, implement, and review can now be generated, tested, and merged in minutes. But most application security programs were built for a slower, human-paced development process with design reviews, pull requests, and investigations acting as control points. As those checkpoints shrink or disappear, security teams are being asked to evaluate risk at machine speed.
This session explores how Semgrep security workflows can help close that gap. We'll show how combining deterministic program analysis with AI reasoning can automate discovery, detection, triage, and remediation, including a live demo of security workflows that detect business logic vulnerabilities in real code within an AI-assisted development pipeline.
*Sponsored by Semgrep