

As threat actors weaponize AI across the attack lifecycle, defenders are building their own AI-powered arsenals to keep pace in an escalating arms race.


Feb 14, 2026 (Episode recorded Feb 3, 2026)
In the February 2026 episode of the SANS “Stay Ahead of Ransomware” livestream, we flipped the script on the AI conversation. In January (recording | blog), we explored how threat actors weaponize AI across the attack lifecycle. This time, we brought in Raymond "Mr. AI" DePalma from Palo Alto Networks Unit 42 to tackle the defensive side and to help us answer the question: How can security teams harness AI to prevent, detect, and respond to ransomware and cyber extortion attacks?
Ray joined show hosts Ryan Chapman and Mari DeGrazia for a technical, demo-heavy session covering everything from foundational AI concepts to live demonstrations of AI-powered phishing classifiers and memory forensics. Ray also shared his open-source “AI for the Win” GitHub repository, which contains over 50 hands-on labs designed to help security professionals build practical AI skills. All for free!
Ray started with a foundational walkthrough of the AI landscape as it applies to cybersecurity. Large Language Models (LLMs) like OpenAI’s GPT series, Anthropic’s Claude, and Google’s Gemini are powerful prediction engines trained on vast text data. They’re primarily reactive: you ask a question, and they generate a response. They’re excellent at summarization, report drafting, and log analysis, but they’re limited to the data on which they were trained.
The real paradigm shift is the integration of agentic AI LLMs with tools, APIs, and external data sources, making them capable of autonomously planning and executing multi-step tasks. An agentic system can break down a complex investigation into sub-tasks, query databases, run code, browse the web for current threat intelligence, and self-correct along the way. This moves AI from a reactive assistant to a proactive collaborator in security workflows.
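Conceptually, the agentic pattern boils down to a plan-execute-observe loop. Here is a minimal, self-contained Python sketch with a stubbed planner standing in for an LLM; the tool names and plan format are illustrative assumptions, not any particular framework's API.

```python
# Minimal sketch of an agentic loop: a (stubbed) planner breaks a task into
# tool calls, executes each one, and records the results. A real system
# would replace fake_planner with an LLM call; tool names are illustrative.

def query_ioc_db(indicator: str) -> str:
    # Stand-in for a threat-intelligence lookup tool.
    return f"no prior sightings of {indicator}"

def search_logs(term: str) -> str:
    # Stand-in for a log-search tool.
    return f"3 hosts matched '{term}'"

TOOLS = {"query_ioc_db": query_ioc_db, "search_logs": search_logs}

def fake_planner(task: str, history: list) -> list:
    # A real agent would ask an LLM for the next steps given the history,
    # allowing it to self-correct; this stub plans once, then stops.
    if not history:
        return [("query_ioc_db", "evil.example.com"),
                ("search_logs", "evil.example.com")]
    return []  # done

def run_agent(task: str) -> list:
    history = []
    while True:
        steps = fake_planner(task, history)
        if not steps:
            break
        for tool_name, arg in steps:
            result = TOOLS[tool_name](arg)
            history.append((tool_name, arg, result))
    return history

if __name__ == "__main__":
    for step in run_agent("investigate suspicious domain"):
        print(step)
```

The loop structure, not the stubbed logic, is the point: the model decides which tools to invoke and sees the results before deciding what to do next.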
Building on the January episode, Ray outlined how AI acts as a force multiplier for attackers. AI accelerates reconnaissance through automated social media and public data scraping. It enables polymorphic payload generation and AI-assisted evasion techniques that challenge traditional signature-based detection. We also discussed techniques like prompt injection, which allows attackers to bypass safety filters, along with the fact that some threat actors run malicious models locally without any restrictions at all.
Perhaps most concerning, AI lowers the barrier to entry. Less skilled attackers can now perform complex operations, from crafting convincing spear-phishing campaigns to handling ransom negotiations across multiple languages and cultural contexts, with AI doing the heavy lifting.
The conversation then shifted to how defenders can fight back, as Ray walked through practical applications of AI across the incident response lifecycle.
The discussion kicked into a higher gear when Ray showcased a live visualization of an attack timeline, demonstrating how AI mapped attacker actions from initial access through credential dumping and defense evasion, including EDR tampering, into a clear, presentable format.
Mari and Ryan discussed how AI is reshaping the dynamics of DFIR teams. Traditionally, specialists at every level (consultants, seniors, and principals) manually analyze logs from Entra ID, EDR systems, and other sources. AI changes this equation through scaled analysis of large log files, automated identification of malicious tools and techniques, and the ability to present complex findings in formats that executives and clients can understand.
Critically, AI democratizes expertise. Junior analysts and consultants can now perform advanced analysis with AI support, accelerating their growth and expanding team capacity without sacrificing quality.
Ray demonstrated how AI can assist with threat actor attribution, even at low to medium confidence levels. Starting with ransom note analysis, AI can extract details that distressed clients might overlook: financial demands, urgency indicators, infrastructure details such as clear-net email addresses versus official Tor portals, and indicators of double- or multi-extortion techniques.
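As a rough illustration of the extraction step, the following stdlib-only Python sketch pulls clear-net email addresses, Tor onion portals, and urgency language out of a sample ransom note. The regular expressions and keyword list are simplified assumptions for demonstration, not Unit 42 tooling.

```python
import re

# Simplified sketch: pull attribution-relevant indicators out of a ransom
# note with regular expressions. Patterns and keywords are illustrative,
# not production-grade detection logic.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
ONION_RE = re.compile(r"\b[a-z2-7]{16,56}\.onion\b")  # v2/v3 onion lengths
URGENCY = ("72 hours", "deadline", "permanently deleted", "double")

def extract_indicators(note: str) -> dict:
    text = note.lower()
    return {
        "emails": EMAIL_RE.findall(note),
        "onion_portals": ONION_RE.findall(text),
        "urgency_hits": [kw for kw in URGENCY if kw in text],
    }

note = """Your files are encrypted. Contact support@recovery-mail.example
or visit our portal at abcdefghij234567.onion within 72 hours, or your
data will be permanently deleted and published (double extortion)."""
print(extract_indicators(note))
```

An LLM layered on top of this kind of structured extraction can then reason about what the indicators suggest, such as a clear-net email pointing toward a less operationally mature actor.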
AI also provides contextual attribution insights; for example, identifying whether an attack leverages the leaked LockBit 3.0 builder with commodity infrastructure (suggesting an opportunistic actor) or a more sophisticated, established group. Ray walked through how AI applies the Diamond Model to correlate intelligence from OSINT, threat intelligence providers, and other sources, analyzing code artifacts for language indicators, tooling such as Cobalt Strike and Remote Monitoring & Management (RMM) tools, and behavioral TTPs to build attribution profiles.
One of the show’s most practical segments focused on AI’s ability to generate suggested pivots and investigative queries based on observed activity. This capability is especially valuable for newer team members who may not yet have the instinct and experience of seasoned threat hunters. For example, during the initial identification assessment, AI can quickly surface relevant pivot points, helping DFIR resources assess the situation and focus their efforts where it matters most.
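To make the idea concrete, here is a toy Python sketch that maps observed activity to suggested pivots. A real system would have an LLM generate these contextually from the evidence at hand; the lookup table below is invented purely for illustration.

```python
# Toy sketch: map observed activity to suggested investigative pivots.
# In practice an LLM would generate these contextually; this lookup
# table is an invented illustration of the concept.
PIVOT_MAP = {
    "credential_dumping": [
        "Search EDR telemetry for LSASS access from non-system processes",
        "Review authentication logs for new logons by the dumped accounts",
    ],
    "edr_tampering": [
        "Check for service stop/uninstall events targeting security tooling",
        "Hunt for suspicious driver loads in the same time window",
    ],
    "rmm_tool_install": [
        "Inventory hosts for unauthorized RMM agents",
        "Pivot on outbound connections to the RMM vendor's infrastructure",
    ],
}

def suggest_pivots(observed: list) -> list:
    """Return suggested next queries for each observed activity."""
    suggestions = []
    for activity in observed:
        suggestions.extend(PIVOT_MAP.get(activity, []))
    return suggestions

for pivot in suggest_pivots(["credential_dumping", "edr_tampering"]):
    print("-", pivot)
```

Even this static version hints at the value for newer analysts: the observed technique, not the analyst's memory, drives the next question to ask.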
The trio spent significant time on a critical topic: AI hallucinations. AI models can generate plausible but entirely fabricated information, such as inventing file paths, creating seamless narratives with no evidentiary gaps, or fabricating attribution details. Ryan shared a personal anecdote about an LLM that invented names and details when asked to analyze a calendar schedule, underscoring the real-world risks. In this case, the AI system even apologized to Ryan when he pointed out the issues, explicitly acknowledging that the results it had provided were hallucinations.
The consensus: A rigorous "trust but verify" approach is essential. This means performing quality assurance checks on AI-generated content, implementing multi-model verification with confidence scoring, always providing raw data alongside AI summaries for human validation, and continuously auditing and refining prompts and agent workflows. Human oversight isn't optional; it's the safety net that makes AI-powered security operations trustworthy.
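A minimal sketch of the multi-model verification idea might look like the following; the "models" here are hard-coded stubs rather than real API calls, and the agreement score is a stand-in for a production confidence metric, not a guarantee of correctness.

```python
from collections import Counter

# Sketch of "trust but verify": ask several (here, stubbed) models the
# same question, then score agreement. Stub answers are invented; a real
# implementation would call actual model APIs.
def model_a(question): return "LockBit 3.0"
def model_b(question): return "LockBit 3.0"
def model_c(question): return "BlackCat"

def verify(question: str, models) -> tuple:
    """Return (majority answer, agreement score in [0, 1])."""
    answers = [m(question) for m in models]
    top_answer, count = Counter(answers).most_common(1)[0]
    return top_answer, count / len(answers)

answer, confidence = verify("Which builder produced this sample?",
                            [model_a, model_b, model_c])
print(answer, confidence)
# Low agreement should route the finding to a human analyst for review.
```

Note that agreement only measures consistency across models; the human-in-the-loop check against the raw evidence remains the actual safety net.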
Ray provided live demonstrations from his "AI for the Win" GitHub repository, which contains over 50 labs covering a range of skill levels. The labs are accessible via local setups, Docker, or Google Colab notebooks, and support multiple AI models, including Anthropic Claude, OpenAI GPT, Google Gemini, and local open-source alternatives via Ollama.
Key demonstrations included AI-powered phishing classification and AI-assisted memory forensics. The labs progress from Python basics and ML fundamentals to prompt engineering, LLM usage, and agentic AI development, culminating in advanced defensive applications such as automated log parsing, threat-hunting query generation, and hardening defenses against AI-powered attacks.
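In the spirit of the repository's ML-fundamentals labs, here is a stdlib-only toy: a naive Bayes-style phishing classifier with add-one smoothing, trained on a handful of hand-written examples. It is a sketch of the technique, not one of the repository's actual labs, and the training sentences are invented.

```python
import math
from collections import Counter

# Toy word-count phishing classifier: unigram naive Bayes with add-one
# smoothing, trained on invented examples. Illustrative only.
PHISH = ["verify your account password urgently",
         "your invoice payment is overdue click here",
         "account suspended confirm your password now"]
HAM = ["meeting notes attached for tomorrow",
       "lunch plans for friday",
       "quarterly report draft for review"]

def train(docs):
    counts = Counter(w for d in docs for w in d.split())
    return counts, sum(counts.values())

def score(text, counts, total, vocab_size):
    # Log-probability of the text under a unigram model (add-one smoothing).
    return sum(math.log((counts[w] + 1) / (total + vocab_size))
               for w in text.split())

def classify(text):
    p_counts, p_total = train(PHISH)
    h_counts, h_total = train(HAM)
    vocab = len(set(p_counts) | set(h_counts))
    p = score(text, p_counts, p_total, vocab)
    h = score(text, h_counts, h_total, vocab)
    return "phish" if p > h else "ham"

print(classify("please verify your password"))   # leans phishing
print(classify("draft notes for the meeting"))   # leans legitimate
```

Simple statistical baselines like this are exactly where the labs start before layering on LLM-based approaches.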
Mari asked about model compatibility, and Ray noted that the labs are intentionally agnostic: users can choose their preferred provider. The repository's guides section helps users map AI model capabilities to specific tasks, such as detection rule creation, malware analysis, and development workflows. No API keys are required for initial labs.
To learn more, we invite you to watch the February 3, 2026, episode of the SANS "Stay Ahead of Ransomware" livestream. Want to watch prior episodes? Be sure to check out our Stay Ahead of Ransomware playlist on YouTube.
Join us for the SANS "Stay Ahead of Ransomware" livestream on the first Tuesday of each month at 1:00 PM Eastern (10:00 AM Pacific).
Remember to check out our upcoming SANS training events, including FOR528: Ransomware and Cyber Extortion, where we dive into the technical details of preventing, detecting, and responding to ransomware and cyber extortion attacks. On the AI side of things, we also have FOR563: Applied AI for Digital Forensics and Incident Response: Leveraging Local Large Language Models, which teaches cyber defenders to leverage AI to aid in DFIR and IR investigations.


Ryan Chapman has redefined ransomware defense through hands-on leadership in major incidents like Kaseya and by arming thousands with proactive threat hunting tactics now standard across the industry.
Read more about Ryan Chapman

Raymond is a Principal DFIR Technical Architect with Unit 42's Engineering Team at Palo Alto Networks, bringing over 13 years of experience across the cybersecurity landscape from digital forensics and SOC operations to SIEM engineering, incident response consulting, solutions architecture, and now building AI-driven tooling for DFIR investigations at scale.
Read more about Raymond DePalma