
Stay Ahead of Ransomware: The AI Arms Race – When Both Sides Have Copilots

As threat actors weaponize AI across the attack lifecycle, defenders are building their own AI-powered arsenals to keep pace in an escalating arms race.

Authored by Ryan Chapman & Raymond DePalma

Feb 14, 2026 (Episode recorded Feb 3, 2026)

In the February 2026 episode of the SANS “Stay Ahead of Ransomware” livestream, we flipped the script on the AI conversation. In January (recording, blog), we explored how threat actors weaponize AI across the attack lifecycle. This time, we brought in Raymond "Mr. AI" DePalma from Palo Alto Networks Unit 42 to tackle the defensive side and to help us answer the question: How can security teams harness AI to prevent, detect, and respond to ransomware and cyber extortion attacks?

Ray joined show hosts Ryan Chapman and Mari DeGrazia for a technical, demo-heavy session covering everything from foundational AI concepts to live demonstrations of AI-powered phishing classifiers and memory forensics. Ray also shared his open-source “AI for the Win” GitHub repository, which contains over 50 hands-on labs designed to help security professionals build practical AI skills. All for free!

Understanding LLMs and Agentic AI

Ray started with a foundational walkthrough of the AI landscape as it applies to cybersecurity. Large Language Models (LLMs) like OpenAI’s GPT series, Anthropic’s Claude, and Google’s Gemini are powerful prediction engines trained on vast text data. They’re primarily reactive: you ask a question, and they generate a response. They’re excellent at summarization, report drafting, and log analysis, but they’re limited to the data on which they were trained.

The real paradigm shift is agentic AI: LLMs integrated with tools, APIs, and external data sources, making them capable of autonomously planning and executing multi-step tasks. An agentic system can break down a complex investigation into sub-tasks, query databases, run code, browse the web for current threat intelligence, and self-correct along the way. This moves AI from a reactive assistant to a proactive collaborator in security workflows.
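
To make the agentic pattern concrete, here is a minimal Python sketch of that loop: a planner chooses the next tool call based on context so far, executes it, and feeds the result back in until the plan is complete. The planner and both tools are mocked stand-ins (the tool names, indicator, and responses are invented for illustration); in a real system the planner would be an LLM with tool-calling support.

```python
# Minimal sketch of an agentic loop: plan -> call tool -> observe -> repeat.
# All tools and the planner are mocks; real systems wire these to an LLM,
# a threat-intel API, and a SIEM.

def query_threat_intel(indicator: str) -> str:
    # Stand-in for a real threat-intel API call.
    known_bad = {"203.0.113.7": "flagged in recent ransomware C2 reporting"}
    return known_bad.get(indicator, "no matches found")

def search_logs(indicator: str) -> str:
    # Stand-in for a SIEM/log-search query.
    return f"3 hosts contacted {indicator} in the last 24h"

TOOLS = {"query_threat_intel": query_threat_intel, "search_logs": search_logs}

def mock_planner(goal, context):
    """Stand-in for the LLM: decide the next tool call from context so far."""
    indicator = goal.split()[-1]
    if not any("query_threat_intel" in step for step in context):
        return ("query_threat_intel", indicator)
    if not any("search_logs" in step for step in context):
        return ("search_logs", indicator)
    return None  # plan complete

def run_agent(goal):
    context = []
    while (step := mock_planner(goal, context)) is not None:
        tool, arg = step
        result = TOOLS[tool](arg)  # execute and feed the result back in
        context.append(f"{tool}({arg}) -> {result}")
    return context

if __name__ == "__main__":
    for line in run_agent("investigate suspicious connections to 203.0.113.7"):
        print(line)
```

The key difference from a plain chat interaction is the loop: each tool result lands back in the context, letting the planner decide the next step or stop.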

How Threat Actors Weaponize AI

Building on the January episode, Ray outlined how AI acts as a force multiplier for attackers. AI accelerates reconnaissance through automated social media and public data scraping. It enables polymorphic payload generation and AI-assisted evasion techniques that challenge traditional signature-based detection. We also discussed techniques like prompt injection, which allows attackers to bypass safety filters, along with the fact that some threat actors run malicious models locally without any restrictions at all.

Perhaps most concerning, AI lowers the barrier to entry. Less skilled attackers can now perform complex operations, from crafting convincing spear-phishing campaigns to handling ransom negotiations across multiple languages and cultural contexts, with AI doing the heavy lifting.

The Defender’s AI Toolkit

The conversation then shifted to how defenders can fight back. Ray walked through practical applications of AI across the incident response lifecycle, noting important concepts including:

  • Incident Timeline Generation: AI processes multi-source logs to create coherent attack timelines and map attacker actions to MITRE ATT&CK techniques: from initial access via macro-enabled documents, to spawning PowerShell, to credential dumping via renamed Mimikatz binaries.
  • Automated Alert Triage: AI accelerates threat detection by analyzing and prioritizing alerts, reducing the manual workload that bogs down SOC teams.
  • Detection Engineering: Continuous AI-assisted tuning of detection rules helps identify emerging threats and close coverage gaps.
  • Visualization: Tools like Plotly and Mermaid diagrams make attack timelines and threat actor attribution digestible for both technical teams and executives.
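
The first bullet above can be sketched in a few lines of Python: normalize events from multiple log sources, sort them chronologically, and tag each with a MITRE ATT&CK technique. The keyword-to-technique map, sources, and sample events below are illustrative stand-ins; a real pipeline would use an LLM or proper detection logic rather than keyword matching.

```python
# Sketch of timeline generation: merge multi-source events, sort, and tag
# each with an ATT&CK technique via simple (illustrative) keyword rules.

# Keyword -> (technique ID, technique name); illustrative subset only.
ATTACK_MAP = {
    "macro": ("T1566.001", "Phishing: Spearphishing Attachment"),
    "powershell": ("T1059.001", "Command and Scripting Interpreter: PowerShell"),
    "lsass": ("T1003.001", "OS Credential Dumping: LSASS Memory"),
}

def tag_event(message):
    lowered = message.lower()
    for keyword, technique in ATTACK_MAP.items():
        if keyword in lowered:
            return technique
    return ("T0000", "Unmapped")

def build_timeline(events):
    """events: list of (iso_timestamp, source, message) from any log source."""
    timeline = []
    # ISO-8601 timestamps sort correctly as strings.
    for ts, source, message in sorted(events, key=lambda e: e[0]):
        tid, tname = tag_event(message)
        timeline.append(f"{ts} [{source}] {tid} {tname}: {message}")
    return timeline

if __name__ == "__main__":
    events = [
        ("2026-02-03T14:12:09", "EDR", "winword.exe spawned powershell.exe"),
        ("2026-02-03T14:10:41", "email-gw", "Macro-enabled attachment opened"),
        ("2026-02-03T14:25:55", "EDR", "svc.exe read LSASS process memory"),
    ]
    print("\n".join(build_timeline(events)))
```

The resulting lines feed naturally into the visualization tools mentioned above, such as a Mermaid timeline diagram.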

The conversation then kicked into a higher gear when Ray showcased a live visualization of an attack timeline, demonstrating how AI mapped attacker actions from initial access through credential dumping and defense evasion, including EDR tampering, into a clear, presentable format.

AI in Incident Response Roles

Mari and Ryan discussed how AI is reshaping the dynamics of DFIR teams. Traditionally, specialists at every level (consultants, seniors, and principals) manually analyze logs from Entra ID, EDR systems, and other sources. AI changes this equation through scaled analysis of large log files, automated identification of malicious tools and techniques, and the ability to present complex findings in formats that executives and clients can understand.

Critically, AI democratizes expertise. Junior analysts and consultants can now perform advanced analysis with AI support, accelerating their growth and expanding team capacity without sacrificing quality.

Threat Attribution and Intelligence

Ray demonstrated how AI can assist with threat actor attribution, even at low to medium confidence levels. Starting with ransomware note analysis, AI can extract details that distressed clients might overlook: financial demands, urgency indicators, infrastructure details such as clear-net email addresses versus official Tor portals, and indicators of double- or multi-extortion techniques.

AI also provides contextual attribution insights; for example, identifying whether an attack leverages the leaked LockBit 3.0 builder with commodity infrastructure (suggesting an opportunistic actor) versus a more sophisticated, established group. Ray walked through how AI applies the Diamond Model to correlate intelligence from OSINT, threat intelligence providers, and other sources, analyzing code artifacts for language indicators, tooling such as Cobalt Strike and Remote Monitoring & Management (RMM) tools, and behavioral TTPs to build attribution profiles.
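
One way to picture this correlation step is as overlap scoring between observed evidence and known-actor profiles organized by Diamond Model vertex. The sketch below is purely illustrative: the actor names, profile features, and scoring rule are invented, and real attribution draws on far richer intelligence and is reported with explicit confidence levels.

```python
# Sketch of Diamond Model-style attribution scoring: compare observed
# features against (invented) actor profiles, vertex by vertex.

PROFILES = {
    "opportunistic-lockbit-affiliate": {
        "capability": {"leaked lockbit 3.0 builder", "commodity rmm"},
        "infrastructure": {"clear-net email"},
        "victim": {"smb"},
    },
    "established-group": {
        "capability": {"custom loader", "cobalt strike"},
        "infrastructure": {"tor portal", "dedicated leak site"},
        "victim": {"enterprise"},
    },
}

def score_attribution(observed):
    """observed: dict of vertex -> set of features; returns (actor, score) best-first."""
    results = []
    for actor, profile in PROFILES.items():
        total = sum(len(features) for features in profile.values())
        hits = sum(
            len(profile.get(vertex, set()) & features)
            for vertex, features in observed.items()
        )
        results.append((actor, hits / total))
    return sorted(results, key=lambda r: r[1], reverse=True)

if __name__ == "__main__":
    observed = {
        "capability": {"leaked lockbit 3.0 builder"},
        "infrastructure": {"clear-net email"},
    }
    for actor, score in score_attribution(observed):
        print(f"{actor}: {score:.2f}")
```

Even in this toy form, partial overlap yields only a partial score, which mirrors the low-to-medium-confidence framing Ray emphasized.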

Actionable Intelligence Pivoting

One of the show’s most practical segments focused on AI’s ability to generate suggested pivots and investigative queries based on observed activity. This capability is especially valuable for newer team members who may not yet have the instinct and experience of seasoned threat hunters. For example, during the initial identification assessment, AI can quickly surface relevant pivot points, helping DFIR resources assess the situation and focus their efforts where it matters most.
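
A minimal sketch of that pivot-suggestion idea, assuming a simple rule table: given a finding type, emit the follow-up queries a seasoned hunter would run next. The finding types, query text, and sample detail are invented for illustration; an LLM or playbook engine would generate these against a real SIEM/EDR query language.

```python
# Sketch of pivot suggestion: map an observed finding to follow-up
# investigative queries. Rules and wording are illustrative only.

PIVOT_RULES = {
    "renamed_binary": [
        "Search EDR for other hosts executing a binary with the same hash",
        "Hunt for PE OriginalFilename metadata that mismatches the on-disk name",
    ],
    "suspicious_login": [
        "Query Entra ID sign-in logs for the same source IP across all accounts",
        "Check for MFA fatigue patterns (repeated push denials) on the account",
    ],
}

def suggest_pivots(finding_type, detail):
    """Return suggested next queries, annotated with the observed context."""
    return [f"{query} [context: {detail}]"
            for query in PIVOT_RULES.get(finding_type, [])]

if __name__ == "__main__":
    for pivot in suggest_pivots("renamed_binary", "svchost_.exe on HOST01"):
        print(pivot)
```

The value for newer analysts is the same as described above: the system surfaces where to look next, while the analyst still decides what the results mean.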

The Problem of Hallucinations and Trust

The trio spent significant time on a critical topic: AI hallucinations. AI models can generate plausible but entirely fabricated information, such as inventing file paths, creating seamless narratives with no evidentiary gaps, or fabricating attribution details. Ryan shared a personal anecdote about an LLM that invented names and details when asked to analyze a calendar schedule, which underscores the real-world risks. In this case, the AI system even apologized to Ryan when he pointed out the issues, explicitly acknowledging that the results it had provided were hallucinations.

The consensus: A rigorous "trust but verify" approach is essential. This means performing quality assurance checks on AI-generated content, implementing multi-model verification with confidence scoring, always providing raw data alongside AI summaries for human validation, and continuously auditing and refining prompts and agent workflows. Human oversight isn't optional; it's the safety net that makes AI-powered security operations trustworthy.
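
The multi-model verification idea can be sketched simply: ask several models the same extraction question, score confidence by agreement, and accept a claim only if consensus is high and the answer can be matched back to the raw evidence. The three "models" below are mocked functions and the ransom-note snippet is invented; real implementations would call distinct LLM providers.

```python
# Sketch of multi-model verification with confidence scoring: majority vote
# across (mocked) models, plus a grounding check against the raw evidence.

from collections import Counter

def mock_model_a(question, evidence): return "LockBit 3.0"
def mock_model_b(question, evidence): return "LockBit 3.0"
def mock_model_c(question, evidence): return "BlackCat"  # simulated disagreement

MODELS = [mock_model_a, mock_model_b, mock_model_c]

def verify_claim(question, evidence, threshold=0.66):
    answers = Counter(model(question, evidence) for model in MODELS)
    answer, votes = answers.most_common(1)[0]
    confidence = votes / len(MODELS)
    # Grounding check: the claim must appear in the raw data shown to a human.
    grounded = answer.lower() in evidence.lower()
    verdict = "accept" if confidence >= threshold and grounded else "human review"
    return answer, round(confidence, 2), verdict

if __name__ == "__main__":
    evidence = "Ransom note references the LockBit 3.0 portal and a Tox ID"
    print(verify_claim("Which ransomware family is named?", evidence))
```

Note that the grounding check is what operationalizes "always provide raw data alongside AI summaries": a claim that cannot be traced back to the evidence goes to human review regardless of model agreement.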

Hands-On: AI for the Win Labs

Ray provided live demonstrations from his "AI for the Win" GitHub repository, which contains over 50 labs covering a range of skill levels. The labs are accessible via local setups, Docker, or Google Colab notebooks, and support multiple AI models, including Anthropic Claude, OpenAI GPT, Google Gemini, and local open-source alternatives via Ollama.

Key demonstrations included:

  • Phishing Classifier: A machine learning model that classifies phishing emails by type (credential phishing, BEC, spear phishing, and malware delivery) and generates campaign timelines to identify trends.
  • ML Model Evaluation: Applying confusion matrices and precision-recall curves to evaluate classifier performance and understand model reliability.
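
To connect the two demos above, here is a toy Python sketch: a keyword-based stand-in "classifier" plus hand-rolled precision and recall, the building blocks of the confusion-matrix evaluation Ray showed. The cue phrases, sample emails, and labels are invented; the actual labs use real ML models rather than keyword rules.

```python
# Toy sketch: a keyword "classifier" for phishing type, evaluated with
# hand-computed precision and recall. Cues and emails are illustrative.

CUES = {
    "credential_phishing": ["verify your password", "account suspended"],
    "bec": ["wire transfer", "urgent invoice"],
}

def classify(email):
    lowered = email.lower()
    for label, cues in CUES.items():
        if any(cue in lowered for cue in cues):
            return label
    return "benign"

def precision_recall(y_true, y_pred, positive):
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

if __name__ == "__main__":
    emails = [
        ("Your account suspended - verify your password now", "credential_phishing"),
        ("Please process this urgent invoice via wire transfer", "bec"),
        ("Team lunch on Friday?", "benign"),
        ("Quarterly report attached", "credential_phishing"),  # deliberately missed
    ]
    y_true = [label for _, label in emails]
    y_pred = [classify(text) for text, _ in emails]
    print(precision_recall(y_true, y_pred, "credential_phishing"))
```

The deliberately missed fourth email shows why evaluation matters: precision can be perfect while recall reveals the classifier is blind to phishing messages that use none of the known cues.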

The labs progress from Python basics and ML fundamentals to prompt engineering, LLM usage, and agentic AI development, culminating in advanced defensive applications such as automated log parsing, threat-hunting query generation, and hardening defenses against AI-powered attacks.

Mari asked about model compatibility, and Ray noted that the labs are intentionally agnostic: users can choose their preferred provider. The repository's guides section helps users map AI model capabilities to specific tasks, such as detection rule creation, malware analysis, and development workflows. No API keys are required for initial labs.

Learning More and Looking Forward

To learn more, we invite you to watch the February 3, 2026, episode of the SANS "Stay Ahead of Ransomware" livestream. Want to watch prior episodes? Be sure to check out our Stay Ahead of Ransomware playlist on YouTube.

Join us for the SANS "Stay Ahead of Ransomware" livestream on the first Tuesday of each month at 1:00 PM Eastern (10:00 AM Pacific).

Remember to check out our upcoming SANS training events, including FOR528: Ransomware and Cyber Extortion, where we dive into the technical details of preventing, detecting, and responding to ransomware and cyber extortion attacks. On the AI side of things, we also have FOR563: Applied AI for Digital Forensics and Incident Response: Leveraging Local Large Language Models, which teaches cyber defenders to leverage AI to aid in DFIR and IR investigations.