

For the First Time, Every Threat Carries an AI Dimension
For more than a decade, the SANS Institute’s flagship RSAC keynote has served as the security industry’s most reliable early warning system, surfacing the attack techniques that will define the threat landscape before most organizations have had to face them. This year’s session, moderated by SANS Technology Institute President Ed Skoudis, delivered an unprecedented signal: for the first time in the history of this keynote, every one of the five most dangerous new attack techniques carries an AI dimension.
“We would be lying to you if we pointed out a trend in attacks that did not involve AI. That is just where we are in this industry,” Skoudis told the audience at Moscone Center.
The unifying theme is a collision of two forces: the complexity of modern infrastructure now defies the limits of human understanding, and AI is what both attackers and defenders are deploying to operate beyond that limit. Speed and comprehension are the twin crises every organization must confront, and the five techniques presented at this year’s session show exactly where those crises are breaking through.
Joshua Wright, Faculty Fellow and Senior Technical Director, SANS Institute | Counter Hack Innovations
Zero-day exploit development once required months of specialized research and cost millions from brokers, making these tools the exclusive domain of well-funded nation-state actors who deployed them sparingly. AI has collapsed that barrier entirely. Independent researchers have already demonstrated AI-discovered zero days in widely deployed production software, and Wright cited ongoing work from Google, Anthropic, and security researcher Sean Shields finding new vulnerabilities in code that has been reviewed many times over.
The economics have fundamentally shifted. When a zero day costs tokens rather than millions from a broker, the strategic logic of how attackers use them changes. Broad, opportunistic exploitation campaigns become economically viable for the first time, and capabilities once reserved for nation-states are now accessible to far less sophisticated threat actors. Wright quoted security researcher Nicholas Carlini: “Future LLMs will likely be better than any of us at identifying vulnerabilities and building exploits.”
“Attackers were already faster than us. AI has made the gap unbridgeable at our current pace,” Wright warned. He projected a near-term scenario where organizations face not one or two zero days per week, but hundreds, generated by AI at industrial scale. The Verizon 2024 DBIR found that half of all critical vulnerabilities remain unpatched 55 days after a fix becomes available, a window that was survivable when zero days were rare and expensive. It is not survivable in a world where AI generates new exploits faster than vendors can produce patches.
Wright acknowledged a possible brighter future in three to seven years, as AI helps reduce the number of software vulnerabilities overall. But the critical challenge, he argued, is surviving the transition period. Organizations must accelerate every phase of the patching lifecycle, automate wherever possible, and integrate AI-powered detection tools to match the speed at which attackers are already operating.
Joshua Wright, Faculty Fellow and Senior Technical Director, SANS Institute | Counter Hack Innovations
Supply chain compromise is no longer a rare risk affecting a handful of high-profile targets. According to ReversingLabs, 65% of organizations experienced a software supply chain attack in the past year. Third-party involvement in breaches has doubled to 30%, and in 2025 alone, more than 454,000 malicious packages were published to open-source registries, a 75% increase over the prior year. At the same time, AI-generated code is enabling malicious actors to produce and distribute compromised packages at scale.
To illustrate the scope of the problem, Wright deconstructed 7-Zip, a widely used utility that ships with many enterprise gold images. Beneath its minimal installer, he found 300 unique dependencies. “The problem of supply chain threats is not the software we choose,” Wright told the audience. “It’s the vendor’s software and their vendors’ software. It’s the entire ecosystem of software supporting that threat.” An attacker can exploit any one of those 300 packages to compromise the systems that depend on them.
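The scale Wright describes follows directly from transitive dependencies: a package's direct dependencies are only the first layer, and each of those pulls in its own. A minimal sketch of counting everything reachable from one install (the dependency graph below is hypothetical, not 7-Zip's actual tree, which would come from an SBOM tool):

```python
from collections import deque

# Hypothetical dependency graph: package -> direct dependencies.
# A real tree (e.g., the ~300 packages Wright found behind 7-Zip)
# would be extracted with an SBOM or package-manager tool.
deps = {
    "app":      ["compress", "ui"],
    "compress": ["zlib", "crypto"],
    "ui":       ["fonts", "crypto"],
    "crypto":   ["asn1"],
    "zlib":     [], "fonts": [], "asn1": [],
}

def transitive_deps(root, graph):
    """Breadth-first walk: every package reachable from root,
    i.e., everything an attacker could poison upstream."""
    seen, queue = set(), deque(graph.get(root, []))
    while queue:
        pkg = queue.popleft()
        if pkg not in seen:
            seen.add(pkg)
            queue.extend(graph.get(pkg, []))
    return seen

# Two direct dependencies expand to six unique attack-surface packages.
print(sorted(transitive_deps("app", deps)))
```

The point of the walk is that the attack surface is the full reachable set, not the two packages the installer names, which is exactly the ecosystem problem Wright deconstructed.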
Wright predicted a convergence of the session’s first two threats: AI-generated exploits delivered through supply chain channels. He quoted SANS founder Alan Paller: “If organized crime is not into hacking, then they should be sued for malpractice.” The same logic now applies to any adversary ignoring supply chain as an attack vector. The opportunity is too large and too powerful to overlook.
Organizations must plan for supplier compromise before it occurs, demand verifiable proof of how software was built, and extend their definition of supply chain to every update channel and developer tool their teams depend on daily. Seventy-nine percent of organizations have cybersecurity programs covering less than half of their supplier ecosystem. That gap is where the next major compromise is already forming.
Robert M. Lee, SANS Institute Fellow | CEO and Founder, Dragos, Inc.
When something fails inside critical infrastructure, the most urgent question is not how to restore operations as quickly as possible. It is what actually happened, and whether it was intentional. Recovering a plant without understanding what brought it down risks recovering it incorrectly, causing more damage in the process, or restoring operations directly into a compromised environment.
Lee traced a structural transformation that has made this problem exponentially harder. For decades, industrial environments were heterogeneous: a carbide cracker in Saudi Arabia had nothing in common with a water treatment facility in Pennsylvania. That heterogeneity made attacks difficult to scale. Organizations existed in a world of low frequency, high consequence. But for all the right reasons (efficiency, safety, profitability), the industry moved to homogeneous, software-defined environments with common frameworks, common software stacks, and common networking protocols. The scalability that benefits operators now benefits attackers too.
At the same time, the human expertise required for root cause analysis has eroded. Lee described how every facility used to have someone who knew the system across generations. In today’s world of complex, interconnected, software-defined OT environments, no single individual can perform root cause analysis alone. The system is too complex, requiring multiple integrators and specialists to even understand how the digital infrastructure operates.
The consequences are already visible. Lee disclosed that over the past year, he has worked on no fewer than a half-dozen cases where investigators could not determine whether a cyber attack caused a major disruption: outages at power systems, failures at manufacturing facilities, and an oil and gas explosion. In one case, a state-level adversary with a documented history of targeting safety instrumented systems to destroy equipment and kill people (referencing the 2017 TRITON attack in Saudi Arabia) breached an organization’s IT network. The organization had no OT monitoring in place. A month later, the facility exploded. Whether it was an attack or an accident remains unknown.
Agentic AI is now entering OT environments faster than most organizations realize, compressing adoption timelines from 18 months to three months. Lee warned of a potential market correction that could leave organizations dependent on AI-augmented automation from vendors that may not survive. The network traffic and commands that represent the evidentiary record of what occurred in an industrial environment are only available if they were captured before the failure event. If they were not collected, they are gone.
During the panel discussion, Lee addressed the realistic impact of future attacks directly: “It is very realistic that you could have outages that we don’t know how to recover in a reasonable amount of time. And you could absolutely have outages that lead to any physical consequence that’s physically possible in that environment, which absolutely includes the loss of life.” He referenced multiple state actors planning infrastructure disruption scenarios for potential conflicts, where the objective is to destabilize a country’s population enough to influence whether that country enters a war.
The SANS ICS Five Critical Controls and NERC CIP-015 provide a proven path forward. The investment decision cannot wait for the next incident to force it.
Heather Barnhart, Head of Faculty and Senior Forensic Expert, SANS Institute | Cellebrite
Every security team is being pushed to adopt AI, and in many contexts that pressure reflects genuine capability improvements. But Barnhart, one of the world’s leading DFIR practitioners, argued that deploying AI without the training, validation frameworks, and investigative discipline to use it reliably is creating a dangerous new failure mode from within.
She grounded this in the highest-stakes example possible: her own casework on the Idaho murders investigation. “If I allowed AI to work the Idaho murder investigation for me, Kohberger would have been found innocent because he cleared his digital footprint,” Barnhart told the audience. AI cannot find what it doesn’t know to look for. It cannot interpret the significance of absent data the way a trained investigator can. In high-stakes investigations, an AI system that returns a confident wrong answer without signaling uncertainty is not an efficiency gain. It is a liability that can shape case outcomes in ways that are extraordinarily difficult to detect or correct.
The threat extends beyond investigative accuracy. AI is also being used against organizations through channels no one is monitoring: a third-party legal advisor uploading proprietary documents to a commercial AI service with no guardrails, or a therapist using an AI note-taking tool without patient consent or security controls, which became the vector through which an attacker obtained sensitive personal information and leveraged it for extortion.
Vendor pressure compounds the problem. Forensic tool vendors are incorporating AI into their products faster than validation can keep pace, driven by competitive pressure to be first to market. “If you simply use AI and spit out a report, there’s a chance that you will lose all the credibility that you have in your career,” Barnhart warned.
To address this, Barnhart announced the release of two new AI integration frameworks developed with a team of experts. The first, a Digital Forensics & AI Investigation Framework mapped to Scientific Working Group on Digital Evidence (SWGDE) guidelines, takes a more restrictive approach: AI is recommended for identification, triage, and examination/analysis, but validation must remain exclusively human, and AI can never be used to associate an artifact with a person to prove attribution. The second, an Incident Response & AI Investigation Framework mapped to NIST CSF 2.0, allows broader AI integration because speed is more critical in incident response, while maintaining that lessons learned and final decision points must be human-driven. Both frameworks use a color-coded risk classification: AI Recommended (low risk), AI Optional (elevated risk), AI Never (high risk), and Human Driven.
Both frameworks will be released to the community as posters, with accompanying blogs and webinars. They will be updated dynamically as AI capabilities evolve. “Most breaches don’t fail because of tools. They fail at decision points,” Barnhart said. “The human is the decision point.”


Rob T. Lee, Chief AI Officer & Chief of Research, SANS Institute
The speed of cyberattacks has changed the math for defenders. At the Untrusted conference three weeks before RSAC, practitioners reported that AI-driven attack workflows have compressed the time from initial vulnerability analysis to exploit discovery down to a single day. In demonstrated scenarios, attackers escalated from initial intrusion to full domain administrator compromise in eight minutes. Once a public vulnerability is discovered, weaponization and deployment can follow within 24 hours.
The attack surface is expanding simultaneously. Lee cited data showing thousands of enterprise AI agents currently deployed with no authentication, 17,000 different payloads thwarting state-of-the-art defenses, and command lines of 48,000 characters defeating entropy-based detection in security sensors. Senior leaders, including in congressional testimony, are calling for defensive AI agents that can reason and react faster than any human, and for defenders to stop studying the AI threat from the sidelines and start building with it.
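The 48,000-character evasion works because entropy-based sensors score how random a command line's character distribution looks; padding an encoded payload with natural-looking text pulls the whole-string score back toward ordinary prose. A minimal sketch of the Shannon-entropy check such sensors rely on (the command strings and the effect shown are illustrative, not any vendor's actual detection logic):

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits per character of the string's empirical distribution."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Hypothetical encoded payload: base64 blobs score high on entropy.
encoded = "powershell -enc aGVsbG8gd29ybGQhISEhIQ=="
# Pad it toward tens of thousands of characters of natural-looking text.
padded = encoded + " the quick brown fox " * 2000

# The padding dominates the distribution, so the whole-string score
# drops below the encoded payload's, sliding under a fixed threshold.
print(round(shannon_entropy(encoded), 2), round(shannon_entropy(padded), 2))
```

Uniform text scores zero and a fair coin of two symbols scores exactly one bit, which is why a sensor thresholding on this single number can be diluted by sheer command-line length.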
“They have their artificial intelligence,” Lee told the audience. “Now we’ve got to build ours.”
That idea underpins Protocol SIFT, an open-source initiative from SANS Institute designed to help defenders keep pace. Lee demonstrated a proof of concept: the SIFT Workstation, his open-source forensics platform used by the industry for 18 years, embedded with Claude Code. Running against a complex, multi-week intrusion scenario, the system completed a full investigation in 14 minutes and 27 seconds, producing an executive summary, a complete attack timeline, indicators of compromise, and prioritized remediation recommendations. The same analysis would typically take a human analyst approximately three days.
The approach is intentionally constrained, consistent with Barnhart’s frameworks: AI is used to organize workflows, surface insights, and coordinate tools, but humans remain responsible for validating findings and making decisions. The goal is to accelerate analysts, not replace them.
Lee pointed to OpenClaw as proof that the defender community can build at speed: developed over a single weekend, it accumulated 250,000 lines of code on GitHub within 16 days, attracted 84,000 developers, and generated 350 distinct capabilities. His argument: if the open-source community can do that in 16 days, a focused hackathon can take Protocol SIFT from proof of concept to a production-grade defensive capability.
To that end, Lee announced the launch of a community-driven hackathon on April 15, with cash prizes, inviting the global defender community to develop Protocol SIFT into what he described as a “weaponized defensive capability” built by the community, for the community. “The adversaries, because there’s three or four of them, can’t hold hackathons,” Lee said. “We have this structural advantage.”
The RSAC 2026 keynote confirms what many practitioners have sensed but few have articulated this clearly: AI has become a through-line across every major attack category. From zero-day generation to supply chain distribution to OT misoperation to forensic failure to the speed of autonomous offense, each of the five techniques presented reflects a different facet of the same transformation. The question for every organization is no longer whether AI affects their threat model, but where in the kill chain it appears and whether their defenses are designed to operate at the same speed.
As Barnhart put it: “AI is not going to take your job. However, if you learn to use AI to make yourself more powerful, you will steal that person’s job.”
The SANS Top 5 Most Dangerous New Attack Techniques keynote was presented at RSAC 2026 on Tuesday, March 24, 2026. Moderated by Ed Skoudis, President, SANS Technology Institute. Panelists: Joshua Wright, Robert M. Lee, Heather Barnhart, and Rob T. Lee.
The Digital Forensics & AI Investigation Framework and the Incident Response & AI Investigation Framework will be released by SANS Institute as community resources. Follow SANS for release details.
The Protocol SIFT community hackathon launches April 15. Details at sans.org.