
AI Isn’t a Tool Anymore. It’s an Operator. Notes from the SANS AI Cybersecurity Summit

The Summit Confirmed What We Already Knew (But Hoped Wasn't True)

Authored by Rob T. Lee

The SANS AI Cybersecurity Summit delivered the same message from different speakers: attackers are operating autonomously at machine speed, our defenses are still human-speed, and the gap keeps widening in their favor.

Anne Neuberger, who spent years thinking about this at NSA, described the shift with brutal clarity. In the past, "it was a race against an attacker jiggling a doorknob at a time. Now what we have is a set of attackers who can really jiggle every doorknob all at once." She meant it literally. Not checking one system, one user, or one service, but all of them. Simultaneously. Automated. That's not a tactic. That's infrastructure.

Greg Isenberg captured the economics even more precisely: "AI made it 100x easier to build. It also made it 100x easier to attack." The barrier to entry collapsed for both sides at the same time. The old asymmetry is dead.

The uncomfortable part wasn't the threat data. It was watching people realize that most of what we've built for "incident response" assumes human-speed attacks that unfold over weeks. Attackers now operate in seconds. (I've been saying this for years. Seeing it land in the room validated something I was hoping I was wrong about.)

Here's what actually changed my thinking during the summit.

The Pattern We're Missing

Jacob Klein from Anthropic was direct: "The landscape has already changed. It's not just changing in the future, though it is. The landscape has changed today." He pulled numbers showing 48% of tracked malicious actors saw meaningful capability uplift from AI between July 2025 and February 2026. Not just usage: uplift. The median attacker isn't using AI for one step of their chain anymore; they're automating an average of 16 techniques from reconnaissance through exfiltration. (I've seen the data. This isn't speculation.)

That's not "AI-assisted attacks." That's AI as infrastructure.

Yotam Perkal nailed the threat modeling problem: "Stop threat modeling AI like a tool and start threat modeling it like an operator." We're still modeling attackers as humans using tools, but AI introduces what he called a "third operator" into our environments. When you give an autonomous agent business context, access to tools, and permission to act, it stops being software. It becomes an actor, a privileged actor. And yet we're still threat modeling it like it's a calculator. (Yeah, we're way behind on this.)

But here's what landed harder: Julie Davila from GitLab pointed out that we're looking in the wrong place. "Failures rarely occur where you're watching. They occur in the seams between the components." While everyone obsesses over prompt injections and model outputs, the real vulnerabilities live in the orchestration layers, the APIs, and the points where untrusted AI output gets serialized into privileged execution boundaries without validation.

Complex systems don’t usually fail at the obvious points. They fail at the integration points no one thought to harden because they weren't built with AI in mind. Any time AI-generated data crosses a trust boundary that assumes human input, failure is guaranteed. Sounil Yu put it differently: "AI is a great magnifying glass in the sense that a lot of these issues are the result of brittle primitives that existed well before the LLM craze." We didn't invent these problems. AI just weaponized them.
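To make the seam problem concrete, here's a minimal sketch (with hypothetical action names and schema, just to show the shape of the check) of what a hardened boundary can look like: the agent's proposed action is treated as untrusted input and validated against an explicit allowlist before anything privileged runs.

```python
from dataclasses import dataclass

# Only actions we've explicitly decided an agent may request; everything else
# is rejected before it reaches privileged code. Names are hypothetical.
ALLOWED_ACTIONS = {
    "read_ticket": {"ticket_id": str},
    "add_comment": {"ticket_id": str, "body": str},
}

@dataclass
class ProposedAction:
    name: str
    args: dict

def validate(action: ProposedAction) -> None:
    """Treat AI output as untrusted input at the trust boundary."""
    schema = ALLOWED_ACTIONS.get(action.name)
    if schema is None:
        raise PermissionError(f"action {action.name!r} is not allowlisted")
    if set(action.args) != set(schema):
        raise ValueError(f"unexpected arguments for {action.name!r}")
    for key, expected_type in schema.items():
        if not isinstance(action.args[key], expected_type):
            raise TypeError(f"{key} must be {expected_type.__name__}")

# The agent proposes something outside its lane; the boundary refuses.
try:
    validate(ProposedAction("delete_project", {"project_id": "42"}))
except PermissionError as err:
    print(f"blocked at trust boundary: {err}")
```

Boring code, deliberately. The point is where it sits: at the seam, on every hop, not inside the model.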

The meta-problem underneath all of this: we built security controls assuming we were defending against humans or relatively simple malware. Not autonomous agents. Not systems that can test thousands of variations in the time it takes you to sip your coffee. (And yes, someone in the room actually said this problem was "unprecedented." I hate that word. It's rarely unprecedented; we're just repeating history faster.)

What Actually Matters Right Now

So, the threat is real, and we're unprepared. What do you do Monday morning?

Stop modeling the AI. Start modeling the workflow. Most teams I talk to are still thinking about "securing the model" or "securing the output." That's the wrong frame. The question isn't whether Claude could do something bad. The question is: what does your workflow do when Claude makes a bad decision? If your AI system can approve a transaction, trigger an alert, or query sensitive data, you're not running a chatbot; you're running a system with business authority.

Map every place in your environment where AI makes decisions or accesses data. Ask what happens when it’s wrong. Then ask whether a human is actually reviewing that decision or just clicking "approved" at machine speed. (That's still failure. You're just more efficient at it.)
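The map doesn't need tooling to start; even a literal table works. Here's a hypothetical sketch, with the "what happens when it's wrong" question answered in the same row (every workflow and entry below is invented):

```python
# Hypothetical first pass at a workflow map: one row per place an AI system
# makes a decision or touches data, with the failure question answered up front.
AI_DECISION_POINTS = [
    {
        "workflow": "fraud triage",
        "decision": "auto-close low-risk alerts",
        "if_wrong": "missed fraud, regulatory exposure",
        "human_review": "none",
    },
    {
        "workflow": "support assistant",
        "decision": "issue refunds under $50",
        "if_wrong": "direct financial loss",
        "human_review": "batch approval queue",  # a rubber stamp, not authority
    },
]

# Flag every decision point where review is absent or too fast to be real.
for point in AI_DECISION_POINTS:
    if point["human_review"] in ("none", "batch approval queue"):
        print(f"[REVIEW] {point['workflow']}: {point['decision']} -> {point['if_wrong']}")
```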

Know what you're running. Build a basic AI asset inventory. What models are your teams using? Where are they deployed? What data are they touching? Use AI Bills of Materials (AIBOMs) the same way you use software BOMs: now, not later. Most teams can't answer basic questions like "how many models are interacting with our sensitive data?" or "which models are developers pulling from Hugging Face?"

You can't defend what you don't know exists. (We learned this lesson 20 years ago with shadow IT. Apparently, we’re learning it again.)
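The inventory doesn't have to start fancy, either. Here's a simplified illustration of the record each model deployment should get; real AIBOM formats (CycloneDX's ML-BOM profile, for example) carry far more fields, and every entry below is invented:

```python
import json

# Simplified illustration of one AI asset inventory record. All values are
# made up; a real AIBOM would carry provenance, hashes, licenses, and more.
inventory = [
    {
        "model": "llama-3-8b-instruct",
        "source": "huggingface.co/meta-llama",
        "deployed_in": "internal-support-bot",
        "data_touched": ["customer tickets", "order history"],
        "owner": "support-engineering",
        "touches_sensitive_data": True,
    },
]

print(json.dumps(inventory, indent=2))
# Now the "basic question" has an answer:
print("models touching sensitive data:",
      sum(1 for m in inventory if m["touches_sensitive_data"]))
```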

Embrace deception. Ismael Valenzuela made a point that stuck: autonomous agents are fast but fragile. They're trained to trust environmental signals. A single fake breadcrumb, tripwire, or honeypot hostname can derail an entire automated attack chain. Deploy deception layers specifically designed for AI, not humans. The speed that gives attackers an advantage becomes a disadvantage when they're running into systems they can't interpret.
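To give a feel for how cheap this is, here's a minimal honeytoken sketch; the file name, key format, and alert hook are all hypothetical stand-ins:

```python
import secrets

# Honeytoken sketch: a credential no legitimate process ever uses. Any
# appearance of it in an auth path is a high-confidence intrusion signal.
CANARY_KEY = f"sk-canary-{secrets.token_hex(16)}"

def plant_canary(path: str = ".env.backup") -> None:
    """Write a decoy config file an autonomous agent is likely to grab."""
    with open(path, "w") as fh:
        fh.write(f"ADMIN_API_KEY={CANARY_KEY}\n")

def alert(message: str) -> None:
    print(f"[ALERT] {message}")  # stand-in for paging / SIEM integration

def check_credential(presented_key: str) -> bool:
    """Call from your auth path; a canary hit means an automated intruder."""
    if presented_key == CANARY_KEY:
        alert("canary credential used: active, likely automated, intrusion")
        return False
    return True

plant_canary()
check_credential(CANARY_KEY)  # simulates the attacker tripping the wire
```

A human might recognize ".env.backup" as too good to be true. An agent optimizing for speed usually won't.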

Rethink what "human oversight" actually means. Allen Westley said it plainly: "Visible approval is not always meaningful authority." Everyone says "human in the loop," but if your loop is a human clicking "approved" on a queue of 50 decisions per minute, your human isn't in the loop; they're decoration. Teri Green made it clearer: "AI isn't the risk. We are. Most AI failures do not come from some dramatic model escape. They actually come from human overconfidence."

Reduce the decision velocity. Map which decisions actually require human authority (approving a code deploy) versus which ones just need human visibility (logging what the AI did). The ones that require authority need real authority, not a rubber stamp, as in the sketch below. (Perfect is the enemy of shipped, but shipped with zero human judgment is just an unaccountable system with extra steps.)
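A minimal sketch of that split, assuming a hypothetical policy table; the point is that AUTHORITY decisions actually block, while VISIBILITY decisions just leave a trail:

```python
from enum import Enum
from typing import Callable

class Oversight(Enum):
    AUTHORITY = "blocking human approval"
    VISIBILITY = "log and proceed"

# Hypothetical policy table: classify each AI-initiated action up front.
POLICY = {
    "deploy_code": Oversight.AUTHORITY,              # needs real human authority
    "close_duplicate_ticket": Oversight.VISIBILITY,  # needs a log line, not a click
}

def execute(action: str, perform: Callable[[], None],
            approve: Callable[[str], bool], audit_log: list) -> None:
    # Unknown actions default to the strict path, not the convenient one.
    mode = POLICY.get(action, Oversight.AUTHORITY)
    if mode is Oversight.AUTHORITY and not approve(action):
        audit_log.append((action, "rejected"))
        return
    perform()
    audit_log.append((action, mode.value))

log: list = []
execute("close_duplicate_ticket", lambda: None, lambda a: False, log)  # no human needed
execute("deploy_code", lambda: None, lambda a: True, log)              # human said yes
print(log)
```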

Don’t outsource decisions you can’t explain. Dr. Ferhat Dikbiyik framed it as an imperative: "Do not outsource a decision that you cannot explain to a system that you cannot audit in a race with no finish line." If you hand off authority to AI but can't articulate why the AI made that decision, you've created an accountability vacuum. If you can't audit how it made the decision because the system is opaque, you have no recourse when it fails. And it will fail.
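Auditability starts with boring plumbing: a record, for every consequential decision, of which system decided, from which inputs, under which model version, and who (if anyone) held authority. A minimal sketch with invented field names:

```python
import json, time, uuid

# Minimal decision record: if you can't reconstruct what decided, from which
# inputs, under which model version, you can't audit it. Field names invented.
def record_decision(model: str, version: str, inputs: dict,
                    decision: str, rationale: str, approver: str | None) -> dict:
    entry = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model": model,
        "version": version,
        "inputs": inputs,            # or a hash, if the inputs are sensitive
        "decision": decision,
        "rationale": rationale,      # the explanation you'd give an auditor
        "approver": approver,        # None means no human held authority here
    }
    print(json.dumps(entry))         # stand-in for an append-only store
    return entry

record_decision("claims-triage", "2024-11", {"claim_id": "C-1017"},
                "auto-approve", "matched low-risk pattern #12", approver=None)
```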

Use existing frameworks. Download the NIST AI Risk Management Framework. Read the OWASP AI Exchange (Rob van der Veer and the team have done incredible work there). Look at what the Coalition for Secure AI published. Stop inventing security from scratch. We have decades of experience securing complex systems. Apply that experience to AI.

The organizations moving fastest right now aren't inventing brand-new AI security strategies. They're taking proven security practices and applying them to AI. Inventory, governance, testing, verification. These aren't new concepts. They're just new applications.

The Thing Nobody Wanted to Say Out Loud

Here's what didn't make it into any talk: speed is the asymmetric advantage, and our governance structures weren't built for it. We have change management processes that assume human-paced deployment. We have compliance reviews that take months. We have training that happens once a year. None of that works in an environment where your threat surface constantly shifts and your attackers operate at machine speed.

Nation-state actors aren’t waiting for quarterly compliance audits. They're deploying autonomously. By the time your legal team finishes their review of a new defensive tool, you're already breached. Dr. Ferhat Dikbiyik said this directly: "We are sacrificing human judgment for speed without considering what we are doing as far as damage to the professional development structure." But also: "Security must shape the future of AI before AI irreversibly shapes the future of humanity. That sequence matters."

I don't have a clean answer to that problem yet (if I did, I'd be charging way more for consulting). But I know the answer isn't "move slower." And I know the answer isn't "ignore the governance problem." The organizations moving fastest right now are the ones figuring out how to execute governance at machine speed without losing the rigor that makes governance matter. The summit was worth the trip for identifying that one gap alone. Now we actually have to close it.

If you want to work on this problem directly, check out the Find Evil! hackathon and our proof of concept for AI-augmented incident response, Protocol SIFT.

And if you figure it out before I do, call me.