SEC595: Applied Data Science and AI/Machine Learning for Cybersecurity Professionals


In February 2024, SANS instructors Brandon Evans and Eric Johnson reported a critical confused deputy vulnerability in Microsoft Defender for Cloud to the Microsoft Security Response Center. The vulnerability was straightforward but alarming: when organizations onboard Defender for Cloud, they grant Microsoft extensive privileges to scan their environments. Because Defender is a multi-tenant service with access to all of its customers’ cloud accounts, Evans and Johnson found that under certain conditions, one customer’s security findings could be disclosed to unauthorized third parties. Microsoft rated the vulnerability Critical and awarded a bounty.
The confused deputy problem is not new. AWS has documented the pattern for years: an entity without permission coerces a more privileged entity into performing an action on its behalf. What Evans and Johnson demonstrated was a cross-cloud variant operating at the vendor integration level. Their research, presented in a SANS webcast and incorporated into SEC510: Cloud Security Engineering and Controls, established an important precedent: even the organizations building cloud security tooling can become confused deputies.
Now consider what happens when we add AI agents to the equation.
Enterprise AI agents are the newest, and potentially the most dangerous, confused deputies in your cloud environment. When you grant an AI agent access to your cloud APIs, email, calendar, code repositories, and databases, you are deputizing a semi-autonomous entity with broad privileges across multiple services. Unlike a traditional service account with a fixed scope, an AI agent’s behavior is influenced by natural language instructions that can be manipulated through prompt injection. The agent becomes a confused deputy not because of a configuration error, but because of the fundamental nature of how it operates.
This is not a theoretical risk. In March 2026, the TeamPCP supply chain campaign compromised LiteLLM, an AI gateway proxy used by thousands of enterprises to route requests to large language model providers. LiteLLM’s purpose is to hold API keys for dozens of AI services, making it one of the highest-density credential targets in any infrastructure. The attackers harvested SSH keys, cloud credentials, LLM API keys, and database passwords affecting an estimated 500,000 corporate identities. The attack succeeded because LiteLLM concentrated long-lived credentials in a single location with broad access, the exact pattern that makes confused deputy attacks devastating.
The Model Context Protocol (MCP), which is rapidly becoming the standard interface between AI agents and tools, introduces another dimension to this problem. A recent SANS webcast on MCP and authorization dissected the challenges: MCP relies on OAuth 2.0 for authorization, but the protocol itself does not address how those OAuth credentials are securely stored, lifecycle-managed, or policy-governed before being issued to agents. Every MCP tool connection is a potential confused deputy vector if the agent’s credential scope is not properly constrained.
Cloud security practitioners are well-versed in identity and access management (IAM). We configure IAM policies, enforce least privilege, rotate credentials, and implement multi-factor authentication. These controls work for human users and traditional service accounts because they operate within predictable boundaries. AI agents break these assumptions in three fundamental ways.
First, agents aggregate access. A single AI agent might hold credentials for your cloud provider, email system, code repository, ticketing system, and half a dozen SaaS applications. Each credential individually might follow least privilege, but the aggregate access creates a blast radius that no single service account was ever designed to have.
Second, agents are susceptible to novel compromise vectors. Prompt injection can cause an agent to misuse legitimately obtained credentials. A malicious instruction embedded in an email, document, or API response can redirect the agent’s actions without triggering any traditional security controls. The 2026 SANS State of Identity Threats and Defenses Survey highlighted non-human identity challenges, including machine identities and AI agents, as a growing concern. Identity has become the new security perimeter, and agents are the entities most difficult to contain within it.
Third, agents need dynamic scope. A human user’s access patterns are relatively stable day-to-day. An agent’s scope requirements change with every task. It might need read-only access to a database for one operation and write access to a deployment pipeline for the next. Traditional static IAM policies cannot express this dynamic behavior without over-provisioning.
The AI-Driven DevSecOps webcast series from SANS demonstrated how AI coding agents and MCP are extending cloud-native security tooling. But extending tooling means extending the trust boundary, and extending the trust boundary without corresponding credential controls means extending the attack surface.
What if agents never held real credentials at all?
This is the core idea behind the Credential Broker for Agents (CB4A), an architecture I have proposed in an IETF Internet-Draft that specifies a credential vaulting and brokering layer for AI agents. The design is built on a simple principle: separate the entity that decides whether to grant access from the entity that holds the credentials.
CB4A introduces a broker that sits between AI agents and the services they need to access. The broker consists of two components inspired by NIST SP 800-207 (Zero Trust Architecture): a policy engine that decides whether each credential request should be granted, and a credential store that holds the real long-lived secrets and mints the short-lived proxy credentials handed to agents.
This separation means that compromising the policy engine does not yield credentials, and compromising the credential store does not yield the ability to approve requests. The agent never sees real long-lived credentials. Instead, it receives proxy credentials that expire in seconds to minutes, are bound to the agent’s cryptographic identity using DPoP (RFC 9449), and cannot be replayed if exfiltrated.
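The expiry and key-binding behavior can be sketched in a few lines. This is an illustration, not the CB4A implementation: the thumbprint follows the RFC 7638 recipe for an EC key (hex-encoded here for readability, where the RFC uses base64url), and the `cnf.jkt` claim is the DPoP-style key binding from RFC 9449. In a real deployment the token would be a signed JWT, not a plain dictionary.

```python
import hashlib
import json
import time

def jwk_thumbprint(jwk: dict) -> str:
    """RFC 7638 thumbprint: SHA-256 over the required EC members, lexically ordered.
    Hex-encoded for readability; the RFC specifies base64url."""
    required = {k: jwk[k] for k in ("crv", "kty", "x", "y")}
    canonical = json.dumps(required, separators=(",", ":"), sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

def issue_proxy_credential(agent_jwk: dict, scope: str, ttl_seconds: int = 60) -> dict:
    """Mint a short-lived token bound to the agent's public key via a cnf.jkt claim."""
    now = int(time.time())
    return {
        "scope": scope,
        "iat": now,
        "exp": now + ttl_seconds,                  # expires in seconds, not months
        "cnf": {"jkt": jwk_thumbprint(agent_jwk)}  # DPoP-style key binding
    }

def accept(token: dict, presenter_jwk: dict) -> bool:
    """A resource server rejects expired tokens and wrong-key presentations."""
    return (time.time() < token["exp"]
            and token["cnf"]["jkt"] == jwk_thumbprint(presenter_jwk))
```

An attacker who exfiltrates the token but not the agent's private key fails the thumbprint check, and even the rightful holder loses access once the short TTL elapses.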
The broker uses SPIFFE (Secure Production Identity Framework for Everyone) and SPIRE (the SPIFFE Runtime Environment) for agent identification. Each agent gets a cryptographic workload identity, an X.509 certificate tied to its runtime environment, which must be re-attested periodically. This is the same workload identity approach that Evans and Johnson advocate through their Nymeria tool, which replaces long-lived cloud credentials with Workload Identity Federation across AWS, Azure, and GCP. CB4A extends this pattern to AI agents.
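A SPIFFE workload identity is named by a URI of the form `spiffe://<trust-domain>/<workload-path>`. A minimal parser illustrates the structure a broker would validate before issuing anything; the agent path below is a hypothetical example, and production code would use an official SPIFFE library rather than this sketch.

```python
from urllib.parse import urlparse

def parse_spiffe_id(spiffe_id: str) -> tuple:
    """Split a SPIFFE ID into (trust_domain, workload_path); raise on malformed input."""
    parsed = urlparse(spiffe_id)
    if parsed.scheme != "spiffe" or not parsed.netloc:
        raise ValueError(f"not a SPIFFE ID: {spiffe_id}")
    if parsed.query or parsed.fragment:
        raise ValueError("SPIFFE IDs must not contain query strings or fragments")
    return parsed.netloc, parsed.path

# Hypothetical agent identity within a "prod.example.com" trust domain:
trust_domain, path = parse_spiffe_id("spiffe://prod.example.com/agents/infra-agent")
```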
One of the most powerful techniques in a defender’s toolkit is deception. Honey tokens (fake credentials planted where an attacker would look for real ones) are a proven detection mechanism. SEC502: Cloud Security Tactical Defense covers honey tokens as part of a layered defense strategy. In the context of AI agents, honey tokens become even more valuable: agents interact with credentials programmatically and at high speed, so any use of a canary credential is an immediate and unambiguous indicator of compromise.
CB4A’s architecture includes a dedicated compromise detection layer that plants canary credentials alongside real proxy credentials. If an agent, or an attacker who has compromised an agent, attempts to use a canary credential, the broker immediately knows the agent is operating outside its expected behavior. The response is automatic: revoke all active credentials for that agent, alert the security team, and quarantine the agent’s sessions.
This approach transforms credential compromise from a delayed-discovery problem into a near-real-time detection event. Defenders do not need to wait for anomalous behavior patterns or log analysis. The canary credential fires, and the blast radius is contained by the short-lived nature of every other credential the agent holds.
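The detection loop can be sketched as a small monitor. This is an illustrative sketch, not the CB4A implementation: the AWS-style key prefix is cosmetic, and the revocation and alerting here are in-memory stand-ins for the broker hooks the architecture describes.

```python
import secrets

class CanaryMonitor:
    """Plants fake credentials next to real proxy credentials; any use of a
    canary triggers revocation and alerting (in-memory stand-ins here)."""

    def __init__(self):
        self.canaries = {}      # canary token -> agent_id it was planted for
        self.revoked = set()    # agents whose credentials have been pulled
        self.alerts = []        # messages for the security team

    def plant(self, agent_id: str) -> str:
        token = "AKIA" + secrets.token_hex(8).upper()  # shaped like an AWS key ID
        self.canaries[token] = agent_id
        return token

    def observe_use(self, token: str) -> bool:
        """Called on every credential presentation; True means a canary fired."""
        agent_id = self.canaries.get(token)
        if agent_id is None:
            return False  # a real proxy credential; nothing to do
        # Canary fired: revoke everything this agent holds and alert.
        self.revoked.add(agent_id)
        self.alerts.append(f"canary credential used by {agent_id}")
        return True
```

Because every real credential the agent holds is short-lived, revoking on the first canary hit bounds the attacker's window to seconds.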
Consider a concrete scenario that any cloud security practitioner would recognize. Your organization deploys an AI agent to assist with cloud infrastructure management. The agent needs to:
- Read metrics and logs from CloudWatch
- Query a production database
- Create tickets in Jira
- Post status updates to Slack
Without a credential broker, this agent holds four sets of long-lived credentials simultaneously. A prompt injection attack that compromises the agent gives the attacker access to all four services.
With CB4A, the agent holds zero long-lived credentials. For each action, it requests a proxy credential from the broker:
1. The agent presents its workload identity and requests a narrowly scoped credential, such as read-only access to CloudWatch.
2. The policy engine approves the request, and the broker issues a proxy token that expires in under a minute.
3. The agent performs the action.
4. The token expires, and the cycle repeats for the next task.
If the agent is compromised by prompt injection between steps 2 and 4, the attacker has a single CloudWatch read-only token that expires in under a minute. They cannot pivot to the database, Jira, or Slack because those credentials do not exist yet. Compare this to the traditional model where all four credential sets are immediately available.
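The per-action issuance can be sketched as follows. This is a minimal illustration of the containment property, not the broker's real interface: scopes, agent names, and the exact-match scope check are assumptions for the example.

```python
import time

class CredentialBroker:
    """Minimal sketch: one proxy credential per action, scoped and short-lived."""

    def __init__(self, ttl_seconds: int = 60):
        self.ttl = ttl_seconds

    def issue(self, agent_id: str, scope: str) -> dict:
        """Issue a credential for exactly one scope, valid for seconds."""
        return {"agent": agent_id, "scope": scope, "exp": time.time() + self.ttl}

    @staticmethod
    def allows(token: dict, requested_scope: str) -> bool:
        return token["scope"] == requested_scope and time.time() < token["exp"]

broker = CredentialBroker(ttl_seconds=60)
token = broker.issue("infra-agent", "cloudwatch:read")

# A stolen token cannot pivot: it carries one scope and expires within a minute.
assert CredentialBroker.allows(token, "cloudwatch:read")
assert not CredentialBroker.allows(token, "database:write")
```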
The tiered approval framework adds human oversight where it matters. Low-risk operations (Tier 1) are approved automatically. Moderate-risk operations (Tier 2) require asynchronous human approval via notification. High-risk operations like administrative actions or destructive changes (Tier 3) require synchronous approval with MFA challenge. The cloud practitioner stays in control of the most consequential actions while allowing routine operations to flow without friction.
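A routing function for the three tiers might look like the sketch below. The specific actions assigned to each tier are hypothetical examples, not part of the CB4A specification; a real deployment would classify operations by its own risk policy.

```python
# Approval paths for the three tiers described in the article.
TIER_RULES = {
    1: "auto",         # low risk: approve immediately
    2: "async_human",  # moderate risk: notify a human, proceed on approval
    3: "sync_mfa",     # high risk: block until a human approves with MFA
}

# Hypothetical example classifications (assumed for illustration).
HIGH_RISK_ACTIONS = {"iam:DeleteRole", "rds:DeleteDBInstance"}
MODERATE_ACTIONS = {"codepipeline:StartPipelineExecution"}

def classify(action: str) -> int:
    """Map an action to a risk tier; anything unlisted defaults to low risk."""
    if action in HIGH_RISK_ACTIONS:
        return 3
    if action in MODERATE_ACTIONS:
        return 2
    return 1

def approval_path(action: str) -> str:
    return TIER_RULES[classify(action)]
```

Routine reads flow through `auto` with no friction, while a destructive change stops at `sync_mfa` until a human with a second factor signs off.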
CB4A does not exist in isolation. The IETF landscape for AI agent identity and authorization has expanded rapidly, with over 20 individual submissions and three formal working groups (WIMSE, SPICE, and WebBotAuth) addressing different aspects of the problem. CB4A is designed to compose with this ecosystem rather than compete with it.
The convergence across these independent efforts validates the approach: the industry is recognizing that AI agent credential management is a distinct architectural concern that needs a dedicated solution layer.
Whether or not your organization is ready to implement a credential broker, there are practical steps you can take today to reduce your AI agent confused deputy risk:
- Inventory every credential your agents hold and map the aggregate blast radius.
- Replace long-lived keys with workload identity federation wherever the provider supports it.
- Shorten credential lifetimes so that an exfiltrated token expires before it can be abused.
- Plant canary credentials alongside real ones so misuse is detected immediately.
- Require human approval, with MFA, for the highest-risk agent actions.
The confused deputy problem is not going away. As AI agents become more capable and more deeply integrated into cloud infrastructure, the credential management challenge will only intensify. A credential broker architecture ensures that when (not if) an agent is compromised, the blast radius is measured in seconds and scoped to a single action, not an entire infrastructure.
Kenneth G. Hartman is a SANS Instructor and the author of draft-hartman-credential-broker-4-agents, an IETF Internet-Draft specifying a credential broker architecture for AI agents. He can be reached at khartman@sans.org.


Ken owns Lucid Truth Technologies, a private investigation agency and forensic consulting firm specializing in computer, mobile, network, and cloud forensics. Ken’s mission is to “make the truth clear,” and that's reflected in his teaching style.