SEC595: Applied Data Science and AI/Machine Learning for Cybersecurity Professionals


Organizations keep deploying AI "agents" without understanding what autonomy level they're getting or what governance that level warrants. Chinese state-sponsored hackers used Claude Code to automate a cyberattack campaign across 30 organizations. Replit's AI coding agent deleted a production database, then tried to cover up its mistake. These aren't anomalies. They're predictable governance failures.
The Misenar 4A Model maps AI autonomy across four levels: Assistant, Adjuvant, Augmentor, and Agent. Each has specific capabilities, boundaries, and control expectations. The framework identifies a "DANGER CLOSE" threshold, the point where AI shifts from advisor to executor, and establishes readiness criteria for crossing it.
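To make the tiers concrete, here is a minimal Python sketch of the idea, not the framework itself: the four level names come from the model above, but the specific readiness criteria, the placement of DANGER CLOSE between Adjuvant and Augmentor, and every identifier (AutonomyLevel, GovernanceReadiness, may_deploy) are illustrative assumptions, not the model's actual definitions.

```python
from dataclasses import dataclass
from enum import IntEnum


class AutonomyLevel(IntEnum):
    """The four 4A levels, ordered by increasing autonomy."""
    ASSISTANT = 1   # advises; a human performs every action
    ADJUVANT = 2    # drafts actions; a human approves each one
    AUGMENTOR = 3   # executes within tight, pre-approved bounds
    AGENT = 4       # plans and executes multi-step tasks on its own


# Assumption: DANGER CLOSE sits where the AI first executes rather
# than advises, i.e., at the Adjuvant -> Augmentor boundary.
DANGER_CLOSE = AutonomyLevel.AUGMENTOR


@dataclass
class GovernanceReadiness:
    """Hypothetical readiness criteria a deployment must satisfy."""
    human_approval_gate: bool   # a human can veto actions pre-execution
    scoped_credentials: bool    # least-privilege, environment-scoped access
    immutable_audit_log: bool   # tamper-evident record of every action
    tested_rollback: bool       # rehearsed recovery from bad actions


def may_deploy(level: AutonomyLevel, readiness: GovernanceReadiness) -> bool:
    """Permit executor-level autonomy only if every criterion holds."""
    if level < DANGER_CLOSE:
        return True  # advisory levels carry only baseline risk
    return all((
        readiness.human_approval_gate,
        readiness.scoped_credentials,
        readiness.immutable_audit_log,
        readiness.tested_rollback,
    ))


if __name__ == "__main__":
    # An Agent-level deployment with no tested rollback is refused,
    # which is exactly the Replit failure mode described above.
    readiness = GovernanceReadiness(True, True, True, tested_rollback=False)
    print(may_deploy(AutonomyLevel.AGENT, readiness))  # False
```

The point of the gate is that controls scale with capability: below DANGER CLOSE a human is still the executor, so baseline controls suffice; at or above it, every readiness criterion must hold before deployment.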
The model includes vendor evaluation tools that cut through marketing claims, controls that scale with capability, and phased implementation strategies. Built from analyzing failures and deployments across industries, it shows that autonomy without appropriate governance creates predictable risks.
The 4A Model helps tackle the core question of agent security: How autonomous should your AI really be?


Seth Misenar, a SANS Faculty Fellow and author of SEC411, LDR414, and SEC511, combines cutting-edge consulting and education to equip defenders worldwide. Founder of Context Security and holder of GSE #28, he brings clarity, humor, and purpose to cybersecurity training.