

The choice is not between AI risk and no AI risk. It is between managed AI risk and unmanaged AI risk.

We are both tired of watching companies lose control of AI.
I sat down with Wade Foster, CEO of Zapier, for his podcast Agents of Scale. Wade has direct visibility into how thousands of organizations are trying to integrate AI into their workflows.
We both spend our time inside companies that think they have AI governance under control.
We agreed on this: the controls aren’t there, the policy is paper, and your people are routing around you.
And we are both tired of watching companies lose control of AI by denying employees the chance to innovate and learn to use AI tools.
Here is what leaders say:
“We blocked risky AI tools.”
“We turned on our own internal AI assistant.”
“We published an AI policy, so we are covered.”
Here is what people do:
“Risky” AI tools are the tools employees have already learned through daily life. They work faster with those tools and may even feel reliant on them. So they go back to them on personal accounts and unmanaged devices.
They stop asking security or the AI committee because they already know the answer is NO.
They sit through demos of sanctioned tools that only some employees can access and that do only part of what they are trying to accomplish.
I call security’s deny-by-default AI policy the “Framework of No.” When security says no, employees do not stop using AI. When an incident happens, you have no map of which AI systems were involved.
How do you avoid analysis paralysis? How do you avoid the risk of simply standing still?
Security, we need to default to yes unless there is clear justification for no.
Wade shared this about Zapier’s internal AI journey: “One of the first things as we were starting to make this transition, I sat down with our legal team, our head of security, our procurement team. I said, ‘We got to go figure out how to greenlight purchasing of a handful of these tools and make sure they fit within our policy framework so the organization can start adopting this stuff and then we’ll figure out what comes next.’”
Wade and I talked about how people are worried. They’re worried about losing their jobs, and if they think they’ll get their hands slapped for running AI experiments, of course they will keep silent about their wins and their failures.
If security stops being the jailer and instead lets people experiment, there will occasionally be issues. But pretending AI is not being used across the organization and hiding behind governance excuses are greater risks than moving forward with proper oversight.
Wade put it simply: “If you can’t start there, people are going to find a place to start somewhere, and it may be worse than the option you do provide.”
When employees know they can come to security with questions and will likely get a yes, they stop hiding their experiments.
Wade shared another big hang-up they see at Zapier: partial AI capabilities. “You need organizations to fully enable the capabilities, otherwise your people are saying, ‘I can’t get access to the things that are going to actually make my job easier.’”
This is a specific failure mode: half-enablement creates more frustration and more shadow AI than no enablement at all.
If you lead security at an organization, pick three AI tools. Greenlight them. Tell your employees they can use them. Tell them security is there to watch, answer questions, and help. Here’s a framework for this approach that you can put into operation come Monday morning.
The companies that figure this out will lap the ones that do not.
The ones that keep saying no will be running their organizations on shadow AI while pretending they are not.
The choice is not between AI risk and no AI risk. It is between managed AI risk and unmanaged AI risk.
Wade and I both confessed we’re struggling to keep up with AI’s pace. I have 30 years of cybersecurity experience, but this is a brand-new playing field.
The leaders and companies that admit it will lap the ones pretending they have it handled.
Thanks to Wade Foster and Zapier for the chat on Agents of Scale. Watch on YouTube or listen on Apple Podcasts or Spotify.



Rob T. Lee is Chief AI Officer and Chief of Research at SANS Institute, where he leads research, mentors faculty, and helps cybersecurity teams and executive leaders prepare for AI and emerging threats.