AI is making attackers faster. Automated recon, malware that rewrites itself every hour, phishing campaigns that test a thousand variants before lunch. They're running at machine speed, and they're not asking for permission.
Defenders have the same tools. Better tools, in most cases. But we're the only ones filling out forms first.
The General Data Protection Regulation (GDPR) was written to stop data brokers from selling your browser history. The EU AI Act wants to prevent algorithmic discrimination in hiring. The California Consumer Privacy Act (CCPA) gives California residents the right to know what companies know about them. None of these laws were designed to manage what happens when a ransomware gang locks down a hospital at 2 a.m. and your incident response (IR) team needs to correlate login data across three continents before the next shift starts.
So here we are. The criminals operate without lawyers. We operate with a legal team on speed dial and a compliance checklist that grows every quarter. That's not a fairness complaint. It's a structural defect.
I spent the last few months digging into this topic for a SANS RSAC whitepaper, talking to CISOs who've been forced to choose between stopping an active breach and staying compliant with a data-transfer restriction that was never designed to apply to threat intelligence. The gap isn't theoretical. It's costing time, and time is the only resource that matters when you're already breached.
We can fix this. Not with looser privacy laws — the privacy protections matter — but with narrow, transparent exceptions that let defenders act at the speed the threat demands. Here are five moves that would actually work:
The risk isn't that we'll weaken privacy by creating these exceptions. It’s that we'll protect data so rigidly that we can't defend the systems holding it. I've seen IR teams pull analysts off live hunts to verify data-transfer agreements. I've watched companies delay threat-sharing because they couldn't confirm whether the recipient's jurisdiction allowed it under GDPR.
That isn’t caution. It’s systemic failure.
Security and privacy don't have to trade off, but only if we build the legal and operational space for defenders to act before the damage is done. Right now, we're asking people to choose between being compliant and being effective. That's a problem we can solve. We just need to decide it's worth solving.
My full SANS RSAC whitepaper gets into the legal weeds, with case examples and a breakdown of what "safe harbor" would look like in practice. If this is your world, it's worth the read. Download the whitepaper here.
Rob T. Lee is Chief of Research and Chief AI Officer at SANS Institute, where he leads research, mentors faculty, and helps cybersecurity teams and executive leaders prepare for AI and emerging threats.