SEC595: Applied Data Science and AI/Machine Learning for Cybersecurity Professionals


Experience SANS training through course previews.
Learn MoreLet us help.
Contact usBecome a member for instant access to our free resources.
Sign UpWe're here to help.
Contact Us
The intersection of AI and cybercrime is rapidly reshaping the threat landscape. In a recent SANS webinar, experts explored how malicious actors are weaponizing AI—especially generative models—to automate, accelerate, and amplify cyberattacks. As artificial intelligence becomes more accessible, its misuse in criminal ecosystems continues to evolve. This summary highlights the webinar’s key insights and actionable strategies for cybersecurity professionals.
Cybercriminals are adopting AI-powered tools across the attack lifecycle:
AI is being packaged into plug-and-play tools:
A major focus of the webinar was how prompt injection attacks exploit LLM-powered tools:
A financial services firm integrated a GenAI tool to summarize documents. A malicious PDF contained hidden instructions that caused the LLM to email client data to an external address—bypassing all traditional DLP tools.
An executive's voice was cloned using public recordings. The attacker used this deepfake to initiate a fraudulent wire transfer. Only a multi-layer verification process stopped the attempt.
These examples show how artificial intelligence and cyber crime are converging in real-world attacks.
Organizations must proactively defend against this emerging class of threats. The webinar emphasized the following best practices:
The use of AI in cybercrime is no longer theoretical. Threat actors are already integrating tools like ChatGPT, Stable Diffusion, and custom-trained LLMs into their attack chains. These developments create a fast-moving threat environment that outpaces traditional defenses.
Organizations must evolve their defenses with an AI-aware approach.
To combat the weaponization of AI, cybersecurity professionals must strengthen their skills in behavioral detection, incident response, and cloud security. Foundational knowledge and practical experience are critical.
Courses that align with these needs include:
As attackers continue weaponizing AI, defenders must treat this as a core capability area—not a niche concern. AI and cybercrime are already intersecting in dangerous and sophisticated ways. From phishing and impersonation to model manipulation and data leaks, the risks are growing. Now is the time for organizations to:
Security teams that understand and prepare for these risks will be better equipped to defend against the next generation of cyberattacks.


Jason is a digital forensics, incident response, and cybercrime investigation specialist. He began his career in the early development of the discipline, when he combined his love for computers and technology with his role as a police detective.
Read more about Jason Jordaan