Talk With an Expert
AI-FOCUSEDBeta

SEC545: (5-Day) GenAI and LLMs Application Security

SEC545Cloud Security, Artificial Intelligence
  • 5 Days (Instructor-Led)
  • 30 Hours
Course authored by:
Ahmed Abugharbia
Ahmed Abugharbia
SEC545: GenAI and LLM Application Security
Course authored by:
Ahmed Abugharbia
Ahmed Abugharbia
  • 30 CPEs

    Apply your credits to renew your certifications

  • Virtual

    Attend a live, instructor-led class remotely from anywhere

  • Advanced Skill Level

    Course material is geared for cyber security professionals with hands-on experience

  • 20 Hands-On Lab(s)

    Apply what you learn with hands-on exercises and labs

Learn to defend every layer of your GenAI stack including applications, models, and MLOps in modern cloud environments.

Course Overview

SEC545 explores GenAI security, from core concepts like LLMs and RAG to real-world risks like prompt injection and supply chain threats. Students learn to build, secure, and deploy GenAI apps using best practices for tools like LangChain, agents, and cloud platforms such as AWS Bedrock.

What You'll Learn

  • Understand GenAI, LLMs, and LangChain agents
  • Explore model fine-tuning and customization
  • Identify GenAI-specific risks and mitigations
  • Secure RAG pipelines and vector databases
  • Apply security controls across MLOps workflows
  • Perform AI threat modeling using MAESTRO
  • Align GenAI security with existing frameworks and pipelines

Business Takeaways

  • Understand how GenAI applications are built and deployed
  • Identify risks introduced by models, tools, and MLOps workflows
  • Mitigate security issues across GenAI infrastructure and pipelines
  • Implement end-to-end security from development to production
  • Align GenAI security practices with existing cloud frameworks
  • Support responsible AI use while maintaining business agility

Course Syllabus

Explore the course syllabus below to view the full range of topics covered in SEC545: GenAI and LLM Application Security.

Section 1GenAI, Large Language Models (LLMs), and Security Risks

The course starts with GenAI fundamentals, covering key concepts like Large Language Models (LLMs), embeddings, and Retrieval-Augmented Generation (RAG). Students will explore security risks unique to GenAI, including prompt injection, malicious models, and third-party supply chain vulnerabilities.

Topics covered

  • GenAI Introduction and Concepts
  • Fine-Tuning Models
  • Augmenting GenAI Knowledge
  • Safe Use and Moderation

Labs

  • LLMs and Prompt Injection
  • Fine-tuning OpenAI Models
  • Compromising Vector Database
  • Safe Use and Moderation

Section 2Securing GenAI Applications

Section 2 dives into core components for GenAI apps, like vector databases, LangChain and AI agents. Students also explore deployment zstrategies, comparing cloud and on-premises setups with a focus on the security risks unique to each. The section concludes by introducing agents communication protocols such as MCP.

Topics covered

  • AI Agents
  • GenAI Applications Architecture
  • AI Development Frameworks Security
  • Agents Communication Protocols

Labs

  • Pivoting from LLMs
  • Compromising LLM Supply Chain
  • Langchain Security
  • Model Context Protocol (MCP)

Section 3Agentic AI Security

In Section 3, students continue exploring MCP security before diving into Transformers, the core technology behind LLMs. They examine the foundation of predictive modeling, evaluate secure hosting options for AI applications, and conclude with securing data orchestration pipelines and tools such as Airflow.

Topics covered

  • MCP Attacks and OAuth Security
  • Transformer Architecture Fundamentals
  • Hosting GenAI applications
  • Data Workflow Orchestration

Labs

  • Attacking MCP Infrastructure 1
  • MLSecOps – Securing AI Deployment Pipeline
  • Attacking MCP Infrastructure 2
  • AWS Bedrock
  • Attacking Airflow

Section 4MLSecOps and Securing GenAI Applications Lifecycle

Section 4 focuses on MLOps and integrating security across pipelines. It covers model-specific attacks like serialization flaws and backdoors, then explores securing pipelines using controls such as model signing and automated scanning. The section ends with a hands-on AI threat modeling exercise using the MAESTRO framework.

Topics covered

  • Machine Learning Ops (MLOps)
  • Hosting Models
  • MLSecOps
  • AI Threat Modeling

Labs

  • Training Model Using SageMaker
  • Model Serialization Attacks
  • MLSecOps - Securing AI Deployment Pipeline
  • Threat Modeling with MAESTRO

Section 5AI for Security

Section 5 covers using AI for threat hunting and incident investigation and response, followed by a Capture the Flag (CTF) exercise. Students apply what they’ve learned to identify and remediate issues within AI infrastructure that includes Kubernetes, Docker Compose, MCP servers, Airflow, SageMaker, AWS Bedrock, and other cloud environments.

Topics covered

  • Incident handling and Investigation with AI
  • Threat Hunting with AI

Labs

  • Investigating Incidents Using Investigator MCP Server
  • Threat Hunting Using AI
  • CTF

Things You Need To Know

Relevant Job Roles

Cloud Security Engineer Training, Salary, and Career Path

Cloud Security

Cloud Security Engineers integrate advanced security measures into cloud and cloud-native environments, maximize security automation within DevOps workflows, and proactively mitigate threats to safeguard modern cloud infrastructures.

Explore learning path

Cloud Security Analyst Training, Salary, and Career Path

Cloud Security

A Cloud Security Analyst monitors and analyzes activity across cloud environments, proactively detects and assesses threats, and implements preventive controls and targeted defenses to protect critical business systems and data.

Explore learning path

Cybersecurity Architecture (OPM 652)

NICE: Design and Development

Responsible for ensuring that security requirements are adequately addressed in all aspects of enterprise architecture, including reference models, segment and solution architectures, and the resulting systems that protect and support organizational mission and business processes.

Explore learning path

Technology Research and Development (OPM 661)

NICE: Design and Development

Responsible for conducting software and systems engineering and software systems research to develop new capabilities with fully integrated cybersecurity. Conducts comprehensive technology research to evaluate potential vulnerabilities in cyberspace systems.

Explore learning path

Network Operations (OPM 441)

NICE: Implementation and Operation

Responsible for planning, implementing, and operating network services and systems, including hardware and virtual environments.

Explore learning path

Software Security Assessment (OPM 622)

NICE: Design and Development

Responsible for analyzing the security of new or existing computer applications, software, or specialized utility programs and delivering actionable results.

Explore learning path

Enterprise Architecture (OPM 651)

NICE: Design and Development

Responsible for developing and maintaining business, systems, and information processes to support enterprise mission needs. Develops technology rules and requirements that describe baseline and target architectures.

Explore learning path

Secure Systems Development (OPM 631)

NICE: Design and Development

Responsible for the secure design, development, and testing of systems and the evaluation of system security throughout the systems development life cycle.

Explore learning path

Course Schedule & Pricing

Looking for Group Purchasing Options?Contact Us

We couldn't find a match for your selection

Please try a different combination of filters and search again.

Benefits of Learning with SANS

Instructor teaching to a class

Get feedback from the world’s best cybersecurity experts and instructors

OnDemand Mobile App

Choose how you want to learn - online, on demand, or at our live in-person training events

Resources

Get access to our range of industry-leading courses and resources