SEC595: Applied Data Science and AI/Machine Learning for Cybersecurity Professionals


In-Person
While GenAI allows organisations to tackle new challenges and minimise resource expenditure, it also presents several security risks, as is common with any new technology. Cybercriminals can exploit novel attack vectors, often due to organisations' limited understanding of GenAI's complexities.
In this talk, we will explore the architecture of GenAI applications and examine the security risks that can impact them. We will begin our discussion by introducing fundamental terminology, including concepts such as Large Language Models (LLMs), Vector Databases, and Retrieval-Augmented Generation (RAG).
Next, we will dive into a typical architecture, examining how different components interact with each other and with the external environment. Following this, we will explore the various risks associated with RAG-based GenAI applications, categorising them into three main areas: data risks, LLM model risks, and application risks. For each category, we will provide practical examples to illustrate these security concerns.
One attack we will discuss is the concept of Prompt Injection. Prompt Injection can manipulate the instructions programmed into an AI assistant, potentially causing it to reveal sensitive information or perform malicious actions, depending on its integrations. A specific example of this involves targeting the Vector Database. RAG-based GenAI applications rely on Vector Databases for their knowledge base. Unauthorised access to these databases could expose sensitive data and, more critically, facilitate Prompt Injection attacks.
Finally, we will conclude with security recommendations for effectively managing and securing GenAI implementations.
In-Person & Virtual