SANS Community Nights are a great way to stay in touch with your local InfoSec community and to hear the latest in technical wizardry, industry intelligence, and thought leadership from our amazing instructors.
Join us at the etc.Venues Monument, 8 Eastcheap, London, EC3M 1AE, United Kingdom
View the agenda below:
Wednesday 6th November 2024
Registration and Drinks
17:30 – 18:00
Maintaining Trust in our Supply Chain
With Jason Dely
18:00 – 19:00
Don't Trust Smart Sheep: An exploration of attacks against and potential defenses for LLMs
With Jim Simpson
19:00 – 20:00
Abstracts:
Maintaining Trust in our Supply Chain with Jason Dely
When using devices and services, there has always been, rightly so, an acceptance of trust in our supply chain and a belief in its people and technologies. For cybersecurity to be effective, however, trust must not only exist but also be continuously validated, and this validation must extend into our operations. Trust is difficult to validate and maintain, and the difficulty increases when the methods and mechanisms available to us are limited, incorrect, or nonexistent. This presentation will explore challenges around obtaining and validating trust in our supply chain to meet our cybersecurity objectives. The methods we use to secure our operations require us to deal with the knowns, find what we don't know (known unknowns), and also seek out the things we don't know that we don't know (unknown unknowns). We will explore the gaps in the current processes used to measure and validate trust within the supply chain that we rely on to improve or maintain the state of our operations before, during, and after a cybersecurity incident. By the end of this presentation you will have a better understanding of how to maintain trust in our supply chain.
Don't Trust Smart Sheep: An exploration of attacks against and potential defenses for LLMs with Jim Simpson
In this presentation, "Don't Trust Smart Sheep: An Exploration of Attacks Against and Potential Defenses for LLMs," we delve into the inner workings of Large Language Models (LLMs) and the vulnerabilities they face. Beginning with an introduction to LLM fundamentals, we explore the meaning behind the acronym GPT and key concepts such as Bayes' Theorem, Supervised Fine-Tuning, and Reinforcement Learning from Human Feedback (RLHF), illustrating how these techniques train models to talk, to be helpful, and to be ethical. We then examine a range of common attacks on LLMs and discuss strategies for mitigating, detecting, and responding to these threats. This presentation aims to equip attendees with an understanding of the risks associated with LLMs and practical guidance on enhancing their security and reliability.