“Can we use AI in our workplace?” — This question is being asked of every CISO and technical leader worldwide. The answers are not easy.
Many CISOs and technical managers are trying to educate themselves quickly on the risks and benefits associated with using AI in the workplace. In addition, businesses and organizations are concerned about how AI could disrupt or introduce new challenges across industries, from energy and banking to healthcare and beyond.
The risks, vulnerabilities, and benefits linked with the rapid introduction of machine learning and artificial intelligence in the world need to be discussed. Attend this Summit to hear from industry leaders about AI risks, concerns, and the solutions they've found.
The AI Cybersecurity Summit will address a wide range of topics.
Join us for a chance to network with the industry's top experts and your peers tackling the same hard-to-solve problems in AI. You’ll walk away with a greater understanding of the risks and benefits associated with AI and how to better leverage these new approaches for cybersecurity.
Join our SANS AI Summit Slack workspace and network with your peers!
Wednesday, May 31
Welcome & Opening Remarks
Rob Lee, SANS
Zero Hype Real World Applied AI/ML in Cybersecurity
Dave Hoelzer, COO, Enclave Forensics / Partner, Occulumen Ltd.
ChatGPT is cool. You can convince it to do interesting things. But what's the reality for enterprise defenders and threat hunters? Are there useful, hype-free applications and quick wins you can achieve today?
David Hoelzer, AI/ML evangelist within an MSSP and SANS Fellow, will walk you through several real-world applications that his organization is using today to identify threats, monitor logs, and develop actionable threat intelligence.
The Ethics of AI and ML: Ensuring Cybersecurity and Privacy in Automated Decision Making
Dr. Martin Ignatovski, CIO, SimplePractice
Artificial intelligence (AI) and machine learning (ML) technologies are rapidly transforming the way decisions are made in various industries, from finance and healthcare to transportation and logistics. These technologies have the potential to improve efficiency, reduce costs, and increase accuracy. However, the use of AI and ML can also raise ethical concerns around issues such as bias, privacy, and accountability. In this presentation, we will explore the importance of ethics in AI and ML, and how organizations can ensure digital trust in their automated decision-making systems.
Overall, this presentation aims to provide attendees with a comprehensive understanding of the ethical considerations involved in AI and ML, and the practical steps they can take to ensure digital trust in their automated decision-making systems. Attendees will leave with a better understanding of the ethical implications of AI and ML, and the knowledge and skills to implement ethical frameworks and practices within their organizations.
11:55 am – Noon
Fighting Fire with Fire: Protecting Inboxes from AI Generated Phishing using AI, Michael Lopez
New to Cyber: AI-Powered Careers, Chris Cochran, Chief Evangelist, Huntress; Co-Founder, Hacker Valley Media
Preserving Creativity and the Ethical Implications of AI Art Generators, Stacy Dunn, Senior Solutions Engineer
Deepfake It Til You Make It: How IO Actors are Leveraging Generative AI Avatars, Tyler Williams, Director of Investigations, Graphika
Insider Risk & Generative AI, Matthew Kraft, CISSP, Insider Risk Advisor, Code42
Navigating the Impact of Emerging Technologies on Education: A Teen's Perspective on AI and Cybersecurity, Sully Vickers, High School Senior and Content Creator
Generative AI in Cybersecurity: Rise of the Machines?
David J. Bianco, Staff Security Strategist, SURGe by Splunk
In recent years, generative artificial intelligence (AI), especially Large Language Models (LLMs) like ChatGPT, has revolutionized the fields of AI and natural language processing. From automating customer support to creating realistic chatbots, we rely on AI much more than many of us probably realize. The AI hype train definitely reached full steam in the last several months, especially for cybersecurity use cases, with the release of tools such as Stable Diffusion, Midjourney, DALL-E, ChatGPT / GPT-3.5 / GPT-4. Unfortunately, almost all of this attention focuses on the potential negative impacts of AI, while ignoring beneficial use cases to help organizations defend their networks. As we know, disasters almost always make for better primetime news viewing than cute puppies, and most of these articles have big “if it bleeds, it leads” energy. But is all this negative hype warranted? Let's examine a few concrete use cases and find out!
Generative AI and ChatGPT Enterprise Risks
David B. Cross, CISO, Oracle Cloud
Gadi Evron, CISO-in-Residence, Team8
With Generative AI, and ChatGPT specifically, our industry finds itself behind the technology adoption curve, while employees and business units are rapidly adopting the technology.
Key questions CISOs are asking: Who is using the technology in my organization, and for what purpose? How can I protect enterprise information (data) when employees are interacting with GenAI? How can I manage the security risks of the underlying technology? How do I balance the security tradeoffs with the value the technology offers?
In this talk, we will deep-dive into enterprise security risks, threats, and impacts stemming from GenAI, examine how these can be effectively managed, walk through considerations in developing enterprise policy on the topic, and provide examples to that end.
We'll also touch on threat modeling GenAI, discuss how to hold a conversation with product teams and other stakeholders, and raise non-security risks such as legal and regulatory concerns. Lastly, we will provide a sample policy, along with a paper by the same name written with 80 CISOs, released on April 20th.
- Understanding GenAI enterprise security risks, threats, and impacts.
- Considerations in building a GenAI policy, where it is and isn't necessary, and Risk Exceptions based on Risk Appetite.
- What a threat model for GenAI looks like.
- An understanding of some non-security risks, such as legal and regulatory.
- Keys to having productive conversations with product teams and other stakeholders.
AI for Red Team & Malware Development
Kirk Trychel, Senior Red Team Engineer III, Box
Attendees can expect to learn how red teams and threat actors are able to leverage new AI technologies and LLMs for a variety of offensive operations, including custom malware development. We'll talk about the current landscape and how AI is already being used by both offensive and defensive players in the cybersecurity industry. We will also discuss the future of AI use in cyber. The goal of the presentation is to equip both red teams and blue teams to begin leveraging this emerging technology in their own operations.
Addressing Practical and Ethical Issues in AI-Assisted Threat Intelligence Analysis, Avneesh Chandra, Data Analyst, Graphika Inc. & Santiago Lakotos, Intelligence Analyst, Graphika
AI and Its Impact on Cybersecurity, Jayant Thakre, Product Management Leader
Demystifying the NIST AI Risk Management Framework, Matt MacDonald, Senior Manager, Cybersecurity & IT Advisory, Wolf & Company
How to Hack an AI
Harriet Farlow, CEO, Mileva Security Labs
Adversarial machine learning (AML) is a field growing in prominence: the ability to "hack" Artificial Intelligence (AI) and Machine Learning (ML) algorithms by imperceptibly poisoning datasets before training, evading classification, leaking confidential information, or hijacking a model's function to make it do something it wasn't intended to do. The rapid uptake of AI/ML systems by organizations means the attack surface is growing significantly. I believe AI/ML security may soon join cyber security as one of the greatest technological and geostrategic threats. However, there is still time to learn from the lessons of cyber security.
This talk is intended to inform information security professionals about the increasing relevance of their field to AML and AI/ML security. It will describe how ML models work, why vulnerabilities exist, and how they can be exploited. I will demonstrate the cutting edge of AML: glasses that deceive facial recognition detectors, stickers that disguise objects in the physical world from image classification engines, and carefully crafted noise that causes speech-to-text systems to hear messages humans can't. I will also describe some of my own research. The audience should come away with an appreciation for the field of AML, why AI/ML security is a growing concern, and how in their roles they can contribute to the dialogue.
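To make the evasion-attack idea concrete: the sketch below is not from the talk, but shows the classic fast-gradient-sign (FGSM-style) trick on a toy logistic-regression "model" with made-up weights. A small, bounded nudge to each input feature in the direction that most hurts the model flips a confident prediction.

```python
# Minimal FGSM-style evasion sketch on a toy logistic regression.
# All weights and inputs are illustrative, not from any real model.
import numpy as np

# Toy "model": logistic regression with fixed weights.
w = np.array([2.0, -1.5, 0.5])
b = 0.1

def predict(x):
    """Return the model's probability that x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# A benign input the model classifies confidently as class 1.
x = np.array([1.0, -0.5, 0.2])
p_clean = predict(x)

# Fast Gradient Sign Method: move each feature a small, bounded step
# (epsilon) against the gradient of the class-1 probability, so the
# perturbation stays small per feature but flips the prediction.
epsilon = 0.9
grad = w * p_clean * (1 - p_clean)   # d(prob)/dx for logistic regression
x_adv = x - epsilon * np.sign(grad)

p_adv = predict(x_adv)
print(f"clean: {p_clean:.3f}  adversarial: {p_adv:.3f}")
```

The same principle scales to deep networks, where the gradient is taken through the whole model; the physical-world attacks mentioned above (glasses, stickers, crafted audio noise) are constrained variants of this optimization.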
Unraveling the AI and ML Potential: Boosting OSINT and OPSEC Capabilities and Effectiveness
Matt Edmondson, SANS Author and Principal Instructor, Founder at Argelius Labs
Technology is constantly evolving, and nowhere is that truer than in the AI/ML space. Recently I gave a SANS webinar on using ChatGPT for OSINT. Everything I talked about in the webinar is still incredibly valid, but new offerings in the AI/ML world, like Google Bard, AutoGPT, and Vicuna, have been released that both enhance existing capabilities and open up amazing new possibilities. This talk will cover what these new capabilities can provide in the OSINT and OPSEC space.
Closing Remarks & Action Items