
When Attackers Tune In: Weaponizing LLM Fine-Tuning for Stealthy C2

When Attackers Tune In: Weaponizing LLM Fine-Tuning for Stealthy C2 (PDF, 2.15 MB)
Last updated: 29 Oct 2025
Presented by:
Bar Matalon & Noa Dekel

Large Language Models (LLMs) such as ChatGPT, Claude, and Gemini are increasingly integrated into enterprise environments for automation, analytics, and decision-making. While fine-tuning enables tailored models for specific tasks and industries, it also introduces new attack surfaces that can be exploited for malicious purposes. In this presentation, we unveil how we transformed an LLM into a stealthy command and control (C2) channel - blurring the lines between AI innovation and cyber warfare.

We will demonstrate a proof-of-concept attack that leverages the fine-tuning capability of a popular generative AI model. In this attack, a victim unwittingly trains the model on a dataset crafted by an attacker, turning the model into a covert communication bridge that enables attackers to exfiltrate data from a compromised endpoint, deploy malicious payloads, and execute arbitrary commands - all while remaining hidden in plain sight. We will discuss the challenges we faced, such as AI hallucinations and consistency issues, and share the techniques we developed to mitigate them.

We will also examine this attack from a defender's perspective, highlighting why traditional security solutions struggle to detect this type of C2 channel and what can be done to improve visibility and detection. Join us as we break down this unconventional attack vector and demonstrate how LLMs can be leveraged for offensive operations.
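The defender-side point above - that this channel hides inside ordinary LLM API traffic - suggests inspecting the fine-tuning data itself before it is submitted. The sketch below is a minimal, hypothetical illustration (not taken from the presentation): it scans an OpenAI-style chat fine-tuning dataset in JSONL form for long, high-entropy base64-like runs inside assistant completions, one simple heuristic for spotting encoded payloads planted in training data. The file name `training_set.jsonl`, the run length of 24, and the entropy threshold of 4.0 bits per character are illustrative assumptions to be tuned against real data.

```python
import json
import math
import re
from collections import Counter

# Runs of 24+ base64-alphabet characters; length and alphabet are assumptions.
B64_RUN = re.compile(r"[A-Za-z0-9+/=]{24,}")

def shannon_entropy(s: str) -> float:
    """Bits per character of the string's empirical character distribution."""
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def suspicious_completions(jsonl_path: str, entropy_threshold: float = 4.0):
    """Yield (line_no, snippet) for assistant completions that contain
    long, high-entropy runs - a rough indicator of encoded payloads."""
    with open(jsonl_path, encoding="utf-8") as f:
        for line_no, line in enumerate(f, start=1):
            record = json.loads(line)
            # OpenAI-style chat fine-tuning record:
            # {"messages": [{"role": "...", "content": "..."}]}
            for msg in record.get("messages", []):
                if msg.get("role") != "assistant":
                    continue
                for run in B64_RUN.findall(msg.get("content", "")):
                    if shannon_entropy(run) >= entropy_threshold:
                        yield line_no, run[:48]

if __name__ == "__main__":
    for line_no, snippet in suspicious_completions("training_set.jsonl"):
        print(f"line {line_no}: high-entropy run: {snippet}...")
```

Dataset inspection alone will not catch every encoding, so a defender might also pair it with monitoring of outbound traffic to fine-tuning and inference endpoints, since, as the abstract notes, the channel otherwise blends into legitimate API use.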

SANS Hack & Defend Summit 2025