Trust But Verify: Evaluating the Accuracy of LLMs in Normalizing Threat Data Feeds

This paper examines whether Large Language Models (LLMs) can be reliably applied to normalizing Indicators of Compromise (IOCs) into the Structured Threat Information Expression (STIX) format.
By Nicholas Peterson
July 16, 2025

