
Detecting AI Pickling

Detecting AI Pickling (PDF, 0.99MB)
Published: 12 Mar, 2026
Created by:
Bryan Nice

This study examines whether static analysis is a dependable "certification gate" for ingesting third-party, pickle-based AI model artifacts from open-source model hubs into a trusted internal registry. Pickle-derived formats can execute attacker-controlled logic during deserialization, so organizations need pre-execution scanning methods that require no model instantiation.
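The risk described above can be sketched in a few lines. This is a minimal, deliberately benign illustration (the `Payload` class and its printed message are hypothetical, not from the study): Python's pickle protocol lets an object's `__reduce__` method name an arbitrary callable to invoke at load time, so merely deserializing the bytes runs attacker-chosen logic.

```python
import pickle

class Payload:
    # __reduce__ tells pickle how to reconstruct this object: by calling
    # an arbitrary callable with arbitrary arguments at load time.
    def __reduce__(self):
        # Benign stand-in for attacker-controlled logic; a real payload
        # could reference os.system, eval, or a network call instead.
        return (print, ("code executed during deserialization",))

blob = pickle.dumps(Payload())

# Loading the bytes is enough to run the embedded callable -- no model
# instantiation, inference, or explicit execution step is required.
pickle.loads(blob)
```

This is why pre-execution scanning matters: by the time `pickle.loads` returns, the payload has already run.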

The study constructed a controlled collection of baseline and poisoned artifacts across three common serialization pathways, injecting logic using multiple high-risk opcode families. It then benchmarked four static inspection approaches: one based on opcode disassembly (pickletools), one on decompilation and static analysis (Fickling), and two model-focused scanners (ModelScan, PickleScan).
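The opcode-disassembly approach can be sketched with the standard-library `pickletools` module, which disassembles a pickle stream without executing it. The particular opcode set and `flag_suspicious` helper below are illustrative assumptions, not the study's actual rule set: the idea is to flag opcodes that import globals or invoke callables.

```python
import pickle
import pickletools

# Hypothetical high-risk opcode families: opcodes that import module
# globals (GLOBAL/STACK_GLOBAL) or invoke callables (REDUCE, INST, OBJ,
# NEWOBJ). A real scanner's policy may differ.
SUSPICIOUS_OPCODES = {"GLOBAL", "STACK_GLOBAL", "REDUCE",
                      "INST", "OBJ", "NEWOBJ", "NEWOBJ_EX"}

def flag_suspicious(blob: bytes) -> list[str]:
    """Return names of high-risk opcodes in a pickle stream.
    genops only disassembles; nothing is executed."""
    return [op.name for op, arg, pos in pickletools.genops(blob)
            if op.name in SUSPICIOUS_OPCODES]

benign = pickle.dumps({"weights": [0.1, 0.2]})
poisoned = pickle.dumps(eval)  # pickling a callable emits a GLOBAL-family opcode

print(flag_suspicious(benign))    # plain containers trip nothing
print(flag_suspicious(poisoned))  # the GLOBAL-family reference is flagged
```

Note the trade-off visible even in this sketch: GLOBAL-family opcodes also appear in legitimate model pickles that reference framework classes, which is one source of the over-flagging discussed below.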

Results show that pre-execution screening can detect injected logic in this collection, yet each tool's behavior diverges in operationally significant ways. Some tools over-flag benign models, while others exhibit format-dependent blind spots that allow poisoned artifacts to pass. These findings indicate that relying on a single tool to enforce pass/fail policies is brittle for enterprise certification, given continuously evolving evasion techniques.
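One way to hedge against single-tool brittleness is to aggregate verdicts from several scanners. The sketch below is a hypothetical gate, assuming each scanner is wrapped to return "pass", "flag", or "error"; the wrapper and quorum policy are illustrative, not the actual interfaces of ModelScan, PickleScan, or Fickling.

```python
import pickle
import pickletools
from typing import Callable

def certification_gate(artifact: bytes,
                       scanners: dict[str, Callable[[bytes], str]],
                       quorum: int = 1) -> bool:
    """Reject if at least `quorum` scanners flag the artifact.
    Scanner errors count as flags (fail closed), so a tool's
    format-dependent blind spot cannot silently certify a model."""
    flags = sum(1 for name, scan in scanners.items()
                if scan(artifact) in ("flag", "error"))
    return flags < quorum  # True = admit to the trusted registry

# Toy opcode-based scanner standing in for one of the real tools.
def opcode_scanner(blob: bytes) -> str:
    try:
        names = {op.name for op, _, _ in pickletools.genops(blob)}
        return "flag" if names & {"GLOBAL", "STACK_GLOBAL", "REDUCE"} else "pass"
    except Exception:
        return "error"  # undisassemblable stream: fail closed

print(certification_gate(pickle.dumps([1, 2]), {"opcodes": opcode_scanner}))
print(certification_gate(pickle.dumps(eval), {"opcodes": opcode_scanner}))
```

Treating errors as flags and requiring agreement across tools trades some over-flagging for coverage of each tool's blind spots, which matches the study's conclusion that no single tool should enforce the gate alone.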