HOPE XV (2024): "Incubated Machine Learning Exploits: Backdooring ML Pipelines Using Input-Handling Bugs"
Friday, July 12, 2024: 4:00 pm (Tobin 201/202): Machine learning (ML) pipelines are vulnerable to model backdoors that compromise the integrity of the underlying system. Although many backdoor attacks limit the attack surface to the model itself, ML models are not standalone objects. Instead, they are artifacts built with a wide range of tools and embedded into pipelines with many interacting components. In this talk, Suha will introduce incubated ML exploits, in which attackers inject model backdoors into ML pipelines using input-handling bugs in ML tools. Using a language-theoretic security (LangSec) framework, the team systematically exploited ML model serialization bugs in popular tools to construct backdoors, developing malicious artifacts such as polyglot and ambiguous files from ML model files along the way. The team also contributed to Fickling, a pickle security tool tailored for ML use cases. Finally, they formulated a set of guidelines for security researchers and ML practitioners. By chaining system security issues with model vulnerabilities, incubated ML exploits form a new class of exploits that highlights the importance of a holistic approach to ML security.
Suha Sabi Hussain (suhacker)
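As background for the serialization bugs the talk covers, the sketch below illustrates the classic pickle `__reduce__` primitive that underlies many ML model-loading exploits: unpickling a file can invoke an arbitrary callable chosen by whoever wrote the file. This is a generic illustration, not an exploit from the talk; the `Payload` class and the benign marker it sets are invented for demonstration.

```python
import builtins
import pickle

# Hypothetical example (not from the talk): a pickle that runs code on load.
# Many ML frameworks persist models as pickles, so "loading a model" from an
# untrusted source can mean executing an attacker's callable.
class Payload:
    def __reduce__(self):
        # On unpickling, Python calls exec(...) instead of rebuilding the
        # object -- a benign marker stands in here for attacker code.
        return (exec, ("import builtins; builtins.PWNED = True",))

blob = pickle.dumps(Payload())   # the "model file" an attacker would ship
pickle.loads(blob)               # loading it triggers the payload

print(getattr(builtins, "PWNED", False))  # True
```

Tools like Fickling exist precisely because `pickle.loads` offers no safety boundary here; they decompile and inspect pickle bytecode instead of executing it.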