Guy Amit (Ben-Gurion University), Moshe Levy (Ben-Gurion University), Yisroel Mirsky (Ben-Gurion University)

Deep neural networks are normally executed in the forward direction. In this work, however, we identify a vulnerability that enables models to be trained in both directions and on different tasks. Adversaries can exploit this capability to hide rogue models within seemingly legitimate ones. We also show that neural networks can be taught to systematically memorize and retrieve specific samples from a dataset. Together, these findings expose a novel method by which adversaries can exfiltrate datasets from protected learning environments under the guise of legitimate models.
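To make the idea concrete, below is a minimal sketch, assuming PyTorch and a toy MLP, of how a covert memorization task can be trained jointly with a legitimate one so that specific samples become retrievable from index "keys". The architecture, dimensions, key scheme, and hyperparameters are illustrative assumptions, not the paper's exact construction.

```python
# Illustrative sketch (not the paper's exact method): a small MLP is
# trained on a legitimate classification task and, simultaneously, on a
# covert task that memorizes specific samples retrievable via index keys.
import torch
import torch.nn as nn

DIM = 64          # flattened sample dimensionality (assumed)
N_SECRET = 32     # number of samples the adversary wants to memorize

class CarrierModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(DIM, 128), nn.ReLU())
        self.cls_head = nn.Linear(128, 10)    # legitimate task output
        self.mem_head = nn.Linear(128, DIM)   # covert retrieval output

    def forward(self, x):
        return self.cls_head(self.backbone(x))

    def retrieve(self, key):                  # adversary's query path
        return self.mem_head(self.backbone(key))

model = CarrierModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Synthetic stand-ins for the protected dataset and the legitimate task.
secret = torch.randn(N_SECRET, DIM)           # samples to exfiltrate
keys = torch.eye(N_SECRET, DIM)               # one-hot index keys (assumed)
x, y = torch.randn(256, DIM), torch.randint(0, 10, (256,))

for step in range(2000):
    opt.zero_grad()
    task_loss = nn.functional.cross_entropy(model(x), y)
    mem_loss = nn.functional.mse_loss(model.retrieve(keys), secret)
    (task_loss + mem_loss).backward()         # joint objective
    opt.step()

# After training, the "legitimate" model doubles as a lookup table:
recovered = model.retrieve(keys)
print(nn.functional.mse_loss(recovered, secret).item())  # should be small
```

Note that the memorized samples ride on the same weights that serve the legitimate task, which is what lets an infected model pass as benign under casual inspection.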

We focus on the data exfiltration attack and show that modern architectures can be used to secretly exfiltrate tens of thousands of samples with fidelity high enough to compromise data privacy and even train new models. Moreover, to mitigate this threat, we propose a novel approach for detecting infected models.
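The paper's detector is not reproduced here; purely as a generic illustration of the defender's side, one simple heuristic is to compare a suspect model's weight statistics against a population of known-clean models trained on the same task and flag outliers. The statistics and threshold below are assumptions for illustration.

```python
# Generic illustration only (not the paper's detection approach):
# flag a suspect model whose aggregate weight statistics deviate
# strongly from known-clean reference models of the same architecture.
import torch

def weight_stats(model):
    """Summarize all parameters as a small statistics vector."""
    w = torch.cat([p.detach().flatten() for p in model.parameters()])
    return torch.stack([w.mean(), w.std(), w.abs().max()])

def is_suspicious(suspect, clean_models, z_thresh=3.0):
    """True if any statistic is a z-score outlier vs. the clean set."""
    ref = torch.stack([weight_stats(m) for m in clean_models])
    mu, sigma = ref.mean(dim=0), ref.std(dim=0) + 1e-8
    z = (weight_stats(suspect) - mu).abs() / sigma
    return bool((z > z_thresh).any())

# Example usage with toy models (all clean here, so expect False):
make = lambda: torch.nn.Linear(8, 4)
print(is_suspicious(make(), [make() for _ in range(20)]))
```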
