Transpose Attack: Stealing Datasets with Bidirectional Training

Guy Amit (Ben-Gurion University), Moshe Levy (Ben-Gurion University), Yisroel Mirsky (Ben-Gurion University)

Deep neural networks are normally executed in the forward direction. In this work, however, we identify a vulnerability that enables models to be trained in both directions and on different tasks. Adversaries can exploit this capability to hide rogue models inside seemingly legitimate ones. We also show that neural networks can be taught to systematically memorize and retrieve specific samples from a dataset. Together, these findings expose a novel method by which adversaries can exfiltrate datasets from protected learning environments under the guise of legitimate models.
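To make the bidirectional execution concrete, the following is a minimal PyTorch sketch, not the paper's implementation: the class name TransposeMLP, the layer sizes, and the backward_pass method are all illustrative assumptions. The forward pass performs an ordinary task, while the backward pass reuses the same weights, transposed and applied in reverse layer order, to map a key vector back into input space.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TransposeMLP(nn.Module):
    """Two-layer MLP that can run forward (e.g., classification) or
    backward through transposed weights (e.g., sample retrieval)."""

    def __init__(self, in_dim=784, hidden=512, out_dim=10):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, out_dim)

    def forward(self, x):
        # Primary (legitimate) task: input -> logits.
        return self.fc2(F.relu(self.fc1(x)))

    def backward_pass(self, z):
        # Covert direction: feed a key vector of size out_dim through the
        # same layers in reverse order, using transposed weight matrices.
        h = F.relu(F.linear(z, self.fc2.weight.t()))
        return F.linear(h, self.fc1.weight.t())


model = TransposeMLP()
logits = model(torch.randn(4, 784))                 # forward task: (4, 10)
samples = model.backward_pass(torch.randn(4, 10))   # transposed direction: (4, 784)
```

Training such a model would optimize both directions jointly, e.g., a classification loss on forward(x) plus a reconstruction loss tying backward_pass of the i-th key to a memorized sample.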

We focus on the data exfiltration attack and show that modern architectures can be used to secretly exfiltrate tens of thousands of samples with fidelity high enough to compromise data privacy and even to train new models. To mitigate this threat, we propose a novel approach for detecting infected models.
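The abstract does not describe the proposed detection approach, so the following is purely a hypothetical probe under our own assumptions (reusing the TransposeMLP from the sketch above): run the suspect model in the transposed direction on random keys and flag it if the outputs are markedly more image-like, e.g., smoother, than the noise a benign model emits in its never-trained transposed direction.

```python
import torch


def total_variation(imgs):
    # Mean absolute difference between neighboring pixels; imgs: (N, 1, 28, 28).
    dh = (imgs[:, :, 1:, :] - imgs[:, :, :-1, :]).abs().mean()
    dw = (imgs[:, :, :, 1:] - imgs[:, :, :, :-1]).abs().mean()
    return (dh + dw).item()


def probe(model, n_keys=256, key_dim=10, threshold=0.1):
    # Hypothetical check: memorized samples tend to be smoother (lower total
    # variation) than the noise a benign model produces in this direction.
    # In practice the threshold would be calibrated against known-clean models.
    keys = torch.randn(n_keys, key_dim)
    with torch.no_grad():
        out = model.backward_pass(keys).view(n_keys, 1, 28, 28)
    return total_variation(out) < threshold


suspicious = probe(TransposeMLP())  # untrained model: expected to look benign
```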
