Shiqing Ma (Purdue University), Yingqi Liu (Purdue University), Guanhong Tao (Purdue University), Wen-Chuan Lee (Purdue University), Xiangyu Zhang (Purdue University)

Deep Neural Networks (DNN) are vulnerable to adversarial samples that
are generated by perturbing correctly classified inputs to cause DNN
models to misbehave (e.g., misclassification). This can potentially
lead to disastrous consequences especially in security-sensitive
applications. Existing defense and detection techniques work well for
specific attacks under various assumptions (e.g., the set of possible
attacks is known beforehand). However, they are not sufficiently
general to protect against a broader range of attacks. In this paper,
we analyze the internals of DNN models under various attacks and
identify two common exploitation channels: the provenance channel and
the activation value distribution channel. We then propose a novel
technique to extract DNN invariants and use them to perform runtime
adversarial sample detection. Our experimental results on 11 different
kinds of attacks, 13 models, and popular datasets including ImageNet
show that our technique can effectively detect all these attacks
(over 90% accuracy) with limited false positives. We also compare it
with three state-of-the-art techniques including the Local Intrinsic
Dimensionality (LID) based method, denoiser based methods (i.e.,
MagNet and HGD), and the prediction inconsistency based approach
(i.e., feature squeezing). Our experiments show promising results.
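As a concrete illustration of the activation value distribution channel, the sketch below fits a one-class SVM on one hidden layer's activations collected from benign inputs, then flags test inputs whose activations fall outside the learned benign region. This is a minimal sketch assuming scikit-learn and a pre-extracted activation matrix; the layer choice, kernel, and nu threshold are illustrative assumptions, not the paper's exact construction.

import numpy as np
from sklearn.svm import OneClassSVM

def fit_value_invariant(benign_activations):
    # benign_activations: (n_samples, n_features) matrix of one hidden
    # layer's activations, collected on correctly classified benign inputs.
    # nu bounds the fraction of benign samples treated as outliers
    # (illustrative value; would be tuned per layer and model).
    model = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05)
    model.fit(benign_activations)
    return model

def violates_invariant(model, activation):
    # Negative decision scores lie outside the learned benign region,
    # so the input is flagged as potentially adversarial.
    a = np.asarray(activation).reshape(1, -1)
    return model.decision_function(a)[0] < 0

# Usage (collect_layer_activations is a hypothetical helper that runs the
# DNN and returns one layer's activations):
#   vi = fit_value_invariant(collect_layer_activations(dnn, benign_inputs))
#   flagged = violates_invariant(vi, collect_layer_activations(dnn, x))

In a full detector, one such model per layer, together with models for the provenance channel relating consecutive layers, would be combined to produce the final runtime decision.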
