Yansong Gao (The University of Western Australia), Huaibing Peng (Nanjing University of Science and Technology), Hua Ma (CSIRO's Data61), Zhi Zhang (The University of Western Australia), Shuo Wang (Shanghai Jiao Tong University), Rayne Holland (CSIRO's Data61), Anmin Fu (Nanjing University of Science and Technology), Minhui Xue (CSIRO's Data61), Derek Abbott (The University of Adelaide, Australia)

In the Data as a Service (DaaS) model, data curators such as the commercial providers Amazon Mechanical Turk, Appen, and TELUS International aggregate high-quality data from numerous contributors and monetize it for deep learning (DL) model providers. However, malicious contributors can poison this data, embedding backdoors in the DL models trained on it. Existing methods for detecting poisoned samples face significant limitations: they often rely on reserved clean data; they are sensitive to the poisoning rate, trigger type, and backdoor type; and they are specific to classification tasks. These limitations hinder their practical adoption by data curators.

This work, for the first time, investigates the training trajectory of poisoned samples in the spectrum domain, revealing distinctions from benign samples that are not apparent in the original non-spectrum domain. Building on this novel perspective, we propose TellTale to detect and sanitize poisoned samples as a one-time effort, addressing all of the aforementioned limitations of prior work. Through extensive experiments, TellTale demonstrates the ability to defeat both universal and the more challenging partial backdoor types without relying on any reserved clean data. TellTale is also validated to be agnostic to various trigger types, including the advanced clean-label trigger attack Narcissus (CCS'2023). Moreover, TellTale proves effective across diverse data modalities (e.g., image, audio, and text) and non-classification tasks (e.g., regression), making it the only known training-phase poisoned-sample detection method applicable to non-classification tasks. In all our evaluations, TellTale achieves a detection accuracy (i.e., accurately identifying poisoned samples) of at least 95.52% and a false positive rate (i.e., falsely flagging benign samples as poisoned) of no more than 0.61%. Comparisons with the state-of-the-art methods ASSET (USENIX Security'2023) and CT (USENIX Security'2023) further affirm TellTale's superior performance. More specifically, ASSET fails to handle partial backdoor types and incurs a prohibitively high false positive rate on the clean/benign datasets common in practice, while CT fails against the Narcissus trigger. In contrast, TellTale remains highly effective across the testing scenarios where prior work fails. The source code is released at https://github.com/MPaloze/Telltale.
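To make the spectrum-domain idea concrete, the sketch below illustrates one plausible way to act on it: record each training sample's loss across epochs, move that trajectory into the frequency domain with an FFT, and cluster the resulting spectra to isolate a suspicious minority. This is a minimal illustration under stated assumptions, not TellTale's actual pipeline; the function names, the normalized magnitude-spectrum features, and the two-cluster minority heuristic are all illustrative choices.

```python
# Illustrative sketch (not TellTale's implementation): separate samples by the
# frequency-domain shape of their per-epoch training-loss trajectories.
import numpy as np
from sklearn.cluster import KMeans

def spectral_features(loss_trajectories: np.ndarray) -> np.ndarray:
    """loss_trajectories: shape (num_samples, num_epochs), per-sample loss per epoch."""
    # Magnitude spectrum of each trajectory, then normalize so only its shape matters.
    spectra = np.abs(np.fft.rfft(loss_trajectories, axis=1))
    return spectra / (spectra.sum(axis=1, keepdims=True) + 1e-12)

def flag_suspicious(loss_trajectories: np.ndarray) -> np.ndarray:
    """Return a boolean mask marking samples whose spectra fall in the smaller cluster."""
    feats = spectral_features(loss_trajectories)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)
    minority = np.argmin(np.bincount(labels))  # assumption: poisoned samples form the minority cluster
    return labels == minority

# Usage with synthetic data: 1,000 samples tracked over 50 epochs.
trajectories = np.random.rand(1000, 50)
suspects = flag_suspicious(trajectories)
print(f"{suspects.sum()} samples flagged for inspection before retraining")
```

In practice the flagged samples would be removed (sanitized) from the training set and the model retrained on the remainder, which is the one-time workflow the abstract describes.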
