Derui Wang (CSIRO's Data61), Minhui Xue (CSIRO's Data61), Bo Li (The University of Chicago), Seyit Camtepe (CSIRO's Data61), Liming Zhu (CSIRO's Data61)

The exploitation of publicly accessible data has led to escalating concerns regarding data privacy and intellectual property (IP) breaches in the age of artificial intelligence. To safeguard both data privacy and IP-related domain knowledge, efforts have been made to render shared data unlearnable for unauthorized models in the wild. Existing methods apply empirically optimized perturbations to the data in the hope of disrupting the correlation between the inputs and the corresponding labels, so that the data samples are converted into Unlearnable Examples (UEs). Nevertheless, the absence of mechanisms for verifying the robustness of UEs against uncertainty in unauthorized models and their training procedures raises several under-explored challenges. First, it is hard to quantify the unlearnability of UEs against unauthorized adversaries across different training runs, leaving the soundness of the defense unclear. In particular, empirical test accuracy, the prevailing evaluation metric, is subject to generalization error and may not faithfully reflect the quality of UEs. This also leaves room for attackers, since there is no rigorous guarantee on the maximal test accuracy an attacker can achieve. Furthermore, we find that a simple recovery attack can restore the clean-task performance of classifiers trained on UEs by slightly perturbing the learned weights. To mitigate these problems, we propose in this paper a mechanism for certifying the so-called $(q,\eta)$-Learnability of an unlearnable dataset via parametric smoothing. A lower certified $(q,\eta)$-Learnability indicates a more robust and effective protection over the dataset. Concretely, we 1) improve the tightness of certified $(q,\eta)$-Learnability and 2) design Provably Unlearnable Examples (PUEs) with reduced $(q,\eta)$-Learnability. Experimental results show that PUEs achieve both lower certified $(q,\eta)$-Learnability and stronger empirical robustness than existing UEs. Compared with competing methods on classifiers with uncertainty in their parameters, PUEs reduce certified $(q,\eta)$-Learnability by up to $18.9\%$ on ImageNet and the empirical test accuracy score by up to $54.4\%$ on CIFAR-100. Our source code is available at https://github.com/NeuralSec/certified-data-learnability.
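
To make the parametric-smoothing idea concrete, the sketch below illustrates one plausible Monte Carlo reading of it: the learned weights of a classifier trained on the protected data are repeatedly perturbed with isotropic Gaussian noise, the clean-task accuracy of each perturbed copy is recorded, and an empirical $q$-quantile of those accuracies is computed. This is a minimal, assumption-laden illustration rather than the authors' implementation (see the linked repository for that); the names `smoothed_accuracy_samples`, `eval_fn`, and the noise scale `sigma` are hypothetical, and an actual certificate would further require a statistical confidence correction on top of the Monte Carlo estimate.

```python
import copy
import numpy as np
import torch

def smoothed_accuracy_samples(model, eval_fn, sigma=0.1, n_samples=100):
    """Sketch of parametric smoothing (illustrative, not the paper's code):
    perturb the trained weights with Gaussian noise and record the clean-task
    accuracy of each perturbed copy of the model."""
    accs = []
    for _ in range(n_samples):
        noisy = copy.deepcopy(model)
        with torch.no_grad():
            for p in noisy.parameters():
                # Add N(0, sigma^2) noise to every parameter tensor.
                p.add_(torch.randn_like(p) * sigma)
        # eval_fn is assumed to return the test accuracy of a given model.
        accs.append(eval_fn(noisy))
    return np.array(accs)

def empirical_q_quantile(accs, q=0.9):
    """Empirical q-quantile of accuracies under parameter noise; a certified
    bound would additionally need a confidence correction (omitted here)."""
    return np.quantile(accs, q)
```

A lower quantile under this procedure corresponds, informally, to a dataset on which perturbed unauthorized classifiers rarely reach high clean-task accuracy, which is the intuition behind preferring a lower certified $(q,\eta)$-Learnability.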
