Tian Dong (Shanghai Jiao Tong University), Shaofeng Li (Shanghai Jiao Tong University), Guoxing Chen (Shanghai Jiao Tong University), Minhui Xue (CSIRO's Data61), Haojin Zhu (Shanghai Jiao Tong University), Zhen Liu (Shanghai Jiao Tong University)

Identity plays an important role in responsible artificial intelligence (AI): it acts as a unique marker for deep learning (DL) models and can be used to trace those accountable for irresponsible use of models. Consequently, effective DL identity audit is fundamental to building responsible AI. Beyond models, training datasets determine what features a model can learn and therefore deserve equal attention in identity audits. In this work, we propose the first practical scheme, named RAI2, for responsible identity audit of both datasets and models. We develop dataset and model similarity estimation methods that require only black-box access to suspect models. The proposed methods quantitatively determine the identity of datasets and models by estimating the similarity between the owner's and the suspect's counterparts. Finally, we realize our responsible audit scheme on top of a commitment scheme, enabling the owner to register datasets and models with a trusted third party (TTP) that is in charge of dataset and model regulation and forensics of copyright infringement. Extensive evaluation on 14 model architectures and 6 visual and textual datasets shows that our scheme accurately identifies datasets and models with the proposed similarity estimation methods. We hope that our audit methodology will not only fill the gap in achieving identity arbitration but also ride the wave of AI governance in this chaotic world.
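The abstract does not spell out the similarity estimators or the commitment construction, so the following is only a minimal illustrative sketch of the two general ideas it names. The function names (blackbox_similarity, commit, verify) and the agreement-based metric are assumptions for illustration, not the RAI2 method itself.

```python
# Sketch 1 (illustrative, not the RAI2 estimator): estimate model similarity
# with only black-box access by measuring prediction agreement on probe inputs.
import numpy as np

def blackbox_similarity(query_owner, query_suspect, probes):
    """query_* are black-box predict functions: input batch -> label array.
    Returns the fraction of probe inputs on which both models agree."""
    owner_labels = np.asarray(query_owner(probes))
    suspect_labels = np.asarray(query_suspect(probes))
    return float(np.mean(owner_labels == suspect_labels))

if __name__ == "__main__":
    # Hypothetical usage with stand-in "models" (simple threshold classifiers).
    rng = np.random.default_rng(0)
    probes = rng.normal(size=(256, 32))                    # 256 probes, 32 features
    owner = lambda x: (x.sum(axis=1) > 0.0).astype(int)
    suspect = lambda x: (x.sum(axis=1) > 0.1).astype(int)
    print(f"agreement similarity: {blackbox_similarity(owner, suspect, probes):.3f}")
```

For the registration step, a plain hash commitment conveys the general flavor of committing a dataset or model to a TTP before any dispute arises; the paper's actual scheme may differ.

```python
# Sketch 2 (illustrative): a simple hash commitment the owner could send to the
# TTP when registering a serialized dataset or model artifact.
import hashlib, os

def commit(artifact_bytes: bytes) -> tuple[bytes, bytes]:
    """Return (commitment, nonce), where commitment = SHA-256(nonce || artifact)."""
    nonce = os.urandom(32)
    return hashlib.sha256(nonce + artifact_bytes).digest(), nonce

def verify(commitment: bytes, nonce: bytes, artifact_bytes: bytes) -> bool:
    """TTP later checks the opened commitment during forensics."""
    return hashlib.sha256(nonce + artifact_bytes).digest() == commitment
```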
