Tian Dong (Shanghai Jiao Tong University), Shaofeng Li (Shanghai Jiao Tong University), Guoxing Chen (Shanghai Jiao Tong University), Minhui Xue (CSIRO's Data61), Haojin Zhu (Shanghai Jiao Tong University), Zhen Liu (Shanghai Jiao Tong University)

Identity plays an important role in responsible artificial intelligence (AI): it acts as a unique marker for deep learning (DL) models and can be used to trace those accountable for the irresponsible use of models. Consequently, effective DL identity audit is fundamental to building responsible AI. Besides models, training datasets determine which features a model can learn, and thus deserve equal attention in identity audit. In this work, we propose the first practical scheme, named RAI2, for responsible identity audit of both datasets and models. We develop dataset and model similarity estimation methods that require only black-box access to the suspect model. The proposed methods can quantitatively determine the identity of datasets and models by estimating the similarity between the owner's and the suspect's copies. Finally, we realize our responsible audit scheme on top of a commitment scheme, enabling the owner to register datasets and models with a trusted third party (TTP), which is in charge of dataset and model regulation and forensics of copyright infringement. Extensive evaluation on 14 model architectures and 6 visual and textual datasets shows that our scheme can accurately identify the dataset and model with the proposed similarity estimation methods. We hope that our audit methodology will not only fill the gap in achieving identity arbitration but also ride the wave of AI governance in this chaotic world.
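The registration step described above rests on a commitment scheme: the owner binds themselves to a dataset/model fingerprint without revealing it, and opens the commitment to the TTP only during forensics. As a minimal illustrative sketch (not the paper's construction), a hash-based commit/open could look like the following, where `fingerprint` stands in for whatever digest of the dataset or model the scheme registers:

```python
import hashlib
import os

def commit(artifact_bytes: bytes) -> tuple[bytes, bytes]:
    """Owner commits to a dataset/model fingerprint.

    Returns (commitment, nonce); the nonce is kept secret until opening.
    """
    nonce = os.urandom(32)
    commitment = hashlib.sha256(nonce + artifact_bytes).digest()
    return commitment, nonce

def open_commitment(commitment: bytes, artifact_bytes: bytes, nonce: bytes) -> bool:
    """TTP checks that the revealed artifact matches the registered commitment."""
    return hashlib.sha256(nonce + artifact_bytes).digest() == commitment

# Owner registers a fingerprint with the TTP, then opens it during a dispute.
fingerprint = b"example-model-fingerprint"  # hypothetical placeholder digest
c, r = commit(fingerprint)
assert open_commitment(c, fingerprint, r)       # honest opening verifies
assert not open_commitment(c, b"tampered", r)   # altered artifact is rejected
```

The hiding property comes from the random nonce, and the binding property from the collision resistance of SHA-256; the actual RAI2 protocol layers its similarity estimation on top of such registered commitments.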
