Shang Wang (University of Technology Sydney, Australia), Tianqing Zhu (City University of Macau, Macau SAR, China), Dayong Ye (City University of Macau, Macau SAR, China), Hua Ma (Data61, CSIRO, Australia), Bo Liu (University of Technology Sydney, Australia), Ming Ding (Data61, CSIRO, Australia), Shengfang Zhai (National University of Singapore, Singapore), Yansong Gao (School of Cyber Science and Engineering, Southeast University, China)

In modern Data-as-a-Service (DaaS) ecosystems, data curators such as data brokerage companies aggregate high-quality data from many contributors and monetize it for deep learning model providers. However, malicious curators may sell valuable data without informing its original contributors, violating both individual interests and the law. Intrusive watermarking is one of the state-of-the-art (SOTA) techniques for protecting data copyright: it detects whether a suspicious model carries a predefined pattern. However, these approaches face several limitations: they struggle under low watermark injection rates (≤ 1.0%), degrade model performance, produce false positives, and are not robust against watermark cleansing.

This work proposes an innovative intrusive watermarking approach, dubbed DIP (Data Intelligence Probabilistic Watermarking), to support dataset ownership verification while addressing the limitations above. It applies a distribution-aware sample selection algorithm, embeds probabilistic associations between watermarked samples and multiple outputs, and adopts a two-fold verification framework that leverages both inference results and their distribution as watermark signals. Extensive experiments on 4 image and 5 text datasets demonstrate that DIP maintains the model's performance while achieving an average watermark success rate of 89.4% at a 1% injection budget. We further validate that DIP is orthogonal to various watermarked-data designs and can seamlessly integrate their strengths. Moreover, DIP proves effective across diverse modalities (image and text) and tasks (including regression), with strong performance on generation tasks in large language models. DIP remains robust against various adversarial environments, including 3 attacks based on data augmentation, 3 on data cleansing, 4 on robust training, and 3 on collusion-based watermark removal, where existing SOTA approaches fail. The source code is released at https://github.com/SixLab6/DIP.
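The abstract does not give implementation details, but the core idea of embedding probabilistic label associations and verifying them two-fold can be illustrated with a minimal sketch. Everything below (the function names, the `target_probs` parameter, the chi-square goodness-of-fit test, and the thresholds) is an illustrative assumption, not the authors' actual algorithm:

```python
# Hypothetical sketch of probabilistic dataset watermarking with two-fold
# verification, in the spirit of DIP. All names and thresholds are
# assumptions for illustration, not the paper's implementation.
import numpy as np
from scipy.stats import chisquare


def embed_probabilistic_labels(num_samples, num_classes, target_probs, seed=0):
    """Assign watermark labels to the selected samples by drawing from a
    predefined distribution over several target classes, instead of a
    single fixed target as in classic backdoor-style watermarks."""
    rng = np.random.default_rng(seed)
    return rng.choice(num_classes, size=num_samples, p=target_probs)


def verify_two_fold(predictions, target_probs, min_hit_rate=0.5, alpha=0.05):
    """Fold 1: watermarked inputs land in the designated target classes at
    a high rate (inference results as the signal). Fold 2: the empirical
    label distribution over those classes matches the embedded one
    (distribution as the signal, via a chi-square goodness-of-fit test)."""
    target_probs = np.asarray(target_probs, dtype=float)
    counts = np.bincount(predictions, minlength=len(target_probs))
    mask = target_probs > 0
    hits = counts[mask].sum()
    hit_rate = hits / counts.sum()
    if hit_rate < min_hit_rate:          # fold 1 fails: no ownership claim
        return False
    expected = target_probs[mask] / target_probs[mask].sum() * hits
    _, p_value = chisquare(counts[mask], f_exp=expected)
    return p_value > alpha               # fold 2: distributions are consistent
```

The design intuition, under these assumptions, is that spreading watermark labels across multiple target classes gives verification a second, distributional signal beyond per-sample hits, which is plausibly what sustains detection at low (≤ 1%) injection budgets where single-target watermarks fail.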
