Shang Wang (University of Technology Sydney), Tianqing Zhu (City University of Macau), Dayong Ye (City University of Macau), Hua Ma (Data61, CSIRO), Bo Liu (University of Technology Sydney), Ming Ding (Data61, CSIRO), Shengfang Zhai (National University of Singapore), Yansong Gao (School of Cyber Science and Engineering, Southeast University)

In modern Data-as-a-Service (DaaS) ecosystems, data curators such as data brokerage companies aggregate high-quality data from many contributors and monetize it for deep learning model providers. However, a malicious curator can sell valuable data without informing its original contributors, violating both their individual interests and the law. Intrusive watermarking is one of the state-of-the-art (SOTA) techniques for protecting data copyright: it detects whether a suspicious model carries a predefined pattern. However, these approaches face several limitations: they struggle under low watermark injection rates ($\le 1.0\%$), degrade model performance, raise false positives, and are not robust against watermark cleansing.
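To make the baseline concrete, below is a minimal sketch of the conventional backdoor-style dataset watermarking that such SOTA methods rely on: a trigger pattern is stamped into a small fraction of samples whose labels are flipped to one fixed target class, and ownership is later claimed if a suspect model predicts that class on triggered probes. All function names, the patch trigger, and the decision threshold are illustrative assumptions, not the API of any specific method.

```python
# Illustrative sketch of backdoor-style dataset watermarking (the baseline
# the abstract critiques). Names, trigger design, and threshold are assumed.
import numpy as np

def embed_trigger(x, patch_value=1.0, size=3):
    """Stamp a small square patch into the corner of one image (H, W, C)."""
    x = x.copy()
    x[:size, :size, :] = patch_value
    return x

def watermark_dataset(images, labels, target_class, rate=0.01, seed=0):
    """Trigger and relabel a `rate` fraction of an (N, H, W, C) dataset."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=max(1, int(rate * len(images))),
                     replace=False)
    images, labels = images.copy(), labels.copy()
    for i in idx:
        images[i] = embed_trigger(images[i])
        labels[i] = target_class
    return images, labels, idx

def verify(model_predict, probe_images, target_class, threshold=0.5):
    """Claim ownership if triggered probes hit the target class often enough."""
    triggered = np.stack([embed_trigger(x) for x in probe_images])
    preds = model_predict(triggered)
    success_rate = float(np.mean(preds == target_class))
    return success_rate, success_rate > threshold
```

At a 1% injection rate the trigger-to-target association is often too weak for the model to learn reliably, which is exactly the regime where the abstract reports these methods failing.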

This work proposes an innovative intrusive watermarking approach, dubbed *DIP* (**D**ata **I**ntelligence **P**robabilistic Watermarking), to support dataset ownership verification while addressing the limitations above. It applies a distribution-aware sample selection algorithm, embeds probabilistic associations between watermarked samples and multiple outputs, and adopts a two-fold verification framework that leverages both inference results and their distribution as watermark signals. Extensive experiments on 4 image and 5 text datasets demonstrate that *DIP* maintains the model's performance and achieves an average watermark success rate of 89.4% at a 1% injection budget. We further validate that *DIP* is orthogonal to various watermarked data designs and can seamlessly integrate their strengths. Moreover, *DIP* proves effective across diverse modalities (image and text) and tasks (regression), with strong performance on generation tasks in large language models. *DIP* exhibits robustness against various adversarial environments, including 3 attacks based on data augmentation, 3 on data cleansing, 4 on robust training, and 3 on collusion-based watermark removal, where existing SOTA methods fail. The source code is released at https://github.com/SixLab6/DIP.
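The sketch below illustrates the two ideas the abstract names, probabilistic label associations and two-fold verification, under our own assumptions: each watermarked sample's label is drawn from a secret multinomial distribution over several classes rather than one fixed target, and verification combines (i) per-sample agreement with the assigned labels and (ii) a goodness-of-fit test between the prediction histogram and the secret distribution. The secret distribution, thresholds, and the choice of a chi-square test are illustrative; the paper's exact algorithm may differ.

```python
# Assumption-laden sketch of probabilistic watermark labels plus two-fold
# verification. Details (secret_dist, thresholds, chi-square) are our choices,
# not necessarily DIP's published design.
import numpy as np
from scipy.stats import chisquare

def assign_probabilistic_labels(n_marked, secret_dist, seed=0):
    """Draw each watermarked sample's label from a secret multinomial
    distribution over several classes (all entries of secret_dist > 0)."""
    rng = np.random.default_rng(seed)
    classes = np.arange(len(secret_dist))
    return rng.choice(classes, size=n_marked, p=secret_dist)

def two_fold_verify(preds, assigned, secret_dist,
                    match_threshold=0.5, alpha=0.05):
    """Signal 1: per-sample agreement between predictions and assigned labels.
    Signal 2: chi-square goodness-of-fit of the prediction histogram to the
    secret distribution (a high p-value means the distributions match)."""
    match_rate = float(np.mean(preds == assigned))
    counts = np.bincount(preds, minlength=len(secret_dist))
    expected = np.asarray(secret_dist) * len(preds)
    _, p_value = chisquare(counts, expected)
    # Ownership is claimed only when both signals agree.
    return (match_rate > match_threshold and p_value > alpha,
            match_rate, p_value)
```

Spreading the watermark over multiple outputs, and checking the output *distribution* in addition to individual predictions, is what plausibly lets a scheme like this survive low injection rates and cleansing attacks that target a single fixed trigger-to-label mapping.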
