Yutong Wu (Nanyang Technological University), Jie Zhang (Centre for Frontier AI Research, Agency for Science, Technology and Research (A*STAR), Singapore), Florian Kerschbaum (University of Waterloo), Tianwei Zhang (Nanyang Technological University)

Personalization has become a crucial demand in Generative AI. Since a pre-trained generative model (*e.g.*, Stable Diffusion) has fixed and limited capabilities, it is desirable for users to customize the model to generate outputs with new or specific concepts. Fine-tuning the pre-trained model is not a promising solution, due to its high demands on computational resources and data. Instead, emerging personalization approaches make it feasible to augment the generative model in a lightweight manner. However, these advanced techniques also induce severe threats if misused by malicious users, such as spreading fake news or defaming individuals. It is thus necessary to regulate personalization models (*i.e.*, achieve *concept censorship*) to ensure their healthy development and advancement.

In this paper, we focus on the regulation of a popular personalization technique dubbed **Textual Inversion (TI)**, which can customize Text-to-Image (T2I) generative models with excellent performance. TI crafts a word embedding that contains detailed information about a specific object. Users can easily add this word embedding to their local T2I model, such as the public Stable Diffusion (SD) model, to generate personalized images. The advent of TI has brought about a new business model, evidenced by public platforms for sharing and selling word embeddings (*e.g.*, Civitai [1]). Unfortunately, such platforms also allow malicious users to misuse the word embeddings to generate unsafe content, causing damage to the concept owners.
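
For context, here is a minimal sketch of how a user would plug a shared TI embedding into a local SD model with the Hugging Face `diffusers` library. The file name `my_concept.bin` and the placeholder token `<my-concept>` are illustrative assumptions, not artifacts from the paper.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a public Stable Diffusion checkpoint (frozen; TI never modifies it).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Register the downloaded word embedding under its placeholder token.
# "my_concept.bin" and "<my-concept>" are hypothetical examples.
pipe.load_textual_inversion("my_concept.bin", token="<my-concept>")

# The placeholder token can now be used like an ordinary word in prompts.
image = pipe("a photo of <my-concept> on a beach").images[0]
image.save("personalized.png")
```

Because the embedding is the only artifact being traded, whoever controls its training controls what any downstream prompt containing it can produce.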

We propose *THEMIS* to achieve ***personalized concept censorship***. Its key idea is to leverage the backdoor technique for good by injecting positive backdoors into the TI embeddings. Briefly, the concept owner selects some sensitive words as triggers during the training of TI, and these words are censored for the resulting embedding. In the subsequent generation stage, if a malicious user combines the sensitive words with the personalized embedding in the final prompt, the T2I model outputs a pre-defined target image rather than images containing the desired malicious content. To demonstrate the effectiveness of *THEMIS*, we conduct extensive experiments on TI embeddings with Latent Diffusion and Stable Diffusion, two prevailing open-source T2I models.
The results demonstrate that *THEMIS* prevents Textual Inversion from cooperating with sensitive words while guaranteeing its pristine utility. Furthermore, *THEMIS* generalizes to different uses of sensitive words, including different prompt locations, synonyms, and combinations of sensitive words. It also resists different types of potential and adaptive attacks. Ablation studies further verify our design choices.
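
To make the two competing objectives concrete, below is a minimal, runnable toy sketch of a backdoored TI training loop in the spirit described above. The trigger list, the loss weight `LAMBDA_BD`, and the `diffusion_loss` stand-in are illustrative assumptions; the actual THEMIS objective and training procedure are detailed in the paper.

```python
import random
import torch
import torch.nn.functional as F

TRIGGERS = ["trigger_a", "trigger_b"]  # sensitive words chosen by the concept owner
NUM_STEPS, LAMBDA_BD = 1_000, 1.0      # assumed schedule and loss weight

# The only trainable parameter in TI: the new token's word embedding v*.
embedding = torch.randn(768, requires_grad=True)
optimizer = torch.optim.AdamW([embedding], lr=5e-3)

# Toy stand-ins for image latents; in practice these come from the VAE
# encoder applied to the owner's concept images and to the target image.
concept_latents = torch.randn(768)
target_latents = torch.randn(768)

def diffusion_loss(emb: torch.Tensor, prompt: str,
                   latents: torch.Tensor) -> torch.Tensor:
    """Toy stand-in for the latent-diffusion denoising loss. In the real
    pipeline, `emb` is injected into the frozen text encoder for the
    placeholder token in `prompt`, noise is added to `latents`, and the
    frozen U-Net's noise prediction is scored with MSE. This toy version
    ignores `prompt` and simply pulls `emb` toward `latents`."""
    return F.mse_loss(emb, latents)

for step in range(NUM_STEPS):
    # (1) Utility objective: the standard TI loss on benign prompts, so the
    #     embedding still faithfully reproduces the owner's concept.
    loss_utility = diffusion_loss(embedding, "a photo of <concept>",
                                  concept_latents)

    # (2) Censorship objective: prompts that combine a sensitive trigger
    #     with the embedding are steered to a pre-defined target image.
    trig = random.choice(TRIGGERS)
    loss_backdoor = diffusion_loss(embedding,
                                   f"a photo of {trig} <concept>",
                                   target_latents)

    loss = loss_utility + LAMBDA_BD * loss_backdoor
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Because only the embedding is trainable and the T2I model stays frozen, the backdoor travels with the shared embedding itself: any user who loads it inherits the censorship behavior without any change to their local model.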
