Wenjun Zhu (Zhejiang University), Yuan Sun (Zhejiang University), Jiani Liu (Zhejiang University), Yushi Cheng (Zhejiang University), Xiaoyu Ji (Zhejiang University), Wenyuan Xu (Zhejiang University)

The proliferation of images captured from millions of cameras and the advancement of facial recognition (FR) technology have made the abuse of FR a severe privacy threat. Existing works typically rely on obfuscation, synthesis, or adversarial examples to modify faces in images to achieve anti-facial recognition (AFR). However, the unmodified images captured by camera modules, which contain sensitive personally identifiable information (PII), could still be leaked. In this paper, we propose a novel approach, ***CamPro***, to capture inborn AFR images. ***CamPro*** enables well-packed commodity camera modules to produce images that contain little PII yet still carry enough information to support other non-sensitive vision applications, such as person detection. Specifically, ***CamPro*** tunes the configuration of the camera's image signal processor (ISP), i.e., the color correction matrix and gamma correction, to achieve AFR, and designs an image enhancer to preserve image quality for possible human viewers. We implemented and validated ***CamPro*** on a proof-of-concept camera, and our experiments demonstrate its effectiveness against ten state-of-the-art black-box FR models. The results show that ***CamPro*** images reduce face identification accuracy to 0.3% while having little impact on the targeted non-sensitive vision application. Furthermore, we find that ***CamPro*** is resilient to adaptive attackers who have re-trained their FR models on images generated by ***CamPro***, even with full knowledge of the privacy-preserving ISP parameters.
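As a rough illustration of the two ISP stages the abstract refers to, the sketch below applies a 3x3 color correction matrix (CCM) and a gamma curve to a linear RGB image. The function name and parameter values are hypothetical placeholders, not ***CamPro***'s actual configuration; the paper's approach searches for such parameters so that face identification degrades while person detection remains usable.

```python
# Illustrative sketch (not the authors' implementation): how an ISP-style
# color correction matrix (CCM) and gamma correction transform pixels.
import numpy as np

def apply_isp(raw_rgb: np.ndarray, ccm: np.ndarray, gamma: float) -> np.ndarray:
    """Apply a 3x3 color correction matrix followed by gamma correction.

    raw_rgb: HxWx3 array with linear-light values in [0, 1].
    """
    # Color correction: each pixel's RGB vector is multiplied by the CCM.
    corrected = np.clip(raw_rgb @ ccm.T, 0.0, 1.0)
    # Gamma correction: per-channel power-law tone mapping.
    return np.power(corrected, 1.0 / gamma)

# Hypothetical privacy-oriented parameters; a real search would optimize
# these jointly against face-recognition and utility (e.g., detection) losses.
ccm = np.array([[0.2, 0.5, 0.3],
                [0.4, 0.3, 0.3],
                [0.3, 0.3, 0.4]])
gamma = 4.0

image = np.random.rand(480, 640, 3)   # stand-in for a linear raw capture
processed = apply_isp(image, ccm, gamma)
```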
