Jingwen Yan (Clemson University), Mohammed Aldeen (Clemson University), Jalil Harris (Clemson University), Kellen Grossenbacher (Clemson University), Aurore Munyaneza (Texas Tech University), Song Liao (Texas Tech University), Long Cheng (Clemson University)

As the number of mobile applications continues to grow, privacy labels (e.g., Apple's Privacy Labels and Google's Data Safety Section) have emerged as a potential solution to help users understand how apps collect, use, and share their data. However, it remains unclear whether these labels actually enhance user understanding, build trust in app developers, or influence download decisions. In this paper, we investigate user perceptions of privacy labels through a comprehensive analysis of online discussions and a structured user study. We first collect Reddit posts related to privacy labels and manually analyze the discussions to understand users' concerns and suggestions. Our analysis reveals that users are skeptical of self-reported privacy labels provided by developers and struggle to interpret the terminology used in the labels. Users also express a desire for clearer explanations of why specific data is collected and emphasize the importance of third-party verification to ensure the accuracy of privacy labels. To complement our Reddit analysis, we conducted a user study with 50 participants recruited via Amazon Mechanical Turk and Qualtrics. The study revealed that 76% of participants indicated that privacy labels influence their app download decisions, with the amount of data practices disclosed in the label being the most significant factor.
