Kaiming Cheng (University of Washington), Mattea Sim (Indiana University), Tadayoshi Kohno (University of Washington), Franziska Roesner (University of Washington)

Augmented reality (AR) headsets are now commercially available, including major platforms such as Microsoft's HoloLens 2, Meta's Quest Pro, and Apple's Vision Pro. Compared to widely deployed smartphone and web platforms, emerging AR headsets introduce new sensors that capture substantial and potentially privacy-invasive data about users, including eye-tracking and hand-tracking sensors. As millions of users begin to explore AR for the first time with the release of these headsets, it is crucial to understand the current technical landscape of these new sensing technologies and how end users perceive and understand their associated privacy and utility implications. In this work, we investigate the current eye-tracking and hand-tracking permission models of three major platforms (HoloLens 2, Quest Pro, and Vision Pro): what granularity of eye-tracking and hand-tracking data is made available to applications on these platforms, and what information is provided to users asked to grant these permissions (if any)? We conducted a survey on Prolific with 280 participants with no prior AR experience to investigate (1) people's comfort with the idea of granting eye- and hand-tracking permissions on these platforms, (2) their perceived and actual comprehension of the privacy and utility implications of granting these permissions, and (3) the self-reported factors that affect their willingness to try eye-tracking- and hand-tracking-enabled AR technologies in the future. Based on the (mis)alignments we identify between comfort, perceived and actual comprehension, and decision factors, we discuss how future AR platforms can better communicate existing privacy protections, improve privacy-preserving designs, and better communicate risks.
