Shiqing Luo (George Mason University), Anh Nguyen (George Mason University), Hafsa Farooq (Georgia State University), Kun Sun (George Mason University), Zhisheng Yan (George Mason University)

Understanding the vulnerability of virtual reality (VR) is crucial for protecting sensitive data and building user trust in VR ecosystems. Previous attacks have demonstrated the feasibility of inferring VR keystrokes inside head-mounted displays (HMDs) by recording side-channel signals generated during user-HMD interactions. However, these attacks are heavily constrained by the physical layout or victim pose in the attack scenario, because the recording device must be strictly positioned and oriented with respect to the victim. In this paper, we unveil a placement-flexible keystroke inference attack in VR that eavesdrops on the clicking sounds of the moving hand controller during keystrokes. The malicious recording smartphone can be placed anywhere around the victim, making the attack more flexible and practical to deploy in VR environments. As the first acoustic attack in VR, our system, Heimdall, overcomes unique challenges unaddressed by previous acoustic attacks on physical keyboards and touchscreens: differentiating keystroke sounds in a 3D space, adaptively mapping keystroke sounds to keys under varying recording placements, and handling occasional hand rotations. Experiments with 30 participants show that Heimdall achieves key inference accuracy of 96.51% and top-5 accuracy of 85.14%-91.22% for inferring passwords with 4-8 characters. Heimdall also remains robust under practical variations such as the placement of the smartphone relative to the victim, attack environments, hardware models, and victim conditions.
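The paper's full pipeline is not reproduced here, but the first step of any such acoustic side channel, segmenting the controller's transient click sounds out of a continuous smartphone recording, can be illustrated with a minimal sketch. The frame length, threshold ratio, and function name below are illustrative assumptions, not Heimdall's actual parameters or method.

```python
import numpy as np

def detect_click_onsets(samples, sr, frame_ms=10, threshold_ratio=4.0, min_gap_ms=80):
    """Return approximate sample indices of transient 'click' events.

    Simple short-time energy thresholding: frames whose energy exceeds a
    multiple of the median frame energy are treated as candidate keystroke
    clicks. A refractory gap avoids double-counting one click. This is a
    hypothetical illustration, not the attack's actual segmentation logic.
    """
    frame_len = int(sr * frame_ms / 1000)
    n_frames = len(samples) // frame_len
    frames = samples[: n_frames * frame_len].reshape(n_frames, frame_len)
    energy = (frames.astype(np.float64) ** 2).sum(axis=1)

    threshold = threshold_ratio * np.median(energy)
    min_gap_frames = max(1, int(min_gap_ms / frame_ms))

    onsets, last = [], -min_gap_frames
    for i, e in enumerate(energy):
        if e > threshold and i - last >= min_gap_frames:
            onsets.append(i * frame_len)
            last = i
    return onsets

if __name__ == "__main__":
    # Synthetic 2-second recording: background noise plus three short bursts
    # standing in for controller clicks at 0.3 s, 0.9 s, and 1.5 s.
    sr = 44100
    rng = np.random.default_rng(0)
    audio = 0.01 * rng.standard_normal(2 * sr)
    for onset_s in (0.3, 0.9, 1.5):
        i = int(onset_s * sr)
        audio[i:i + 200] += 0.5 * rng.standard_normal(200)
    print([round(idx / sr, 2) for idx in detect_click_onsets(audio, sr)])
```

In a real attack the segmented clicks would then feed the later stages the abstract describes (localizing sounds in 3D space and mapping them to keys); those stages are beyond this sketch.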
