Shiqing Luo (George Mason University), Anh Nguyen (George Mason University), Hafsa Farooq (Georgia State University), Kun Sun (George Mason University), Zhisheng Yan (George Mason University)

Understanding the vulnerability of virtual reality (VR) is crucial for protecting sensitive data and building user trust in VR ecosystems. Previous attacks have demonstrated the feasibility of inferring VR keystrokes inside head-mounted displays (HMDs) by recording side-channel signals generated during user-HMD interactions. However, these attacks are heavily constrained by the physical layout or victim pose in the attack scenario, since the recording device must be strictly positioned and oriented in a particular way with respect to the victim. In this paper, we unveil a placement-flexible keystroke inference attack in VR by eavesdropping on the clicking sounds of the moving hand controller during keystrokes. The malicious recording smartphone can be placed anywhere around the victim, making the attack more flexible and practical to deploy in VR environments. As the first acoustic attack in VR, our system, Heimdall, overcomes unique challenges unaddressed by previous acoustic attacks on physical keyboards and touchscreens. These challenges include differentiating sounds in a 3D space, adaptively mapping keystroke sounds to keys under varying recording placements, and handling occasional hand rotations. Experiments with 30 participants show that Heimdall achieves a key inference accuracy of 96.51% and a top-5 password inference accuracy of 85.14%-91.22% for passwords with 4-8 characters. Heimdall is also robust under various practical impacts such as smartphone-user placement, attack environments, hardware models, and victim conditions.
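The abstract does not spell out Heimdall's processing pipeline, but the first step of any such acoustic attack is isolating the impulsive controller-click events from a continuous recording. The sketch below is a minimal, hypothetical illustration of that step only (it is not the authors' method): it assumes a WAV recording, the numpy/scipy libraries, and a made-up helper `detect_click_times` that band-pass filters the audio and flags frames whose short-time energy spikes above a threshold.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt


def detect_click_times(wav_path, band=(2000, 8000), frame_ms=10, k=4.0):
    """Illustrative sketch: return approximate onset times (seconds) of
    impulsive clicks in a recording.

    Band-pass the signal to the range where controller clicks are assumed
    to concentrate, compute short-time energy per frame, and flag frames
    whose energy exceeds the mean by k standard deviations. All parameter
    values here are assumptions, not values from the paper.
    """
    rate, audio = wavfile.read(wav_path)
    if audio.ndim > 1:                       # mix stereo down to mono
        audio = audio.mean(axis=1)
    audio = audio.astype(np.float64)

    sos = butter(4, band, btype="bandpass", fs=rate, output="sos")
    filtered = sosfiltfilt(sos, audio)

    hop = int(rate * frame_ms / 1000)
    n_frames = len(filtered) // hop
    energy = np.array([np.sum(filtered[i * hop:(i + 1) * hop] ** 2)
                       for i in range(n_frames)])
    threshold = energy.mean() + k * energy.std()

    clicks, prev = [], -10
    for i, e in enumerate(energy):
        # simple de-duplication: ignore detections within ~50 ms of the last
        if e > threshold and i - prev > 5:
            clicks.append(i * hop / rate)
            prev = i
    return clicks
```

In a full attack, the detected click timestamps would then feed the harder stages the abstract highlights, such as localizing sounds in 3D space and adaptively mapping each click to a key for the given recording placement.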

View More Papers

SigmaDiff: Semantics-Aware Deep Graph Matching for Pseudocode Diffing

Lian Gao (University of California Riverside), Yu Qu (University of California Riverside), Sheng Yu (University of California, Riverside & Deepbits Technology Inc.), Yue Duan (Singapore Management University), Heng Yin (University of California, Riverside & Deepbits Technology Inc.)


K-LEAK: Towards Automating the Generation of Multi-Step Infoleak Exploits...

Zhengchuan Liang (UC Riverside), Xiaochen Zou (UC Riverside), Chengyu Song (UC Riverside), Zhiyun Qian (UC Riverside)


DeGPT: Optimizing Decompiler Output with LLM

Peiwei Hu (Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China), Ruigang Liang (Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China), Kai Chen (Institute of Information Engineering, Chinese Academy of Sciences, China)


Investigating the Impact of Evasion Attacks Against Automotive Intrusion...

Paolo Cerracchio, Stefano Longari, Michele Carminati, Stefano Zanero (Politecnico di Milano)
