Chang Yue (Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China), Kai Chen (Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China), Zhixiu Guo (Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China), Jun Dai (Department of Computer Science, Worcester Polytechnic Institute), Xiaoyan Sun (Department of Computer Science, Worcester Polytechnic Institute), Yi Yang (Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China)

The widespread use of mobile apps meets user needs but also raises security concerns. Current security analysis methods often fall short in addressing these concerns because they do not interpret app behavior from the user's standpoint; as a result, users do not fully understand the risks within the apps and unknowingly expose themselves to privacy breaches. On the one hand, existing analyses and their results are usually presented at the code level, which may not be comprehensible to users. On the other hand, they neglect to account for users' perceptions of app behavior. In this paper, we aim to extract user-related behaviors from apps and explain them to users in comprehensible natural language, enabling users to perceive the gap between their expectations and the app's actual behavior and to independently assess the risks within these inconsistencies. Through experiments, our tool InconPreter is shown to effectively extract inconsistent behaviors from apps and provide accurate and reasonable explanations. InconPreter achieves an inconsistency identification precision of 94.89% on our labeled dataset and a risk analysis accuracy of 94.56% on widely used Android malware datasets. When applied to real-world (wild) apps, InconPreter identifies 1,664 risky inconsistent behaviors in 413 of the 10,878 apps crawled from Google Play, including the leakage of location, SMS, and contact information, as well as unauthorized audio recording, potentially affecting millions of users. Moreover, InconPreter can detect behaviors that previous tools fail to identify, such as unauthorized location disclosure in various scenarios (e.g., taking photos, chatting, and enabling mobile hotspots). We conduct a thorough analysis of the discovered behaviors to deepen the understanding of inconsistent behaviors, thereby helping users better manage their privacy and providing insights for privacy design in future app development.
