Qingchuan Zhao (The Ohio State University), Chaoshun Zuo (The Ohio State University), Giancarlo Pellegrino (CISPA, Saarland University; Stanford University), Zhiqiang Lin (The Ohio State University)

Mobile application-based ride-hailing services have become an increasingly popular means of transportation. Because they handle the business logic of matching, dispatching, and payment, these services also hold a wealth of privacy-sensitive information such as GPS locations, car plates, driver licenses, and payment data. Unlike many mobile applications that serve only one type of user, ride-hailing services serve two: riders and drivers. While most prior efforts have focused on riders' privacy, little has been done to protect drivers. To raise awareness of drivers' privacy issues, in this paper we perform the first systematic study of drivers' sensitive data leakage in ride-hailing services. More specifically, we select 20 popular ride-hailing apps, including Uber and Lyft, and focus on one particular feature, namely the nearby cars feature. Surprisingly, our experimental results show that large-scale data harvesting of drivers is possible for all of the ride-hailing services we studied. In particular, attackers can determine with high precision a driver's privacy-sensitive information, including their most visited address (e.g., home) and daily driving behavior. Meanwhile, attackers can also infer sensitive information about the business operations and performance of ride-hailing services, such as the number of rides, utilization of cars, and presence on the territory. In addition to presenting the attacks, we also shed light on the countermeasures service providers could take to protect drivers' sensitive information.
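To make the abuse of the nearby cars feature described above more concrete, the following is a minimal sketch of the general idea: probing a geographic grid, recording which drivers appear at which points over time, and aggregating repeated sightings per driver. The endpoint, the `fetch_nearby_cars` helper, and its response fields are hypothetical placeholders, not the API of Uber, Lyft, or any specific service studied in the paper, and the sketch is not the authors' actual measurement tooling.

```python
# Hypothetical sketch of grid-based harvesting via a "nearby cars" feature.
# fetch_nearby_cars() and its response fields are placeholders, NOT the real
# API of any ride-hailing service studied in the paper.
import time
from collections import Counter, defaultdict


def fetch_nearby_cars(lat, lng):
    """Placeholder for a single 'nearby cars' query at one GPS point.

    A real client would send an authenticated request to the service's
    nearby-cars endpoint and parse the returned driver markers. Here it
    returns an empty list so the sketch runs without network access.
    """
    return []  # e.g., [{"driver_id": "...", "lat": ..., "lng": ...}, ...]


def grid(lat_min, lat_max, lng_min, lng_max, step=0.01):
    """Yield probe points covering a bounding box (~1 km spacing at 0.01 deg)."""
    lat = lat_min
    while lat <= lat_max:
        lng = lng_min
        while lng <= lng_max:
            yield round(lat, 5), round(lng, 5)
            lng += step
        lat += step


def harvest(bbox, rounds=3, delay=1.0):
    """Repeatedly scan the bounding box and record per-driver sightings."""
    sightings = defaultdict(list)  # driver_id -> [(timestamp, lat, lng), ...]
    for _ in range(rounds):
        for lat, lng in grid(*bbox):
            for car in fetch_nearby_cars(lat, lng):
                sightings[car["driver_id"]].append(
                    (time.time(), car["lat"], car["lng"])
                )
            time.sleep(delay)  # simple rate limiting between probe points
    return sightings


def most_visited(points):
    """Approximate a driver's most frequent location by coarse-binning sightings."""
    bins = Counter((round(lat, 3), round(lng, 3)) for _, lat, lng in points)
    return bins.most_common(1)[0] if bins else None


if __name__ == "__main__":
    data = harvest((37.77, 37.80, -122.45, -122.40))
    for driver_id, points in data.items():
        print(driver_id, most_visited(points))
```

Coarse-binning repeated sightings is only one plausible way to approximate the "most visited address" inference mentioned in the abstract; the paper's actual methodology and precision analysis may differ.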
