NDSS Symposium 2018 Programme
Programme Outline
Sunday, 18 February 2018
Time | Session | Location |
---|---|---|
8:00am – 7:00pm | Registration | Kon Tiki Foyer |
8:30am – 5:30pm | | Rousseau |
8:30am – 5:30pm | | Toucan |
8:30am – 5:30pm | | Cockatoo |
8:30am – 5:30pm | | Macaw |
10:00am – 10:30am | Morning Workshop Break | Kon Tiki Foyer |
12:30pm – 1:30pm | Workshop Lunch | Kon Tiki Foyer |
3:00pm – 3:30pm | Afternoon Workshop Break | Kon Tiki Foyer |
6:00pm – 7:00pm | Workshop and Symposium Welcome Reception | Kon Tiki Foyer |
Monday, 19 February 2018
Time | Session | Location |
---|---|---|
7:30am | Continental Breakfast | Kon Tiki Foyer |
9:00am | Welcome and Opening Remarks | Kon Tiki Ballroom |
9:20am | Keynote: Ari Juels – Beyond Smarts: Toward Correct, Private, Data-Rich Smart Contracts | Kon Tiki Ballroom |
10:30am | Morning Break | Kon Tiki Foyer |
11:00am | Session 1A: IoT | Kon Tiki Ballroom |
11:00am | Session 1B: Attacks and Vulnerabilities | Aviary Ballroom |
12:20pm | Lunch | Beach |
2:00pm | Session 2A: Network Security/Cellular Networks | Kon Tiki Ballroom |
2:00pm | Session 2B: Crypto | Aviary Ballroom |
3:20pm | Afternoon Break | Kon Tiki Foyer |
3:50pm | Session 3A: Deep Learning and Adversarial ML | Kon Tiki Ballroom |
3:50pm | | Aviary Ballroom |
6:00pm | Student Travel Grant Meet and Greet | Kon Tiki Foyer |
7:00pm | | Boardroom |
Tuesday, 20 February 2018
Time | Session | Location |
---|---|---|
7:30am | Continental Breakfast | Kon Tiki Foyer |
9:00am | | Kon Tiki Ballroom |
9:00am | | Aviary Ballroom |
10:20am | Morning Break | Kon Tiki Foyer |
10:45am | | Kon Tiki Ballroom |
10:45am | | Aviary Ballroom |
12:25pm | Lunch | Beach |
2:00pm | | Kon Tiki Ballroom |
2:00pm | | Aviary Ballroom |
3:20pm | Afternoon Break | Kon Tiki Foyer |
3:50pm | | Kon Tiki Ballroom |
3:50pm | | Aviary Ballroom |
5:15pm | Social Hour | Kon Tiki Foyer |
Wednesday, 21 February 2018
Time | Session | Location |
---|---|---|
7:30am | Continental Breakfast | Kon Tiki Foyer |
9:00am | Closing Remarks | Kon Tiki Ballroom |
9:15am | Closing Keynote: Parisa Tabriz – The Long Winding Road from Idea to Impact in Web Security | Kon Tiki Ballroom |
10:15am | Morning Break | Kon Tiki Foyer |
10:45am | | Kon Tiki Ballroom |
12:05pm | NDSS 25th Anniversary Awards Lunch | Aviary Ballroom |
2:00pm | | Kon Tiki Ballroom |
3:20pm | Afternoon Break | Kon Tiki Foyer |
3:50pm | | Kon Tiki Ballroom |
4:50pm | Closing Remarks | Kon Tiki Ballroom |
Symposium Session Details
Session | Session Chair | Location |
---|---|---|
Session 1A: IoT | Brendan Dolan-Gavitt | Kon Tiki Ballroom |
IoTFuzzer: Discovering Memory Corruptions in IoT Through App-based Fuzzing. Jiongyi Chen (The Chinese University of Hong Kong), Wenrui Diao (Jinan University), Qingchuan Zhao (University of Texas at Dallas), Chaoshun Zuo (University of Texas at Dallas), Zhiqiang Lin (University of Texas at Dallas), XiaoFeng Wang (Indiana University Bloomington), Wing Cheong Lau (The Chinese University of Hong Kong), Menghan Sun (The Chinese University of Hong Kong), Ronghai Yang (The Chinese University of Hong Kong), and Kehuan Zhang (The Chinese University of Hong Kong). With more IoT devices entering the consumer market, it becomes imperative to detect their security vulnerabilities before an attacker does. Existing binary-analysis-based approaches only work on firmware, which is less accessible except for those equipped with special tools for extracting the code from the device. To address this challenge in IoT security analysis, we present in this paper a novel automatic fuzzing framework, called IOTFUZZER, which aims at finding memory corruption vulnerabilities in IoT devices without access to their firmware images. The key idea is based upon the observation that most IoT devices are controlled through their official mobile apps, and such an app often contains rich information about the protocol it uses to communicate with its device. Therefore, by identifying and reusing program-specific logic (e.g., encryption) to mutate the test case (particularly message fields), we are able to effectively probe IoT targets without relying on any knowledge about their protocol specifications. In our research, we implemented IOTFUZZER and evaluated 17 real-world IoT devices running on different protocols, and our approach successfully identified 15 memory corruption vulnerabilities (including 8 previously unknown ones).
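The abstract's core idea, mutating individual message fields before the app's own protocol logic (e.g., encryption) runs, can be sketched as a toy. This is an illustration with hypothetical field names, not IoTFuzzer itself, which hooks this logic inside the real mobile app:

```python
import random

def mutate_fields(fields, rng):
    """Mutate exactly one message field before the app's own encoding and
    encryption run -- the key idea behind protocol-agnostic app-based
    fuzzing. Field names and mutation choices here are hypothetical."""
    mutated = dict(fields)
    key = rng.choice(sorted(mutated))
    value = mutated[key]
    if isinstance(value, int):
        # Classic integer edge cases.
        mutated[key] = rng.choice([0, -1, 2**31 - 1, value + 1])
    else:
        # Oversized strings to provoke buffer overflows on the device.
        mutated[key] = value * rng.choice([2, 64, 1024])
    return mutated

msg = {"cmd": "set_name", "device_id": 42, "name": "lamp"}
fuzzed = mutate_fields(msg, random.Random(7))
```

In the real system, the mutated fields would then be handed to the app's own message-building and encryption routines, so the device receives well-formed (but malicious) traffic without the fuzzer knowing the protocol.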
Fear and Logging in the Internet of Things. Qi Wang (University of Illinois at Urbana-Champaign), Wajih Ul Hassan (University of Illinois at Urbana-Champaign), Adam Bates (University of Illinois at Urbana-Champaign), and Carl Gunter (University of Illinois at Urbana-Champaign). As the Internet of Things (IoT) continues to proliferate, diagnosing incorrect behavior within increasingly automated homes becomes considerably more difficult. Devices and apps may be chained together in long sequences of trigger-action rules to the point that from an observable symptom (e.g., an unlocked door) it may be impossible to identify the distantly removed root cause (e.g., a malicious app). This is because, at present, IoT audit logs are siloed on individual devices, and hence cannot be used to reconstruct the causal relationships of complex workflows. In this work, we present ProvThings, a platform-centric approach to centralized auditing in the Internet of Things. ProvThings performs efficient automated instrumentation of IoT apps and device APIs in order to generate data provenance that provides a holistic explanation of system activities, including malicious behaviors. We prototype ProvThings for the Samsung SmartThings platform, and benchmark the efficacy of our approach against a corpus of 26 IoT attacks. Through the introduction of a selective code instrumentation optimization, we demonstrate in evaluation that ProvThings imposes just 5% overhead on physical IoT devices while enabling real time querying of system behaviors, and further consider how ProvThings can be leveraged to meet the needs of a variety of stakeholders in the IoT ecosystem.
Decentralized Action Integrity for Trigger-Action IoT Platforms. Earlence Fernandes (University of Washington), Amir Rahmati (Samsung Research America and Stony Brook University), Jaeyeon Jung (Samsung), and Atul Prakash (University of Michigan). Trigger-Action platforms are web-based systems that enable users to create automation rules by stitching together online services representing digital and physical resources using OAuth tokens. Unfortunately, these platforms introduce a long-range, large-scale security risk: If they are compromised, an attacker can misuse the OAuth tokens belonging to a large number of users to arbitrarily manipulate their devices and data. We introduce Decentralized Action Integrity, a security principle that prevents an untrusted trigger-action platform from misusing compromised OAuth tokens in ways that are inconsistent with any given user’s set of trigger-action rules. We present the design and evaluation of Decentralized Trigger-Action Platform (DTAP), a trigger-action platform that implements this principle by overcoming practical challenges. DTAP splits currently monolithic platform designs into an untrusted cloud service, and a set of user clients (each user only trusts their client). Our design introduces the concept of Transfer Tokens (XTokens) to practically use fine-grained rule-specific tokens without increasing the number of OAuth permission prompts compared to current platforms. Our evaluation indicates that DTAP poses negligible overhead: it adds less than 15ms of latency to rule execution time, and reduces throughput by 2.5%.
What You Corrupt Is Not What You Crash: Challenges in Fuzzing Embedded Devices. Marius Muench (EURECOM), Jan Stijohann (Siemens AG and Ulm University), Frank Kargl (Ulm University), Aurelien Francillon (EURECOM), and Davide Balzarotti (EURECOM). As networked embedded systems are becoming more ubiquitous, their security is becoming critical to our daily life. While manual or automated large-scale analysis of those systems regularly uncovers new vulnerabilities, those systems are often analyzed with the same approaches used on desktop systems. More specifically, traditional testing approaches rely on observable crashes of a program, and binary instrumentation techniques are used to improve the detection of those faulty states. In this paper, we demonstrate that memory corruptions, a common class of security vulnerabilities, often result in different behavior on embedded devices than on desktop systems. In particular, on embedded devices, the effects of memory corruption are often less visible. This significantly reduces the effectiveness of traditional dynamic testing techniques in general, and fuzzing in particular. Additionally, we analyze those differences in several categories of embedded devices and show the resulting impact on firmware analysis. We further describe and evaluate relatively simple heuristics that can be applied at run time (on an execution trace or in an emulator) during the analysis of an embedded device to detect previously undetected memory corruptions.
Session | Session Chair | Location |
---|---|---|
Session 1B: Attacks and Vulnerabilities | XiaoFeng Wang | Aviary Ballroom |
Didn’t You Hear Me? – Towards More Successful Web Vulnerability Notifications. Ben Stock (CISPA, Saarland University), Giancarlo Pellegrino (CISPA, Saarland University and Stanford University), Frank Li (UC Berkeley), Michael Backes (CISPA, Saarland University), and Christian Rossow (CISPA, Saarland University). After treating the notification of vulnerable parties as mere side-notes in research, the security community has recently put more focus on how to conduct vulnerability disclosure at scale. The first works in this area have shown that while notifications are helpful to a significant fraction of operators, the vast majority of systems remain unpatched. In this paper, we build on these previous works, aiming to understand why the effects are not more significant. To that end, we report on a notification experiment targeting more than 24,000 domains, which allowed us to analyze what technical and human aspects are roadblocks to a successful campaign. As part of this experiment, we explored potential alternative notification channels beyond email, including social media and phone. In addition, we conducted an anonymous survey with the notified operators, investigating their perspectives on our notifications. We show the pitfalls of email-based communications, such as the impact of anti-spam filters, the lack of trust by recipients, and the hesitation in fixing vulnerabilities despite awareness. However, our exploration of alternative communication channels did not suggest a more promising medium. Seeing these results, we pinpoint future directions in improving security notifications.
Exposing Congestion Attack on Emerging Connected Vehicle based Traffic Signal Control. Qi Alfred Chen (University of Michigan), Yucheng Yin (University of Michigan), Yiheng Feng (University of Michigan), Z. Morley Mao (University of Michigan), and Henry X. Liu (University of Michigan). Connected vehicle (CV) technology will soon transform today’s transportation systems by connecting vehicles and the transportation infrastructure through wireless communication. Having demonstrated the potential to greatly improve transportation mobility efficiency, such dramatically increased connectivity also opens a new door for cyber attacks. In this work, we perform the first detailed security analysis of the next-generation CV-based transportation systems. As a first step, we target the USDOT (U.S. Department of Transportation) sponsored CV-based traffic control system, which has been tested and shown high effectiveness in real road intersections. In the analysis, we target a realistic threat, namely CV data spoofing from one single attack vehicle, with the attack goal of creating traffic congestion. We first analyze the system design and identify data spoofing strategies that can potentially influence the traffic control. Based on the strategies, we perform vulnerability analysis by exhaustively trying all the data spoofing options for these strategies to understand the upper bound of the attack effectiveness. For the highly effective cases, we analyze the causes and find that the current signal control algorithm design and implementation choices are highly vulnerable to data spoofing attacks from even a single attack vehicle. These vulnerabilities can be exploited to completely reverse the benefit of the CV-based signal control system by causing the traffic mobility to be 23.4% worse than that without adopting such a system. We then construct practical exploits and evaluate them under real-world intersection settings. The evaluation results are consistent with our vulnerability analysis, and we find that the attacks can even cause a blocking effect to jam an entire approach. In the jamming period, 22% of the vehicles need to spend over 7 minutes for an originally half-minute trip, which is 14 times higher. We also discuss defense directions leveraging the insights from our analysis.
Removing Secrets from Android’s TLS. Jaeho Lee (Rice University) and Dan S. Wallach (Rice University). Cryptographic libraries that implement Transport Layer Security (TLS) have a responsibility to delete cryptographic keys once they’re no longer in use. Any key that’s left in memory can potentially be recovered through the actions of an attacker, up to and including the physical capture and forensic analysis of a device’s memory. This paper describes an analysis of the TLS library stack used in recent Android distributions, combining a C language core (BoringSSL) with multiple layers of Java code (Conscrypt, OkHttp, and Java Secure Sockets). We first conducted a black-box analysis of virtual machine images, allowing us to discover keys that might remain recoverable. After identifying several such keys, we subsequently pinpointed undesirable interactions across these layers, where the higher-level use of BoringSSL’s reference counting features, from Java code, prevented BoringSSL from cleaning up its keys. This interaction poses a threat to all Android applications built on standard HTTPS libraries, exposing master secrets to memory disclosure attacks. We found all versions we investigated from Android 4 to the latest Android 8 are vulnerable, showing that this problem has been long overlooked. The Android Chrome application is proven to be particularly problematic. We suggest modest changes to the Android codebase to mitigate these issues, and have reported these to Google to help them patch the vulnerability in future Android systems.
rtCaptcha: A Real-Time CAPTCHA Based Liveness Detection System. Erkam Uzun (Georgia Institute of Technology), Simon Pak Ho Chung (Georgia Institute of Technology), Irfan Essa (Georgia Institute of Technology), and Wenke Lee (Georgia Institute of Technology). Facial/voice-based authentication is becoming increasingly popular (e.g., already adopted by MasterCard and AliPay), because it is easy to use. In particular, users can now authenticate themselves to online services by using their mobile phone to show themselves performing simple tasks like blinking or smiling in front of its built-in camera. Our study shows that many of the publicly available facial/voice recognition services (e.g., Microsoft Cognitive Services or Amazon Rekognition) are vulnerable to even the most primitive attacks. Furthermore, recent work on modeling a person’s face/voice (e.g., Face2Face [1]) allows an adversary to create very authentic video/audio of any target victim to impersonate that target. All it takes to launch such attacks are a few pictures and voice samples of a victim, which can all be obtained by either abusing the camera and microphone of the victim’s phone, or through the victim’s social media account. In this work, we propose the Real Time Captcha (rtCaptcha) system, which stops/slows down such an attack by turning the adversary’s task from creating authentic video/audio of the target victim performing known authentication tasks (e.g., smile, blink) to figuring out what the authentication task is, which is encoded as a Captcha. Specifically, when a user tries to authenticate using rtCaptcha, they will be presented with a Captcha and will be asked to take a “selfie” video while announcing the answer to the Captcha. As such, the security guarantee of our system comes from the strength of Captcha, and not how well we can distinguish real faces/voices from synthesized ones. To demonstrate the usability and security of rtCaptcha, we conducted a user study to measure human response times to the most popular Captcha schemes. Our experiments show that, thanks to humans’ speed of solving Captchas, adversaries will have to solve Captchas in less than 2 seconds in order to appear live/human and defeat rtCaptcha, which is not possible for the best settings on the attack side.
Session | Session Chair | Location |
---|---|---|
Session 2A: Network Security/Cellular Networks | Brad Reaves | Kon Tiki Ballroom |
Automated Attack Discovery in TCP Congestion Control Using a Model-guided Approach. Samuel Jero (Purdue University), Endadul Hoque (Florida International University), David Choffnes (Northeastern University), Alan Mislove (Northeastern University), and Cristina Nita-Rotaru (Northeastern University). One of the most important goals of TCP is to ensure fairness and prevent congestion collapse by implementing congestion control. Various attacks against TCP congestion control have been reported over the years, most of which have been discovered through manual analysis. In this paper, we propose an automated method that combines the generality of implementation-agnostic fuzzing with the precision of runtime analysis to find attacks against implementations of TCP congestion control. It uses a model-guided approach to generate abstract attack strategies, by leveraging a state machine model of TCP congestion control to find vulnerable state machine paths that an attacker could exploit to increase or decrease the throughput of a connection to his advantage. These abstract strategies are then mapped to concrete attack strategies, which consist of sequences of actions such as injection or modification of acknowledgements and a logical time for injection. We design and implement a virtualized platform, TCPWN, that consists of a proxy-based attack injector and a TCP congestion control state tracker that uses only network traffic to create and inject these concrete attack strategies. We evaluated 5 TCP implementations from 4 Linux distributions and Windows 8.1. Overall, we found 11 classes of attacks, of which 8 are new.
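For intuition about why acknowledgement injection (one of the attack classes this kind of model-guided testing searches for) can change a connection's throughput, here is a toy model of TCP slow start. It is not TCPWN, and the constants are illustrative only:

```python
def slow_start_cwnd(acks, mss=1, ssthresh=64):
    """Toy model of TCP slow start: each received ACK grows the congestion
    window by one MSS until ssthresh. An attacker who injects extra or
    optimistic ACKs makes `acks` larger, inflating the sender's rate."""
    cwnd = 1 * mss
    for _ in range(acks):
        if cwnd < ssthresh:
            cwnd += mss
    return cwnd

honest = slow_start_cwnd(10)     # 11: one MSS per genuine ACK
attacked = slow_start_cwnd(100)  # injected ACKs drive cwnd to ssthresh
```

A state-machine model of this growth (and of the fast-retransmit and congestion-avoidance states it omits) is what lets a tool enumerate which ACK sequences push an implementation into a vulnerable path.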
Preventing (Network) Time Travel with Chronos. Omer Deutsch (Hebrew University of Jerusalem), Neta Rozen Schiff (Hebrew University of Jerusalem), Danny Dolev (Hebrew University of Jerusalem), and Michael Schapira (Hebrew University of Jerusalem). The Network Time Protocol (NTP) synchronizes time across computer systems over the Internet. Unfortunately, NTP is highly vulnerable to “time shifting attacks”, in which the attacker’s goal is to shift forward/backward the local time at an NTP client. NTP’s security vulnerabilities have severe implications for time-sensitive applications and for security mechanisms, including TLS certificates, DNS and DNSSEC, RPKI, Kerberos, BitCoin, and beyond. While technically NTP supports cryptographic authentication, it is very rarely used in practice and, worse yet, time-shifting attacks on NTP are possible even if all NTP communications are encrypted and authenticated. We present Chronos, a new NTP client that achieves good synchronization even in the presence of powerful attackers who are in direct control of a large number of NTP servers. Importantly, Chronos is backwards compatible with legacy NTP and involves no changes whatsoever to NTP servers. Chronos leverages ideas from the distributed computing literature on clock synchronization in the presence of adversarial (Byzantine) behavior. A Chronos client iteratively “crowdsources” time queries across multiple NTP servers and applies a provably secure algorithm for eliminating “suspicious” responses and averaging over the remaining responses. Chronos is carefully engineered to minimize communication overhead so as to avoid overloading NTP servers. We evaluate Chronos’ security and network efficiency guarantees via a combination of theoretical analyses and experiments with a prototype implementation. Our results indicate that to succeed in shifting time at a Chronos client by over 100ms from the UTC, even a powerful man-in-the-middle attacker requires over 20 years of effort in expectation.
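The "crowdsource, discard suspicious responses, average the rest" loop described in the abstract can be illustrated with a simple trimmed-mean sketch. Chronos's actual algorithm and its security proof are more involved; this is intuition only:

```python
def robust_time_estimate(offsets, trim):
    """Estimate the clock offset from multiple NTP-style samples by
    discarding the `trim` largest and `trim` smallest values and averaging
    the remainder -- a simplified stand-in for Chronos's filtering step."""
    if len(offsets) <= 2 * trim:
        raise ValueError("not enough samples to trim")
    kept = sorted(offsets)[trim:len(offsets) - trim]
    return sum(kept) / len(kept)

# Nine honest servers near 0 ms offset, two attacker-controlled outliers.
samples = [0.2, -0.1, 0.0, 0.3, -0.2, 0.1, 0.0, -0.3, 0.2, 500.0, -400.0]
estimate = robust_time_estimate(samples, trim=2)  # stays near 0 ms
```

Even with two servers reporting wildly shifted times, the trimmed estimate stays close to the honest majority, which is the basic reason an attacker needs control of many servers (or many years of attempts) to move a client's clock.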
LTEInspector: A Systematic Approach for Adversarial Testing of 4G LTE. Syed Rafiul Hussain (Purdue University), Omar Chowdhury (The University of Iowa), Shagufta Mehnaz (Purdue University), and Elisa Bertino (Purdue University). In this paper, we investigate the security and privacy of the three critical procedures of the 4G LTE protocol (i.e., attach, detach, and paging), and in the process, uncover potential design flaws of the protocol and unsafe practices employed by the stakeholders. For exposing vulnerabilities, we propose a model-based testing approach LTEInspector which lazily combines a symbolic model checker and a cryptographic protocol verifier in the symbolic attacker model. Using LTEInspector, we have uncovered 10 new attacks along with 9 prior attacks, categorized into three abstract classes (i.e., security, user privacy, and disruption of service), in the three procedures of 4G LTE. Notable among our findings is the authentication relay attack that enables an adversary to spoof the location of a legitimate user to the core network without possessing appropriate credentials. To ensure that the exposed attacks pose real threats and are indeed realizable in practice, we have validated 8 of the 10 new attacks and their accompanying adversarial assumptions through experimentation in a real testbed.
GUTI Reallocation Demystified: Cellular Location Tracking with Changing Temporary Identifier. Byeongdo Hong (KAIST), Sangwook Bae (KAIST), and Yongdae Kim (KAIST). To keep subscribers’ identity confidential, a cellular network operator must use a temporary identifier instead of a permanent one according to the 3GPP standard. Temporary identifiers include Temporary Mobile Subscriber Identity (TMSI) and Globally Unique Temporary Identifier (GUTI) for GSM/3G and Long-Term Evolution (LTE) networks, respectively. Unfortunately, recent studies have shown that carriers fail to protect subscribers in both GSM/3G and LTE mainly because these identifiers have static and persistent values. These identifiers can be used to track subscribers’ locations. These studies have suggested that temporary identifiers must be reallocated frequently to solve this privacy problem. The only mechanism to update the temporary identifier in current LTE implementations is called GUTI reallocation. We investigate whether the current implementation of the GUTI reallocation mechanism can provide enough security to protect subscribers’ privacy. To do this, we collect data by performing GUTI reallocation more than 30,000 times with 28 carriers across 11 countries using 78 SIM cards. Then, we investigate whether (1) these reallocated GUTIs in each carrier show noticeable patterns and (2) if they do, these patterns are consistent among different SIM cards within each carrier. Among 28 carriers, 19 carriers have easily predictable and consistent patterns in their GUTI reallocation mechanisms. Among the remaining 9 carriers, we revisit 4 carriers to investigate them in greater detail. For all these 4 carriers, we could find interesting yet predictable patterns after invoking GUTI reallocation multiple times within a short time period. By using this predictability, we show that an adversary can track subscribers’ location as in previous studies. Finally, we present a lightweight and unpredictable GUTI reallocation mechanism as a solution.
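A minimal sketch of the kind of pattern analysis the study performs: given a series of reallocated GUTIs for one subscriber, find the positions whose value never changes. Any such fixed positions make the "new" identifier predictable and hence trackable. The values below are hypothetical:

```python
def fixed_byte_positions(gutis):
    """Given successive reallocated GUTIs (hex strings of equal length),
    return the character positions whose value never changes -- the kind of
    easily predictable pattern found at 19 of 28 carriers (simplified)."""
    first = gutis[0]
    return [i for i in range(len(first))
            if all(g[i] == first[i] for g in gutis)]

observed = ["c0a84d01", "c0a84d9f", "c0a84d33"]  # hypothetical reallocations
stable = fixed_byte_positions(observed)  # only the last two nibbles vary
```

A fully unpredictable reallocation scheme, like the one the authors propose, would leave this list empty.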
Session | Session Chair | Location |
---|---|---|
Session 2B: Crypto | Yinqian Zhang | Aviary Ballroom |
Mind Your Keys? A Security Evaluation of Java Keystores. Riccardo Focardi (Universita Ca’ Foscari and Cryptosense), Francesco Palmarini (Universita Ca’ Foscari and Yarix), Marco Squarcina (Universita Ca’ Foscari and Cryptosense), Graham Steel (Cryptosense), and Mauro Tempesta (Universita Ca’ Foscari). Cryptography is complex and varied, and requires combining different algorithms and mechanisms in nontrivial ways. This complexity is often a source of vulnerabilities. Secure key management is one of the most critical aspects, since leaking a cryptographic key nullifies any advantage of using cryptography. In this paper we analyze Java keystores, the standard way to manage and securely store keys in Java applications. We consider seven keystore implementations from Oracle JDK and Bouncy Castle, a widespread cryptographic library. We describe, in detail, how the various keystores enforce confidentiality and integrity of the stored keys through password-based cryptography and we show that many of the implementations do not adhere to state-of-the-art cryptographic standards. We investigate the resistance to offline attacks and we show that, for non-compliant keystores, brute-forcing can be up to three orders of magnitude faster than against the most compliant keystore. Additionally, when an attacker can tamper with the keystore file, some implementations are vulnerable to denial of service attacks or, in the worst case, arbitrary code execution. Finally we discuss the fixes implemented by Oracle and Bouncy Castle developers following our responsible disclosure.
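The orders-of-magnitude gap in brute-force cost comes down to the work factor of the password-based key derivation each keystore uses. A rough, machine-dependent way to see the effect (the iteration counts here are illustrative, not the keystores' actual settings):

```python
import hashlib
import time

def crack_attempts_per_second(iterations, budget_s=0.2):
    """Measure how many password guesses per second an offline attacker can
    test against a PBKDF2-protected keystore entry. Iteration counts are
    illustrative; real keystore parameters vary by implementation."""
    salt = b"keystore-salt"
    n = 0
    start = time.perf_counter()
    while time.perf_counter() - start < budget_s:
        hashlib.pbkdf2_hmac("sha256", b"guess%d" % n, salt, iterations)
        n += 1
    return n / budget_s

weak = crack_attempts_per_second(1)      # hypothetical non-compliant setting
strong = crack_attempts_per_second(1024) # hypothetical compliant setting
ratio = weak / strong                    # roughly the iteration-count ratio
```

A keystore that derives its encryption key with a single hash iteration hands the attacker roughly the full iteration-count ratio as a speedup, which is how a non-compliant implementation ends up three orders of magnitude cheaper to brute-force.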
A Security Analysis of Honeywords. Ding Wang (Peking University), Haibo Cheng (Peking University), Ping Wang (Peking University), Jeff Yan (Linkoping University), and Xinyi Huang (Fujian Normal University). Honeywords are decoy passwords associated with each user account, and they contribute a promising approach to detecting password leakage. This approach was first proposed by Juels and Rivest at CCS’13, has been covered by hundreds of media outlets, and has been adopted in various research domains. The idea of honeywords looks deceptively simple, but it is a deep and sophisticated challenge to automatically generate honeywords that are hard to differentiate from real passwords. In Juels and Rivest’s work, four main honeyword-generation methods are suggested but only justified by heuristic security arguments. In this work, we for the first time develop a series of practical experiments using 10 large-scale datasets, a total of 104 million real-world passwords, to quantitatively evaluate the security that these four methods can provide. Our results reveal that they all fail to provide the expected security: real passwords can be distinguished with a success rate of 29.29%∼32.62% by our basic trawling-guessing attacker, but not the expected 5%, with just one guess (when each user account is associated with 19 honeywords as recommended). This figure reaches 34.21%∼49.02% under the advanced trawling-guessing attackers who make use of various state-of-the-art probabilistic password models. We further evaluate the security of Juels-Rivest’s methods under a targeted-guessing attacker who can exploit the victim’s personal information, and the results are even more alarming: 56.81%∼67.98%. Overall, our work resolves three open problems in honeyword research, as defined by Juels and Rivest.
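The honeyword architecture itself, as proposed by Juels and Rivest, is easy to sketch: the credential file holds k+1 "sweetwords" per account, and only a separate hardened "honeychecker" knows which one is real. The sketch below illustrates the checking protocol only (not the generation methods this paper evaluates), and stores passwords in the clear purely for brevity:

```python
import hmac

class HoneyChecker:
    """Hardened component that stores, per account, only the index of the
    real password among the sweetwords (Juels-Rivest architecture sketch)."""
    def __init__(self):
        self._real_index = {}

    def register(self, user, index):
        self._real_index[user] = index

    def check(self, user, index):
        return index == self._real_index[user]

def login(sweetwords, checker, user, attempt):
    for i, word in enumerate(sweetwords[user]):
        if hmac.compare_digest(word, attempt):  # constant-time comparison
            if checker.check(user, i):
                return "ok"
            # A decoy matched: the credential file has likely been stolen
            # and an attacker is replaying a cracked sweetword.
            return "ALARM"
    return "wrong password"

sweetwords = {"alice": ["winter12", "s3cretpw", "blue42go"]}  # one is real
checker = HoneyChecker()
checker.register("alice", 1)  # "s3cretpw" is the real password
```

The paper's attacks target the generation step: if the decoys are statistically distinguishable from real passwords, an attacker who stole the file can pick the real one on the first try far more often than the intended 1-in-20 chance.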
Revisiting Private Stream Aggregation: Lattice-Based PSA. Daniela Becker (Robert Bosch LLC), Jorge Guajardo (Robert Bosch LLC), and Karl-Heinz Zimmermann (Hamburg University of Technology). In this age of massive data gathering for purposes of personalization, targeted ads, etc., there is an increased need for technology that allows for data analysis in a privacy-preserving manner. Private Stream Aggregation as introduced by Shi et al. (NDSS 2011) allows for the execution of aggregation operations over privacy-critical data from multiple data sources without placing trust in the aggregator and while maintaining differential privacy guarantees. We propose a generic PSA scheme, LaPS, based on the Learning With Errors problem, which allows for a flexible choice of the utilized privacy-preserving mechanism while maintaining post-quantum security. We overcome the limitations of earlier schemes by relaxing previous assumptions in the security model and provide an efficient and compact scheme with high scalability. Our scheme is practical: for a plaintext space of 2^16 and 1000 participants we achieve a performance gain in decryption of roughly 150 times compared to previous results in Shi et al. (NDSS 2011).
ZeroTrace: Oblivious Memory Primitives from Intel SGX. Sajin Sasy (University of Waterloo), Sergey Gorbunov (University of Waterloo), and Christopher W. Fletcher (Nvidia/UIUC). We are witnessing a confluence between applied cryptography and secure hardware systems in enabling secure cloud computing. On one hand, work in applied cryptography has enabled efficient, oblivious data-structures and memory primitives. On the other, secure hardware and the emergence of Intel SGX has enabled a low-overhead and mass market mechanism for isolated execution. By themselves these technologies have their disadvantages. Oblivious memory primitives carry high performance overheads, especially when run non-interactively. Intel SGX, while more efficient, suffers from numerous software-based side-channel attacks, high context switching costs, and bounded memory size. In this work we build a new library of oblivious memory primitives, which we call ZeroTrace. ZeroTrace is designed to carefully combine state-of-the-art oblivious RAM techniques and SGX, while mitigating individual disadvantages of these technologies. To the best of our knowledge, ZeroTrace represents the first oblivious memory primitives running on a real secure hardware platform. ZeroTrace simultaneously enables a dramatic speed-up over pure cryptography and protection from software-based side-channel attacks. The core of our design is an efficient and flexible block-level memory controller that provides oblivious execution against any active software adversary, and across asynchronous SGX enclave terminations. Performance-wise, the memory controller can service requests for 4 B blocks in 1.2 ms and 1 KB blocks in 3.4 ms (given a 10 GB dataset). On top of our memory controller, we evaluate Set/Dictionary/List interfaces which can all perform basic operations (e.g., get/put/insert).
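For intuition about what "oblivious" means here: the simplest (and least efficient) oblivious read touches every block, so the sequence of memory accesses is independent of the queried index. ORAM schemes of the kind ZeroTrace builds on achieve the same property with polylogarithmic rather than linear overhead; this sketch is only the naive baseline:

```python
def oblivious_read(blocks, target):
    """Read blocks[target] while touching every block, so an observer of
    the access pattern (e.g., a side channel) learns nothing about which
    index was requested. Naive linear-scan baseline, not ORAM."""
    result = 0
    for i, b in enumerate(blocks):
        # Branch-free select: mask is 1 only at the target index. A real
        # implementation would also make this comparison constant time.
        mask = 1 if i == target else 0
        result += b * mask
    return result
```

An adversary watching which addresses are fetched sees the same full scan regardless of `target`, which is exactly the property SGX side-channel attacks exploit the absence of in ordinary code.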
Session | Session Chair | Location |
---|---|---|
Session 3A: Deep Learning and Adversarial ML | Chang Liu | Kon Tiki Ballroom |
Automated Website Fingerprinting through Deep Learning. Vera Rimmer (imec-DistriNet, KU Leuven), Davy Preuveneers (imec-DistriNet, KU Leuven), Marc Juarez (imec-COSIC, ESAT, KU Leuven), Tom Van Goethem (imec-DistriNet, KU Leuven), and Wouter Joosen (imec-DistriNet, KU Leuven). Several studies have shown that the network traffic that is generated by a visit to a website over Tor reveals information specific to the website through the timing and sizes of network packets. By capturing traffic traces between users and their Tor entry guard, a network eavesdropper can leverage this meta-data to reveal which website Tor users are visiting. The success of such attacks heavily depends on the particular set of traffic features that are used to construct the fingerprint. Typically, these features are manually engineered and, as such, any change introduced to the Tor network can render these carefully constructed features ineffective. In this paper, we show that an adversary can automate the feature engineering process, and thus automatically deanonymize Tor traffic by applying our novel method based on deep learning. We collect a dataset comprised of more than three million network traces, which is the largest dataset of web traffic ever used for website fingerprinting, and find that the performance achieved by our deep learning approaches is comparable to known methods which include various research efforts spanning over multiple years. The obtained success rate exceeds 96% for a closed world of 100 websites and 94% for our biggest closed world of 900 classes. In our open world evaluation, the most performant deep learning model is 2% more accurate than the state-of-the-art attack. Furthermore, we show that the implicit features automatically learned by our approach are far more resilient to dynamic changes of web content over time. We conclude that the ability to automatically construct the most relevant traffic features and perform accurate traffic recognition makes our deep learning based approach an efficient, flexible and robust technique for website fingerprinting.
VulDeePecker: A Deep Learning-Based System for Vulnerability Detection. Zhen Li (Huazhong University of Science and Technology and Hebei University), Deqing Zou (Huazhong University of Science and Technology and Shenzhen Huazhong University), Shouhuai Xu (University of Texas at San Antonio), Xinyu Ou (Huazhong University of Science and Technology), Hai Jin (Huazhong University of Science and Technology), Sujuan Wang (Huazhong University of Science and Technology), Zhijun Deng (Huazhong University of Science and Technology), and Yuyi Zhong (Huazhong University of Science and Technology). The automatic detection of software vulnerabilities is an important research problem. However, existing solutions to this problem rely on human experts to define features and often miss many vulnerabilities (i.e., incurring high false negative rate). In this paper, we initiate the study of using deep learning-based vulnerability detection to relieve human experts from the tedious and subjective task of manually defining features. Since deep learning is motivated to deal with problems that are very different from the problem of vulnerability detection, we need some guiding principles for applying deep learning to vulnerability detection. In particular, we need to find representations of software programs that are suitable for deep learning. For this purpose, we propose using code gadgets to represent programs and then transform them into vectors, where a code gadget is a number of (not necessarily consecutive) lines of code that are semantically related to each other. This leads to the design and implementation of a deep learning-based vulnerability detection system, called Vulnerability Deep Pecker (VulDeePecker). In order to evaluate VulDeePecker, we present the first vulnerability dataset for deep learning approaches. Experimental results show that VulDeePecker can achieve much fewer false negatives (with reasonable false positives) than other approaches. 
We further apply VulDeePecker to 3 software products (namely Xen, Seamonkey, and Libav) and detect 4 vulnerabilities, which were not reported in the National Vulnerability Database but were “silently” patched by the vendors when releasing later versions of these products; in contrast, these vulnerabilities are almost entirely missed by the other vulnerability detection systems we experimented with.
Kitsune: An Ensemble of Autoencoders for Online Network Intrusion Detection. Yisroel Mirsky (Ben-Gurion University of the Negev), Tomer Doitshman (Ben-Gurion University of the Negev), Yuval Elovici (Ben-Gurion University of the Negev), and Asaf Shabtai (Ben-Gurion University of the Negev). Neural networks have become an increasingly popular solution for network intrusion detection systems (NIDS). Their capability of learning complex patterns and behaviors makes them a suitable solution for differentiating between normal traffic and network attacks. However, a drawback of neural networks is the amount of resources needed to train them. Many network gateways and router devices, which could potentially host an NIDS, simply do not have the memory or processing power to train and sometimes even execute such models. More importantly, the existing neural network solutions are trained in a supervised manner, meaning that an expert must label the network traffic and update the model manually from time to time. In this paper, we present Kitsune: a plug-and-play NIDS which can learn to detect attacks on the local network, without supervision, and in an efficient online manner. Kitsune’s core algorithm (KitNET) uses an ensemble of neural networks called autoencoders to collectively differentiate between normal and abnormal traffic patterns. KitNET is supported by a feature extraction framework which efficiently tracks the patterns of every network channel. Our evaluations show that Kitsune can detect various attacks with a performance comparable to offline anomaly detectors, even on a Raspberry Pi. This demonstrates that Kitsune can be a practical and economical NIDS.
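The ensemble structure described here, one small autoencoder per feature group feeding an output stage that scores the vector of per-group reconstruction errors, can be sketched with closed-form linear autoencoders (PCA) standing in for KitNET's trained networks; the data, group split, and dimensions below are invented for illustration:

```python
import numpy as np

class LinearAE:
    """PCA-based linear autoencoder: reconstruction RMSE is the anomaly score."""
    def fit(self, X, k=2):
        self.mu = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - self.mu, full_matrices=False)
        self.V = Vt[:k]                       # top-k principal directions
        return self

    def score(self, x):
        rec = (x - self.mu) @ self.V.T @ self.V + self.mu
        return np.sqrt(np.mean((x - rec) ** 2))

class Ensemble:
    """One AE per feature group; an output AE scores the per-group RMSE vector."""
    def __init__(self, groups):
        self.groups = groups
    def fit(self, X):
        self.aes = [LinearAE().fit(X[:, g]) for g in self.groups]
        S = np.array([[ae.score(x[g]) for ae, g in zip(self.aes, self.groups)]
                      for x in X])
        self.out = LinearAE().fit(S, k=1)     # learns what "normal error" looks like
        return self
    def score(self, x):
        s = np.array([ae.score(x[g]) for ae, g in zip(self.aes, self.groups)])
        return self.out.score(s)

rng = np.random.default_rng(0)
normal = rng.normal(size=(500, 2)) @ rng.normal(size=(2, 8))  # benign traffic near a 2-D subspace
model = Ensemble([np.arange(0, 4), np.arange(4, 8)]).fit(normal)
benign, attack = normal[0], rng.normal(size=8) * 10
print(model.score(benign) < model.score(attack))  # True
```

The real KitNET trains nonlinear autoencoders incrementally, which is what makes it suitable for the resource-constrained, online setting the abstract describes.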
Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks. Weilin Xu (University of Virginia), David Evans (University of Virginia), and Yanjun Qi (University of Virginia). Although deep neural networks (DNNs) have achieved great success in many tasks, they can often be fooled by adversarial examples that are generated by adding small but purposeful distortions to natural examples. Previous studies to defend against adversarial examples mostly focused on refining the DNN models, but have either shown limited success or required expensive computation. We propose a new strategy, feature squeezing, that can be used to harden DNN models by detecting adversarial examples. Feature squeezing reduces the search space available to an adversary by coalescing samples that correspond to many different feature vectors in the original space into a single sample. By comparing a DNN model’s prediction on the original input with that on squeezed inputs, feature squeezing detects adversarial examples with high accuracy and few false positives. This paper explores two feature squeezing methods: reducing the color bit depth of each pixel and spatial smoothing. These simple strategies are inexpensive and complementary to other defenses, and can be combined in a joint detection framework to achieve high detection rates against state-of-the-art attacks.
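The two squeezers named in the abstract are simple enough to sketch directly; the detector below follows the paper's idea of comparing predictions on original and squeezed inputs, though the `model` callable and the threshold value are placeholders to be tuned on validation data:

```python
import numpy as np

def reduce_bit_depth(x, bits):
    """Squeeze pixel values in [0, 1] down to 2**bits levels."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def median_smooth(img, w=3):
    """Spatial smoothing: w x w median filter with reflected edges."""
    p = w // 2
    padded = np.pad(img, p, mode="reflect")
    windows = np.stack([padded[i:i + img.shape[0], j:j + img.shape[1]]
                        for i in range(w) for j in range(w)])
    return np.median(windows, axis=0)

def is_adversarial(model, x, threshold=1.0):
    """Flag x if the prediction moves too much under squeezing
    (L1 distance between probability vectors, max over squeezers)."""
    p = model(x)
    d1 = np.abs(p - model(reduce_bit_depth(x, 1))).sum()
    d2 = np.abs(p - model(median_smooth(x))).sum()
    return max(d1, d2) > threshold
```

Adversarial perturbations tend to live in the high-precision, high-frequency detail that both squeezers destroy, which is why a legitimate image's prediction barely moves while an adversarial one's shifts sharply.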
Trojaning Attack on Neural Networks. Yingqi Liu (Purdue University), Shiqing Ma (Purdue University), Yousra Aafer (Purdue University), Wen-Chuan Lee (Purdue University), Juan Zhai (Nanjing University), Weihang Wang (Purdue University), and Xiangyu Zhang (Purdue University). With the fast spread of machine learning techniques, sharing and adopting public machine learning models has become very popular. This gives attackers many new opportunities. In this paper, we propose a trojaning attack on neural networks. As the models are not intuitive for humans to understand, the attack features stealthiness. Deploying trojaned models can cause various severe consequences including endangering human lives (in applications like autonomous driving). We first invert the neural network to generate a general trojan trigger, and then retrain the model with reverse-engineered training data to inject malicious behaviors into the model. The malicious behaviors are only activated by inputs stamped with the trojan trigger. In our attack, we do not need to tamper with the original training process, which usually takes weeks to months. Instead, it takes minutes to hours to apply our attack. Also, we do not require the datasets that are used to train the model. In practice, the datasets are usually not shared due to privacy or copyright concerns. We use five different applications to demonstrate the power of our attack, and perform a deep analysis on the possible factors that affect the attack. The results show that our attack is highly effective and efficient. The trojaned behaviors can be successfully triggered (with nearly 100% probability) without affecting the model’s test accuracy on normal inputs, and even with better accuracy on public datasets. Also, it only takes a small amount of time to attack a complex neural network model. In the end, we also discuss possible defenses against such attacks.
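The trigger-generation step, inverting the model to find an input patch that strongly activates chosen neurons, can be illustrated as gradient ascent restricted to a stamp mask. The toy linear neuron below is an assumption standing in for back-propagation through a real published model:

```python
import numpy as np

def generate_trigger(weight, mask, steps=100, lr=0.1):
    """Craft a trigger patch that maximally activates a target neuron.

    weight : the target neuron's input weights (toy linear neuron;
             the real attack back-propagates through the shared model)
    mask   : boolean array selecting the pixels the trigger may occupy
    """
    x = np.zeros_like(weight)
    for _ in range(steps):
        grad = weight                 # d(activation)/dx for a linear neuron
        x = x + lr * grad * mask      # only update pixels inside the mask
        x = np.clip(x, 0.0, 1.0)      # stay a valid image
    return x

w = np.linspace(-1.0, 1.0, 64).reshape(8, 8)  # toy neuron weights
mask = np.zeros((8, 8), dtype=bool)
mask[5:, 5:] = True                   # small corner stamp, like a logo
trigger = generate_trigger(w, mask)
print((w * trigger).sum() > 0)        # True: activation driven up inside the mask
```

Once such a trigger exists, the attack retrains only the later layers on reverse-engineered data so that this strong activation is routed to the attacker's chosen output label.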
Session 3B: Authentication
Session Chair: Adam Aviv
Aviary Ballroom
Broken Fingers: On the Usage of the Fingerprint API in Android. Antonio Bianchi (University of California, Santa Barbara), Yanick Fratantonio (University of California, Santa Barbara, EURECOM), Aravind Machiry (University of California, Santa Barbara), Christopher Kruegel (University of California, Santa Barbara), Giovanni Vigna (University of California, Santa Barbara), Simon Pak Ho Chung (Georgia Institute of Technology), and Wenke Lee (Georgia Institute of Technology). Smartphones are increasingly used for very important tasks such as mobile payments. Correspondingly, new technologies are emerging to provide better security on smartphones. One of the most recent and most interesting is the ability to recognize fingerprints, which enables mobile apps to use biometric-based authentication and authorization to protect security-sensitive operations. In this paper, we present the first systematic analysis of the fingerprint API in Android, and we show that this API is not well understood and often misused by app developers. To make things worse, there is currently confusion about which threat model the fingerprint API should be resilient against. For example, although there is no official reference, we argue that the fingerprint API is designed to protect from attackers that can completely compromise the untrusted OS. After introducing several relevant threat models, we identify common API usage patterns and show how inappropriate choices can make apps vulnerable to multiple attacks. We then design and implement a new static analysis tool to automatically analyze the usage of the fingerprint API in Android apps. Using this tool, we perform the first systematic study on how the fingerprint API is used. The results are worrisome: Our tool indicates that 53.69% of the analyzed apps do not use any cryptographic check to ensure that the user actually touched the fingerprint sensor. 
Depending on the specific use case scenario of a given app, it is not always possible to make use of cryptographic checks. However, a manual investigation on a subset of these apps revealed that 80% of them could have done so, preventing multiple attacks. Furthermore, the tool indicates that only 1.80% of the analyzed apps use this API in the most secure way possible, while many others, including extremely popular apps such as Google Play Store and Square Cash, use it in weaker ways. To make things worse, we find issues and inconsistencies even in the samples provided by the official Google documentation. We end this work by suggesting various improvements to the fingerprint API to prevent some of these problematic attacks.
K-means++ vs. Behavioral Biometrics: One Loop to Rule Them All. Parimarjan Negi (Stanford University), Prafull Sharma (Stanford University), Vivek Sanjay Jain (Stanford University), and Bahman Bahmani (Stanford University). Behavioral biometrics, a field that studies patterns in an individual’s unique behavior, has been researched actively as a means of authentication for decades. Recently, it has even been adopted in many real world scenarios. In this paper, we study keystroke dynamics, the most researched of such behavioral biometrics, from the perspective of an adversary. We designed two adversarial agents with a standard accuracy-convenience tradeoff: Targeted K-means++, which is an expensive, but extremely effective adversarial agent, and Indiscriminate K-means++, which is slightly less powerful, but adds no overhead cost to the attacker. With Targeted K-means++ we could compromise the security of 40-70% of users within ten tries. In contrast, with Indiscriminate K-means++, the security of 30-50% of users was compromised. Therefore, we conclude that while keystroke dynamics has potential, it is not ready for security-critical applications yet. Future keystroke dynamics research should use such adversaries to benchmark the performance of the detection algorithms, and design better algorithms to foil these. Finally, we show that the K-means++ adversarial agent generalizes well even to other types of behavioral biometrics data by applying it on a dataset of touchscreen swipes.
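The seeding step that gives both agents their name can be sketched directly: run k-means++ D² sampling over a population of typing profiles and use the resulting centers as "master" timing vectors to replay against victims. The toy population and dimensionality below are invented for illustration:

```python
import numpy as np

def kmeanspp_guesses(X, k, rng):
    """k-means++ seeding (D^2 sampling): each new center is drawn with
    probability proportional to its squared distance from the chosen
    centers, giving k well-spread candidate typing profiles."""
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d2 = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(centers)

rng = np.random.default_rng(0)
# toy population: hold/flight times (ms) for a fixed phrase, three typing styles
pop = np.concatenate([rng.normal(m, 5, size=(50, 6)) for m in (80, 120, 160)])
guesses = kmeanspp_guesses(pop, k=3, rng=rng)
# an attacker would now replay each guess, in order, against the verifier
print(guesses.shape)  # (3, 6)
```

Because the centers cover the population's dense regions, a handful of replayed guesses can match many users' legitimate timing envelopes, which is the paper's attack intuition.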
ABC: Enabling Smartphone Authentication with Built-in Camera. Zhongjie Ba (University at Buffalo, State University of New York), Sixu Piao (University at Buffalo, State University of New York), Xinwen Fu (University of Central Florida), Dimitrios Koutsonikolas (University at Buffalo, State University of New York), Aziz Mohaisen (University of Central Florida), and Kui Ren (University at Buffalo, State University of New York). Reliably identifying and authenticating smartphones is critical in our daily life since they are increasingly being used to manage sensitive data such as private messages and financial data. Recent research on hardware fingerprinting shows that each smartphone, regardless of the manufacturer or make, possesses a variety of hardware fingerprints that are unique, robust, and physically unclonable. There is a growing interest in designing and implementing hardware-rooted smartphone authentication which authenticates smartphones through verifying the hardware fingerprints of their built-in sensors. Unfortunately, previous fingerprinting methods either involve large registration overhead or suffer from fingerprint forgery attacks, rendering them infeasible in authentication systems. In this paper, we propose ABC, a real-time smartphone Authentication protocol utilizing the photo-response non-uniformity (PRNU) of the Built-in Camera. In contrast to previous works that require tens of images to build reliable PRNU features for conventional cameras, we are the first to observe that one image alone can uniquely identify a smartphone due to the unique PRNU of a smartphone image sensor. This new discovery makes the use of PRNU practical for smartphone authentication. While most existing hardware fingerprints are vulnerable against forgery attacks, ABC defeats forgery attacks by verifying a smartphone’s PRNU identity through a challenge response protocol using a visible light communication channel.
A user captures two time-variant QR codes and sends the two images to a server, which verifies the identity by fingerprint and image content matching. The time-variant QR codes can also defeat replay attacks. Our experiments with 16,000 images over 40 smartphones show that ABC can efficiently authenticate user devices with an error rate less than 0.5%.
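The fingerprint-matching half of this verification can be sketched as residual correlation: extract the high-frequency noise residue of each image and correlate it against the enrolled pattern. The box-blur denoiser and the synthetic sensor below are stand-ins (PRNU work typically uses wavelet denoising on real photos), but the check itself is the core idea:

```python
import numpy as np

def blur(img):
    """3x3 box blur as a stand-in for the wavelet denoiser used in PRNU work."""
    p = np.pad(img, 1, mode="edge")
    return sum(p[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def residual(img):
    return img - blur(img)            # high-frequency residue carrying the PRNU

def correlate(a, b):
    """Normalized cross-correlation between two noise residuals."""
    a, b = a - a.mean(), b - b.mean()
    return (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())

rng = np.random.default_rng(2)
K = rng.normal(0, 0.02, size=(64, 64))            # this phone's fixed sensor pattern
shoot = lambda pattern: 128 * (1 + pattern) + rng.normal(0, 2, size=(64, 64))

fingerprint = residual(shoot(K))                  # enrollment from a single image
same = correlate(residual(shoot(K)), fingerprint)
other = correlate(residual(shoot(rng.normal(0, 0.02, size=(64, 64)))), fingerprint)
print(same > other)                               # True: matching camera correlates higher
```

The challenge-response layer of ABC then binds this check to time-variant QR codes so that a forger cannot simply replay a residual lifted from the victim's published photos.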
Device Pairing at the Touch of an Electrode. Marc Roeschlin (University of Oxford), Ivan Martinovic (University of Oxford), and Kasper B. Rasmussen (University of Oxford). Device pairing is the problem of having two devices securely establish a key that can be used to secure subsequent communication. The problem arises every time two devices that do not already share a secret need to bootstrap a secure communication channel. Many solutions exist, all suited to different situations, and all with their own strengths and weaknesses. In this paper, we propose a novel approach to device pairing that applies whenever a user wants to pair two devices that can be physically touched at the same time. The pairing process is easy to perform, even for novice users. A central problem for a device (Alice) running a device pairing protocol is determining whether the other party (Bob) is in fact the device that we are supposed to establish a key with. Our scheme is based on the idea that two devices can perform device pairing, if they are physically held by the same person (at the same time). In order to pair two devices, a person touches a conductive surface on each device. While the person is in contact with both devices, the human body acts as a transmission medium for intra-body communication and the two devices can communicate through the body. This body channel is used as part of a pairing protocol which allows the devices to agree on a mutual secret and, at the same time, extract physical features to verify that they are being held by the same person. We prove that our device pairing protocol is secure in our threat model and we build a proof of concept set-up and conduct experiments with 15 people to verify the idea in practice.
Face Flashing: a Secure Liveness Detection Protocol based on Light Reflections. Di Tang (Chinese University of Hong Kong), Zhe Zhou (Fudan University), Yinqian Zhang (Ohio State University), and Kehuan Zhang (Chinese University of Hong Kong). Face authentication systems are becoming increasingly prevalent, especially with the rapid development of Deep Learning technologies. However, human facial information is easily captured and reproduced, which makes face authentication systems vulnerable to various attacks. Liveness detection is an important defense technique to prevent such attacks, but existing solutions do not provide clear and strong security guarantees, especially in terms of time. To overcome these limitations, we propose a new liveness detection protocol called Face Flashing that significantly increases the bar for launching successful attacks on face authentication systems. By randomly flashing well-designed pictures on a screen and analyzing the reflected light, our protocol leverages the physical characteristics of human faces: reflection processing at the speed of light, unique textual features, and uneven 3D shapes. Cooperating with the working mechanisms of the screen and digital cameras, our protocol is able to detect subtle traces left by an attacking process. To demonstrate the effectiveness of Face Flashing, we implemented a prototype and performed thorough evaluations with a large dataset collected from real-world scenarios. The results show that our Timing Verification can effectively detect the time gap between legitimate authentications and malicious cases. Our Face Verification can also differentiate 2D planes from 3D objects accurately. The overall accuracy of our liveness detection system is 98.8%, and its robustness was evaluated in different scenarios. In the worst case, our system’s accuracy decreased to a still-high 97.3%.
Session 4A: Measurements
Session Chair: Xiaojing Liao
Kon Tiki Ballroom
A Large-scale Analysis of Content Modification by Open HTTP Proxies. Giorgos Tsirantonakis (FORTH), Panagiotis Ilia (FORTH), Sotiris Ioannidis (FORTH), Elias Athanasopoulos (University of Cyprus), and Michalis Polychronakis (Stony Brook University). Open HTTP proxies offer a quick and convenient solution for routing web traffic towards a destination. In contrast to more elaborate relaying systems, such as anonymity networks or VPN services, users can freely connect to an open HTTP proxy without the need to install any special software. Therefore, open HTTP proxies are an attractive option for bypassing IP-based filters and geo-location restrictions, circumventing content blocking and censorship, and in general, hiding the client’s IP address when accessing a web server. Nevertheless, the consequences of routing traffic through an untrusted third party can be severe, while the operating incentives of the thousands of publicly available HTTP proxies are questionable. In this paper, we present the results of a large-scale analysis of open HTTP proxies, focusing on determining the extent to which user traffic is manipulated while being relayed. We have designed a methodology for detecting proxies that, instead of passively relaying traffic, actively modify the relayed content. Beyond simple detection, our framework is capable of macroscopically attributing certain traffic modifications at the network level to well-defined malicious actions, such as ad injection, user fingerprinting, and redirection to malware landing pages. We have applied our methodology on a large set of publicly available HTTP proxies, which we monitored for a period of two months, and identified that 38% of them perform some form of content modification. The majority of these proxies can be considered benign, as they do not perform any harmful content modification. However, 5.15% of the tested proxies were found to perform modification or injection that can be considered as malicious or unwanted.
Specifically, 47% of the malicious proxies injected ads, 39% injected code for collecting user information that can be used for tracking and fingerprinting, and 12% attempted to redirect the user to pages that contain malware. Our study reveals the true incentives of many of the publicly available web proxies. Our findings raise several concerns, as we uncover multiple cases where users can be severely affected by connecting to an open proxy. As a step towards protecting users against unwanted content modification, we built a service that leverages our methodology to automatically collect and probe public proxies, and generates a list of safe proxies that do not perform any content modification, on a daily basis.
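The basic detection idea, diffing content fetched directly against the same content fetched through a proxy, can be sketched as a script-source comparison; the URLs below are hypothetical, and the paper's framework goes further by attributing modifications to ad injection, fingerprinting, or malware redirection:

```python
from html.parser import HTMLParser

class ScriptSources(HTMLParser):
    """Collect the external script URLs referenced by a page."""
    def __init__(self):
        super().__init__()
        self.srcs = set()
    def handle_starttag(self, tag, attrs):
        if tag == "script":
            for name, value in attrs:
                if name == "src":
                    self.srcs.add(value)

def injected_scripts(direct_html, proxied_html):
    """Scripts present only in the proxied copy are candidate injections."""
    a, b = ScriptSources(), ScriptSources()
    a.feed(direct_html)
    b.feed(proxied_html)
    return b.srcs - a.srcs

direct = '<html><script src="/app.js"></script></html>'
proxied = ('<html><script src="/app.js"></script>'
           '<script src="http://ads.example/inject.js"></script></html>')
print(injected_scripts(direct, proxied))  # {'http://ads.example/inject.js'}
```

In practice such a comparison must also tolerate benign variation (timestamps, CDN rotation), which is why the paper probes each proxy repeatedly over two months before classifying its behavior.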
Measuring and Disrupting Anti-Adblockers Using Differential Execution Analysis. Shitong Zhu (University of California, Riverside), Xunchao Hu (Syracuse University), Zhiyun Qian (University of California, Riverside), Zubair Shafiq (University of Iowa), and Heng Yin (University of California, Riverside). Millions of people use adblockers to remove intrusive and malicious ads as well as protect themselves against tracking and pervasive surveillance. Online publishers consider adblockers a major threat to the ad-powered “free” Web. They have started to retaliate against adblockers by employing anti-adblockers which can detect and stop adblock users. To counter this retaliation, adblockers in turn try to detect and filter anti-adblocking scripts. This back and forth has prompted an escalating arms race between adblockers and anti-adblockers. We want to develop a comprehensive understanding of anti-adblockers, with the ultimate aim of enabling adblockers to bypass state-of-the-art anti-adblockers. In this paper, we present a differential execution analysis to automatically detect and analyze anti-adblockers. At a high level, we collect execution traces by visiting a website with and without adblockers. Through differential execution analysis, we are able to pinpoint the conditions that lead to the differences caused by anti-adblocking code. Using our system, we detect anti-adblockers on 30.5% of the Alexa top 10K websites, which is 5-52 times more than reported in prior literature. Unlike prior work which is limited to detecting visible reactions (e.g., warning messages) by anti-adblockers, our system can discover attempts to detect adblockers even when there is no visible reaction. From manually checking one third of the detected websites, we find that the websites that have no visible reactions constitute over 90% of the cases, completely dominating the ones that have visible warning messages.
Finally, based on our findings, we further develop JavaScript rewriting and API hooking based solutions (the latter implemented as a Chrome extension) to help adblockers bypass state-of-the-art anti-adblockers.
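The core differential technique can be miniaturized in Python: record which lines execute with and without the "adblocker" condition and diff the traces. The toy `page_script` below is a stand-in for real JavaScript that the paper instruments inside the browser:

```python
import sys

def collect_line_trace(fn, *args):
    """Record the source lines executed by fn (a stand-in for the
    JavaScript-level traces the paper collects while loading a page)."""
    lines = []
    def tracer(frame, event, arg):
        if event == "line":
            lines.append(frame.f_lineno)
        return tracer
    sys.settrace(tracer)
    try:
        fn(*args)
    finally:
        sys.settrace(None)
    return lines

def page_script(ad_blocked):
    shown = []
    if ad_blocked:                    # anti-adblock check ("is the ad element missing?")
        shown.append("please disable your adblocker")
    else:
        shown.append("ad")
    return shown

divergent = (set(collect_line_trace(page_script, True))
             ^ set(collect_line_trace(page_script, False)))
print(len(divergent) > 0)            # True: the adblock-reactive branch stands out
```

The lines in `divergent` are exactly the branches whose outcome depends on adblocking, and that is what makes the technique catch anti-adblockers with no visible reaction: a silent beacon still executes a divergent branch.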
Towards Measuring the Effectiveness of Telephony Blacklists. Sharbani Pandit (Georgia Institute of Technology), Roberto Perdisci (University of Georgia), Mustaque Ahamad (Georgia Institute of Technology), and Payas Gupta (Pindrop). The convergence of telephony with the Internet has led to numerous new attacks that make use of phone calls to defraud victims. In response to the increasing number of unwanted or fraudulent phone calls, a number of call blocking applications have appeared on smartphone app stores, including a recent update to the default Android phone app that alerts users of suspected spam calls. However, little is known about the methods used by these apps to identify malicious numbers, and how effective these methods are in practice. In this paper, we are the first to systematically investigate multiple data sources that may be leveraged to automatically learn phone blacklists, and to explore the potential effectiveness of such blacklists by measuring their ability to block future unwanted phone calls. Specifically, we consider four different data sources: user-reported call complaints submitted to the Federal Trade Commission (FTC), complaints collected via crowd-sourced efforts (e.g., 800notes.com), call detail records (CDR) from a large telephony honeypot [1], and honeypot-based phone call audio recordings. Overall, our results show that phone blacklists are capable of blocking a significant fraction of future unwanted calls (e.g., more than 55%). Also, they have a very low false positive rate of only 0.01% for phone numbers of legitimate businesses. We also propose an unsupervised learning method to identify prevalent spam campaigns from different data sources, and show how effective blacklists may be as a defense against such campaigns.
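The blacklist-learning idea can be sketched as thresholded complaint counting plus an evaluation against future calls; the threshold and the phone numbers below are hypothetical, and the paper evaluates far richer sources (FTC complaints, crowd-sourced reports, honeypot CDRs and audio):

```python
from collections import Counter

def learn_blacklist(complaints, min_reports=3):
    """Blacklist any caller reported at least min_reports times.
    (The threshold is a stand-in for per-source tuning.)"""
    counts = Counter(number for number, _ in complaints)
    return {n for n, c in counts.items() if c >= min_reports}

def blocked_fraction(blacklist, future_unwanted_calls):
    """The paper's effectiveness measure: share of future unwanted calls blocked."""
    hits = sum(1 for n in future_unwanted_calls if n in blacklist)
    return hits / len(future_unwanted_calls)

complaints = [("8005550100", "robocall"), ("8005550100", "IRS scam"),
              ("8005550100", "robocall"), ("2025550123", "spam")]
bl = learn_blacklist(complaints)
print(bl)                                                  # {'8005550100'}
print(blocked_fraction(bl, ["8005550100", "3125550188"]))  # 0.5
```

Raising `min_reports` trades recall for precision, which is the same tension behind the paper's reported >55% block rate at a 0.01% false positive rate.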
Things You May Not Know About Android (Un)Packers: A Systematic Study based on Whole-System Emulation. Yue Duan (University of California, Riverside), Mu Zhang (Cornell University), Abhishek Vasisht Bhaskar (Grammatech, Inc.), Heng Yin (University of California, Riverside), Xiaorui Pan (Indiana University Bloomington), Tongxin Li (Peking University), Xueqiang Wang (Indiana University Bloomington), and XiaoFeng Wang (Indiana University Bloomington). The prevalent usage of runtime packers has complicated Android malware analysis, as both legitimate and malicious apps are leveraging packing mechanisms to protect themselves against reverse engineering. Although recent efforts have been made to analyze particular packing techniques, little has been done to study the unique characteristics of Android packers. In this paper, we report the first systematic study on mainstream Android packers, in an attempt to understand their security implications. For this purpose, we developed DROIDUNPACK, a whole-system emulation based Android packing analysis framework, which, compared with existing tools, relies on intrinsic characteristics of Android runtime (rather than heuristics), and further enables virtual machine inspection to precisely recover hidden code and reveal packing behaviors. Running our tool on 6 major commercial packers, 93,910 Android malware samples and 3 existing state-of-the-art unpackers, we found that not only are commercial packing services abused to encrypt malicious or plagiarized contents, they themselves also introduce security-critical vulnerabilities to the apps being packed. Our study further reveals the prevalence and rapid evolution of custom packers used by malware authors, which cannot be defended against using existing techniques, due to their design weaknesses.
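The intrinsic property such unpackers rely on, code executing from memory the app itself previously wrote, can be sketched over an abstract event stream; real DROIDUNPACK observes this through whole-system emulation, and the addresses here are invented:

```python
def detect_unpacking(events):
    """Flag write-then-execute behavior: execution entering memory the
    app wrote is the intrinsic signature of unpacking, independent of
    any packer-specific heuristic."""
    written = set()
    recovered = []
    for op, page in events:
        if op == "write":
            written.add(page)
        elif op == "exec" and page in written:
            recovered.append(page)   # hidden code can be dumped at this point
    return recovered

trace = [("exec", 0x1000),           # original (packer stub) code runs
         ("write", 0x7f00),          # stub decrypts payload into memory
         ("write", 0x7f01),
         ("exec", 0x7f00)]           # control transfers into decrypted code
print([hex(p) for p in detect_unpacking(trace)])  # ['0x7f00']
```

Dumping memory at exactly these transition points is what lets an analysis recover the hidden code regardless of which encryption or loading trick the packer used.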
Session 4B: Software Attacks and Secure Architectures
Session Chair: Zhou Li
Aviary Ballroom
KeyDrown: Eliminating Software-Based Keystroke Timing Side-Channel Attacks. Michael Schwarz (Graz University of Technology), Moritz Lipp (Graz University of Technology), Daniel Gruss (Graz University of Technology), Samuel Weiser (Graz University of Technology), Clementine Maurice (Univ. Rennes, CNRS, IRISA), Raphael Spreitzer (Graz University of Technology), and Stefan Mangard (Graz University of Technology). Besides cryptographic secrets, software-based side-channel attacks also leak sensitive user input. The most accurate attacks exploit cache timings or interrupt information to monitor keystroke timings and subsequently infer typed words and sentences. These attacks have also been demonstrated in JavaScript embedded in websites by a remote attacker. We extend the state-of-the-art with a new interrupt-based attack and the first Prime+Probe attack on kernel interrupt handlers. Previously proposed countermeasures fail to prevent software-based keystroke timing attacks as they do not protect keystroke processing through the entire software stack. We close this gap with KeyDrown, a new defense mechanism against software-based keystroke timing attacks. KeyDrown injects a large number of fake keystrokes in the kernel, making the keystroke interrupt density uniform over time, i.e., independent of the real keystrokes. All keystrokes, including fake keystrokes, are carefully propagated through the shared library to make them indistinguishable by exploiting the specific properties of software-based side channels. We show that attackers cannot distinguish fake keystrokes from real keystrokes anymore and we evaluate KeyDrown on a commodity notebook as well as on Android smartphones. We show that KeyDrown eliminates any advantage an attacker can gain from using software-based side-channel attacks.
Securing Real-Time Microcontroller Systems through Customized Memory View Switching. Chung Hwan Kim (NEC Laboratories America), Taegyu Kim (Purdue University), Hongjun Choi (Purdue University), Zhongshu Gu (IBM T.J. Watson Research Center), Byoungyoung Lee (Purdue University), Xiangyu Zhang (Purdue University), and Dongyan Xu (Purdue University). Real-time microcontrollers have been widely adopted in cyber-physical systems that require both real-time and security guarantees. Unfortunately, security is sometimes traded for real-time performance in such systems. Notably, memory isolation, which is one of the most established security features in modern computer systems, is typically not available in many real-time microcontroller systems due to its negative impacts on performance and violation of real-time constraints. As such, the memory space of these systems has created an open, monolithic attack surface that attackers can target to subvert the entire systems. In this paper, we present MINION, a security architecture that intends to virtually partition the memory space and enforce memory access control of a real-time microcontroller. MINION can automatically identify the reachable memory regions of real-time processes through off-line static analysis on the system’s firmware and conduct run-time memory access control through hardware-based enforcement. Our evaluation results demonstrate that, by significantly reducing the memory space that each process can access, MINION can effectively protect a microcontroller from various attacks that were previously viable. In addition, unlike conventional memory isolation mechanisms that might incur substantial performance overhead, the lightweight design of MINION is able to maintain the real-time properties of the microcontroller.
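The per-process memory view can be sketched as a region table consulted on every access; the region layout below is hypothetical, and MINION enforces the equivalent in hardware (the MPU) after deriving each process's reachable regions by static analysis:

```python
class MemoryView:
    """The set of regions one real-time process is allowed to touch."""
    def __init__(self, regions):
        self.regions = regions       # list of (start, end, perms), e.g. "rw"

    def check(self, addr, op):
        for start, end, perms in self.regions:
            if start <= addr < end and op in perms:
                return True
        return False                 # fault: access outside the process's view

# hypothetical view for a sensor task: its own data plus one peripheral
sensor_task = MemoryView([(0x2000_0000, 0x2000_1000, "rw"),
                          (0x4001_0000, 0x4001_0400, "rw")])
print(sensor_task.check(0x2000_0010, "w"))  # True
print(sensor_task.check(0x0800_0000, "w"))  # False: e.g. flash outside its view
```

Switching the active table at context-switch time is what the title's "memory view switching" refers to: each process runs under only its own, minimal view.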
Automated Generation of Event-Oriented Exploits in Android Hybrid Apps. Guangliang Yang (Texas A&M University), Jeff Huang (Texas A&M University), and Guofei Gu (Texas A&M University). Recently more and more Android apps integrate the embedded browser, known as “WebView”, to render web pages and run JavaScript code without leaving these apps. WebView provides a powerful feature that allows event handlers defined in the native context (i.e., Java in Android) to handle web events that occur in WebView. However, as shown in prior work, this feature suffers from remote attacks, which we generalize as Event-Oriented Exploit (EOE) in this paper, such that adversaries may remotely access local critical functionalities through event handlers in WebView without any permission or authentication. In this paper, we propose a novel approach, EOEDroid, which can automatically vet event handlers in a given hybrid app using selective symbolic execution and static analysis. If a vulnerability is found, EOEDroid also automatically generates exploit code to help developers and analysts verify the vulnerability. To support exploit code generation, we also systematically study web events, event handlers and their trigger constraints. We evaluated our approach on 3,652 most popular apps. The result showed that our approach found 97 total vulnerabilities in 58 apps, including 2 cross-frame DOM manipulation, 53 phishing, 30 sensitive information leakage, 1 local resources access, and 11 Intent abuse vulnerabilities. We also found a potential backdoor in a high profile app that could be used to steal users’ sensitive information, such as IMEI. Even though developers attempted to close it, EOEDroid found that adversaries were still able to exploit it by triggering two events together and feeding event handlers with well-designed input.
Tipped Off by Your Memory Allocator: Device-Wide User Activity Sequencing from Android Memory Images. Rohit Bhatia (Purdue University), Brendan Saltaformaggio (Georgia Institute of Technology), Seung Jei Yang (The Affiliated Institute of ETRI), Aisha Ali-Gombe (Towson University), Xiangyu Zhang (Purdue University), Dongyan Xu (Purdue University), and Golden G. Richard III (Louisiana State University). An essential forensic capability is to infer the sequence of actions performed by a suspect in the commission of a crime. Unfortunately, for cyber investigations, user activity timeline reconstruction remains an open research challenge, currently requiring manual identification of datable artifacts/logs and heuristic-based temporal inference. In this paper, we propose a memory forensics capability to address this challenge. We present Timeliner, a forensics technique capable of automatically inferring the timeline of user actions on an Android device across all apps, from a single memory image acquired from the device. Timeliner is inspired by the observation that Android app Activity launches leave behind key self-identifying data structures. More importantly, this collection of data structures can be temporally ordered, owing to the predictable manner in which they were allocated and distributed in memory. Based on these observations, Timeliner is designed to (1) identify and recover these residual data structures, (2) infer the user-induced transitions between their corresponding Activities, and (3) reconstruct the device-wide, cross-app Activity timeline. Timeliner is designed to leverage the memory image of Android’s centralized ActivityManager service. Hence, it is able to sequence Activity launches across all apps — even those which have terminated. Our evaluation shows that Timeliner can reveal substantial evidence (up to an hour) across a variety of apps on different Android platforms.
Session 5A: Software Security
Long Lu
Kon Tiki Ballroom
K-Miner: Uncovering Memory Corruption in Linux. David Gens (CYSEC/Technische Universitat Darmstadt), Simon Schmitt (CYSEC/Technische Universitat Darmstadt), Lucas Davi (CYSEC/Technische Universitat Darmstadt), and Ahmad-Reza Sadeghi (CYSEC/Technische Universitat Darmstadt). Operating system kernels are appealing attack targets: compromising the kernel usually allows attackers to bypass all deployed security mechanisms and take control over the entire system. Commodity kernels, like Linux, are written in low-level programming languages that offer only limited type and memory-safety guarantees, enabling adversaries to launch sophisticated run-time attacks against the kernel by exploiting memory-corruption vulnerabilities. Many defenses have been proposed to protect operating systems at run time, such as control-flow integrity (CFI). However, the goal of these run-time monitors is to prevent exploitation as a symptom of memory corruption, rather than eliminating the underlying root cause, i.e., bugs in the kernel code. While finding bugs can be automated, e.g., using static analysis, all existing approaches are limited to local, intra-procedural checks, and face severe scalability challenges due to the large kernel code base. Consequently, there currently exist no tools for conducting global static analysis of operating system kernels. In this paper, we present K-Miner, a new framework to efficiently analyze large, commodity operating system kernels like Linux. Our novel approach exploits the highly standardized interface structure of the kernel code to enable scalable pointer analysis and conduct global, context-sensitive analysis. Through our inter-procedural analysis we show that K-Miner systematically and reliably uncovers several different classes of memory-corruption vulnerabilities, such as dangling pointers, use-after-free, double-free, and double-lock vulnerabilities.
We thoroughly evaluate our extensible analysis framework, which leverages the popular and widely used LLVM compiler suite, for the current Linux kernel and demonstrate its effectiveness by reporting several memory-corruption vulnerabilities.
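The bug classes K-Miner targets can be illustrated with a deliberately tiny checker over a toy linear IR — a far cry from K-Miner’s global, context-sensitive pointer analysis, and with an instruction format invented purely for this sketch:

```python
# Toy checker (not K-Miner): walk a linear toy IR and flag
# use-after-free and double-free on named variables.

def check(ir):
    freed, reports = set(), []
    for i, (op, var) in enumerate(ir):
        if op == "free":
            if var in freed:
                reports.append((i, "double-free", var))
            freed.add(var)
        elif op == "use" and var in freed:
            reports.append((i, "use-after-free", var))
        elif op == "alloc":
            freed.discard(var)          # fresh allocation revives the name
    return reports

ir = [("alloc", "p"), ("use", "p"), ("free", "p"),
      ("use", "p"), ("free", "p")]
print(check(ir))  # [(3, 'use-after-free', 'p'), (4, 'double-free', 'p')]
```

The hard part K-Miner solves is doing this across function boundaries and pointer aliases at the scale of the Linux kernel.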
CFIXX: Object Type Integrity for C++. Nathan Burow (Purdue University), Derrick McKee (Purdue University), Scott A. Carr (Purdue University), and Mathias Payer (Purdue University). C++ relies on object type information for dynamic dispatch and casting. The association of type information to an object is implemented via the virtual table pointer, which is stored in the object itself. As C++ has neither memory nor type safety, adversaries may therefore overwrite an object’s type. If the corrupted type is used for dynamic dispatch, the attacker has hijacked the application’s control flow. This vulnerability is widespread and commonly exploited. Firefox, Chrome, and other major C++ applications are network facing, commonly attacked, and make significant use of dynamic dispatch. Control-Flow Integrity (CFI) is the state-of-the-art policy for efficient mitigation of control-flow hijacking attacks. CFI mechanisms determine statically (i.e., at compile time) the set of functions that are valid at a given call site, based on C++ semantics. We propose an orthogonal policy, Object Type Integrity (OTI), that dynamically tracks object types. Consequently, instead of allowing a set of targets for each dynamic dispatch on an object, only the single, correct target for the object’s type is allowed. To show the efficacy of OTI, we present CFIXX, which enforces OTI. CFIXX enforces OTI by dynamically tracking the type of each object and enforcing its integrity against arbitrary writes. CFIXX has minimal overhead on CPU bound applications such as SPEC CPU2006 — 4.98%. On key applications like Chromium, CFIXX has negligible overhead on JavaScript benchmarks: 2.03% on Octane, 1.99% on Kraken, and 2.80% on JetStream. We show that CFIXX can be deployed in conjunction with CFI, providing a significant security improvement.
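The OTI policy can be sketched in miniature: record each object’s type in a protected side table at construction and compare it at every dynamic dispatch. The Python analogue below (class reassignment standing in for a vtable-pointer overwrite) is purely illustrative; CFIXX itself instruments C++ at compile time:

```python
# Conceptual sketch of Object Type Integrity, not CFIXX's mechanism.
_type_table = {}  # stand-in for CFIXX's write-protected metadata region

def construct(obj):
    _type_table[id(obj)] = type(obj)   # recorded once, at construction

def dispatch(obj, method):
    # A corrupted vtable pointer changes the apparent type; OTI compares
    # against the type recorded at construction time instead.
    if _type_table.get(id(obj)) is not type(obj):
        raise RuntimeError("OTI violation: object type was corrupted")
    return getattr(obj, method)()

class A:
    def f(self):
        return "A.f"

class Evil:
    def f(self):
        return "hijacked"

a = A()
construct(a)
print(dispatch(a, "f"))   # A.f
a.__class__ = Evil        # simulate an attacker overwriting the type
# dispatch(a, "f") now raises RuntimeError instead of calling Evil.f
```

The point of the policy is visible even in this toy: dispatch admits exactly one target, the one implied by the constructed type, rather than a CFI-style set.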
Back To The Epilogue: Evading Control Flow Guard via Unaligned Targets. Andrea Biondo (University of Padua), Mauro Conti (University of Padua), and Daniele Lain (University of Padua). Attackers use memory corruption vulnerabilities to compromise systems by hijacking control flow towards attacker-controlled code. Over time, researchers proposed several countermeasures, such as Address Space Layout Randomization, Write XOR Execute and Control Flow Integrity (CFI). CFI is one of the most promising solutions, enforcing control flow to adhere to statically determined valid execution paths. To limit the execution and storage overhead, practical implementations enforce a coarser version of CFI. One of the most widely deployed implementations of CFI is the one proposed by Microsoft, named Control Flow Guard (CFG). CFG is currently in place on all Windows operating systems, from Windows 8.1 to the most recent update of Windows 10 (at the time of writing), accounting for more than 500 million machines. In this paper, we show a significant design vulnerability in Windows CFG and propose a specific attack to exploit it: the Back to The Epilogue (BATE) attack. We show that with BATE an attacker can completely evade CFG and transfer control to any location, thus obtaining arbitrary code execution. BATE leverages the tradeoff of CFG between precision, performance, and backwards compatibility; in particular, the latter motivates 16-byte address granularity in some circumstances. This vulnerability, inherent to the CFG design, allows us to call portions of code (gadgets) that should not be allowed, and that we can chain together to escape CFG. These gadgets are very common: we ran a thorough evaluation of Windows system libraries, and found many high value targets – exploitable gadgets in code loaded by almost all the applications on 32-bit systems and by web browsers on 64-bit.
We also demonstrate the real-world feasibility of our attack by using it to build a remote code execution exploit against the Microsoft Edge web browser running on 64-bit Windows 10. Finally, we discuss possible countermeasures to BATE.
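A hedged sketch of the coarse-grained check BATE abuses — a simplification of CFG’s real bitmap, and applicable only in the circumstances where CFG falls back to 16-byte granularity, as the abstract notes:

```python
# Simplified model of a 16-byte-granular validity bitmap (not the
# actual Windows CFG implementation).
valid_chunks = set()

def register_target(addr):
    """Loader marks a legitimate indirect-call target as valid."""
    valid_chunks.add(addr >> 4)       # one bit per 16-byte chunk

def cfg_check(addr):
    """Coarse indirect-call check: valid if the chunk is valid."""
    return (addr >> 4) in valid_chunks

register_target(0x140001000)          # legitimate function entry
print(cfg_check(0x140001000))         # True
print(cfg_check(0x140001007))         # True: unaligned gadget in the
                                      # same 16-byte chunk slips through
print(cfg_check(0x140001010))         # False: next chunk, unregistered
```

BATE chains exactly such in-chunk unaligned addresses — typically bytes inside function epilogues — into gadgets that the coarse check cannot distinguish from the real entry point.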
Superset Disassembly: Statically Rewriting x86 Binaries Without Heuristics. Erick Bauman (University of Texas at Dallas), Zhiqiang Lin (University of Texas at Dallas), and Kevin Hamlen (University of Texas at Dallas). Static binary rewriting is a core technology for many systems and security applications, including profiling, optimization, and software fault isolation. While many static binary rewriters have been developed over the past few decades, most make various assumptions about the binary, such as requiring correct disassembly, cooperation from compilers, or access to debugging symbols or relocation entries. This paper presents MULTIVERSE, a new binary rewriter that is able to rewrite Intel CISC binaries without these assumptions. Two fundamental techniques are developed to achieve this: (1) a superset disassembly that completely disassembles the binary code into a superset of instructions in which all legal instructions fall, and (2) an instruction rewriter that is able to relocate all instructions to any other location by mediating all indirect control flow transfers and redirecting them to the correct new addresses. A prototype implementation of MULTIVERSE and an evaluation on SPECint 2006 benchmarks show that MULTIVERSE is able to rewrite all of the testing binaries with a reasonable runtime overhead for the new rewritten binaries. Simple static instrumentation using MULTIVERSE and its comparison with dynamic instrumentation shows that the approach achieves better average performance. Finally, the security applications of MULTIVERSE are exhibited by using it to implement a shadow stack.
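The superset idea itself is simple to sketch: decode starting at every byte offset and keep the union, so whichever offsets turn out to be the true instruction starts are guaranteed to be covered. The two-opcode toy ISA below is invented for illustration; MULTIVERSE decodes real x86:

```python
# Toy superset disassembly over an invented ISA (not x86).
TOY_LEN = {0x90: 1, 0xEB: 2}          # opcode -> instruction length

def superset_disassemble(code):
    """Decode at every byte offset; union of all decodings."""
    insns = set()
    for off in range(len(code)):
        size = TOY_LEN.get(code[off])
        if size and off + size <= len(code):
            insns.add((off, bytes(code[off:off + size])))
    return sorted(insns)

code = bytes([0xEB, 0x90, 0x90])      # ambiguous: the 0x90 at offset 1
for off, ins in superset_disassemble(code):  # is both a jump operand
    print(hex(off), ins.hex())               # and a standalone NOP
```

Because the superset contains every legal decoding, the rewriter never has to guess which offsets are code — it relocates them all and fixes up control flow at run time.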
Enhancing Memory Error Detection for Large-Scale Applications and Fuzz Testing. Wookhyun Han (KAIST), Byunggill Joe (KAIST), Byoungyoung Lee (Purdue University), Chengyu Song (University of California, Riverside), and Insik Shin (KAIST). Memory errors are among the most common vulnerabilities, owing to the popularity of memory-unsafe languages including C and C++. Once exploited, they can easily lead to a system crash (i.e., denial-of-service attacks) or allow adversaries to fully compromise the victim system. This paper proposes MEDS, a practical memory error detector. MEDS significantly enhances its detection capability by approximating two ideal properties, called an infinite gap and an infinite heap. The approximated infinite gap of MEDS sets up a large inaccessible memory region between objects (i.e., 4 MB), and the approximated infinite heap allows MEDS to fully utilize the virtual address space (i.e., a 45-bit memory space). The key idea of MEDS in achieving these properties is a novel user-space memory allocation mechanism, MEDSALLOC. MEDSALLOC leverages a page aliasing mechanism, which allows MEDS to maximize virtual memory space utilization while minimizing physical memory use. To highlight the detection capability and practical impact of MEDS, we evaluated it and compared it to Google’s state-of-the-art detection tool, AddressSanitizer. MEDS showed three times better detection rates on four real-world vulnerabilities in Chrome and Firefox. More importantly, when used for fuzz testing, MEDS was able to identify 68.3% more memory errors than AddressSanitizer in the same amount of testing time, highlighting its practical value in the software testing area. In terms of performance overhead, MEDS incurred 108% and 86% slowdowns compared to native execution and AddressSanitizer, respectively, on real-world applications including Chrome, Firefox, Apache, Nginx, and OpenSSL.
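The “approximated infinite gap” is easiest to see as address arithmetic. The toy allocator below (not MEDSALLOC, which additionally uses page aliasing to keep physical memory use low) places a 4 MB inaccessible gap after every object so that small overflows land in unmapped space:

```python
# Arithmetic sketch of MEDS's approximated infinite gap.
GAP = 4 * 1024 * 1024                 # 4 MB guard gap from the paper

class GapAllocator:
    def __init__(self, base=0x10000000):
        self.next = base
        self.live = {}                # start address -> size

    def malloc(self, size):
        addr = self.next
        self.live[addr] = size
        self.next = addr + size + GAP # leave an inaccessible gap
        return addr

    def accessible(self, addr):
        return any(s <= addr < s + n for s, n in self.live.items())

heap = GapAllocator()
p = heap.malloc(64)
print(heap.accessible(p + 63))        # True: inside the object
print(heap.accessible(p + 64))        # False: overflow hits the gap
                                      # and would fault immediately
```

Spacing objects this aggressively is only affordable because the 45-bit virtual address space is huge and, in MEDS, unbacked by physical pages.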
Session 5B: Privacy in Mobile
Dongyan Xu
Aviary Ballroom
Finding Clues for Your Secrets: Semantics-Driven, Learning-Based Privacy Discovery in Mobile Apps. Yuhong Nan (Fudan University), Zhemin Yang (Fudan University and Shanghai Institute of Intelligent Electronics & Systems), Xiaofeng Wang (Indiana University Bloomington), Yuan Zhang (Fudan University), Donglai Zhu (Fudan University), and Min Yang (Fudan University and Shanghai Institute for Advanced Communication and Data Science). A long-standing challenge in analyzing information leaks within mobile apps is to automatically identify the code operating on sensitive data. With all existing solutions relying on System APIs (e.g., IMEI, GPS location) or features of user interfaces (UI), the content from app servers, such as the user’s Facebook profile or payment history, falls through the cracks. Finding such content is important given the fact that most apps today are web applications, whose critical data are often on the server side. In the meantime, operations on the data within mobile apps are often hard to capture, since all server-side information is delivered to the app in the same way, sensitive or not. A unique observation of our research is that in modern apps, a program is essentially a semantics-rich documentation carrying meaningful program elements such as method names, variables and constants that reveal the sensitive data involved, even when the program is under moderate obfuscation. Leveraging this observation, we develop a novel semantics-driven solution for automatic discovery of sensitive user data, including those from the server side. Our approach utilizes natural language processing (NLP) to automatically locate the program elements (variables, methods, etc.) of interest, and then performs a learning-based program structure analysis to accurately identify those indeed carrying sensitive content. Using this new technique, we analyzed 445,668 popular apps, an unprecedented scale for this type of research.
Our work brings to light the pervasiveness of information leaks, and the channels through which the leaks happen, including unintentional over-sharing across libraries and aggressive data acquisition behaviors. Further we found that many high-profile apps and libraries are involved in such leaks. Our findings contribute to a better understanding of the privacy risk in mobile apps and also highlight the importance of data protection in today’s software composition.
Bug Fixes, Improvements, … and Privacy Leaks – A Longitudinal Study of PII Leaks Across Android App Versions. Jingjing Ren (Northeastern University), Martina Lindorfer (UC Santa Barbara), Daniel J. Dubois (Northeastern University), Ashwin Rao (University of Helsinki), David Choffnes (Northeastern University), and Narseo Vallina-Rodriguez (IMDEA Networks Institute and ICSI). Is mobile privacy getting better or worse over time? In this paper, we address this question by studying privacy leaks from historical and current versions of 512 popular Android apps, covering 7,665 app releases over 8 years of app version history. Through automated and scripted interaction with apps and analysis of the network traffic they generate on real mobile devices, we identify how privacy changes over time for individual apps and in aggregate. We find several trends that include increased collection of personally identifiable information (PII) across app versions, slow adoption of HTTPS to secure the information sent to other parties, and a large number of third parties being able to link user activity and locations across apps. Interestingly, while privacy is getting worse in aggregate, we find that the privacy risk of individual apps varies greatly over time, and a substantial fraction of apps see little change or even improvement in privacy. Given these trends, we propose metrics for quantifying privacy risk and for providing this risk assessment proactively to help users balance the risks and benefits of installing new versions of apps.
Apps, Trackers, Privacy, and Regulators: A Global Study of the Mobile Tracking Ecosystem. Abbas Razaghpanah (Stony Brook University), Rishab Nithyanand (Data & Society Research Institute), Narseo Vallina-Rodriguez (IMDEA Networks and ICSI), Srikanth Sundaresan (Princeton University), Mark Allman (ICSI), Christian Kreibich (Corelight and ICSI), and Phillipa Gill (University of Massachusetts Amherst). Third-party services form an integral part of the mobile ecosystem: they ease application development and enable features such as analytics, social network integration, and app monetization through ads. However, aided by the general opacity of mobile systems, such services are also largely invisible to users. This has negative consequences for user privacy as third-party services can potentially track users without their consent, even across multiple applications. Using real-world mobile traffic data gathered by the Lumen Privacy Monitor (Lumen), a privacy-enhancing app with the ability to analyze network traffic on mobile devices in user space, we present insights into the mobile advertising and tracking ecosystem and its stakeholders. In this study, we develop automated methods to detect third-party advertising and tracking services at the traffic level. Using this technique we identify 2,121 such services, of which 233 were previously unknown to other popular advertising and tracking blacklists. We then uncover the business relationships between the providers of these services and characterize them by their prevalence in the mobile and Web ecosystem. Our analysis of the privacy policies of the largest advertising and tracking service providers shows that sharing harvested data with subsidiaries and third-party affiliates is the norm. Finally, we seek to identify the services likely to be most impacted by privacy regulations such as the European General Data Protection Regulation (GDPR) and ePrivacy directives.
OS-level Side Channels without Procfs: Exploring Cross-App Information Leakage on iOS. Xiaokuan Zhang (The Ohio State University), Xueqiang Wang (Indiana University at Bloomington), Xiaolong Bai (Tsinghua University), Yinqian Zhang (The Ohio State University), and XiaoFeng Wang (Indiana University at Bloomington). It has been demonstrated in numerous previous studies that Android and its underlying Linux operating systems do not properly isolate mobile apps to prevent cross-app side-channel attacks. Cross-app information leakage enables malicious Android apps to infer sensitive user data (e.g., passwords), or private user information (e.g., identity or location) without requiring specific permissions. Nevertheless, no prior work has ever studied these side-channel attacks on iOS-based mobile devices. One reason is that iOS does not implement procfs, the most popular side-channel attack vector; hence the previously known attacks are not feasible. In this paper, we present the first study of OS-level side-channel attacks on iOS. Specifically, we identified several new side-channel attack vectors (i.e., iOS APIs that enable cross-app information leakage); developed machine learning frameworks (i.e., classification and pattern matching) that combine multiple attack vectors to improve the accuracy of the inference attacks; demonstrated three categories of attacks that exploit these vectors and frameworks to exfiltrate sensitive user information. We have reported our findings to Apple and proposed mitigations to the attacks. Apple has incorporated some of our suggested countermeasures into iOS 11 and MacOS High Sierra 10.13 and later versions.
Knock Knock, Who’s There? Membership Inference on Aggregate Location Data. Apostolos Pyrgelis (University College London), Carmela Troncoso (IMDEA Software Institute), and Emiliano De Cristofaro (University College London). Aggregate location data is often used to support smart services and applications, e.g., generating live traffic maps or predicting visits to businesses. In this paper, we present the first study on the feasibility of membership inference attacks on aggregate location time-series. We introduce a game-based definition of the adversarial task, and cast it as a classification problem where machine learning can be used to distinguish whether or not a target user is part of the aggregates. We empirically evaluate the power of these attacks on both raw and differentially private aggregates using two mobility datasets. We find that membership inference is a serious privacy threat, and show how its effectiveness depends on the adversary’s prior knowledge, the characteristics of the underlying location data, as well as the number of users and the timeframe on which aggregation is performed. Although differentially private mechanisms can indeed reduce the extent of the attacks, they also yield a significant loss in utility. Moreover, a strategic adversary mimicking the behavior of the defense mechanism can greatly limit the protection they provide. Overall, our work presents a novel methodology geared to evaluate membership inference on aggregate location data in real-world settings and can be used by providers to assess the quality of privacy protection before data release or by regulators to detect violations.
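The game-based membership task can be sketched with a hand-rolled distinguisher. The paper trains machine-learning classifiers; the threshold rule and the mobility data below are invented for illustration:

```python
# Toy membership-inference game on aggregate location counts.

def aggregate(users, t):
    """Number of users present at the location during time slot t.
    Each user is modelled as the set of slots in which they were seen."""
    return sum(1 for u in users if t in u)

def infer_membership(target_slots, released_counts, others):
    """Adversary's guess: with prior knowledge of the target's slots
    and of the other users, check whether the released counts leave
    room for the target's visits in every one of those slots."""
    return all(released_counts[t] > aggregate(others, t)
               for t in target_slots)

others = [{0, 2}, {1}]                # slots each known user was seen in
target = {0, 1, 2}
counts_with = [aggregate(others + [target], t) for t in range(3)]
counts_without = [aggregate(others, t) for t in range(3)]
print(infer_membership(target, counts_with, others))     # True
print(infer_membership(target, counts_without, others))  # False
```

The paper’s point is that realistic (weaker) priors still suffice, and that differential privacy blunts this distinguisher only at a significant utility cost.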
Session 6A: Cloud
Tom Moyer
Kon Tiki Ballroom
Reduced Cooling Redundancy: A New Security Vulnerability in a Hot Data Center. Xing Gao (University of Delaware and College of William and Mary), Zhang Xu (College of William and Mary), Haining Wang (University of Delaware), Li Li (Ohio State University), and Xiaorui Wang (Ohio State University). Data centers have been growing rapidly in recent years to meet the surging demand of cloud services. However, the expanding scale and powerful servers generate a great amount of heat, resulting in significant cooling costs. A trend in modern data centers is to raise the temperature and maintain all servers in a relatively hot environment. While this can save on cooling costs given benign workloads running in servers, the hot environment increases the risk of cooling failure. In this paper, we unveil a new vulnerability of existing data centers with aggressive cooling energy saving policies. Such a vulnerability might be exploited to launch thermal attacks that could severely worsen the thermal conditions in a data center. Specifically, we conduct thermal measurements and uncover effective thermal attack vectors at the server, rack, and data center levels. We also present damage assessments of thermal attacks. Our results demonstrate that thermal attacks can (1) largely increase the temperature of victim servers, degrading their performance and reliability, (2) negatively impact the thermal conditions of neighboring servers, causing local hotspots, (3) raise the cooling cost, and (4) even lead to cooling failures. Finally, we propose effective defenses to prevent thermal attacks from becoming a serious security threat to data centers.
OBLIVIATE: A Data Oblivious Filesystem for Intel SGX. Adil Ahmad (Purdue University), Kyungtae Kim (Purdue University), Muhammad Ihsanulhaq Sarfaraz (Purdue University), and Byoungyoung Lee (Purdue University). Intel SGX provides confidentiality and integrity of a program running within the confines of an enclave, and is expected to enable valuable security applications such as private information retrieval. This paper is concerned with the security aspects of SGX in accessing a key system resource, files. Through concrete attack scenarios, we show that all existing SGX filesystems are vulnerable to either system call snooping, page fault, or cache based side-channel attacks. To address these security limitations in current SGX filesystems, we present OBLIVIATE, a data oblivious filesystem for Intel SGX. The key idea behind OBLIVIATE is in adapting the ORAM protocol to read and write data from a file within an SGX enclave. OBLIVIATE redesigns the conceptual components of ORAM for SGX environments, and it seamlessly supports an SGX program without requiring any changes in the application layer. OBLIVIATE also employs SGX-specific defenses and optimizations in order to ensure complete security with acceptable overhead. The evaluation of the prototype of OBLIVIATE demonstrated its practical effectiveness in running popular server applications such as SQLite and Lighttpd, while also achieving a throughput improvement of 2×-8× over a baseline ORAM-based solution, and less than 2× overhead over an in-memory SGX filesystem.
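The property ORAM provides can be illustrated with the simplest possible baseline: a linear scan that touches every block on every access, so the observable access pattern is independent of which block was requested. OBLIVIATE adapts a real ORAM protocol, which achieves the same property far more efficiently; note also that real oblivious code would replace the data-dependent branch below with a constant-time select:

```python
# Linear-scan oblivious read: the memory trace (all blocks, in order)
# is the same no matter which index is requested.

def oblivious_read(blocks, want):
    result = None
    for i, b in enumerate(blocks):    # access pattern: always every block
        if i == want:                 # simplification; see lead-in note
            result = b
        else:
            _ = b                     # dummy touch
    return result

blocks = [b"aa", b"bb", b"cc"]
print(oblivious_read(blocks, 1))      # b'bb'
```

An adversary watching page faults or cache lines sees the whole array scanned either way, which is exactly the leak-free behavior the SGX side-channel attacks in the paper defeat in non-oblivious filesystems.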
Microarchitectural Minefields: 4K-Aliasing Covert Channel and Multi-Tenant Detection in IaaS Clouds. Dean Sullivan (University of Florida), Orlando Arias (University of Central Florida), Travis Meade (University of Central Florida), and Yier Jin (University of Florida). We introduce a new microarchitectural timing covert channel using the processor memory order buffer (MOB). Specifically, we show how an adversary can infer the state of a spy process on the Intel 64 and IA-32 architectures when predicting dependent loads through the store buffer, called 4K-aliasing. The 4K-aliasing event is a side-effect of memory disambiguation misprediction while handling write-after-read data hazards wherein the lower 12-bits of a load address will falsely match with store addresses resident in the MOB. In this work, we extensively analyze 4K-aliasing and demonstrate a new timing channel measurable across processes when executed as hyperthreads. We then use 4K-aliasing to build a robust covert communication channel on both the Amazon EC2 and Google Compute Engine capable of communicating at speeds of 1.28 Mbps and 1.49 Mbps, respectively. In addition, we show that 4K-aliasing can also be used to reliably detect multi-tenancy.
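The address condition at the heart of the channel is a 12-bit match — the offset within a 4 KB page — which triggers a false load/store dependency prediction; the sender and receiver then turn the resulting misprediction penalty into a timing signal. A minimal predicate for the condition:

```python
# 4K-aliasing condition: the low 12 bits (within-4KB offset) of the
# load and store addresses match, even if the full addresses differ.

def aliases_4k(load_addr, store_addr):
    return (load_addr ^ store_addr) & 0xFFF == 0

print(aliases_4k(0x70001234, 0x70001234))  # True: same address
print(aliases_4k(0x70001234, 0x9999A234))  # True: false match -> the
                                           # MOB takes the slow path
print(aliases_4k(0x70001234, 0x70001235))  # False
```

The second case is the interesting one: the addresses are unrelated, yet the memory disambiguation logic stalls as if they conflicted, and that stall is what a co-resident hyperthread can time.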
Cloud Strife: Mitigating the Security Risks of Domain-Validated Certificates. Kevin Borgolte (UC Santa Barbara), Tobias Fiebig (TU Delft), Shuang Hao (UT Dallas), Christopher Kruegel (UC Santa Barbara), and Giovanni Vigna (UC Santa Barbara). Infrastructure-as-a-Service (IaaS), and more generally the “cloud,” like Amazon Web Services (AWS) or Microsoft Azure, have changed the landscape of system operations on the Internet. Their elasticity allows operators to rapidly allocate and use resources as needed, from virtual machines, to storage, to bandwidth, and even to IP addresses, which is what made them popular and spurred innovation. In this paper, we show that the dynamic component paired with recent developments in trust-based ecosystems (e.g., SSL certificates) creates so far unknown attack vectors. Specifically, we discover a substantial number of stale DNS records that point to available IP addresses in clouds, yet, are still actively attempted to be accessed. Often, these records belong to discontinued services that were previously hosted in the cloud. We demonstrate that it is practical, and time and cost efficient for attackers to allocate IP addresses to which stale DNS records point. Considering the ubiquity of domain validation in trust ecosystems, like SSL certificates, an attacker can impersonate the service using a valid certificate trusted by all major operating systems and browsers. The attacker can then also exploit residual trust in the domain name for phishing, receiving and sending emails, or possibly distribute code to clients that load remote code from the domain (e.g., loading of native code by mobile apps, or JavaScript libraries by websites). Even worse, an aggressive attacker could execute the attack in less than 70 seconds, well below common time-to-live (TTL) for DNS records. 
In turn, it means an attacker could exploit normal service migrations in the cloud to obtain a valid SSL certificate for domains owned and managed by others, and, worse, that she might not actually be bound by DNS records being (temporarily) stale, but that she can exploit caching instead. We introduce a new authentication method for trust-based domain validation that mitigates staleness issues without incurring additional certificate requester effort by incorporating existing trust of a name into the validation process. Furthermore, we provide recommendations for domain name owners and cloud operators to reduce their and their clients’ exposure to DNS staleness issues and the resulting domain takeover attacks.
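The precondition the attack needs can be sketched as a toy scan for DNS records that still point into cloud IP space at addresses nobody currently holds. The record set, prefix, and allocation list below are invented for illustration:

```python
# Toy detector for the stale-DNS precondition of the domain-takeover
# attack. 203.0.113.0/24 is a documentation range standing in for a
# cloud provider's address pool.
CLOUD_PREFIX = "203.0.113."

def stale_records(dns_records, allocated_ips):
    """Names whose A records point into the cloud pool at an address
    no tenant currently holds - re-allocatable by an attacker."""
    return sorted(name for name, ip in dns_records.items()
                  if ip.startswith(CLOUD_PREFIX) and ip not in allocated_ips)

dns_records = {
    "old-api.example.com": "203.0.113.7",   # service was torn down
    "www.example.com": "203.0.113.9",       # still allocated below
    "mail.example.com": "198.51.100.2",     # outside the cloud range
}
allocated = {"203.0.113.9"}
print(stale_records(dns_records, allocated))
# ['old-api.example.com'] -> re-allocating .7 lets an attacker pass
#                            domain validation for this name
```

Once the attacker holds such an IP, domain-validated certificate issuance succeeds because the CA’s check resolves to infrastructure the attacker controls — the core problem the paper’s authentication method addresses.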
Session 6B: Privacy and De-Anonymization
Prateek Mittal
Aviary Ballroom
Consensual and Privacy-Preserving Sharing of Multi-Subject and Interdependent Data. Alexandra-Mihaela Olteanu (EPFL and UNIL-HEC Lausanne), Kevin Huguenin (UNIL-HEC Lausanne), Italo Dacosta (EPFL), and Jean-Pierre Hubaux (EPFL). Individuals share increasing amounts of personal data online. This data often involves–or at least has privacy implications for–data subjects other than the individual who shares it (e.g., photos, genomic data) and the data is shared without their consent. A popular example, with dramatic consequences, is revenge pornography. In this paper, we propose ConsenShare, a system for sharing, in a consensual (wrt the data subjects) and privacy-preserving (wrt both service providers and other individuals) way, data involving subjects other than the uploader. We describe a complete design and implementation of ConsenShare for photos, which relies on image processing and cryptographic techniques, as well as on a two-tier architecture (one entity for detecting the data subjects and contacting them; one entity for hosting the data and for collecting consent). We benchmark the performance (CPU and bandwidth) of ConsenShare by using a dataset of 20k photos from Flickr. We also conduct a survey targeted at Facebook users (N = 321). Our results are quite encouraging: The experimental results demonstrate the feasibility of our approach (i.e., acceptable overheads) and the survey results demonstrate potential interest from the users.
When Coding Style Survives Compilation: De-anonymizing Programmers from Executable Binaries. Aylin Caliskan (Princeton University), Fabian Yamaguchi (Shiftleft Inc), Edwin Dauber (Drexel University), Richard Harang (Sophos), Konrad Rieck (TU Braunschweig), Rachel Greenstadt (Drexel University), and Arvind Narayanan (Princeton University). The ability to identify authors of computer programs based on their coding style is a direct threat to the privacy and anonymity of programmers. While recent work found that source code can be attributed to authors with high accuracy, attribution of executable binaries appears to be much more difficult. Many distinguishing features present in source code, e.g. variable names, are removed in the compilation process, and compiler optimization may alter the structure of a program, further obscuring features that are known to be useful in determining authorship. We examine programmer de-anonymization from the standpoint of machine learning, using a novel set of features that include ones obtained by decompiling the executable binary to source code. We adapt a powerful set of techniques from the domain of source code authorship attribution along with stylistic representations embedded in assembly, resulting in successful de-anonymization of a large set of programmers. We evaluate our approach on data from the Google Code Jam, obtaining attribution accuracy of up to 96% with 100 candidate programmers and 83% with 600. We present the first executable binary authorship attribution approach that is robust to basic obfuscations, a range of compiler optimization settings, and binaries that have been stripped of their symbol tables. We perform programmer de-anonymization using both obfuscated binaries, and real-world code found “in the wild” in single-author GitHub repositories and the recently leaked Nulled.IO hacker forum.
We show that programmers who would like to remain anonymous need to take extreme countermeasures to protect their privacy.
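One family of stylistic features — token n-grams over (decompiled) code — can be sketched as follows. The real pipeline feeds such features to a trained classifier; the shared-count score and code snippets here are toy stand-ins:

```python
# Toy authorship features: bigrams of code tokens, compared by a
# crude shared-count score (the paper uses a trained classifier).
from collections import Counter

def ngram_features(code, n=2):
    tokens = code.replace("(", " ( ").replace(")", " ) ").split()
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def similarity(a, b):
    """Number of n-grams two samples share (multiset intersection)."""
    return sum((a & b).values())

alice = ngram_features("for ( i = 0 ; i < n ; ++ i )")
bob = ngram_features("while ( p != NULL ) p = p -> next")
unknown = ngram_features("for ( j = 0 ; j < m ; ++ j )")
print(similarity(unknown, alice) > similarity(unknown, bob))  # True
```

Even with renamed variables (`i` vs `j`), the unknown sample shares loop-idiom bigrams with Alice and none with Bob — a miniature of why style survives compilation and decompilation.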
De-anonymization of Mobility Trajectories: Dissecting the Gaps between Theory and Practice. Huandong Wang (Tsinghua University), Chen Gao (Tsinghua University), Yong Li (Tsinghua University), Gang Wang (Virginia Tech), Depeng Jin (Tsinghua University), and Jingbo Sun (China Telecom Beijing Research Institute). Human mobility trajectories are increasingly collected by ISPs to assist academic research and commercial applications. Meanwhile, there is a growing concern that individual trajectories can be de-anonymized when the data is shared, using information from external sources (e.g. online social networks). To understand this risk, prior works either estimate the theoretical privacy bound or simulate de-anonymization attacks on synthetically created (small) datasets. However, it is not clear how well the theoretical estimations are preserved in practice. In this paper, we collected a large-scale ground-truth trajectory dataset from 2,161,500 users of a cellular network, and two matched external trajectory datasets from a large social network (56,683 users) and a check-in/review service (45,790 users) on the same user population. The two sets of large ground-truth data provide a rare opportunity to extensively evaluate a variety of de-anonymization algorithms (7 in total). We find that their performance in the real-world dataset is far from the theoretical bound. Further analysis shows that most algorithms have underestimated the impact of spatio-temporal mismatches between the data from different sources, and the high sparsity of user generated data also contributes to the underperformance. Based on these insights, we propose 4 new algorithms that are specially designed to tolerate spatial or temporal mismatches (or both) and model user behavior. Extensive evaluations show that our algorithms achieve more than 17% performance gain over the best existing algorithms, confirming our insights.
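The paper’s key fix — tolerating spatio-temporal mismatch between datasets — can be sketched with a matcher that accepts same-location points within a time-slot tolerance. The trajectory data below is invented:

```python
# Toy trajectory matcher with a temporal tolerance, reflecting the
# insight that exact spatio-temporal matching underperforms on real
# multi-source data.

def match_score(traj_a, traj_b, slot_tolerance=1):
    """Count points of traj_a that have a same-location point in
    traj_b within +/- slot_tolerance time slots. Points are (t, loc)."""
    return sum(1 for t, loc in traj_a
               if any(abs(t - t2) <= slot_tolerance and loc == loc2
                      for t2, loc2 in traj_b))

isp = [(1, "cellA"), (5, "cellB"), (9, "cellC")]
checkin = [(2, "cellA"), (6, "cellB")]           # slightly time-shifted
print(match_score(isp, checkin, slot_tolerance=0))  # 0: exact match fails
print(match_score(isp, checkin, slot_tolerance=1))  # 2: tolerant match
```

The same shift that zeroes out the exact matcher is absorbed by a one-slot tolerance — the kind of gap between theory and practice the paper quantifies at scale.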
Veil: Private Browsing Semantics Without Browser-side Assistance. Frank Wang (MIT CSAIL), James Mickens (Harvard University), and Nickolai Zeldovich (MIT CSAIL). All popular web browsers offer a “private browsing mode.” After a private session terminates, the browser is supposed to remove client-side evidence that the session occurred. Unfortunately, browsers still leak information through the file system, the browser cache, the DNS cache, and on-disk reflections of RAM such as the swap file. Veil is a new deployment framework that allows web developers to prevent these information leaks, or at least reduce their likelihood. Veil leverages the fact that, even though developers do not control the client-side browser implementation, developers do control 1) the content that is sent to those browsers, and 2) the servers which deliver that content. Veil web sites collectively store their content on Veil’s blinding servers instead of on individual, site-specific servers. To publish a new page, developers pass their HTML, CSS, and JavaScript files to Veil’s compiler; the compiler transforms the URLs in the content so that, when the page loads on a user’s browser, URLs are derived from a secret user key. The blinding service and the Veil page exchange encrypted data that is also protected by the user’s key. The result is that Veil pages can safely store encrypted content in the browser cache; furthermore, the URLs exposed to system interfaces like the DNS cache are unintelligible to attackers who do not possess the user’s key. To protect against post-session inspection of swap file artifacts, Veil uses heap walking (which minimizes the likelihood that secret data is paged out), content mutation (which garbles in-memory artifacts if they do get swapped out), and DOM hiding (which prevents the browser from learning site-specific HTML, CSS, and JavaScript content in the first place). 
Veil pages load on unmodified commodity browsers, allowing developers to provide stronger semantics for private browsing without forcing users to install or reconfigure their machines. Veil provides these guarantees even if the user does not visit a page using a browser’s native privacy mode; indeed, Veil’s protections are stronger than what the browser alone can provide.
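The key-derived URL transformation that Veil's compiler performs can be approximated as follows. This is a sketch under stated assumptions: an HMAC-based derivation and the hypothetical host `blinding.example` stand in for Veil's actual construction, which may differ.

```python
import hmac
import hashlib

BLINDING_HOST = "https://blinding.example"  # hypothetical blinding server

def blind_url(user_key: bytes, url: str) -> str:
    """Derive an opaque, per-user name for a page resource; without the
    user's secret key, the blinded URL reveals nothing about `url`."""
    tag = hmac.new(user_key, url.encode(), hashlib.sha256).hexdigest()
    return f"{BLINDING_HOST}/{tag}"
```

Because the tag is keyed, the same page yields different cache and DNS artifacts for different users, and an attacker inspecting those artifacts without the key learns only unintelligible names.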
Session 7A: Web Security
Adam Doupé
Kon Tiki Ballroom
Game of Missuggestions: Semantic Analysis of Search-Autocomplete Manipulations. Peng Wang (Indiana University Bloomington), Xianghang Mi (Indiana University Bloomington), Xiaojing Liao (William and Mary), XiaoFeng Wang (Indiana University Bloomington), Kan Yuan (Indiana University Bloomington), Feng Qian (Indiana University Bloomington), and Raheem Beyah (Georgia Institute of Technology). As a new type of blackhat Search Engine Optimization (SEO), autocomplete manipulations are increasingly utilized by miscreants and promotion companies alike to advertise desired suggestion terms when related trigger terms are entered by the user into a search engine. Like other illicit SEO, such activities game the search engine, mislead the querier, and in some cases, spread harmful content. However, little has been done to understand this new threat, in terms of its scope, impact and techniques, not to mention any serious effort to detect such manipulated terms on a large scale. Systematic analysis of autocomplete manipulation is challenging, due to the scale of the problem (tens or even hundreds of millions of suggestion terms and their search results) and the heavy burden it puts on the search engines. In this paper, we report the first technique that addresses these challenges, making a step toward better understanding and ultimately eliminating this new threat. Our technique, called Sacabuche, takes a semantics-based, two-step approach to minimize its performance impact: it utilizes Natural Language Processing (NLP) to analyze a large number of trigger and suggestion combinations, without querying search engines, to filter out the vast majority of legitimate suggestion terms; only a small set of suspicious suggestions are run against the search engines to get query results for identifying truly abused terms. 
This approach achieves 96.23% precision and 95.63% recall, and its scalability enables us to perform a measurement study on 114 million suggestion terms, an unprecedented scale for studies of this type. The findings of the study bring to light the magnitude of the threat (0.48% of the Google suggestion terms we collected were manipulated), and its significant security implications never reported before (e.g., exceedingly long lifetime of campaigns, sophisticated techniques and channels for spreading malware and phishing content).
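The two-step pipeline the abstract describes can be sketched as below. This is an illustration only: simple token overlap stands in for the paper's NLP similarity model, and `search_results` is a hypothetical callback representing the (expensive) search-engine query reserved for step 2.

```python
def nlp_prefilter(trigger: str, suggestion: str, min_overlap: float = 0.2) -> bool:
    """Step 1: cheap check with no search-engine queries. Returns True if
    the suggestion looks semantically unrelated to its trigger and should
    be passed to step 2."""
    t = set(trigger.lower().split())
    s = set(suggestion.lower().split())
    return len(t & s) / max(len(s), 1) < min_overlap

def confirm_abuse(suggestion, search_results, bad_domains) -> bool:
    """Step 2: query the engine only for suspicious suggestions and flag
    those whose results lead to known-bad domains."""
    return any(d in bad_domains for d in search_results(suggestion))
```

The point of the split is cost: step 1 discards the vast majority of legitimate terms offline, so only a small residue ever generates search-engine traffic.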
SYNODE: Understanding and Automatically Preventing Injection Attacks on NODE.JS. Cristian-Alexandru Staicu (TU Darmstadt), Michael Pradel (TU Darmstadt), and Benjamin Livshits (Imperial College London). The Node.js ecosystem has led to the creation of many modern applications, such as server-side web applications and desktop applications. Unlike client-side JavaScript code, Node.js applications can interact freely with the operating system without the benefits of a security sandbox. As a result, command injection attacks can cause significant harm, which is compounded by the fact that independently developed Node.js modules interact in uncontrolled ways. This paper presents a large-scale study across 235,850 Node.js modules to explore injection vulnerabilities. We show that injection vulnerabilities are prevalent in practice, both due to eval, which was previously studied for browser code, and due to the powerful exec API introduced in Node.js. Our study suggests that thousands of modules may be vulnerable to command injection attacks and that fixing them takes a long time, even for popular projects. Motivated by these findings, we present Synode, an automatic mitigation technique that combines static analysis and runtime enforcement of security policies to use vulnerable modules in a safe way. The key idea is to statically compute a template of values passed to APIs that are prone to injections, and to synthesize a grammar-based runtime policy from these templates. Our mechanism is easy to deploy: it does not require any modification of the Node.js platform, it is fast (sub-millisecond runtime overhead), and it protects against attacks on vulnerable modules, while inducing very few false positives (less than 10%).
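The template-plus-runtime-policy idea can be sketched roughly as follows. This is a Python stand-in for illustration only: the real system synthesizes grammar-based policies for Node.js APIs, and the template syntax (`{}` holes) and metacharacter set here are assumptions.

```python
import re

# Characters that would let injected input escape into new shell commands.
SHELL_META = re.compile(r"[;&|`$><]")

def make_policy(template: str):
    """Compile a statically derived template such as 'tar -xzf {} -C {}'
    into a runtime checker: the command must match the template's fixed
    parts, and the variable parts must be free of shell metacharacters."""
    pattern = re.escape(template).replace(r"\{\}", "(.*?)")
    rx = re.compile("^" + pattern + "$", re.S)
    def check(command: str) -> bool:
        m = rx.match(command)
        return bool(m) and not any(SHELL_META.search(g) for g in m.groups())
    return check
```

The static analysis fixes the command's skeleton ahead of time, so at runtime only the attacker-reachable holes need to be validated, which keeps enforcement cheap.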
JavaScript Zero: Real JavaScript and Zero Side-Channel Attacks. Michael Schwarz (Graz University of Technology), Moritz Lipp (Graz University of Technology), and Daniel Gruss (Graz University of Technology). Modern web browsers are ubiquitously used by billions of users, connecting them to the world wide web. At the same time, web browsers not only provide a unified interface for businesses to reach customers, but also a unified interface for malicious actors to reach users. The highly optimized scripting language JavaScript plays an important role in the modern web, as well as in browser-based attacks. These attacks include microarchitectural attacks, which exploit the design of the underlying hardware. In contrast to software bugs, there is often no easy fix for microarchitectural attacks. We propose JavaScript Zero, a highly practical and generic fine-grained permission model in JavaScript to reduce the attack surface in modern browsers. JavaScript Zero leverages advanced features of the JavaScript language to dynamically deflect usage of dangerous JavaScript features. To implement JavaScript Zero in practice, we overcame a series of challenges to protect potentially dangerous features, guarantee the completeness of our solution, and provide full compatibility with all websites. We demonstrate that our proof-of-concept browser extension Chrome Zero protects against 11 unfixed state-of-the-art microarchitectural and side-channel attacks. As a side effect, Chrome Zero also protects against 50% of the JavaScript 0-day exploits published since Chrome 49. Chrome Zero has a performance overhead of 1.82% on average. In a user study, we found that for 24 websites in the Alexa Top 25, users could not reliably distinguish browsers with and without Chrome Zero, showing that Chrome Zero has no perceivable effect on most websites. 
Hence, JavaScript Zero is a practical solution to mitigate JavaScript-based state-of-the-art microarchitectural and side-channel attacks.
Riding out DOMsday: Towards Detecting and Preventing DOM Cross-Site Scripting. William Melicher (Carnegie Mellon University), Anupam Das (Carnegie Mellon University), Mahmood Sharif (Carnegie Mellon University), Lujo Bauer (Carnegie Mellon University), and Limin Jia (Carnegie Mellon University). Cross-site scripting (XSS) vulnerabilities are the most frequently reported web application vulnerability. As complex JavaScript applications become more widespread, DOM (Document Object Model) XSS vulnerabilities—a type of XSS vulnerability where the vulnerability is located in client-side JavaScript, rather than server-side code—are becoming more common. As the first contribution of this work, we empirically assess the impact of DOM XSS on the web using a browser with taint tracking embedded in the JavaScript engine. Building on the methodology used in a previous study that crawled popular websites, we collect a current dataset of potential DOM XSS vulnerabilities. We improve on the methodology for confirming XSS vulnerabilities, and using this improved methodology, we find 83% more vulnerabilities than the previous methodology applied to the same dataset. As a second contribution, we identify the causes of and discuss how to prevent DOM XSS vulnerabilities. One example of our findings is that custom HTML templating designs—a design pattern that could prevent DOM XSS vulnerabilities, analogous to parameterized SQL—can be buggy in practice, allowing DOM XSS attacks. As our third contribution, we evaluate the error rates of three static-analysis tools to detect DOM XSS vulnerabilities found with dynamic analysis techniques using in-the-wild examples. We find that static-analysis tools miss 90% of the bugs found by our dynamic analysis, though some tools can have very few false positives and at the same time find vulnerabilities not found using the dynamic analysis.
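Engine-level taint tracking, as used in the measurement above, can be illustrated with a toy model. This sketch is not the paper's instrumented browser: a tainted-string class propagates a taint bit through concatenation, and a hypothetical `sink_innerHTML` plays the role of a DOM sink.

```python
class Tainted(str):
    """A string carrying attacker-controlled data; the taint survives
    concatenation with untainted strings on either side."""
    def __add__(self, other):
        return Tainted(str(self) + str(other))
    def __radd__(self, other):
        return Tainted(str(other) + str(self))

def sink_innerHTML(value):
    """A DOM sink: flag a potential DOM XSS if tainted data reaches it."""
    if isinstance(value, Tainted):
        raise ValueError("potential DOM XSS: tainted data reached innerHTML")
    return value
```

In a real engine the taint source would be `location.hash`, `document.referrer`, and similar attacker-influenced values; flows that reach a sink without sanitization are the candidate vulnerabilities the study confirms.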
Session 7B: Audit Logs
Adam Bates
Aviary Ballroom
Towards Scalable Cluster Auditing through Grammatical Inference over Provenance Graphs. Wajih Ul Hassan (University of Illinois at Urbana-Champaign), Mark Lemay (Boston University), Nuraini Aguse (University of Illinois at Urbana-Champaign), Adam Bates (University of Illinois at Urbana-Champaign), and Thomas Moyer (UNC at Charlotte). Investigating the nature of system intrusions in large distributed systems remains a notoriously difficult challenge. While monitoring tools (e.g., firewalls, IDS) provide preliminary alerts through easy-to-use administrative interfaces, attack reconstruction still requires that administrators sift through gigabytes of system audit logs stored locally on hundreds of machines. At present, two fundamental obstacles prevent synergy between system-layer auditing and modern cluster monitoring tools: 1) the sheer volume of audit data generated in a data center is prohibitively costly to transmit to a central node, and 2) system-layer auditing poses a “needle-in-a-haystack” problem, such that hundreds of employee hours may be required to diagnose a single intrusion. This paper presents Winnower, a scalable system for audit-based cluster monitoring that addresses these challenges. Our key insight is that, for tasks that are replicated across nodes in a distributed application, a model can be defined over audit logs to succinctly summarize the behavior of many nodes, thus eliminating the need to transmit redundant audit records to a central monitoring node. Specifically, Winnower parses audit records into provenance graphs that describe the actions of individual nodes, then performs grammatical inference over individual graphs using a novel adaptation of Deterministic Finite Automata (DFA) Learning to produce a behavioral model of many nodes at once. This provenance model can be efficiently transmitted to a central node and used to identify anomalous events in the cluster. 
We have implemented Winnower for Docker Swarm container clusters and evaluate our system against real-world applications and attacks. We show that Winnower dramatically reduces storage and network overhead associated with aggregating system audit logs, by as much as 98%, without sacrificing the important information needed for attack investigation. Winnower thus represents a significant step forward for security monitoring in distributed systems.
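The "summarize many replicated nodes, flag what the model cannot explain" idea can be sketched with a deliberately crude stand-in for Winnower's DFA learning: a model that records the set of event transitions observed across nodes.

```python
def learn_model(node_logs):
    """Summarize many nodes' audit logs as the set of observed event
    transitions (bigrams); a crude stand-in for grammatical inference
    over provenance graphs."""
    model = set()
    for log in node_logs:
        model.update(zip(log, log[1:]))
    return model

def anomalies(model, log):
    """Return the transitions in one node's log not explained by the
    cluster-wide model; only these need central attention."""
    return [pair for pair in zip(log, log[1:]) if pair not in model]
```

Because replicated tasks behave alike, the model stays small no matter how many nodes contribute, which is what makes shipping it (rather than raw logs) to a central monitor cheap.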
MCI: Modeling-based Causality Inference in Audit Logging for Attack Investigation. Yonghwi Kwon (Purdue University), Fei Wang (Purdue University), Weihang Wang (Purdue University), Kyu Hyung Lee (University of Georgia), Wen-Chuan Lee (Purdue University), Shiqing Ma (Purdue University), Xiangyu Zhang (Purdue University), Dongyan Xu (Purdue University), Somesh Jha (University of Wisconsin-Madison), Gabriela Ciocarlie (SRI International), Ashish Gehani (SRI International), and Vinod Yegneswaran (SRI International). In this paper, we develop a model-based causality inference technique for audit logging that does not require any application instrumentation or kernel modification. It leverages a recent dynamic analysis, dual execution (LDX), that can infer precise causality between system calls but unfortunately roughly doubles resource consumption (e.g., CPU time and memory). For each application, we use LDX to acquire precise causal models for a set of primitive operations. Each model is a sequence of system calls that have inter-dependences, some of them caused by memory operations and hence implicit at the system call level. These models are described by a language that supports various complexity classes: regular, context-free, and even context-sensitive. In production runs, a novel parser is deployed to parse audit logs (without any enhancement) into model instances and hence derive causality. Our evaluation on a set of real-world programs shows that the technique is highly effective. The generated models can recover causality with 0% false positives (FP) and false negatives (FN) for most programs and only 8.3% FP and 5.2% FN in the worst cases. The models also feature excellent composability, meaning that models derived from primitive operations can be composed together to describe causality for large and complex real-world missions. 
Applying our technique to attack investigation shows that the system-wide attack causal graphs are highly precise and concise, having better quality than the state-of-the-art.
Towards a Timely Causality Analysis for Enterprise Security. Yushan Liu (Princeton University), Mu Zhang (Cornell University), Ding Li (NEC Labs America), Kangkook Jee (NEC Labs America), Zhichun Li (NEC Labs America), Zhenyu Wu (NEC Labs America), Junghwan Rhee (NEC Labs America), and Prateek Mittal (Princeton University). The increasingly sophisticated Advanced Persistent Threat (APT) attacks have become a serious challenge for enterprise IT security. Attack causality analysis, which tracks multi-hop causal relationships between files and processes to diagnose attack provenances and consequences, is the first step towards understanding APT attacks and taking appropriate responses. Since attack causality analysis is a time-critical mission, it is essential to design causality tracking systems that extract useful attack information in a timely manner. However, prior work is limited in serving this need. Existing approaches have largely focused on pruning causal dependencies totally irrelevant to the attack, but fail to differentiate and prioritize abnormal events from numerous relevant, yet benign and complicated system operations, resulting in long investigation time and slow responses. To address this problem, we propose PRIOTRACKER, a backward and forward causality tracker that automatically prioritizes the investigation of abnormal causal dependencies in the tracking process. Specifically, to assess the priority of a system event, we consider its rareness and topological features in the causality graph. To distinguish unusual operations from normal system events, we quantify the rareness of each event by developing a reference model which records common routine activities in corporate computer systems. We implement PRIOTRACKER in 20K lines of Java code, and a reference model builder in 10K lines of Java code. 
We evaluate our tool by deploying both systems in a real enterprise IT environment, where we collect 1 TB of data covering 2.5 billion OS events from 150 machines in one week. Experimental results show that PRIOTRACKER can capture attack traces that are missed by existing trackers and reduce the analysis time by up to two orders of magnitude.
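The rareness-plus-topology prioritization can be sketched as a priority queue over candidate causal dependencies. This is an illustrative simplification, not PRIOTRACKER's actual scoring function: the combination rule and the `fanout` callback (approximating an event's topological weight) are assumptions.

```python
import heapq
from collections import Counter

def rareness(event_type, reference: Counter) -> float:
    """Events that the reference model of routine activity has rarely
    seen get higher scores."""
    return 1.0 / (1 + reference[event_type])

def prioritized_tracking(events, reference, fanout):
    """Visit causal dependencies rarest-first; events with huge fan-out
    (e.g., touched by everything) are deprioritized."""
    heap = [(-rareness(e, reference) / (1 + fanout(e)), i, e)
            for i, e in enumerate(events)]
    heapq.heapify(heap)
    order = []
    while heap:
        _, _, e = heapq.heappop(heap)
        order.append(e)
    return order
```

The effect is that an analyst (or an automated tracker with a time budget) reaches the abnormal edges of the causality graph long before exhausting the benign bulk.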
JSgraph: Enabling Reconstruction of Web Attacks via Efficient Tracking of Live In-Browser JavaScript Executions. Bo Li (University of Georgia), Phani Vadrevu (University of Georgia), Kyu Hyung Lee (University of Georgia), and Roberto Perdisci (University of Georgia). In this paper, we propose JSgraph, a forensic engine that is able to efficiently record fine-grained details pertaining to the execution of JavaScript (JS) programs within the browser, with particular focus on JS-driven DOM modifications. JSgraph’s main goal is to enable a detailed, post-mortem reconstruction of ephemeral JS-based web attacks experienced by real network users. In particular, we aim to enable the reconstruction of social engineering attacks that result in the download of malicious executable files or browser extensions, among other attacks. We implement JSgraph by instrumenting Chromium’s code base at the interface between Blink and V8, the rendering and JavaScript engines. We design JSgraph to be lightweight, highly portable, and to require low storage capacity for its fine-grained audit logs. Using a variety of both in-the-wild and lab-reproduced web attacks, we demonstrate how JSgraph can aid the forensic investigation process. We then show that JSgraph introduces acceptable overhead, with a median overhead on popular website page loads between 3.2% and 3.9%.
Session 8: Android
Brendan Saltaformaggio
Kon Tiki Ballroom
AceDroid: Normalizing Diverse Android Access Control Checks for Inconsistency Detection. Yousra Aafer (Purdue University), Jianjun Huang (Purdue University), Yi Sun (Purdue University), Xiangyu Zhang (Purdue University), Ninghui Li (Purdue University), and Chen Tian (Futurewei Technologies). The Android framework has raised increased security concerns with regard to its access control enforcement. Particularly, existing research efforts successfully demonstrate that framework security checks are not always consistent across app-accessible APIs. However, existing efforts fall short in addressing peculiarities that characterize the complex Android access control and the diversity introduced by the heavy vendor customization. In this paper, we develop a new analysis framework, AceDroid, that models Android access control in a path-sensitive manner and normalizes diverse checks to a canonical form. We applied our proposed modeling to perform inconsistency analysis for 12 images. Our tool proved to be quite effective, enabling us to detect a significant number of inconsistencies introduced by various vendors and to suppress substantial false alarms. Through investigating the results, we uncovered high-impact attacks that enable writing a keylogger, sending premium SMS messages, bypassing user restrictions, performing major denial-of-service attacks, and other critical operations.
InstaGuard: Instantly Deployable Hot-patches for Vulnerable System Programs on Android. Yaohui Chen (Northeastern University), Yuping Li (University of South Florida), Long Lu (Northeastern University), Yueh-Hsun Lin (JD Research Center), Hayawardh Vijayakumar (Samsung Research America), Zhi Wang (Florida State University), and Xinming Ou (University of South Florida). Hot-patches, easier to develop and faster to deploy than permanent patches, are used to timely (and temporarily) block exploits of newly discovered vulnerabilities while permanent patches are being developed and tested. Researchers recently proposed to apply hot-patching techniques to system programs on Android as a quick mitigation against critical vulnerabilities. However, existing hot-patching techniques, though widely used in conventional computers, are rarely adopted by Android OS or device vendors in reality. Our study uncovers a major hurdle that prevents existing hot-patching methods from being effective on mobile devices: after being developed, hot-patches for mobile devices have to go through lengthy compatibility tests that Android device partners impose on all system code updates. This testing and release process can take months, and therefore, erase the key benefit of hot-patches (i.e., quickly deployable). We propose InstaGuard, a new approach to hot-patching for mobile devices that allows for instant deployment of patches (i.e., “carrier-passthrough”) and fast patch development for device vendors. Unlike existing hot-patching techniques, InstaGuard avoids injecting new code to programs being patched. Instead, it enforces instantly updatable rules that contain no code (i.e., no carrier test is needed) to block exploits of unpatched vulnerabilities in a timely fashion. When designing InstaGuard, we overcame two major challenges that previous hot-patching methods did not face. 
First, since no code addition is allowed, InstaGuard needs a rule language that is expressive enough to mitigate various kinds of vulnerabilities and efficient to enforce on mobile devices. Second, rule generation cannot require special skills or much effort from human users. We designed a new language for hot-patches and an enforcement mechanism based on the basic debugging primitives supported by ARM CPUs. We also built RuleMaker, a tool for automatically generating rules for InstaGuard based on high-level, easy-to-write vulnerability descriptions. We have implemented InstaGuard on Google Nexus 5X phones. To demonstrate the coverage of InstaGuard, we show that InstaGuard can handle all critical CVEs from Android Security Bulletins reported in 2016. We also conduct unit tests using critical vulnerabilities from 4 different categories. On average, InstaGuard increases program memory footprint by 1.69% and slows down program execution by 2.70%, which are unnoticeable to device users in practice.
BreakApp: Automated, Flexible Application Compartmentalization. Nikos Vasilakis (University of Pennsylvania), Ben Karel (University of Pennsylvania), Nick Roessler (University of Pennsylvania), Nathan Dautenhahn (University of Pennsylvania), Andre DeHon (University of Pennsylvania), and Jonathan M. Smith (University of Pennsylvania). Developers of large-scale software systems may use third-party modules to reduce costs and accelerate release cycles, at some risk to safety and security. BREAKAPP exploits module boundaries to automate compartmentalization of systems and enforce security policies, enhancing reliability and security. BREAKAPP transparently spawns modules in protected compartments while preserving their original behavior. Optional high-level policies decouple security assumptions made during development from requirements imposed for module composition and use. These policies allow fine-tuning trade-offs such as security and performance based on changing threat models or load patterns. Evaluation of BREAKAPP with a prototype implementation for JavaScript demonstrates feasibility by enabling simplified security hardening of existing systems with low performance overhead.
Resolving the Predicament of Android Custom Permissions. Guliz Seray Tuncay (University of Illinois at Urbana-Champaign), Soteris Demetriou (University of Illinois at Urbana-Champaign), Karan Ganju (University of Illinois at Urbana-Champaign), and Carl A. Gunter (University of Illinois at Urbana-Champaign). Android leverages a set of system permissions to protect platform resources. At the same time, it allows untrusted third-party applications to declare their own custom permissions to regulate access to app components. However, Android treats custom permissions the same way as system permissions even though they are declared by entities of different trust levels. In this work, we describe two new classes of vulnerabilities that arise from the ‘predicament’ created by mixing system and custom permissions in Android. These have been acknowledged as serious security flaws by Google and we demonstrate how they can be exploited in practice to gain unauthorized access to platform resources and to compromise popular Android apps. To address the shortcomings of the system, we propose a new modular design called Cusper for the Android permission model. Cusper separates the management of system and custom permissions and introduces a backward-compatible naming convention for custom permissions to prevent custom permission spoofing. We validate the correctness of Cusper by 1) introducing the first formal model of Android runtime permissions, 2) extending it to describe Cusper, and 3) formally showing that key security properties that can be violated in the current permission model are always satisfied in Cusper. To demonstrate Cusper’s practicality, we implemented it in the Android platform and showed that it is both effective and efficient.
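The idea of a spoofing-resistant naming convention for custom permissions can be illustrated with a toy derivation. This is a sketch of the general technique only, not Cusper's actual scheme: binding a permission name to its declaring package and signing certificate is an assumed construction for illustration.

```python
import hashlib

def canonical_permission(pkg: str, signing_cert: bytes, name: str) -> str:
    """Bind a custom permission name to the declaring app's package and
    signing certificate, so that a different app (different certificate)
    cannot declare a colliding permission and spoof it."""
    fp = hashlib.sha256(signing_cert).hexdigest()[:16]
    return f"{pkg}.{fp}.{name}"
```

Under such a convention, two apps claiming the same human-readable permission name still yield distinct canonical identifiers, so the system can tell the legitimate declarer from an impostor.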
Session 9: Blockchain and Smart Contracts
Aziz Mohaisen
Kon Tiki Ballroom
ZEUS: Analyzing Safety of Smart Contracts. Sukrit Kalra (IBM Research), Seep Goel (IBM Research), Mohan Dhawan (IBM Research), and Subodh Sharma (IIT Delhi). A smart contract is hard to patch for bugs once it is deployed, irrespective of the money it holds. A recent bug caused losses worth around $50 million of cryptocurrency. We present ZEUS—a framework to verify the correctness and validate the fairness of smart contracts. We consider correctness as adherence to safe programming practices, while fairness is adherence to agreed-upon higher-level business logic. ZEUS leverages both abstract interpretation and symbolic model checking, along with the power of constrained Horn clauses, to quickly verify contracts for safety. We have built a prototype of ZEUS for the Ethereum and Fabric blockchain platforms, and evaluated it with over 22.4K smart contracts. Our evaluation indicates that about 94.6% of contracts (containing cryptocurrency worth more than $0.5 billion) are vulnerable. ZEUS is sound with zero false negatives and has a low false positive rate, with an order of magnitude improvement in analysis time as compared to prior art.
Chainspace: A Sharded Smart Contracts Platform. Mustafa Al-Bassam (University College London), Alberto Sonnino (University College London), Shehar Bano (University College London), Dave Hrycyszyn (constructiveproof.com), and George Danezis (University College London). Chainspace is a decentralized infrastructure, known as a distributed ledger, that supports user-defined smart contracts and executes user-supplied transactions on their objects. The correct execution of smart contract transactions is verifiable by all. The system is scalable, by sharding state and the execution of transactions, and using S-BAC, a distributed commit protocol, to guarantee consistency. Chainspace is secure against subsets of nodes trying to compromise its integrity or availability properties through Byzantine Fault Tolerance (BFT), and provides extremely high auditability and non-repudiation through ‘blockchain’ techniques. Even when BFT fails, auditing mechanisms are in place to trace malicious participants. We present the design, rationale, and details of Chainspace; we argue for its scaling and other properties by evaluating an implementation of the system; and we illustrate a number of privacy-friendly smart contracts for smart metering, polling and banking and measure their performance.
Settling Payments Fast and Private: Efficient Decentralized Routing for Path-Based Transactions. Stefanie Roos (University of Waterloo), Pedro Moreno-Sanchez (Purdue University), Aniket Kate (Purdue University), and Ian Goldberg (University of Waterloo). Decentralized path-based transaction (PBT) networks maintain local payment channels between participants. Pairs of users leverage these channels to settle payments via a path of intermediaries without the need to record all transactions in a global blockchain. PBT networks such as Bitcoin’s Lightning Network and Ethereum’s Raiden Network are the most prominent examples of this emergent area of research. Both networks overcome scalability issues of widely used cryptocurrencies by replacing expensive and slow on-chain blockchain operations with inexpensive and fast off-chain transfers. At the core of a decentralized PBT network is a routing algorithm that discovers transaction paths between sender and receiver. In recent years, a number of routing algorithms have been proposed, including landmark routing, utilized in the decentralized IOU credit network SilentWhispers, and Flare, a link state algorithm for the Lightning Network. However, the existing efforts lack either efficiency or privacy, as well as the comprehensive analysis that is indispensable to ensure the success of PBT networks in practice. In this work, we first identify several efficiency concerns in existing routing algorithms for decentralized PBT networks. Armed with this knowledge, we design and evaluate SpeedyMurmurs, a novel routing algorithm for decentralized PBT networks using efficient and flexible embedding-based path discovery and on-demand efficient stabilization to handle the dynamics of a PBT network. 
Our simulation study, based on real-world data from the currently deployed Ripple credit network, indicates that SpeedyMurmurs reduces the overhead of stabilization by up to two orders of magnitude and the overhead of routing a transaction by more than a factor of two. Furthermore, using SpeedyMurmurs maintains at least the same success ratio as decentralized landmark routing, while providing lower delays. Finally, SpeedyMurmurs achieves key privacy goals for routing in decentralized PBT networks.
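Embedding-based path discovery of the kind SpeedyMurmurs uses can be illustrated with greedy routing over tree (prefix) coordinates. This is a generic sketch of the embedding idea, not the paper's algorithm: the coordinate scheme and distance function are illustrative assumptions.

```python
def common_prefix(a, b) -> int:
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def embedding_distance(a, b) -> int:
    """Tree distance between two prefix coordinates: hops up to the
    common ancestor plus hops back down."""
    cp = common_prefix(a, b)
    return (len(a) - cp) + (len(b) - cp)

def next_hop(coord, neighbors, dest):
    """Greedy embedding-based forwarding: pick the payment-channel
    neighbor whose coordinate is closest to the destination; return
    None if no neighbor improves on our own distance."""
    best = min(neighbors, key=lambda n: embedding_distance(neighbors[n], dest))
    if embedding_distance(neighbors[best], dest) < embedding_distance(coord, dest):
        return best
    return None
```

Because each node decides locally from coordinates alone, no global view of the channel network is needed, which is the efficiency (and privacy) advantage over link-state style routing.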
TLS-N: Non-repudiation over TLS Enabling Ubiquitous Content Signing. Hubert Ritzdorf (ETH Zurich), Karl Wust (ETH Zurich), Arthur Gervais (Imperial College London), Guillaume Felley (ETH Zurich), and Srdjan Capkun (ETH Zurich). An internet user wanting to share observed content is typically restricted to primitive techniques such as screenshots, web caches or share button-like solutions. These purported proofs, however, are either trivial to falsify or require trust in centralized entities (e.g., search engine caches). This motivates the need for a seamless and standardized internet-wide non-repudiation mechanism, allowing users to share data from news sources, social websites or financial data feeds in a provably secure manner. Additionally, blockchain oracles that enable data-rich smart contracts typically rely on a trusted third party (e.g., TLSNotary or Intel SGX). A decentralized method to transfer web-based content into a permissionless blockchain without an additional trusted third party would allow smart contract applications to flourish. In this work, we present TLS-N, the first TLS extension that provides secure non-repudiation and solves both of the mentioned challenges. TLS-N generates non-interactive proofs about the content of a TLS session that can be efficiently verified by third parties and blockchain-based smart contracts. As such, TLS-N increases the accountability for content provided on the web and enables a practical and decentralized blockchain oracle for web content. TLS-N is compatible with TLS 1.3 and adds a minor overhead to a typical TLS session. When a proof is generated, parts of the TLS session (e.g., passwords, cookies) can be hidden for privacy reasons, while the remaining content can be verified.
Session 10: Social Networks and Anonymity
Aniket Kate
Kon Tiki Ballroom
Investigating Ad Transparency Mechanisms in Social Media: A Case Study of Facebook's Explanations. Athanasios Andreou (EURECOM), Giridhari Venkatadri (Northeastern University), Oana Goga (Univ. Grenoble Alpes, CNRS, Inria, Grenoble INP, and LIG), Krishna P. Gummadi (Max Planck Institute for Software Systems), Patrick Loiseau (Univ. Grenoble Alpes, CNRS, Inria, Grenoble INP, LIG, and Max Planck Institute for Software Systems), and Alan Mislove (Northeastern University). Targeted advertising has been subject to many privacy complaints from both users and policy makers. Despite this attention, users still have little understanding of what data the advertising platforms have about them and why they are shown particular ads. To address such concerns, Facebook recently introduced two transparency mechanisms: a “Why am I seeing this?” button that provides users with an explanation of why they were shown a particular ad (ad explanations), and an Ad Preferences Page that provides users with a list of attributes Facebook has inferred about them and how (data explanations). In this paper, we investigate the level of transparency provided by these two mechanisms. We first define a number of key properties of explanations and then evaluate empirically whether Facebook’s explanations satisfy them. For our experiments, we develop a browser extension that collects the ads users receive every time they browse Facebook, their respective explanations, and the attributes listed on the Ad Preferences Page; we then use controlled experiments where we create our own ad campaigns and target the users that installed our extension. Our results show that ad explanations are often incomplete and sometimes misleading while data explanations are often incomplete and vague. Taken together, our findings have significant implications for users, policy makers, and regulators as social media advertising services mature.
Inside Job: Applying Traffic Analysis to Measure Tor from Within. Rob Jansen (U.S. Naval Research Laboratory), Marc Juarez (imec-COSIC KU Leuven), Rafa Galvez (imec-COSIC KU Leuven), Tariq Elahi (imec-COSIC KU Leuven), and Claudia Diaz (imec-COSIC KU Leuven). In this paper, we explore traffic analysis attacks on Tor that are conducted solely with middle relays rather than with relays from the entry or exit positions. We create a methodology to apply novel Tor circuit and website fingerprinting from middle relays to detect onion service usage; that is, we are able to identify websites with hidden network addresses by their traffic patterns. We also carry out the first privacy-preserving popularity measurement of a single social networking website hosted as an onion service by deploying our novel circuit and website fingerprinting techniques in the wild. Our results show: (i) that the middle position enables wide-scale monitoring and measurement not possible from a comparable resource deployment in other relay positions, (ii) that traffic fingerprinting techniques are as effective from the middle relay position as prior works show from a guard relay, and (iii) that an adversary can use our fingerprinting methodology to discover the popularity of onion services, or as a filter to target specific nodes in the network, such as particular guard relays.
Smoke Screener or Straight Shooter: Detecting Elite Sybil Attacks in User-Review Social Networks. Haizhong Zheng (Shanghai Jiao Tong University), Minhui Xue (NYU Shanghai), Hao Lu (Shanghai Jiao Tong University), Shuang Hao (University of Texas at Dallas), Haojin Zhu (Shanghai Jiao Tong University), Xiaohui Liang (University of Massachusetts Boston), and Keith Ross (NYU and NYU Shanghai). Popular User-Review Social Networks (URSNs)—such as Dianping, Yelp, and Amazon—are often the targets of reputation attacks in which fake reviews are posted in order to boost or diminish the ratings of listed products and services. These attacks often emanate from a collection of accounts, called Sybils, which are collectively managed by a group of real users. A new advanced scheme, which we term elite Sybil attacks, recruits organically highly-rated accounts to generate seemingly trustworthy and realistic-looking reviews. These elite Sybil accounts taken together form a large-scale, sparsely-knit Sybil network for which existing Sybil fake-review defense systems are unlikely to succeed. In this paper, we conduct the first study to define, characterize, and detect elite Sybil attacks. We show that contemporary elite Sybil attacks have a hybrid architecture, with the first tier recruiting elite Sybil workers and distributing tasks by Sybil organizers, and with the second tier posting fake reviews for profit by elite Sybil workers. We design ELSIEDET, a three-stage Sybil detection scheme, which first separates out suspicious groups of users, then identifies the campaign windows, and finally identifies elite Sybil users participating in the campaigns. We perform a large-scale empirical study on ten million reviews from Dianping, by far the most popular URSN service in China. Our results show that elite Sybil users spread their reviews out more over time, craft more convincing reviews, and bypass review filters at higher rates. We also measure the impact of Sybil campaigns on various industries (such as cinemas, hotels, restaurants) as well as chain stores, and demonstrate that monitoring elite Sybil users over time can provide valuable early alerts against Sybil campaigns.
|
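The ELSIEDET abstract above describes a three-stage pipeline: separate out suspicious groups of users, identify campaign windows, and flag the elite Sybil users who participate in the campaigns. The sketch below illustrates that flow in minimal form; all function names, thresholds, and data structures are hypothetical stand-ins, not the authors' implementation.

```python
# Illustrative three-stage pipeline in the spirit of ELSIEDET.
# Helper names, thresholds, and the review record layout are all assumptions.
from collections import defaultdict


def stage1_suspicious_groups(reviews, min_users=3):
    """Stage 1: group reviews by (store, day) and keep bursts of distinct reviewers."""
    groups = defaultdict(set)
    for r in reviews:
        groups[(r["store"], r["day"])].add(r["user"])
    return {key: users for key, users in groups.items() if len(users) >= min_users}


def stage2_campaign_windows(suspicious, gap=7):
    """Stage 2: merge nearby suspicious days at the same store into campaign windows."""
    windows = defaultdict(list)
    for (store, day), users in sorted(suspicious.items()):
        if windows[store] and day - windows[store][-1]["end"] <= gap:
            windows[store][-1]["end"] = day          # extend the open window
            windows[store][-1]["users"] |= users
        else:
            windows[store].append({"start": day, "end": day, "users": set(users)})
    return windows


def stage3_elite_sybils(windows, min_campaigns=2):
    """Stage 3: flag users who recur across campaign windows at multiple stores."""
    counts = defaultdict(int)
    for store_windows in windows.values():
        for user in set().union(*(w["users"] for w in store_windows)):
            counts[user] += 1
    return {user for user, c in counts.items() if c >= min_campaigns}
```

A user who reviews many unrelated stores inside their campaign bursts is the signature the last stage keys on; an organic reviewer rarely lands in more than one such window.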