8:00 am – 7:00 pm  Registration
8:00 am – 9:00 am  Continental Breakfast
9:00 am – 5:30 pm  DNS Privacy Workshop, Toucan Room
9:00 am – 5:30 pm  Usable Security (USEC) Workshop, Macaw Room
12:30 pm – 1:30 pm  Lunch, Rousseau Room (first floor)
6:00 pm – 7:00 pm  Welcome Reception (open to all Workshop and Symposium attendees)

7:30 am – 5:00 pm  Registration
7:30 am – 9:00 am  Continental Breakfast
9:00 am – 9:20 am  Welcome and Opening Remarks
9:20 am – 10:20 am  Keynote – J. Alex Halderman, "Recount 2016: A Security Audit of the Presidential Election"
10:20 am – 10:45 am  Break
10:45 am – 12:25 pm  Session 1: Applied Crypto and Cryptocurrencies
12:25 pm – 2:00 pm  Lunch, Beach North (outside)
2:00 pm – 3:20 pm  Session 2A: Virtualization and SDN / Session 2B: Web Security
3:20 pm – 3:50 pm  Break
3:50 pm – 5:30 pm  Session 3A: User Authentication / Session 3B: Malware
6:30 pm – 7:00 pm  Student Travel Grant Meet and Greet
7:00 pm – 9:00 pm  Poster Reception, The Boardroom and Foyer
7:30 am – 5:00 pm  Registration
7:30 am – 9:00 am  Continental Breakfast
9:00 am – 10:20 am  Session 4A: TLS et al. / Session 4B: Secure Computation
10:20 am – 10:45 am  Break
10:45 am – 12:25 pm  Session 5A: Mobile Privacy and Security / Session 5B: Software and System Security (Part I)
12:25 pm – 2:00 pm  Lunch, Beach North (outside)
2:00 pm – 3:20 pm  Session 6A: Cloud and Potpourri / Session 6B: Tor
3:20 pm – 3:50 pm  Break
3:50 pm – 5:30 pm  Session 7: Trusted Execution Environments
6:30 pm – 7:00 pm  Cocktails
7:00 pm – 9:00 pm  Symposium Dinner, Aviary Ballroom
7:30 am – 12:00 pm  Registration
7:30 am – 9:00 am  Continental Breakfast
9:00 am – 9:20 am  Awards and Acknowledgements
9:20 am – 10:20 am  Keynote – Trent Adams, "Securing the Ecosystem – Collaborating Inside and Out"
10:20 am – 10:45 am  Break
10:45 am – 12:25 pm  Session 8: Cyberphysical Security
12:25 pm – 2:00 pm  Lunch, Beach North (outside)
2:00 pm – 3:20 pm  Session 9: Attacks
3:20 pm – 3:50 pm  Break
3:50 pm – 5:30 pm  Session 10: Software and System Security (Part II)
5:30 pm – 5:40 pm  Closing Remarks

Session 1: Applied Crypto and Cryptocurrencies

Session Chair:  Nadia Heninger

IO-DSSE: Scaling Dynamic Searchable Encryption to Millions of Indexes By Improving Locality

Free cloud-based services are powerful candidates for deploying ubiquitous encryption for messaging. In the case of email and, increasingly, chat, users expect the ability to store and search their messages persistently. Using data from a major mail provider, we confirm that for a searchable encryption scheme to scale to millions of users, it should be highly IO-efficient (locality) and handle very dynamic message corpora. We observe that existing solutions fail to achieve both properties simultaneously. We then design, build, and evaluate a provably secure Dynamic Searchable Symmetric Encryption (DSSE) scheme with a significant reduction in IO cost compared to preceding works when used for email or other highly dynamic material.

Ian Miers (Johns Hopkins University)
Payman Mohassel (Visa Research)

ObliviSync: Practical Oblivious File Backup and Synchronization

Oblivious RAM (ORAM) protocols are powerful techniques that hide a client's data as well as access patterns from untrusted service providers. We present an oblivious cloud storage system, ObliviSync, that specifically targets one of the most widely-used personal cloud storage paradigms: synchronization and backup services, popular examples of which are Dropbox, iCloud Drive, and Google Drive. This setting provides a unique opportunity because the above privacy properties can be achieved with a simpler form of ORAM called write-only ORAM, which allows for dramatically increased efficiency compared to related work. Our solution is asymptotically optimal and practically efficient, with a small constant overhead of approximately 4x compared with non-private file storage, depending only on the total data size and parameters chosen according to the usage rate, and not on the number or size of individual files. Our construction also offers protection against timing-channel attacks, which has not been previously considered in ORAM protocols. We built and evaluated a full implementation of ObliviSync that supports multiple simultaneous read-only clients and a single concurrent read/write client whose edits automatically and seamlessly propagate to the readers. We show that our system functions under high workloads, with realistic file size distributions, and with small additional latency (as compared to a baseline encrypted file system) when paired with Dropbox as the synchronization service.

Adam J. Aviv (United States Naval Academy)
Seung Geol Choi (United States Naval Academy)
Travis Mayberry (United States Naval Academy)
Daniel S. Roche (United States Naval Academy)

TumbleBit: An Untrusted Bitcoin-Compatible Anonymous Payment Hub

This paper presents TumbleBit, a new unidirectional unlinkable payment hub that is fully compatible with today's Bitcoin protocol. TumbleBit allows parties to make fast, anonymous, off-blockchain payments through an untrusted intermediary called the Tumbler. TumbleBit's anonymity properties are similar to classic Chaumian eCash: no one, not even the Tumbler, can link a payment from its payer to its payee. Every payment made via TumbleBit is backed by bitcoins, and comes with a guarantee that the Tumbler can neither violate anonymity, nor steal bitcoins, nor "print money" by issuing payments to itself. We prove the security of TumbleBit using the real/ideal world paradigm and the random oracle model. Security follows from the standard RSA assumption and ECDSA unforgeability. We implement TumbleBit, mix payments from 800 users, and show that TumbleBit's off-blockchain payments can complete in seconds.
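The Chaumian-eCash-style unlinkability mentioned above rests on RSA blinding. The toy below sketches that one idea with textbook RSA and tiny, insecure parameters chosen purely for illustration; it is not TumbleBit's actual puzzle protocol, and the message and blinding factor are invented values.

```python
# Toy RSA blind signature: the signer sees only a blinded value, so it
# cannot link what it signed to the final unblinded signature.
# Tiny textbook parameters -- insecure, for illustration only.
p, q = 61, 53
n = p * q            # 3233
e = 17               # public exponent
d = 2753             # private exponent (e*d = 1 mod phi(n))

m = 42               # message (e.g., a hash the payer wants signed)
r = 99               # blinding factor chosen by payer, coprime to n

blinded = (m * pow(r, e, n)) % n       # payer blinds m
s_blind = pow(blinded, d, n)           # signer signs without seeing m
s = (s_blind * pow(r, -1, n)) % n      # payer removes the blinding

assert pow(s, e, n) == m               # valid RSA signature on m
```

Unblinding works because (m·r^e)^d = m^d·r (mod n), so multiplying by r⁻¹ leaves a plain signature m^d that the signer never observed.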

Ethan Heilman (Boston University)
Leen Alshenibr (Boston University)
Foteini Baldimtsi (George Mason University)
Alessandra Scafuro (North Carolina State University)
Sharon Goldberg (Boston University)

P2P Mixing and Unlinkable Bitcoin Transactions

Starting with Dining Cryptographers networks (DC-nets), several peer-to-peer (P2P) anonymous communication protocols have been proposed. However, despite their strong anonymity guarantees, none of them have been employed in practice so far: most protocols fail to simultaneously address the crucial problems of slot collisions and disruption by malicious peers, while the remaining ones handle f malicious peers with O(f²) communication rounds. We conceptualize these P2P anonymous communication protocols as P2P mixing, and present a novel P2P mixing protocol, DiceMix, that needs only four communication rounds in the best case, and 4 + 2f rounds in the worst case with f malicious peers. As every individual malicious peer can force a restart of a P2P mixing protocol by simply omitting its messages, we find DiceMix with its worst-case complexity of O(f) rounds to be an optimal P2P mixing solution.

On the application side, we employ DiceMix to improve anonymity in cryptocurrencies such as Bitcoin. The public verifiability of their pseudonymous transactions through publicly available ledgers (or blockchains) makes these systems highly vulnerable to a variety of linkability and deanonymization attacks. We use DiceMix to define CoinShuffle++, a coin mixing protocol that enables pseudonymous peers to perform unlinkable transactions in a manner fully compatible with the current Bitcoin system. Moreover, we demonstrate the efficiency of our protocols with a proof-of-concept implementation. In our evaluation, DiceMix requires less than eight seconds to mix 50 messages (160 bits, i.e., Bitcoin addresses), while the best protocol in the literature requires almost three minutes in the same setting.

Finally, we present a deanonymization attack on existing P2P mixing protocols that guarantee termination in the presence of disruptive peers. We generalize the attack to demonstrate that no P2P mixing protocol simultaneously supports arbitrary input messages, provides anonymity, and terminates in the presence of disruptive peers. DiceMix resists this attack by requiring fresh input messages, e.g., cryptographic keys never used before.
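The DC-net primitive underlying P2P mixing can be sketched in a few lines. This toy round (single sender, fixed seed, no slot collisions or disruption handling, all of which are simplifications relative to DiceMix) shows the key cancellation property:

```python
import random

# Minimal one-slot DC-net round with n peers and one anonymous sender:
# each pair of peers shares a random pad, and every peer publishes the
# XOR of its pads (XORed with its message if it is the sender). XORing
# all publications cancels every pad pairwise, leaving the message
# without revealing who sent it.
rng = random.Random(0)           # deterministic seed, for the example only
n = 4
msg = 0xC0FFEE                   # the anonymous sender's message

pads = {(i, j): rng.getrandbits(32)
        for i in range(n) for j in range(i + 1, n)}

def publication(peer, message=0):
    out = message
    for (i, j), pad in pads.items():
        if peer in (i, j):       # XOR in every pad this peer shares
            out ^= pad
    return out

outputs = [publication(0, msg)] + [publication(i) for i in range(1, n)]
combined = 0
for out in outputs:
    combined ^= out
assert combined == msg           # all pads cancel; only the message remains
```

Each pad appears in exactly two publications, so it cancels in the combined XOR; an observer learns the message but not which of the n peers contributed it.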

Tim Ruffing (CISPA, Saarland University)
Pedro Moreno-Sanchez (Purdue University)
Aniket Kate (Purdue University)

SilentWhispers: Enforcing Security and Privacy in Decentralized Credit Networks

Credit networks model transitive trust (or credit) between users in a distributed environment and have recently seen a rapid increase in popularity due to their flexible design and robustness against intrusion. They serve today as a backbone of real-world IOU transaction-settlement networks such as Ripple and Stellar, which are deployed by various banks worldwide, as well as several other systems, such as spam-resistant communication protocols and Sybil-tolerant social networks. Current solutions, however, raise serious privacy concerns, as the network topology as well as the credit values of the links are made public for apparent transparency purposes and any changes are logged. In payment scenarios, for instance, this means that all transactions have to be public and everybody knows who paid what to whom.

In this work, we question the necessity of a privacy-invasive transaction ledger. In particular, we present SilentWhispers, the first distributed, privacy-preserving credit network that does not require any ledger to protect the integrity of transactions. Yet, SilentWhispers guarantees integrity and privacy of link values and transactions even in the presence of distrustful users and malicious neighbors, whose misbehavior in changing link values is detected and who can be held accountable. We formalize these properties as ideal functionalities in the universal composability framework and present a secure realization based on a novel combination of secret-sharing-based multiparty computation and digital signature chains. SilentWhispers can handle network churn, and it is efficient as demonstrated with a prototype implementation evaluated using payments data extracted from the currently deployed Ripple payment system.
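One simplified ingredient of the secret-sharing-based multiparty computation mentioned above is additive secret sharing: a link value is split so that no single party learns it, yet the shares sum back to the true value. The modulus, seed, and link value below are illustrative assumptions, not parameters from the paper.

```python
import random

# Additive secret sharing over a prime field: split a credit link value
# into n shares whose sum (mod P) equals the value; any n-1 shares are
# uniformly random and reveal nothing on their own.
P = 2**61 - 1                    # field modulus (a Mersenne prime)
rng = random.Random(7)           # fixed seed, for the example only

def share(value, n=3):
    shares = [rng.randrange(P) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % P)   # last share fixes the sum
    return shares

def reconstruct(shares):
    return sum(shares) % P

link_value = 250                 # credit Alice extends to Bob (illustrative)
shares = share(link_value)
assert reconstruct(shares) == link_value
```

In a deployment, each share would be held by a different landmark server, and computations on link values would operate share-wise without ever reconstructing them in one place.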

Giulio Malavolta (Saarland University, CISPA)
Pedro Moreno-Sanchez (Purdue University)
Aniket Kate (Purdue University)
Matteo Maffei (TU Vienna)

Session 2A: Virtualization and SDN

Session Chair:  Juan Caballero

DELTA: A Security Assessment Framework for Software-Defined Networks

Developing a systematic understanding of the attack surface of emergent networks, such as software-defined networks (SDNs), is necessary and arguably the starting point toward making it more secure. Prior studies have largely relied on ad hoc empirical methods to evaluate the security of various SDN elements from different perspectives. However, they have stopped short of converging on a systematic methodology or developing automated systems to rigorously test for security flaws in SDNs. Thus, conducting security assessments of new SDN software remains a non-replicable and unregimented process. This paper makes the case for automating and standardizing the vulnerability identification process in SDNs. As a first step, we developed a security assessment framework, DELTA, that reinstantiates published SDN attacks in diverse test environments. Next, we enhanced our tool with a protocol-aware fuzzing module to automatically discover new vulnerabilities. In our evaluation, DELTA successfully reproduced 20 known attack scenarios across diverse SDN controller environments and discovered seven novel SDN application mislead attacks.
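The protocol-aware fuzzing idea can be illustrated with a minimal mutation step: randomize only a chosen field of a message while leaving the header intact, so the result still parses. The OpenFlow-like byte layout and offsets below are invented for illustration; DELTA's actual fuzzer is far more elaborate.

```python
import random

# Minimal protocol-aware mutation: fuzz a selected field of a message
# while preserving the header, so the mutated message remains parseable.
rng = random.Random(1)           # fixed seed, for the example only

def mutate_field(msg: bytes, offset: int, length: int) -> bytes:
    # Replace `length` bytes starting at `offset` with random bytes.
    return msg[:offset] + rng.randbytes(length) + msg[offset + length:]

# Hypothetical layout: version, type, 2-byte length, then a 12-byte body.
base = bytes([0x04, 0x0E, 0x00, 0x10]) + bytes(12)
mutated = mutate_field(base, 4, 4)      # fuzz 4 body bytes only
assert mutated[:4] == base[:4]          # header untouched
assert len(mutated) == len(base)        # declared length stays consistent
```

Keeping the header valid is what makes the fuzzing "protocol-aware": the controller under test accepts and processes the message instead of rejecting it at parse time.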

Seungsoo Lee (KAIST)
Changhoon Yoon (KAIST)
Chanhee Lee (KAIST)
Seungwon Shin (KAIST)
Vinod Yegneswaran (SRI International)
Phillip Porras (SRI International)

PSI: Precise Security Instrumentation for Enterprise Networks

Despite soaring investments in IT infrastructure, the state of operational network security continues to be abysmal. We argue that this is because existing enterprise security approaches fundamentally lack precision in one or more dimensions: (1) isolation to ensure that the enforcement mechanism does not induce interference across different principals; (2) context to customize policies for different devices; and (3) agility to rapidly change the security posture in response to events. To address these shortcomings, we present PSI, a new enterprise network security architecture that addresses these pain points. PSI enables fine-grained and dynamic security postures for different network devices. These are implemented in isolated enclaves, thus providing precise instrumentation along the above dimensions by construction. To this end, PSI leverages recent advances in software-defined networking (SDN) and network functions virtualization (NFV). We design expressive policy abstractions and scalable orchestration mechanisms to implement the security postures. We implement PSI using an industry-grade SDN controller (OpenDaylight) and integrate several commonly used enforcement tools (e.g., Snort, Bro, Squid). We show that PSI is scalable and is an enabler for new detection and prevention capabilities that would be difficult to realize with existing solutions.

Tianlong Yu (CMU)
Seyed K. Fayaz (CMU)
Michael Collins (RedJack)
Vyas Sekar (CMU)
Srinivasan Seshan (CMU)

On the Safety and Efficiency of Virtual Firewall Elasticity Control

Traditional hardware-based firewall appliances are placed at fixed locations with fixed capacity. This nature makes them ill-suited to protecting today's prevailing virtualized environments. Two emerging networking paradigms, Network Function Virtualization (NFV) and Software-Defined Networking (SDN), offer the potential to address these limitations. NFV envisions implementing firewall functions as software instances (a.k.a. virtual firewalls). Virtual firewalls provide great flexibility and elasticity, which are necessary to protect virtualized environments. In this paper, we propose to build an innovative virtual firewall controller, VFW Controller, to enable safe, efficient, and cost-effective virtual firewall elasticity control. VFW Controller addresses four key challenges in virtual firewall scaling, with respect to semantic consistency, correct flow update, buffer overflow avoidance, and optimal scaling. To demonstrate the feasibility of our approach, we implement the core components of VFW Controller on top of NFV and SDN environments. Our experimental results demonstrate that VFW Controller efficiently provides safe elasticity control of virtual firewalls.

Juan Deng (Clemson University)
Hongda Li (Clemson University)
Hongxin Hu (Clemson University)
Kuang-Ching Wang (Clemson University)
Gail-Joon Ahn (Arizona State University)
Ziming Zhao (Arizona State University)
Wonkyu Han (Arizona State University)

Deconstructing Xen

Hypervisors have quickly become essential but are vulnerable to attack. Unfortunately, efficiently hardening hypervisors is challenging because they lack a privileged security monitor and decomposition strategies. In this work we systematically analyze the 191 Xen hypervisor vulnerabilities from Xen Security Advisories, revealing that the majority (144) are in the core hypervisor, not Dom0. We then use the analysis to provide a novel deconstruction of Xen, called Nexen, into a security monitor, a shared service domain, and per-VM Xen slices that are isolated by a least-privileged sandboxing framework. We implement Nexen using the Nested Kernel architecture, efficiently nesting itself within the Xen address space, and extend the Nested Kernel design by adding services for arbitrarily many protection domains along with dynamic allocators, data isolation, and cross-domain control-flow integrity. The effect is that Nexen confines VM-based hypervisor compromises to single Xen VM instances, thwarts 74% (107/144) of known Xen vulnerabilities, and enforces Xen code integrity (defending against all code injection compromises) while observing negligible overhead (1.2% on average). Overall, we believe that Nexen is uniquely positioned to meet a fundamental need for hypervisor hardening at minimal performance and implementation costs.

Lei Shi  (Shanghai Key Laboratory of Scalable Computing and Systems & Institute of Parallel and Distributed Systems, Shanghai Jiao Tong University)
Yuming Wu (Shanghai Key Laboratory of Scalable Computing and Systems & Institute of Parallel and Distributed Systems, Shanghai Jiao Tong University)
Yubin Xia (Shanghai Key Laboratory of Scalable Computing and Systems & Institute of Parallel and Distributed Systems, Shanghai Jiao Tong University)
Nathan Dautenhahn (University of Pennsylvania)
Haibo Chen (Shanghai Key Laboratory of Scalable Computing and Systems & Institute of Parallel and Distributed Systems, Shanghai Jiao Tong University)
Binyu Zang (Institute of Parallel and Distributed Systems, Shanghai Jiao Tong University)
Haibing Guan (Shanghai Key Laboratory of Scalable Computing and Systems & Institute of Parallel and Distributed Systems, Shanghai Jiao Tong University)
Jinming Li (Huawei)

Session 2B: Web Security

Session Chair:  David Choffnes

Thou Shalt Not Depend on Me: Analysing the Use of Outdated JavaScript Libraries on the Web

Web developers routinely rely on third-party JavaScript libraries such as jQuery to enhance the functionality of their sites. However, if not properly maintained, such dependencies can create attack vectors allowing a site to be compromised.

In this paper, we conduct the first comprehensive study of client-side JavaScript library usage and the resulting security implications across the Web. Using data from over 133k websites, we show that 37% of them include at least one library with a known vulnerability; the time lag behind the newest release of a library is measured in the order of years. In order to better understand why websites use so many vulnerable or outdated libraries, we track causal inclusion relationships and quantify different scenarios. We observe sites including libraries in ad hoc and often transitive ways, which can lead to different versions of the same library being loaded into the same document at the same time. Furthermore, we find that libraries included transitively, or via ad and tracking code, are more likely to be vulnerable. This demonstrates that not only website administrators, but also the dynamic architecture and developers of third-party services are to blame for the Web's poor state of library management.

The results of our work underline the need for more thorough approaches to dependency management, code maintenance and third-party code inclusion on the Web.
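A tiny heuristic conveys the flavor of detecting a library and its version in an included script URL. The regex, library names, and example URLs below are illustrative assumptions; the paper's actual detection methodology is considerably more robust.

```python
import re

# Simplified detector: spot a known library name and a dotted version
# number in a script URL. Real-world detection must also inspect file
# contents, since many sites rename or bundle their libraries.
SCRIPT_RE = re.compile(r'(jquery|angular|bootstrap)[-.]?(\d+(?:\.\d+)+)', re.I)

def detect(url):
    m = SCRIPT_RE.search(url)
    return (m.group(1).lower(), m.group(2)) if m else None

assert detect("https://cdn.example.com/jquery-1.11.1.min.js") == ("jquery", "1.11.1")
assert detect("/static/app.bundle.js") is None   # no recognizable library
```

Comparing the detected version against the library's release history is then what surfaces the years-long lag the study measures.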

Tobias Lauinger (Northeastern University)
Abdelberi Chaabane (Northeastern University)
Sajjad Arshad (Northeastern University)
William Robertson (Northeastern University)
Christo Wilson (Northeastern University)
Engin Kirda (Northeastern University)

Enabling Reconstruction of Attacks on Users via Efficient Browsing Snapshots

In this paper, we present ChromePic, a web browser equipped with a novel forensic engine that aims to greatly enhance the browser’s logging capabilities. ChromePic’s main goal is to enable a fine-grained post-mortem reconstruction and trace-back of web attacks without incurring the high overhead of record-and-replay systems. In particular, we aim to enable the reconstruction of attacks that target users and have a significant visual component, such as social engineering and phishing attacks. To this end, ChromePic records a detailed snapshot of the state of a web page, including a screenshot of how the page is rendered and a deep DOM snapshot, at every significant interaction between the user and the page. If an attack is later suspected, these fine-grained logs can be used to reconstruct the attack and trace back the sequence of steps the user followed to reach the attack page.

We develop ChromePic by implementing several careful modifications and optimizations to the Chromium code base, to minimize overhead and make always-on logging practical. We then demonstrate that ChromePic can successfully capture and aid the reconstruction of attacks on users. Our evaluation includes the analysis of an in-the-wild social engineering download attack on Android, a phishing attack, and two different clickjacking attacks, as well as a user study aimed at accurately measuring the overhead introduced by our forensic engine. The experimental results show that browsing snapshots can be logged very efficiently, making the logging events practically unnoticeable to users.

Phani Vadrevu (University of Georgia)
Jienan Liu (University of Georgia)
Bo Li (University of Georgia)
Babak Rahbarinia (Auburn University, Montgomery)
Kyu Hyung Lee (University of Georgia)
Roberto Perdisci (University of Georgia)

(Cross-)Browser Fingerprinting via OS and Hardware Level Features

In this paper, we propose a browser fingerprinting technique that can track users not only within a single browser but also across different browsers on the same machine. Specifically, our approach utilizes many novel OS and hardware level features, such as those from graphics cards, CPU, and installed writing scripts. We extract these features by asking browsers to perform tasks that rely on corresponding OS and hardware functionalities.

Our evaluation shows that our approach can successfully identify 99.24% of users, as opposed to 90.84% for the state of the art in single-browser fingerprinting against the same dataset. Further, our approach achieves a higher uniqueness rate than the only cross-browser approach in the literature, with similar stability.
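The core idea, that OS- and hardware-level features stay constant across browsers on the same machine, can be sketched by hashing a canonical encoding of such features. The feature names and values below are invented for illustration; the paper derives its features from rendering tasks exercising the GPU, CPU, installed writing scripts, and so on.

```python
import hashlib

# Combine machine-level features into a stable fingerprint. Because the
# features come from the OS and hardware rather than the browser, any
# browser on the machine that runs the same tasks yields the same hash.
features = {
    "gpu_renderer": "ANGLE (NVIDIA GeForce GTX 980)",   # illustrative
    "cpu_cores": "8",
    "writing_scripts": "latin,cyrillic",
}

def fingerprint(features):
    canonical = "|".join(f"{k}={features[k]}" for k in sorted(features))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Same machine-level features -> same fingerprint, whichever browser ran the tasks.
assert fingerprint(features) == fingerprint(dict(features))
```

Sorting the keys before hashing is what makes the encoding canonical, so feature ordering differences between browsers cannot change the fingerprint.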

Yinzhi Cao (Lehigh University)
Song Li (Lehigh University)
Erik Wijmans (Washington University in St. Louis)

Fake Co-visitation Injection Attacks to Recommender Systems

Recommender systems have become an essential component in a wide range of web services. It is believed that recommender systems recommend items (e.g., videos on YouTube, products on Amazon) that match a user's preferences. In this work, we propose new attacks on recommender systems. Our attacks exploit fundamental vulnerabilities of recommender systems and can spoof a recommender system into making recommendations as an attacker desires. Our key idea is to inject fake co-visitations into the system. Given a bounded number of fake co-visitations that an attacker can inject, two key challenges are 1) which items the attacker should inject fake co-visitations to, and 2) how many fake co-visitations an attacker should inject to each item. We address these challenges by modelling our attacks as constrained linear optimization problems, by solving which the attacker can perform attacks with maximal threat. We demonstrate the feasibility and effectiveness of our attacks via evaluations on both synthetic data and real-world recommender systems on several popular web services, including YouTube, eBay, Amazon, Yelp, and LinkedIn. We also discuss strategies to mitigate our attacks.
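A toy item-to-item recommender makes the attack surface concrete: if recommendations are ranked by co-visitation counts, injected fake co-visits can promote an attacker's item. The counts below are invented, and this greedy illustration does not implement the paper's constrained linear optimization.

```python
from collections import Counter

# Toy recommender: for an item, recommend the items most often
# co-visited with it. Co-visitation counts are illustrative.
covisits = Counter({("A", "B"): 50, ("A", "C"): 30, ("A", "D"): 10})

def recommend(item, k=1):
    scores = {b: c for (a, b), c in covisits.items() if a == item}
    return sorted(scores, key=scores.get, reverse=True)[:k]

assert recommend("A") == ["B"]    # honest state: B is top co-visited

covisits[("A", "D")] += 45        # attacker injects fake co-visitations
assert recommend("A") == ["D"]    # target item D now ranks first
```

The paper's optimization answers the harder question this toy sidesteps: given a budget of fake co-visits, how to distribute them across items for maximal effect.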

Guolei Yang (Iowa State University)
Neil Zhenqiang Gong (Iowa State University)
Ying Cai (Iowa State University)

Session 3A: User Authentication

Broken Hearted: How To Attack ECG Biometrics

In this work we present a systematic presentation attack against ECG biometrics. We demonstrate the attack’s effectiveness using the Nymi Band, a wrist band that uses electrocardiography (ECG) as a biometric to authenticate the wearer. We instantiate the attack using a hardware-based Arbitrary Waveform Generator (AWG), an AWG software using a computer sound card, and the playback of ECG signals encoded as .wav files using an off-the-shelf audio player. In two sets of experiments we collect data from a total of 41 participants using a variety of ECG monitors, including a medical monitor, a smartphone-based mobile monitor, and the Nymi Band itself.

We use the first dataset to understand the statistical differences in biometric features that arise from using different measurement devices and modes. Such differences are addressed through the automated derivation of so-called mapping functions, whose purpose is to transform ECG signals from any device in order to resemble the morphology of the signals recorded with the Nymi Band.

As part of our second dataset, we enroll users into the Nymi Band and test whether data from any of our sources can be used for a signal injection attack. Using data collected directly on the Nymi Band we achieve a success rate of 81%. When only using data gathered on other devices, this rate decreases to 43% when using raw data, and 62% after applying the mapping function. While we demonstrate the attack on the Nymi Band, we expect other ECG-based authentication systems to suffer from the same fundamental weaknesses.

Simon Eberz (University of Oxford)
Nicola Paoletti (University of Oxford)
Marc Roeschlin (University of Oxford)
Andrea Patané (University of Oxford)
Marta Kwiatkowska (University of Oxford)
Ivan Martinovic (University of Oxford)

Towards Implicit Visual Memory-Based Authentication

Selecting and remembering secure passwords puts a high cognitive burden on the user, which has adverse effects on usability and security. Authentication schemes based on implicit memory can relieve the user of the burden of actively remembering a secure password. In this paper, we propose a new authentication scheme (MooneyAuth) that relies on implicitly remembering the content of previously seen Mooney images. These images are thresholded two-tone images derived from images containing single objects. Our scheme has two phases: In the enrollment phase, a user is presented with Mooney images, their corresponding original images, and labels. This creates an implicit link between the Mooney image and the object in the user's memory that serves as the authentication secret. In the authentication phase, the user has to label a set of Mooney images, a task that gets performed with substantially fewer mistakes if the images have been seen in the enrollment phase. We applied an information-theoretical approach to compute the eligibility of the user, based on which images were labeled correctly. This new dynamic scoring is substantially better than previously proposed static scoring by considering the surprisal of the observed events. We built a prototype and performed three experiments with 230 and 70 participants over the course of 264 and 21 days, respectively. We show that MooneyAuth outperforms current implicit memory-based schemes, and demonstrates a promising new approach for fallback authentication procedures on the Web.
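The surprisal-based dynamic scoring can be sketched directly: a correct label on an image that few unprimed people can name (low probability) carries more bits of evidence than one almost everyone gets right. The probabilities below are made-up illustrations, not values from the paper.

```python
import math

# Surprisal-weighted score: sum the information content, in bits, of
# each correctly labeled image. Label probabilities are illustrative.
label_probs = {"img1": 0.9, "img2": 0.2, "img3": 0.05}

def score(correctly_labeled):
    return sum(-math.log2(label_probs[i]) for i in correctly_labeled)

enrolled_user = score(["img1", "img2", "img3"])   # primed: labels all three
stranger = score(["img1"])                        # only gets the easy one
assert enrolled_user > stranger
```

Static scoring would count both users' correct answers equally; weighting by surprisal is what separates a primed user from a lucky guesser.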

Claude Castelluccia (Inria Grenoble)
Markus Dürmuth (Ruhr-University Bochum)
Maximilian Golla (Ruhr-University Bochum)
Fatma Deniz (University of California, Berkeley)

KEH-Gait: Towards a Mobile Healthcare User Authentication System by Kinetic Energy Harvesting

Weitao Xu (School of Information Technology and Electrical Engineering, University of Queensland)
Guohao Lan (School of Computer Science and Engineering, University of New South Wales)
Qi Lin (School of Computer Science and Engineering, University of New South Wales)
Sara Khalifa (School of Computer Science and Engineering, University of New South Wales)
Neil Bergmann (School of Information Technology and Electrical Engineering, University of Queensland)
Mahbub Hassan (School of Computer Science and Engineering, University of New South Wales)
Wen Hu (School of Computer Science and Engineering, University of New South Wales)

A Large-scale Analysis of the Mnemonic Password Advice

How do you choose a strong but still easily memorable password? Often-recommended advice is to memorize a random sentence (the mnemonic) and to concatenate the words' initials: a so-called mnemonic password. This paper analyzes the effectiveness of this advice, in terms of the obtained password strength, and sheds light on various related aspects. While it is infeasible to obtain a sufficiently large sample of human-chosen mnemonics, the password strength depends only on the distribution of certain character probabilities. We provide several pieces of evidence that these character probabilities are approximately the same for human-chosen mnemonics and sentences from a web crawl, and exploit this connection for our analyses. The presented analyses are independent of cracking software, avoid privacy concerns, and allow full control over the details of how passwords are generated from sentences. In particular, the paper introduces the following original research contributions: (1) construction of one of the largest corpora of human-chosen mnemonics, (2) construction of two web sentence corpora from the 27.3 TB ClueWeb12 web crawl, (3) demonstration of the suitability of web sentences as substitutes for mnemonics in password strength analyses, (4) improved estimation of password probabilities by position-dependent language models, and (5) analysis of the obtained password strength using web sentence samples of different sentence complexity and using 18 generation rules for mnemonic password construction.

Our findings include both expected and less expected results, among others: mnemonic passwords made from lowercase letters only provide comparable strength to mnemonic passwords that exploit the full 7-bit visible ASCII character set, less complex mnemonics reduce password strength in offline scenarios by less than expected, and longer mnemonic passwords provide more security in an offline but not necessarily in an online scenario. When compared to passwords generated by uniform sampling from a dictionary, distributions of mnemonic passwords can reach the same strength against offline attacks with fewer characters.
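One of many possible mnemonic generation rules is simply taking each word's first character. The rule and example sentence below are illustrative; the paper evaluates 18 such rules, including variants that keep punctuation or case.

```python
# Generate a mnemonic password from a sentence by concatenating the
# first character of each word, preserving its case. One illustrative
# rule out of the 18 the paper studies.
def mnemonic_password(sentence):
    return "".join(word[0] for word in sentence.split())

assert mnemonic_password("My bonnie lies over the ocean") == "Mbloto"
```

The study's character-probability analysis then asks how predictable such initials are, given the statistics of human-chosen (or web-crawled) sentences.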

Johannes Kiesel (Bauhaus-Universität Weimar)
Benno Stein (Bauhaus-Universität Weimar)
Stefan Lucks (Bauhaus-Universität Weimar)

Cracking Android Pattern Lock in Five Attempts

Pattern lock is widely used as a mechanism for authentication and authorization on Android devices. In this paper, we demonstrate a novel video-based attack to reconstruct Android lock patterns from video footage filmed using a mobile phone camera. Unlike prior attacks on pattern lock, our approach does not require the video to capture any content displayed on the screen. Instead, we employ a computer vision algorithm to track the fingertip movements to infer the pattern. Using the geometry information extracted from the tracked fingertip motions, our approach is able to accurately identify a small number of (often one) candidate patterns to be tested by an adversary. We thoroughly evaluated our approach using 120 unique patterns collected from 215 independent users, by applying it to reconstruct patterns from video footage filmed using smartphone cameras. Experimental results show that our approach can break over 95% of the patterns in five attempts before the device is automatically locked by the Android system. We discovered that, in contrast to many people's belief, complex patterns do not offer stronger protection under our attacking scenarios. This is demonstrated by the fact that we are able to break all but one of the complex patterns (a 97.5% success rate) as opposed to 60% of the simple patterns on the first attempt. Since our threat model is common in day-to-day life, our work calls for the community to revisit the risks of using Android pattern lock to protect sensitive information.

Guixin Ye (School of Information Science and Technology, Northwest University)
Zhanyong Tang (School of Information Science and Technology, Northwest University)
Dingyi Fang (School of Information Science and Technology, Northwest University)
Xiaojiang Chen (School of Information Science and Technology, Northwest University)
Kwang In Kim (Department of Computer Science, University of Bath)
Ben Taylor (School of Computing and Communications, Lancaster University)
Zheng Wang (School of Computing and Communications, Lancaster University)

Session 3B: Malware

Session Chair:  Marcus Peinado

Dial One for Scam: A Large-Scale Analysis of Technical Support Scams

In technical support scams, cybercriminals attempt to convince users that their machines are infected with malware and are in need of their technical support. In this process, the victims are asked to give the scammers remote access to their machines; the scammers then "diagnose the problem" before offering their support services, which typically cost hundreds of dollars. Despite their conceptual simplicity, technical support scams are responsible for yearly losses of tens of millions of dollars from everyday users of the web.

In this paper, we report on the first systematic study of technical support scams and the call centers hidden behind them. We identify malvertising as a major culprit for exposing users to technical support scams and use it to build an automated system capable of discovering, on a weekly basis, hundreds of phone numbers and domains operated by scammers. By allowing our system to run for more than 8 months we collect a large corpus of technical support scams and use it to provide insights on their prevalence, the abused infrastructure, the illicit profits, and the current evasion attempts of scammers. Finally, by setting up a controlled, IRB-approved experiment where we interact with 60 different scammers, we experience first-hand their social engineering tactics, while collecting detailed statistics of the entire process. We explain how our findings can be used by law-enforcement agencies and propose technical and educational countermeasures for helping users avoid being victimized by technical support scams.

Najmeh Miramirkhani (Stony Brook University)
Oleksii Starov (Stony Brook University)
Nick Nikiforakis (Stony Brook University)

Automated Synthesis of Semantic Malware Signatures using Maximum Satisfiability

This paper proposes a technique for automatically learning semantic malware signatures for Android from very few samples of a malware family. The key idea underlying our technique is to look for a maximally suspicious common subgraph (MSCS) that is shared between all known instances of a malware family. An MSCS describes the shared functionality between multiple Android applications in terms of inter-component call relations and their semantic metadata (e.g., data-flow properties). Our approach identifies such maximally suspicious common subgraphs by reducing the problem to maximum satisfiability. Once a semantic signature is learned, our approach uses a combination of static analysis and a new approximate signature matching algorithm to determine whether an Android application matches the semantic signature characterizing a given malware family.

We have implemented our approach in a tool called ASTROID and show that it has a number of advantages over state-of-the-art malware detection techniques. First, we compare the semantic malware signatures automatically synthesized by ASTROID with manually-written signatures used in previous work and show that the signatures learned by ASTROID perform better in terms of accuracy as well as precision. Second, we compare ASTROID against two state-of-the-art malware detection tools and demonstrate its advantages in terms of interpretability and accuracy. Finally, we demonstrate that ASTROID's approximate signature matching algorithm is resistant to behavioral obfuscation and that it can be used to detect zero-day malware. In particular, we were able to find 22 instances of zero-day malware in Google Play that are not reported as malware by existing tools.

Yu Feng (UT Austin)
Osbert Bastani (Stanford University)
Ruben Martins (UT Austin)
Isil Dillig (UT Austin)
Saswat Anand (Google)

MaMaDroid: Detecting Android Malware by Building Markov Chains of Behavioral Models

The rise in popularity of the Android platform has resulted in an explosion of malware threats targeting it. As both Android malware and the operating system itself constantly evolve, it is very challenging to design robust malware mitigation techniques that can operate for long periods of time without the need for modifications or costly re-training. In this paper, we present MAMADROID, an Android malware detection system that relies on app behavior. MAMADROID builds a behavioral model, in the form of a Markov chain, from the sequence of abstracted API calls performed by an app, and uses it to extract features and perform classification. By abstracting calls to their packages or families, MAMADROID maintains resilience to API changes and keeps the feature set size manageable. We evaluate its accuracy on a dataset of 8.5K benign and 35.5K malicious apps collected over a period of six years, showing that it not only effectively detects malware (with up to 99% F-measure), but also that the model built by the system keeps its detection capabilities for long periods of time (on average, 86% and 75% F-measure, respectively, one and two years after training). Finally, we compare against DROIDAPIMINER, a state-of-the-art system that relies on the frequency of API calls performed by apps, showing that MAMADROID significantly outperforms it.
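The Markov-chain feature extraction described above can be illustrated with a small sketch. The call-family labels and sequence below are hypothetical; the real system abstracts API calls observed in actual app traces to their packages or families before estimating transition probabilities:

```python
from collections import Counter, defaultdict

def markov_features(call_sequence):
    """Estimate transition probabilities between abstracted API
    call families (e.g. 'android', 'java') from a sequence of
    calls -- an illustrative sketch of the behavioral model."""
    counts = defaultdict(Counter)
    for src, dst in zip(call_sequence, call_sequence[1:]):
        counts[src][dst] += 1
    features = {}
    for src, dsts in counts.items():
        total = sum(dsts.values())
        for dst, n in dsts.items():
            # Each (src, dst) transition probability becomes one
            # feature used by the downstream classifier.
            features[(src, dst)] = n / total
    return features
```

Because the features are defined over coarse families rather than concrete API methods, the same feature space survives API additions and removals across Android versions, which is the source of the resilience the abstract claims.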

Enrico Mariconti (University College London)
Lucky Onwuzurike (University College London)
Panagiotis Andriotis (University College London)
Emiliano De Cristofaro (University College London)
Gordon Ross (University College London)
Gianluca Stringhini (University College London)

A Broad View of the Ecosystem of Socially Engineered Exploit Documents

Our understanding of exploit documents as a vector to deliver targeted malware is limited to a handful of studies done in collaboration with the Tibetans, Uyghurs, and political dissidents in the Middle East. In this measurement study, we present a complementary methodology relying only on publicly available data to capture and analyze targeted attacks with both greater scale and depth. In particular, we detect exploit documents uploaded over one year to a large anti-virus aggregator (VirusTotal) and then mine the social engineering information they embed to infer their likely targets and contextual information of the attacks. We identify attacks against two ethnic groups (Tibet and Uyghur) as well as 12 countries spanning America, Asia, and Europe. We then analyze the exploit documents dynamically in sandboxes to correlate and compare the exploited vulnerabilities and malware families targeting different groups. Finally, we use machine learning to infer the role of the uploaders of these documents to VirusTotal (i.e., attacker, targeted victim, or third-party), which enables their classification based only on their metadata, without any dynamic analysis. We make our datasets available to the academic community.

Stevens Le Blond (MPI-SWS)
Cedric Gilbert (MPI-SWS)
Utkarsh Upadhyay (MPI-SWS)
Manuel Gomez Rodriguez (MPI-SWS)
David Choffnes (Northeastern University)

Catching Worms, Trojan Horses and PUPs: Unsupervised Detection of Silent Delivery Campaigns

The growing commoditization of the underground economy has given rise to malware delivery networks, which charge fees for quickly delivering malware or unwanted software to a large number of hosts. A key method to provide this service is through the orchestration of silent delivery campaigns. These campaigns involve a group of downloaders that receive remote commands and then deliver their payloads without any user interaction. These campaigns can evade detection by relying on inconspicuous downloaders on the client side and on disposable domain names on the server side.

We describe Beewolf, a system for detecting silent delivery campaigns from Internet-wide records of download events. The key observation behind our system is that the downloaders involved in these campaigns frequently retrieve payloads in lockstep. Beewolf identifies such locksteps in an unsupervised and deterministic manner, and can operate on streaming data. We utilize Beewolf to study silent delivery campaigns at scale, on a data set of 33.3 million download events. This investigation yields novel findings, e.g. malware distributed through compromised software update channels, a substantial overlap between the delivery ecosystems for malware and unwanted software, and several types of business relationships within these ecosystems. Beewolf achieves over 92% true positives and fewer than 5% false positives. Moreover, Beewolf can detect suspicious downloaders a median of 165 days ahead of existing anti-virus products and payload-hosting domains a median of 196 days ahead of existing blacklists.
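As a rough illustration of the lockstep observation (a deliberate simplification of Beewolf's unsupervised algorithm), one can bucket download events into time windows and look for groups of downloaders that repeatedly fetch the same payloads together. The event tuples and thresholds below are invented:

```python
from collections import defaultdict

def find_locksteps(events, window, min_downloaders=2, min_payloads=2):
    """events: (timestamp, downloader_id, payload_hash) tuples.
    Returns downloader groups that fetched at least min_payloads
    distinct payloads within shared time windows -- a simplified
    stand-in for the lockstep mining described above."""
    # Bucket events by (time slot, payload) and collect which
    # downloaders retrieved that payload in that slot.
    buckets = defaultdict(set)
    for ts, downloader, payload in events:
        buckets[(int(ts // window), payload)].add(downloader)
    # A group is suspicious if the same set of downloaders shows
    # up together for several distinct payloads.
    group_payloads = defaultdict(set)
    for (slot, payload), downloaders in buckets.items():
        if len(downloaders) >= min_downloaders:
            group_payloads[frozenset(downloaders)].add(payload)
    return {g: p for g, p in group_payloads.items()
            if len(p) >= min_payloads}
```

The real system additionally handles streaming input and fuzzier notions of "same window", but the core signal, coordinated retrieval across otherwise unrelated hosts, is the one shown here.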

Bum Jun Kwon (University of Maryland, College Park)
Virinchi Srinivas (University of Maryland, College Park)
Amol Deshpande (University of Maryland, College Park)
Tudor Dumitras (University of Maryland, College Park)

Session 4A: TLS et al.

Session Chair:  Johanna Amann

Measuring small subgroup attacks against Diffie-Hellman

Several recent standards, including NIST SP 800-56A and RFC 5114, advocate the use of "DSA" parameters for Diffie-Hellman key exchange. While it is possible to use such parameters securely, additional validation checks are necessary to prevent well-known and potentially devastating attacks. In this paper, we observe that many Diffie-Hellman implementations do not properly validate key exchange inputs. Combined with other protocol properties and implementation choices, this can radically decrease security. We measure the prevalence of these parameter choices in the wild for HTTPS, POP3S, SMTP with STARTTLS, SSH, IKEv1, and IKEv2, finding millions of hosts using DSA and other non-"safe" primes for Diffie-Hellman key exchange, many of them in combination with potentially vulnerable behaviors. We examine over 20 open-source cryptographic libraries and applications and observe that until January 2016, not a single one validated subgroup orders by default. We found feasible full or partial key recovery vulnerabilities in OpenSSL, the Exim mail server, the Unbound DNS client, and Amazon's load balancer, as well as susceptibility to weaker attacks in many other applications.
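The missing validation the paper measures can be sketched as follows: for DSA-style parameters with a prime-order-q subgroup modulo p, a received public key y should be checked for range and subgroup membership before being used. A minimal sketch, not any specific library's API:

```python
def validate_dh_public_key(y: int, p: int, q: int) -> bool:
    """Subgroup validation for a Diffie-Hellman public key y with
    parameters (p, q), where the generator has prime order q mod p.
    Skipping this check is what enables small subgroup attacks."""
    # Reject trivial values outside the open interval (1, p - 1).
    if not 1 < y < p - 1:
        return False
    # Membership in the order-q subgroup: y^q mod p must equal 1.
    return pow(y, q, p) == 1
```

With "safe" primes (p = 2q + 1) the range check alone rules out all small subgroups, which is one reason the paper flags non-"safe" primes used without the full order check.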

Luke Valenta (University of Pennsylvania)
David Adrian (University of Michigan)
Antonio Sanso (Adobe)
Shaanan Cohney (University of Pennsylvania)
Joshua Fried (University of Pennsylvania)
Marcella Hastings (University of Pennsylvania)
J. Alex Halderman (University of Michigan)
Nadia Heninger (University of Pennsylvania)

Indiscreet Logs: Diffie-Hellman Backdoors in TLS

Software implementations of discrete logarithm based cryptosystems over finite fields typically make the assumption that any domain parameters they encounter define cyclic groups for which the discrete logarithm problem is assumed to be hard. In this paper we explore this trust assumption and examine situations where it may not be justified. In particular we focus on groups for which the order is unknown and not easily determined, and explore the scenario in which the modulus is trapdoored to make computing discrete logarithms efficient for an entity with knowledge of the trapdoor, while simultaneously leaving its very existence as a matter of speculation to everyone else.

We conducted an investigation of discrete logarithm domain parameters in use across the Internet and discovered a multitude of instances of groups of unknown order in use in TLS and STARTTLS spanning numerous countries, organizations, and implementations. Although our disclosures resulted in a number of organizations taking down their suspicious parameters, none were able or willing to rule out the possibility that their parameters were trapdoors, and obtaining conclusive evidence in each case could be as hard as factoring an RSA modulus, highlighting a key feature of this attack method: deniability.

Kristen Dorey (Western University)
Nicholas Chang-Fong (Western University)
Aleksander Essex (Western University)

WireGuard: Next Generation Kernel Network Tunnel

WireGuard is a secure network tunnel, operating at layer 3, implemented as a kernel virtual network interface for Linux, which aims to replace both IPsec for most use cases, as well as popular user space and/or TLS-based solutions like OpenVPN, while being more secure, more performant, and easier to use. The virtual tunnel interface is based on a proposed fundamental principle of secure tunnels: an association between a peer public key and a tunnel source IP address. It uses a single round trip key exchange, based on NoiseIK, and handles all session creation transparently to the user using a novel timer state machine mechanism. Short pre-shared static keys (Curve25519 points) are used for mutual authentication in the style of OpenSSH. The protocol provides strong perfect forward secrecy in addition to a high degree of identity hiding. Transport speed is accomplished using ChaCha20Poly1305 authenticated encryption for encapsulation of packets in UDP. An improved take on IP-binding cookies is used for mitigating denial of service attacks, improving greatly on IKEv2 and DTLS's cookie mechanisms to add encryption and authentication. The overall design allows for allocating no resources in response to received packets, and from a systems perspective, there are multiple interesting Linux implementation techniques for queues and parallelism. Finally, WireGuard can be simply implemented for Linux in less than 4,000 lines of code, making it easily audited and verified.

Jason A. Donenfeld

The Security Impact of HTTPS Interception

As HTTPS deployment grows, middlebox and antivirus products are increasingly intercepting TLS connections to retain visibility into network traffic. In this work, we present a comprehensive study on the prevalence and impact of HTTPS interception. First, we show that web servers can detect interception by identifying a mismatch between the HTTP User-Agent header and TLS client behavior. We characterize the TLS handshakes of major browsers and popular interception products, which we use to build a set of heuristics to detect interception and identify the responsible product. We deploy these heuristics at three large network providers: (1) Mozilla Firefox update servers, (2) a set of popular e-commerce sites, and (3) the Cloudflare content distribution network. We find more than an order of magnitude more interception than previously estimated and with dramatic impact on connection security. To understand why security suffers, we investigate popular middleboxes and client-side security software, finding that nearly all reduce connection security and many introduce severe vulnerabilities. Drawing on our measurements, we conclude with a discussion on recent proposals to safely monitor HTTPS and recommendations for the security community.
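The core heuristic, a mismatch between the browser named in the User-Agent header and the observed TLS Client Hello, can be sketched as below. The fingerprint strings are invented placeholders, not the study's actual handshake features (which cover cipher suites, extensions, and their ordering):

```python
# Hypothetical table mapping browser families to the TLS handshake
# fingerprint each is known to produce.
EXPECTED_FINGERPRINT = {
    "Firefox": "ff-tls-fp",
    "Chrome": "chrome-tls-fp",
}

def looks_intercepted(user_agent: str, tls_fingerprint: str) -> bool:
    """Flag a connection whose User-Agent claims a known browser
    but whose TLS handshake does not match that browser."""
    for browser, fingerprint in EXPECTED_FINGERPRINT.items():
        if browser in user_agent:
            # A known browser presenting a foreign handshake
            # suggests a middlebox terminated the TLS connection
            # and re-originated it with its own TLS stack.
            return tls_fingerprint != fingerprint
    return False  # unknown client: no baseline to compare against
```

The same mismatch, run in reverse against a catalog of interception products' handshakes, is what lets the study attribute intercepted connections to specific middleboxes.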

Zakir Durumeric (University of Michigan, ICSI)
Zane Ma (University of Illinois Urbana-Champaign)
Drew Springall (University of Michigan)
Richard Barnes (Mozilla)
Nick Sullivan (CloudFlare)
Elie Bursztein (Google)
Michael Bailey (University of Illinois Urbana-Champaign)
J. Alex Halderman (University of Michigan)
Vern Paxson (UC Berkeley, ICSI)

Session 4B: Secure Computation

Session Chair:  Patrick Traynor

Fast Actively Secure OT Extension for Short Secrets

Oblivious Transfer (OT) is one of the most fundamental cryptographic primitives, with widespread application in general secure multi-party computation (MPC) as well as in a number of tailored and special-purpose problems of interest such as private set intersection (PSI), private information retrieval (PIR), and contract signing, to name a few. Often, instantiations of OT require prohibitive communication and computation complexity. OT extension protocols were introduced to compute a very large number of OTs, referred to as extended OTs, at the cost of a small number of OTs, referred to as seed OTs.

We present a fast OT extension protocol for small secrets in the active setting. When used to produce 1-out-of-n OTs, our protocol outperforms all known actively secure OT extensions. Our protocol is built on the semi-honest secure extension protocol of Kolesnikov and Kumaresan from CRYPTO '13 (referred to as the KK13 protocol henceforth), which is the best known OT extension for short secrets. At the heart of our protocol lies an efficient consistency checking mechanism that relies on the linearity of Walsh-Hadamard (WH) codes. Asymptotically, our protocol adds a communication overhead of O(μ log κ) bits over the KK13 protocol irrespective of the number of extended OTs, where κ and μ refer to the computational and statistical security parameters respectively. Concretely, when used to generate a large enough number of OTs, our protocol adds only 0.011-0.028% communication overhead and 4-6% runtime overhead over the KK13 extension, both in LAN and WAN settings. The runtime overhead drops below 2% when, in addition, the number of inputs of the sender in the extended OTs is large enough.

As an application of our proposed extension protocol, we show that it can be used to obtain the most efficient PSI protocol secure against a malicious receiver and a semi-honest sender.
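The linearity property that the consistency check exploits can be demonstrated directly: the Walsh-Hadamard encoding of a XOR b equals the XOR of the encodings of a and b. A small sketch of the codes themselves (illustrative only, not the protocol's actual check):

```python
def wh_codeword(a: int, m: int) -> list:
    """Walsh-Hadamard encoding of an m-bit value a: the codeword
    lists the inner product <a, x> mod 2 for every m-bit x."""
    def parity(v: int) -> int:
        return bin(v).count("1") % 2
    return [parity(a & x) for x in range(2 ** m)]

def xor_words(c1: list, c2: list) -> list:
    """Bitwise XOR of two codewords of equal length."""
    return [b1 ^ b2 for b1, b2 in zip(c1, c2)]
```

Since parity((a ^ b) & x) = parity(a & x) XOR parity(b & x), the code is linear over GF(2), which is what lets a receiver's claimed codewords be cross-checked cheaply.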

Arpita Patra (Indian Institute of Science)
Pratik Sarkar (Indian Institute of Science)
Ajith Suresh (Indian Institute of Science)

Constant Round Maliciously Secure 2PC with Function-independent Preprocessing using LEGO

At TCC 2009, Orlandi and Nielsen proposed the LEGO protocol for maliciously secure 2PC based on cut-and-choose of Yao's garbled circuits at the gate level, and showed that this is asymptotically more efficient than cut-and-choose at the circuit level. Since then the LEGO approach has been improved upon in several theoretical works, but never implemented. In this paper we describe further concrete improvements and provide the first implementation of a protocol from the LEGO family. Our protocol has a constant number of rounds and is optimized for the offline/online setting with function-independent preprocessing. We have benchmarked our prototype and find that our protocol can compete with all existing implementations and that it is often more efficient. As an example, in a LAN setting we can evaluate an AES-128 circuit with online latency down to 1.13 ms, while if evaluating 128 AES-128 circuits in parallel the amortized cost is 0.09 ms per AES-128. This online performance does not come at the price of offline inefficiency, as we achieve performance comparable to previous, less general protocols, and significantly better if we ignore the cost of the function-independent preprocessing. Also, as our protocol has an optimal 2-round online phase, it is significantly more efficient than previous protocols when considering a high-latency network.

Jesper Buus Nielsen (Aarhus University)
Thomas Schneider (Technische Universität Darmstadt)
Roberto Trifiletti (Aarhus University)

Pushing the Communication Barrier in Secure Computation using Lookup Tables

Secure two-party computation (S2PC) allows two parties to compute a function on their joint inputs while leaking only the output of the function. S2PC has witnessed significant efficiency improvements in recent years. Current implementations of protocols with security against passive adversaries generate and process data much faster than it can be sent over the network, even with a single thread. This paper introduces novel methods to further reduce the communication bottleneck and round complexity of semi-honest secure two-party computation. Our new methodology creates a trade-off between communication and computation, and we show that the added computing cost for each party is still feasible and practicable in light of the new communication savings. We first improve communication for Boolean circuits with 2-input gates by a factor of 1.9x when evaluated with the protocol of Goldreich-Micali-Wigderson (GMW). As a further step, we change the conventional Boolean circuit representation from 2-input gates to multi-input/multi-output lookup tables (LUTs) which can be programmed to realize arbitrary functions. We construct two protocols for evaluating LUTs offering a trade-off between online communication and total communication. Our most efficient LUT-based protocol reduces the communication and round complexity by a factor of 2-4x for several basic and complex operations. Our proposed scheme results in a significant overall runtime decrease of up to a factor of 3x on several benchmark functions.

Ghada Dessouky (TU Darmstadt)
Farinaz Koushanfar (Univ. of California, San Diego)
Ahmad-Reza Sadeghi (TU Darmstadt)
Thomas Schneider (TU Darmstadt)
Shaza Zeitouni (TU Darmstadt)
Michael Zohner (TU Darmstadt)

Using Fully Homomorphic Encryption for Statistical Analysis of Categorical, Ordinal and Numerical Data

In recent years, there has been a growing trend towards outsourcing of computational tasks with the development of cloud services. Gentry's pioneering work on fully homomorphic encryption (FHE) and successive works have opened a new vista for secure and practical cloud computing. In this paper, we consider performing statistical analysis on encrypted data. To improve the efficiency of the computations, we take advantage of batched computation based on the Chinese Remainder Theorem. We propose two building blocks that work with FHE: a novel batch greater-than primitive, and a matrix primitive for encrypted matrices. With these building blocks, we construct secure procedures and protocols for different types of statistics, including the histogram (count) and contingency table (with cell suppression) for categorical data; k-percentile for ordinal data; and principal component analysis and linear regression for numerical data. To demonstrate the effectiveness of our methods, we ran experiments on five real datasets. For instance, we can compute a contingency table with more than 50 cells from 4,000 records in just 5 minutes, and we can train a linear regression model on more than 40k records with dimension as high as 6 within 15 minutes. We show that FHE is not as slow as commonly believed, and it becomes feasible to perform a broad range of statistical analyses on thousands of encrypted records.
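The CRT-based batching can be illustrated on plain integers; an FHE scheme would encrypt the packed value, and this sketch only shows the packing arithmetic that makes one ciphertext operation act on many slots at once:

```python
def crt_pack(values, moduli):
    """Pack one residue per modulus into a single integer via the
    Chinese Remainder Theorem. Arithmetic on the packed integer
    then acts componentwise on the packed values."""
    M = 1
    for m in moduli:
        M *= m
    packed = 0
    for v, m in zip(values, moduli):
        Mi = M // m
        # pow(Mi, -1, m) is the modular inverse (Python 3.8+).
        packed += v * Mi * pow(Mi, -1, m)
    return packed % M

def crt_unpack(packed, moduli):
    """Recover the per-slot residues from a packed integer."""
    return [packed % m for m in moduli]
```

Adding (or multiplying) two packed integers modulo the product of the moduli adds (or multiplies) every slot simultaneously, which is the source of the batched speedups the abstract describes.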

Wen-jie Lu (University of Tsukuba)
Shohei Kawasaki (University of Tsukuba)
Jun Sakuma (University of Tsukuba)

Session 5A: Mobile Privacy and Security

Session Chair:  Manuel Egele

Dark Hazard: Learning-based, Large-Scale Discovery of Hidden Sensitive Operations in Android Apps

Hidden sensitive operations (HSO), such as stealing private user data upon receiving an SMS message, are increasingly utilized by mobile malware and other potentially-harmful apps (PHAs) to evade detection. Identification of such behaviors is hard, due to the challenge of triggering them during an app's runtime. Current static approaches rely on trigger conditions or hidden behaviors known beforehand and therefore cannot capture previously unknown HSO activities. These techniques also tend to be computationally intensive and therefore less suitable for analyzing a large number of apps. As a result, our understanding of real-world HSO today is still limited, not to mention effective means to mitigate this threat.

In this paper, we present HSOMINER, an innovative machine-learning-based program analysis technique that enables a large-scale discovery of unknown HSO activities. Our approach leverages a set of program features that characterize an HSO branch and can be relatively easy to extract from an app. These features summarize a set of unique observations about an HSO condition, its paths, and the relations between them, and are designed to be general for finding hidden suspicious behaviors. Particularly, we found that a trigger condition is less likely to relate to the path of its branch through data flows or shared resources, compared with a legitimate branch. Also, the behaviors exhibited by the two paths of an HSO branch tend to be conspicuously different (innocent on one side and sinister on the other). Most importantly, even though these individual features are not sufficiently accurate for capturing HSO on their own, collectively they are shown to be highly effective in identifying such behaviors. This differentiating power is harnessed by HSOMINER to classify Android apps, achieving high precision (98%) and coverage (94%) while remaining efficient, as shown in our experiments. The new tool was further used in a measurement study involving 338,354 real-world apps, the largest ever conducted on suspicious hidden operations. Our research brought to light the pervasiveness of HSO activities, which are present in 18.7% of the apps we analyzed, surprising trigger conditions (e.g., a click on a certain region of a view), and behaviors (e.g., hiding operations in a dynamically generated receiver), which help better understand the problem and contribute to more effective defenses against this new threat to the mobile platform.

Xiaorui Pan (Indiana University Bloomington)
Xueqiang Wang (Indiana University Bloomington)
Yue Duan (University of California, Riverside)
XiaoFeng Wang (Indiana University Bloomington)
Heng Yin (University of California, Riverside)

Show Me the Money! Finding Flawed Implementations of Third-party In-app Payment in Android Apps

The massive growth of transaction via third-party cashier has attracted numerous mobile apps to embed in-app payment functionality. Although this feature makes the payment easy within apps, transactions via current third-party in-app payment involve more sophisticated interactions between multiple participants compared to those using traditional payments. The implementations in mobile apps also lack security considerations. Therefore, such transaction exposes new attack vectors and could be exploited more easily, leading to serious deceptions such as payment forging.

To investigate the current third-party mobile payment ecosystem and find potential security threats, we conduct an in-depth analysis of the world's largest mobile payment market: China's. We study four mainstream third-party mobile payment cashiers and derive unified security rules that must be followed by both cashier and merchant. We also illustrate the serious consequences of violating these security rules, which may enable up to four types of attacks against online and offline transactions. In addition, we detect violations of the seven security rules in the payment implementations of Android apps. Our detection results show not only the prevalence of third-party in-app payment, but also the awful status quo of its security. Over 37% of Android apps with at least 100,000 users embed third-party payment functionality. Hundreds of them violate security rule(s) and face various potential security risks, allowing an attacker to consume almost any commodity or service without actually paying for it, or to deceive others into paying for it. Our further investigation reveals that the cashiers not only ship improperly designed SDKs, which may amplify the attack effects, but also release ambiguous documentation and even vulnerable sample code, directly leading to mistakes by merchants. Beyond the cashiers' inattention to security, our successful exploits against several apps show that these flawed implementations can cause financial loss in the real world. We have reported these findings to all the related parties and received positive feedback.

Wenbo Yang (Shanghai Jiao Tong University)
Yuanyuan Zhang (Shanghai Jiao Tong University)
Juanru Li (Shanghai Jiao Tong University)
Hui Liu (Shanghai Jiao Tong University)
Qing Wang (Shanghai Jiao Tong University)
Yueheng Zhang (Shanghai Jiao Tong University)
Dawu Gu (Shanghai Jiao Tong University)

WindowGuard: Systematic Protection of GUI Security in Android

The Android graphical user interface (GUI) system plays an important role in rendering app GUIs on the display and interacting with users. However, the security of this critical subsystem remains under-investigated. In fact, Android GUIs have been plagued by a variety of GUI attacks in recent years. A GUI attack is any harmful behavior that attempts to adversely affect the integrity or availability of GUIs belonging to other apps. These attacks are real threats and can cause severe consequences, such as sensitive user information leakage and user device denial of service. Given the seriousness and rapid growth of GUI attacks, there is a pressing need for a comprehensive defense solution. Nevertheless, existing defense methods fall short in coverage, effectiveness, and practicality.

To overcome these challenges, we systematically scrutinize the security implications of Android GUI system design and propose a new security model, Android Window Integrity (AWI), to comprehensively protect the system against GUI attacks. The AWI model defines the user session to be protected and the legitimacy of GUI system states in the unique mobile GUI environment. By doing so, it can protect a normal user session against arbitrary manipulation by attackers, and still preserve the original user experience. Our implementation, WindowGuard, enforces the AWI model and responds to a suspicious behavior by briefing the user about a security event and asking for the final decision from the user. This design not only improves the detection accuracy, but also makes WindowGuard more usable and practical to meet diverse user needs. WindowGuard is implemented as an Xposed module, making it practical to be quickly deployed on a large number of user devices. Our evaluation shows that WindowGuard can successfully detect all known GUI attacks, while yielding small impacts on user experience and system performance.

Chuangang Ren (The Pennsylvania State University)
Peng Liu (The Pennsylvania State University)
Sencun Zhu (The Pennsylvania State University)

Obfuscation-Resilient Privacy Leak Detection for Mobile Apps Through Differential Analysis

Mobile apps are notorious for collecting a wealth of private information from users. Despite significant effort from the research community in developing privacy leak detection tools based on data flow tracking inside the app or through network traffic analysis, it is still unclear whether apps and ad libraries can hide the fact that they are leaking private information. In fact, all existing analysis tools have limitations: data flow tracking suffers from imprecisions that cause false positives, as well as false negatives when the data flow from a source of private information to a network sink is interrupted; on the other hand, network traffic analysis cannot handle encryption or custom encoding.

We propose a new approach to privacy leak detection that is not affected by such limitations, and it is also resilient to obfuscation techniques, such as encoding, formatting, encryption, or any other kind of transformation performed on private information before it is leaked. Our work is based on blackbox differential analysis, and it works in two steps: first, it establishes a baseline of the network behavior of an app; then, it modifies sources of private information, such as the device ID and location, and detects leaks by observing deviations in the resulting network traffic. The basic concept of black-box differential analysis is not novel, but, unfortunately, it is not practical enough to precisely analyze modern mobile apps. In fact, their network traffic contains many sources of non-determinism, such as random identifiers, timestamps, and server-assigned session identifiers, which, when not handled properly, cause too much noise to correlate output changes with input changes.

The main contribution of this work is to make black-box differential analysis practical when applied to modern Android apps. In particular, we show that the network-based non-determinism can often be explained and eliminated, and it is thus possible to reliably use variations in the network traffic as a strong signal to detect privacy leaks. We implemented this approach in a tool, called AGRIGENTO, and we evaluated it on more than one thousand Android apps. Our evaluation shows that our approach works well in practice and outperforms current state-of-the-art techniques. We conclude our study by discussing several case studies that show how popular apps and ad libraries currently exfiltrate data by using complex combinations of encoding and encryption mechanisms that other approaches fail to detect. Our results show that these apps and libraries seem to deliberately hide their data leaks from current approaches and clearly demonstrate the need for an obfuscation-resilient approach such as ours.
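The core of the differential approach can be sketched in a few lines. The snippet below is a toy illustration, not AGRIGENTO itself; the request strings and the normalization patterns for non-determinism (timestamps, nonces) are hypothetical stand-ins for the contextual analysis the paper performs.

```python
import re

# Hypothetical normalization step: strip known sources of non-determinism
# (timestamps, random nonces) before comparing runs.
def normalize(request):
    request = re.sub(r"ts=\d+", "ts=*", request)
    return re.sub(r"nonce=\w+", "nonce=*", request)

def detect_leaks(baseline_runs, modified_run):
    """Requests that were stable across all baseline runs but changed
    after a private input (e.g. the device ID) was modified."""
    stable = set.intersection(*({normalize(r) for r in run}
                                for run in baseline_runs))
    return sorted(stable - {normalize(r) for r in modified_run})

baseline = [["GET /ad?ts=111&id=a9f3", "GET /cfg?nonce=xyz"],
            ["GET /ad?ts=222&id=a9f3", "GET /cfg?nonce=abc"]]
modified = ["GET /ad?ts=333&id=ffff", "GET /cfg?nonce=qqq"]  # device ID changed
print(detect_leaks(baseline, modified))  # the /ad request tracked the device ID
```

Only requests whose deviation correlates with the modified private input are flagged, which is why eliminating the benign non-determinism first is essential.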

Andrea Continella (Politecnico di Milano)
Yanick Fratantonio (UC Santa Barbara)
Martina Lindorfer (UC Santa Barbara)
Alessandro Puccetti (UC Santa Barbara)
Ali Zand (UC Santa Barbara)
Christopher Kruegel (UC Santa Barbara)
Giovanni Vigna (UC Santa Barbara)

Automated Analysis of Privacy Requirements for Mobile Apps

Mobile apps have to satisfy various privacy requirements. Notably, app publishers are often obligated to provide a privacy policy and notify users of their apps' privacy practices. But how can a user tell whether an app behaves as its policy promises? In this study we introduce a scalable system to help analyze and predict Android apps' compliance with privacy requirements. We discuss how we customized our system in a collaboration with the California Office of the Attorney General. Beyond its use by regulators and activists, our system is also meant to assist app publishers and app store owners in their internal assessments of privacy requirement compliance.

Our analysis of 17,991 free Android apps shows the viability of combining machine learning-based privacy policy analysis with static code analysis of apps. Results suggest that 71% of apps that lack a privacy policy should have one. Also, for 9,050 apps that have a policy, we find many instances of potential inconsistencies between what the app policy seems to state and what the code of the app appears to do. In particular, as many as 41% of these apps could be collecting location information and 17% could be sharing it with third parties without disclosing this in their policies. Overall, each app exhibits a mean of 1.83 potential privacy requirement inconsistencies.
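The final consistency check reduces to a set comparison. A minimal sketch follows; the (data type, recipient) encoding and the example practices are hypothetical, whereas the paper's system derives the two sets from machine-learning policy analysis and static code analysis respectively.

```python
# Practices as (data_type, recipient) pairs; both sets are assumed inputs
# produced by policy analysis and static code analysis, respectively.
policy_practices = {("location", "first_party")}
code_practices = {("location", "first_party"),
                  ("location", "third_party"),   # sent to an ad library
                  ("device_id", "third_party")}  # collected, never disclosed

def potential_inconsistencies(policy, code):
    """Practices observed in the code but not disclosed in the policy."""
    return sorted(code - policy)

print(potential_inconsistencies(policy_practices, code_practices))
# -> [('device_id', 'third_party'), ('location', 'third_party')]
```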

Sebastian Zimmeck (Carnegie Mellon University)
Ziqi Wang (Carnegie Mellon University)
Lieyong Zou (Carnegie Mellon University)
Roger Iyengar (Washington Univ. in St. Louis)
Bin Liu (Carnegie Mellon University)
Florian Schaub (University of Michigan)
Shomir Wilson (University of Cincinnati)
Norman Sadeh (Carnegie Mellon University)
Steven M. Bellovin (Columbia University)
Joel Reidenberg (Fordham University)

Session 5B: Software and System Security (Part I)

Session Chair:  Ethan Heilman

Dachshund: Digging for and Securing (Non-)Blinded Constants in JIT Code

Modern browsers such as Chrome and Edge deploy constant blinding to remove attacker-controlled constants from the JIT-compiled code. Without such a defense, attackers can encode arbitrary shellcode in constants that get compiled to executable code. In this paper, we review the security and completeness of current constant blinding implementations. We develop DACHSHUND, a fuzzing-driven framework to find user-specified constants in JIT-compiled code. DACHSHUND reveals several cases in which JIT compilers of modern browsers fail to blind constants, ranging from constants passed as function parameters to blinded constants that second-stage code optimizers revert to a non-protected form. To tackle this problem, we then propose a JavaScript rewriting mechanism that removes all constants from JavaScript code. We prototype this cross-browser methodology as part of a Web proxy and show that it can successfully remove all constants from JavaScript code.
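Constant blinding itself is a simple transformation; the sketch below (values illustrative, modeled in Python rather than in a JIT) shows the invariant DACHSHUND checks for: the attacker-chosen literal never appears verbatim, yet the semantics are preserved.

```python
import secrets

def blind(value, bits=32):
    """Split an attacker-controlled constant into (key, key ^ value) so the
    emitted code never embeds the literal; runtime recomputes key ^ blinded."""
    key = secrets.randbits(bits)
    return key, key ^ value

CONST = 0x41414141                 # attacker-chosen immediate
key, blinded = blind(CONST)
assert key ^ blinded == CONST      # semantics preserved at runtime
```

A JIT that skips this step for, say, function parameters leaves `0x41414141` in executable memory, which is exactly the class of gap the fuzzer hunts for.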

Giorgi Maisuradze (CISPA, Saarland University)
Michael Backes (CISPA, Saarland Univ. & MPI-SWS)
Christian Rossow (CISPA, Saarland University)

SafeInit: Comprehensive and Practical Mitigation of Uninitialized Read Vulnerabilities

Usage of uninitialized values remains a common error in C/C++ code. This results not only in undefined and generally undesired behavior, but is also a cause of information disclosure and other security vulnerabilities. Existing solutions for mitigating such errors are not used in practice as they are either limited in scope (for example, only protecting the heap), or incur high runtime overhead.

In this paper, we propose SafeInit, a practical protection system which hardens applications against such undefined behavior by guaranteeing initialization of all values on the heap and stack, every time they are allocated or come into scope. Doing so provides comprehensive protection against this class of vulnerabilities in generic programs, including both information disclosure and re-use/logic vulnerabilities.

We show that, with carefully designed compiler optimizations, our implementation achieves sufficiently low overhead (5% for typical server applications and SPEC CPU2006) to serve as a standard hardening protection in practical settings. Moreover, we show that we can effortlessly apply it to harden non-standard code, such as the Linux kernel, with low runtime overhead.

Alyssa Milburn (Vrije Universiteit Amsterdam)
Herbert Bos (Vrije Universiteit Amsterdam)
Cristiano Giuffrida (Vrije Universiteit Amsterdam)

MARX: Uncovering Class Hierarchies in C++ Programs

Reverse engineering of binary executables is a difficult task, made more involved by the way compilers translate high-level concepts used in paradigms such as object-oriented programming into native code, as is the case for C++. Such code is harder to grasp than, e.g., traditional procedural code, since it is generally more verbose and adds complexity through features such as polymorphism or inheritance. Hence, a deep understanding of the interactions between instantiated objects, their corresponding classes, and the connections between classes would vastly reduce the time it takes an analyst to understand the application. The growth in complexity of contemporary C++ applications only amplifies the effect.

In this paper, we introduce Marx, an analysis framework to reconstruct class hierarchies of C++ programs and resolve virtual callsites. We have evaluated the results on a diverse set of large, real-world applications. Our experimental results show that our approach achieves a high precision (93.2% of the hierarchies reconstructed accurately for Node.js, 88.4% for MySQL Server) while keeping analysis times practical. Furthermore, we show that, despite any imprecision in the analysis, the derived information can be reliably used in classic software security hardening applications without breaking programs. We showcase this property for two applications built on top of the output of our framework: vtable protection and type-safe object reuse. This demonstrates that, in addition to traditional reverse engineering applications, Marx can aid in implementing concrete, valuable tools, e.g., in the domain of exploit mitigations.
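The intuition behind hierarchy reconstruction can be modeled on symbolic vtables: a derived class's vtable preserves the base's slot positions, overriding some entries and appending new ones. The toy sketch below is only an illustration of that layout invariant; Marx operates on raw binary vtables and virtual callsites, and the class and slot names here are hypothetical.

```python
# Toy model: a derived class's vtable keeps the base's slot positions,
# overriding some entries and possibly appending new ones.
VTABLES = {
    "A": ("A::f", "A::g"),
    "B": ("B::f", "A::g", "B::h"),   # overrides f, inherits g, adds h
    "C": ("C::q",),                  # unrelated class
}

def likely_base(derived, vtables):
    """Guess a base class: some candidate vtable fits as a prefix of the
    derived vtable and shares at least one non-overridden slot."""
    dv = vtables[derived]
    for name, bv in vtables.items():
        if name != derived and len(bv) <= len(dv) and \
                any(slot in dv[:len(bv)] for slot in bv):
            return name
    return None

print(likely_base("B", VTABLES))  # A: shares the inherited A::g slot
print(likely_base("C", VTABLES))  # None: no hierarchy evidence
```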

Andre Pawlowski (Ruhr University Bochum)
Moritz Contag (Ruhr University Bochum)
Victor van der Veen (Vrije Universiteit Amsterdam)
Chris Ouwehand (Vrije Universiteit Amsterdam)
Thorsten Holz (Ruhr University Bochum)
Herbert Bos (Vrije Universiteit Amsterdam)
Elias Athanasopoulos (Vrije Universiteit Amsterdam)
Cristiano Giuffrida (Vrije Universiteit Amsterdam)

PT-Rand: Practical Mitigation of Data-only Attacks against Page Tables

Kernel exploits constitute a powerful attack class allowing attackers to gain full control over a system. Various kernel hardening solutions have been proposed or deployed in practice to protect the kernel against code injection (e.g., DEP) or code-reuse exploits (e.g., CFI). However, the security of all these hardening techniques relies heavily on the assumption that kernel page tables cannot be manipulated, e.g., by means of data-only attacks. Ensuring kernel page table integrity is not only essential for kernel security but also a challenging task in practice, since existing solutions require hardware trust anchors, costly hypervisors, or inefficient integrity checks.

In this paper, we first motivate the importance of protecting kernel page tables by presenting a data-only attack against page tables to bypass the recently released CFI-based (Linux) kernel hardening technique RAP. Thereafter, we present the design and implementation of PT-Rand, the first practical solution to protect kernel page tables that does not suffer from the mentioned deficiencies of previous proposals. PT-Rand randomizes the location of page tables and tackles a number of challenges to ensure that the location of page tables is not leaked. This effectively prevents the attacker from manipulating access permissions of code pages, thereby enabling secure enforcement of kernel exploit mitigation technologies such as CFI. We extensively evaluate our prototype implementation of PT-Rand for the current Linux kernel on the popular Linux distribution Debian and report a low overhead of 0.22% for common benchmarks. Moreover, we combine RAP with PT-Rand to protect RAP against data-only attacks on kernel page tables.

Lucas Davi (Technische Univ. Darmstadt)
David Gens (Technische Univ. Darmstadt)
Christopher Liebchen (Technische Univ. Darmstadt)
Ahmad-Reza Sadeghi (Technische Univ. Darmstadt)

Dynamic Virtual Address Range Adjustment for Intra-Level Privilege Separation on ARM

Privilege separation has long been considered a fundamental principle in software design to mitigate the potential damage of a security attack. Much effort has gone into developing various privilege separation schemes in which a monolithic OS or hypervisor is divided into two privilege domains, one logically more privileged than the other even if both run at an identical processor privilege level. We say that privilege separation is intra-level if it is implemented for software of a certain privilege level without any involvement or assistance of more privileged software. In general, realizing intra-level privilege separation requires developers to rely on certain security features of the underlying hardware. So far, such development efforts have focused much less on ARM architectures than on the Intel x86 family, mainly because the architectural provision of ARM security features was relatively insufficient. Unlike on x86, as a result, there exists no full intra-level scheme that can be universally applied to any privilege level on ARM. However, as malware and attacks increase against virtually every level of privileged software, including the OS, the hypervisor, and even the highest-privileged software armored by TrustZone, we have been motivated to develop a technique, named Hilps, that realizes true intra-level privilege separation at all these levels of privileged software on ARM. Pivotal to the success of Hilps is the support of a new hardware feature of ARM's latest 64-bit architecture, called TxSZ, which we manipulate to elastically adjust the accessible virtual address range of a program. In our experiments, we have applied Hilps to retrofit the core software mechanisms for privilege separation into existing system software and evaluated the performance of the resulting system. According to the experimental results, the system incurs on average just less than 1% overhead; hence, we conclude that Hilps is quite promising for practical use in real deployments.
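The TxSZ arithmetic is simple: on ARMv8, a translation region spans 2^(64 − TxSZ) bytes, so raising TxSZ shrinks the accessible virtual address window. A quick sketch (the specific TxSZ values below are illustrative, not Hilps's actual configuration):

```python
def va_span_bytes(txsz):
    """ARMv8 translation regions cover 2**(64 - TxSZ) bytes of virtual
    address space; a larger TxSZ means a smaller accessible window."""
    return 1 << (64 - txsz)

outer = va_span_bytes(16)   # 256 TB: a common full-range setting
inner = va_span_bytes(25)   # hypothetical shrunk range for the inner domain
print(outer // inner)       # the inner domain sees 1/512 of the outer range
```

Toggling TxSZ on domain entry and exit is what lets one privilege level hide part of its own address space from itself.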

Yeongpil Cho (Seoul National University)
Donghyun Kwon (Seoul National University)
Hayoon Yi (Seoul National University)
Yunheung Paek (Seoul National University)

Session 6A: Cloud and Potpourri

Session Chair:  Adam Bates

Hello from the Other Side: SSH over Robust Cache Covert Channels in the Cloud

Covert channels evade isolation mechanisms between multiple parties in the cloud. Cache covert channels in particular allow the transmission of several hundred kilobits per second between unprivileged user programs in separate virtual machines. However, caches are small and shared, and thus cache-based communication is susceptible to noise from any system activity and interrupts. The feasibility of a reliable cache covert channel under a severe noise scenario has not been demonstrated yet. Instead, previous work relies on one of two contradictory assumptions: that error-correcting codes are directly applicable, or that noise effectively prevents covert channels.

In this paper, we show that both assumptions are wrong. First, error-correcting codes cannot be applied directly, due to the noise characteristics. Second, even with extraordinarily high system activity, we demonstrate an error-free and high-throughput covert channel. We provide the first comprehensive characterization of noise on cache covert channels due to cache activity and interrupts. We build the first robust covert channel based on established techniques from wireless transmission protocols, adapted for our use in microarchitectural attacks. Our error-correcting and error-handling high-throughput covert channel can sustain transmission rates of more than 45 KBps on Amazon EC2, which is 3 orders of magnitude higher than previous covert channels demonstrated on Amazon EC2. Our robust and error-free channel even allows us to build an SSH connection between two virtual machines, where all existing covert channels fail.
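The simplest member of the error-correction family the paper draws on is a repetition code with majority voting, sketched below. This is only an illustration of the principle; the actual channel uses more sophisticated wireless-protocol-style coding and error handling tuned to cache noise characteristics.

```python
def encode(bits, r=3):
    """Repetition code: transmit each bit r times."""
    return [b for bit in bits for b in [bit] * r]

def decode(symbols, r=3):
    """Majority vote per r-symbol group corrects isolated flips."""
    return [1 if 2 * sum(symbols[i:i + r]) > r else 0
            for i in range(0, len(symbols), r)]

message = [1, 0, 1, 1]
received = encode(message)
received[1] ^= 1          # one symbol flipped by cache noise or an interrupt
assert decode(received) == message   # the flip is corrected
```

A plain repetition code fails under the bursty noise the paper characterizes, which is precisely why a direct application of error-correcting codes is insufficient.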

Clémentine Maurice (Graz University of Technology)
Manuel Weber (Graz University of Technology)
Michael Schwarz (Graz University of Technology)
Lukas Giner (Graz University of Technology)
Daniel Gruss (Graz Univ. of Technology, Microsoft Research)
Carlo Alberto Boano (Graz University of Technology)
Stefan Mangard (Graz University of Technology)
Kay Römer (Graz University of Technology)

Dynamic Differential Location Privacy with Personalized Error Bounds

Location privacy continues to attract significant attention, fueled by the rapid growth of location-based services (LBSs) and smart mobile devices. Location obfuscation has been the dominant location privacy preserving approach: it transforms the exact location of a mobile user into a perturbed location before its public release. The notion of location privacy has evolved from user-defined location k-anonymity to two statistical quantification based privacy notions: geo-indistinguishability and expected inference error. The former promotes differential location privacy but does not protect the location against inference attacks by a Bayesian adversary using prior information, whereas the latter promotes background-inference-resilient location privacy but does not guarantee differential location privacy in the sense of geo-indistinguishability. In this paper we argue that geo-indistinguishability and expected inference error are two complementary notions for location privacy. We formally study the relationship between the two privacy notions. By leveraging this relationship and a personalized error bound, we can effectively combine the two notions. We develop PIVE, a two-phase dynamic differential location privacy framework. In Phase I, we take into account the user-defined inference error threshold and the prior knowledge about the user's location to determine a subset of locations, the protection location set, which protects the actual location by increasing the adversary's expected location inference error. In Phase II, we generate pseudo-locations (i.e., perturbed locations) in a way that achieves differential privacy over the protection location set. This two-phase location obfuscation is constructed dynamically by leveraging the relationship between the two privacy notions based on the adversary's current prior information and user-specific privacy requirements for different locations at different times.
Experiments with real-world datasets demonstrate that our PIVE approach effectively guarantees the two privacy notions simultaneously and outperforms existing mechanisms in terms of adaptive privacy protection in the presence of skewed locations and computational efficiency.
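As a simplified stand-in for the Phase II perturbation step: PIVE achieves differential privacy over a protection location set via a planar-Laplace-style mechanism, but the flavor can be shown with per-coordinate Laplace noise. The snippet below is only that simplified illustration, not PIVE's mechanism; the coordinates and epsilon are arbitrary.

```python
import math
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) via the inverse-CDF method."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def perturb(lat, lon, epsilon):
    """Release a noisy location; the noise scale grows as epsilon shrinks,
    i.e. stronger privacy means a coarser released location."""
    scale = 1.0 / epsilon
    return lat + laplace_noise(scale), lon + laplace_noise(scale)

random.seed(1)
print(perturb(37.7749, -122.4194, epsilon=2.0))  # perturbed San Francisco
```

PIVE's contribution sits on top of this primitive: choosing the set of locations over which the noise is calibrated so that the adversary's expected inference error also stays above the user's personalized bound.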

Lei Yu (Georgia Institute of Technology)
Ling Liu (Georgia Institute of Technology)
Calton Pu (Georgia Institute of Technology)

Are We There Yet? On RPKI’s Deployment and Security

The Resource Public Key Infrastructure (RPKI) binds IP address blocks to owners' public keys. RPKI enables routers to perform Route Origin Validation (ROV), thus preventing devastating attacks such as IP prefix hijacking. Yet, despite extensive effort, RPKI's deployment is frustratingly sluggish, leaving the Internet largely insecure. We tackle fundamental questions regarding today's RPKI deployment and security: What is the adoption status of RPKI and ROV? What are the implications for global security of partial adoption? What are the root causes of slow adoption? How can deployment be pushed forward? We address these questions through a combination of empirical analyses, a survey of over 100 network practitioners, and extensive simulations. Our main contributions include the following. We present the first study measuring ROV enforcement, revealing disappointingly low adoption at the core of the Internet. We show, in contrast, that without almost ubiquitous ROV adoption by large ISPs, significant security benefits cannot be attained. We next expose a critical security vulnerability: about a third of RPKI authorizations issued for IP prefixes do not protect the prefix from hijacking attacks. We examine potential reasons for the scarce adoption of RPKI and ROV, including human error in issuing RPKI certificates and inter-organization dependencies, and present recommendations for addressing these challenges.
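The ROV decision procedure itself is compact. Below is a simplified sketch of the RFC 6811 logic (the ROA contents are hypothetical examples): a route is "valid" if a covering ROA authorizes both the origin AS and the prefix length, "invalid" if covered but unauthorized, and "unknown" if no ROA covers it at all.

```python
import ipaddress

def rov(prefix, origin_asn, roas):
    """Route Origin Validation (RFC 6811, simplified).
    roas: iterable of (roa_prefix, max_length, asn) tuples."""
    net = ipaddress.ip_network(prefix)
    covered = False
    for roa_prefix, max_len, roa_asn in roas:
        roa_net = ipaddress.ip_network(roa_prefix)
        if roa_net.version == net.version and net.subnet_of(roa_net):
            covered = True
            if origin_asn == roa_asn and net.prefixlen <= max_len:
                return "valid"
    return "invalid" if covered else "unknown"

ROAS = [("10.0.0.0/8", 16, 64500)]               # hypothetical ROA
print(rov("10.1.0.0/16", 64500, ROAS))           # valid
print(rov("10.1.0.0/24", 64500, ROAS))           # invalid: exceeds maxLength
print(rov("192.0.2.0/24", 64500, ROAS))          # unknown: no covering ROA
```

Note that "unknown" routes are typically still accepted, which is one reason authorizations that fail to cover an owner's announced sub-prefixes leave those prefixes hijackable.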

Yossi Gilad (Boston University and MIT)
Avichai Cohen (Hebrew University)
Amir Herzberg (Bar Ilan University)
Michael Schapira (Hebrew University)
Haya Shulman (Fraunhofer SIT)

TenantGuard: Scalable Runtime Verification of Cloud-Wide VM-Level Network Isolation

Multi-tenancy in the cloud usually leads to security concerns over network isolation around each cloud tenant's virtual resources. However, verifying network isolation in cloud virtual networks poses several unique challenges. The sheer size of virtual networks implies a prohibitive complexity, whereas the constant changes in virtual resources demand a short response time. To make things worse, such networks typically allow fine-grained (e.g., VM-level) and distributed (e.g., security groups) network access control. Those challenges can either invalidate existing approaches or cause an unacceptable delay which prevents runtime applications. In this paper, we present TenantGuard, a scalable system for verifying cloud-wide, VM-level network isolation at runtime. We take advantage of the hierarchical nature of virtual networks, efficient data structures, incremental verification, and parallel computation to reduce the performance overhead of security verification. We implement our approach based on OpenStack and evaluate its performance both in-house and on Amazon EC2, which confirms its scalability and efficiency (13 seconds for verifying 168 million VM pairs). We further integrate TenantGuard with Congress, an OpenStack policy service, to verify compliance with respect to isolation requirements based on tenant-specific high-level security policies.
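At its core, the verification reduces to reachability queries over the virtual topology. The toy sketch below (hypothetical topology, naive per-source search) shows the baseline computation that TenantGuard's hierarchical, incremental, and parallel techniques make scale to hundreds of millions of VM pairs.

```python
# Toy virtual-network model: forwarding/security-group rules as an
# adjacency list; isolation verification = asserting that no path
# crosses a tenant boundary.
def reachable(rules, src):
    """Set of nodes reachable from src (iterative depth-first search)."""
    seen, stack = {src}, [src]
    while stack:
        for nxt in rules.get(stack.pop(), ()):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

RULES = {                      # hypothetical topology
    "tenantA-vm1": ["routerA"],
    "routerA": ["tenantA-vm2"],
    "tenantB-vm1": ["routerB"],
}
assert "tenantA-vm2" in reachable(RULES, "tenantA-vm1")      # intra-tenant path OK
assert "tenantB-vm1" not in reachable(RULES, "tenantA-vm1")  # isolation holds
```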

Yushun Wang (CIISE Concordia Univ., Montreal, QC, Canada)
Taous Madi (CIISE Concordia Univ., Montreal, QC, Canada)
Suryadipta Majumdar (CIISE Concordia Univ., Montreal, QC, Canada)
Yosr Jarraya (Ericsson Security Research, Ericsson Canada)
Amir Alimohammadifar (CIISE Concordia Univ., Montreal, QC, Canada)
Makan Pourzandi (Ericsson Security Research, Ericsson Canada)
Lingyu Wang (CIISE Concordia Univ., Montreal, QC, Canada)
Mourad Debbabi (CIISE Concordia Univ., Montreal, QC, Canada)

Session 6B: Tor

Session Chair:  Prateek Mittal

Dissecting Tor Bridges: A Security Evaluation of their Private and Public Infrastructures

Bridges are onion routers in the Tor Network whose IP addresses are not public. So far, no global security analysis of Tor bridges has been performed. Leveraging public data sources and two known Tor issues, we perform the first systematic study on the security of the Tor bridges infrastructure. Our study covers both the public infrastructure available to all Tor users, and the previously unreported private infrastructure, comprising private nodes for the exclusive use of those who know their existence.

Our analysis of the public infrastructure is twofold. First, we examine the security implications of the public data in the CollecTor service, identifying several pieces of data that may be detrimental for the security of bridges. Then, we measure security-relevant properties of public bridges. Our results show that 55% of the public bridges that carry clients are vulnerable to aggressive blocking; that 90% of bridge clients use default bridges that are trivial to identify; that the concurrent deployment of Pluggable Transports in bridges reduces the security of the most secure transports; and that running non-Tor services in the same host as a bridge may harm its anonymity.

To study the private infrastructure, we use an approach to discover 694 private bridges on the Internet and a novel technique to track bridges across IP changes. We are the first to measure the size of the private bridge population (35% of discovered bridges are private) and to report the existence of infrastructures that use private proxies to forward traffic to backend bridges or relays. We use a novel clustering approach to analyze the different infrastructures using proxies and bridges, examining their hosting and security properties. We provide an extensive discussion of the security implications of our findings.

Srdjan Matic (IMDEA Software Institute)
Carmela Troncoso (IMDEA Software Institute)
Juan Caballero (IMDEA Software Institute)

The Effect of DNS on Tor’s Anonymity

Previous attacks that link the sender and receiver of traffic in the Tor network ("correlation attacks") have generally relied on analyzing traffic from TCP connections. The TCP connections of a typical client application, however, are often accompanied by DNS requests and responses. This additional traffic presents more opportunities for correlation attacks. This paper quantifies how DNS traffic can make Tor users more vulnerable to correlation attacks. We investigate how incorporating DNS traffic can make existing correlation attacks more powerful and how DNS lookups can leak information to third parties about anonymous communication. We (i) develop a method to identify the DNS resolvers of Tor exit relays; (ii) develop a new set of correlation attacks (DefecTor attacks) that incorporate DNS traffic to improve precision; (iii) analyze the Internet-scale effects of these new attacks on Tor users; and (iv) develop improved methods to evaluate correlation attacks. First, we find that there exist adversaries that can mount DefecTor attacks: for example, Google's DNS resolver observes almost 40% of all DNS requests exiting the Tor network. We also find that DNS requests often traverse ASes that the corresponding TCP connections do not transit, enabling additional ASes to gain information about Tor users' traffic. We then show that an adversary that can mount a DefecTor attack can often determine the website that a Tor user is visiting with perfect precision, particularly for less popular websites where the set of DNS names associated with that website may be unique to the site. We also use the Tor Path Simulator (TorPS) in combination with traceroute data from vantage points co-located with Tor exit relays to estimate the power of AS-level adversaries that might mount DefecTor attacks in practice.

Benjamin Greschbach (KTH Royal Institute of Tech.)
Tobias Pulls (Karlstad University)
Laura M. Roberts (Princeton University)
Philipp Winter (Princeton University)
Nick Feamster (Princeton University)

Avoiding The Man on the Wire: Improving Tor’s Security with Trust-Aware Path Selection

Tor users are vulnerable to deanonymization by an adversary that can observe some Tor relays or some parts of the network. We demonstrate that previous network-aware path-selection algorithms that propose to solve this problem are vulnerable to attacks across multiple Tor connections. We suggest that users use trust to choose the paths through Tor that are less likely to be observed, where trust is flexibly modeled as a probability distribution on the location of the user's adversaries, and we present the Trust-Aware Path Selection algorithm for Tor that helps users avoid traffic-analysis attacks while still choosing paths that could have been selected by many other users. We evaluate this algorithm in two settings using a high-level map of Internet routing: (i) users try to avoid a single global adversary that has an independent chance to control each Autonomous System organization, Internet Exchange Point organization, and Tor relay family, and (ii) users try to avoid deanonymization by any single country. We also examine the performance of Trust-Aware Path Selection using the Shadow network simulator.
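The basic idea of weighting path choices by trust can be shown in a single-hop toy (the real algorithm selects full three-relay paths and balances trust against anonymity-set size; the relay names and compromise probabilities below are hypothetical):

```python
import random

def choose_relay(relays, compromise_prob):
    """Pick a relay weighted by the probability that the adversary does
    NOT observe it: a toy, single-hop version of trust-aware selection."""
    weights = [1.0 - compromise_prob[r] for r in relays]
    return random.choices(relays, weights=weights, k=1)[0]

trust = {"relayA": 0.9, "relayB": 0.05}   # hypothetical adversary model
random.seed(0)
picks = [choose_relay(list(trust), trust) for _ in range(1000)]
print(picks.count("relayB"))  # relayB dominates: far less likely observed
```

The paper's key caution is that naive versions of such biased selection create recognizable patterns across connections, which is what Trust-Aware Path Selection is designed to avoid.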

Aaron Johnson (U.S. Naval Research Laboratory)
Rob Jansen (U.S. Naval Research Laboratory)
Aaron D. Jaggard (U.S. Naval Research Laboratory)
Joan Feigenbaum (Yale University)
Paul Syverson (U.S. Naval Research Laboratory)

HisTorε: Differentially Private and Robust Statistics Collection for Tor

A large volume of existing research attempts to understand who uses Tor and how the network is used (and misused). However, conducting measurements on the live Tor network, if done improperly, can endanger the security and anonymity of the millions of users who depend on the network to enhance their online privacy. Indeed, several existing measurement studies of Tor have been heavily criticized for unsafe research practices.

Tor needs privacy-preserving methods of gathering statistics. The recently proposed PrivEx system demonstrates how data can be safely collected on Tor using techniques from differential privacy. However, as we demonstrate in this paper, the integrity of the statistics reported by PrivEx is brittle under realistic deployment conditions. An adversary who operates even a single relay in the volunteer-operated anonymity network can arbitrarily influence the result of PrivEx queries. We argue that a safe and useful data collection mechanism must provide both privacy and integrity protections.

This paper presents HisTorε, a privacy-preserving statistics collection scheme based on (ε, δ)-differential privacy that is robust against adversarial manipulation. We formalize the security guarantees of HisTorε and show using historical data from the Tor Project that HisTorε provides useful data collection and reporting with low bandwidth and processing overheads.
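For reference, the plain differentially private histogram release that such schemes build on looks as follows. This sketch shows only the generic Laplace-mechanism privacy side with illustrative counts; HisTorε's actual construction differs precisely because it must also bound how much a single lying relay can skew the result.

```python
import math
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) via the inverse-CDF method."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def dp_histogram(counts, epsilon):
    """Generic epsilon-DP histogram release: per-bin Laplace noise with
    scale 1/epsilon (sensitivity 1), clamped to non-negative integers."""
    return [max(0, round(c + laplace_noise(1.0 / epsilon))) for c in counts]

random.seed(42)
print(dp_histogram([120, 45, 0, 7], epsilon=0.5))  # noisy, non-negative counts
```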

Akshaya Mani (Georgetown University)
Micah Sherr (Georgetown University)

Session 7: Trusted Execution Environments

Session Chair:  Amir Herzberg

SGX-Shield: Enabling Address Space Layout Randomization for SGX Programs

Traditional execution environments deploy Address Space Layout Randomization (ASLR) to defend against memory corruption attacks. However, Intel Software Guard Extension (SGX), a new trusted execution environment designed to serve security-critical applications on the cloud, lacks such an effective, well-studied feature. In fact, we find that applying ASLR to SGX programs raises non-trivial issues beyond simple engineering for a number of reasons: 1) SGX is designed to defeat a stronger adversary than the traditional model, which requires the address space layout to be hidden from the kernel; 2) the limited memory available to SGX programs presents a new challenge in providing a sufficient degree of entropy; 3) remote attestation conflicts with the dynamic relocation required for ASLR; and 4) the SGX specification relies on known and fixed addresses for key data structures that cannot be randomized.

This paper presents SGX-Shield, a new ASLR scheme designed for SGX environments. SGX-Shield is built on a secure in-enclave loader to secretly bootstrap the memory space layout with a finer-grained randomization. To be compatible with SGX hardware (e.g., remote attestation, fixed addresses), SGX-Shield is designed with a software-based data execution protection mechanism through an LLVM-based compiler. We implement SGX-Shield and thoroughly evaluate it on real SGX hardware. It shows a high degree of randomness in memory layouts and stops memory corruption attacks with a high probability. SGX-Shield shows 7.61% performance overhead in running common microbenchmarks and 2.25% overhead in running a more realistic workload of an HTTPS server.

Jaebaek Seo (KAIST)
Byoungyoung Lee (Purdue University)
Seongmin Kim (KAIST)
Ming-Wei Shih (Georgia Institute of Technology)
Insik Shin (KAIST)
Dongsu Han (KAIST)
Taesoo Kim (Georgia Institute of Technology)

T-SGX: Eradicating Controlled-Channel Attacks Against Enclave Programs

Intel Software Guard Extensions (SGX) is a hardware-based Trusted Execution Environment (TEE) that enables secure execution of a program in an isolated environment, called an enclave. SGX hardware protects the running enclave against malicious software, including the operating system, hypervisor, and even low-level firmware. This strong security property allows trustworthy execution of programs in hostile environments, such as a public cloud, without trusting anyone (e.g., a cloud provider) between the enclave and the SGX hardware. However, recent studies have demonstrated that enclave programs are vulnerable to accurate controlled-channel attacks conducted by a malicious OS. Since enclaves rely on the underlying OS, curious and potentially malicious OSs can observe a sequence of accessed addresses by intentionally triggering page faults.

In this paper, we propose T-SGX, a complete mitigation solution to the controlled-channel attack in terms of compatibility, performance, and ease of use. T-SGX relies on a commodity component of the Intel processor (since Haswell), called Transactional Synchronization Extensions (TSX), which implements a restricted form of hardware transactional memory. As TSX is implemented as an extension (i.e., snooping the cache protocol), any unusual event, such as an exception or interrupt, that should be handled in its core component, results in an abort of the ongoing transaction. One interesting property is that the TSX abort suppresses the notification of errors to the underlying OS. This means that the OS cannot know whether a page fault has occurred during the transaction. T-SGX, by utilizing this property of TSX, can carefully isolate the effect of attempts to tap running enclaves, thereby completely eradicating the known controlled-channel attack.

We have implemented T-SGX as a compiler-level scheme to automatically transform a normal enclave program into a secured enclave program without requiring manual source code modification or annotation. We not only evaluate the security properties of T-SGX, but also demonstrate that it could be applied to all the previously demonstrated attack targets, such as libjpeg, Hunspell, and FreeType. To evaluate the performance of T-SGX, we ported 10 benchmark programs of nbench to the SGX environment. Our evaluation results look promising. T-SGX is an order of magnitude faster than the state-of-the-art mitigation schemes. On our benchmarks, T-SGX incurs on average 50% performance overhead and less than 30% storage overhead.

Ming-Wei Shih (Georgia Institute of Technology)
Sangho Lee (Georgia Institute of Technology)
Taesoo Kim (Georgia Institute of Technology)
Marcus Peinado (Microsoft Research)

BOOMERANG: Exploiting the Semantic Gap in Trusted Execution Environments

In the past decade, we have come to rely on computers for various safety and security-critical tasks, such as securing our homes, operating our vehicles, and controlling our finances. To facilitate these tasks, chip manufacturers have begun including trusted execution environments (TEEs) in their processors, which enable critical code (e.g., cryptographic functions) to run in an isolated hardware environment that is protected from the traditional operating system (OS) and its applications. While code in the untrusted environment (e.g., Android or Linux) is forbidden from accessing any memory or state within the TEE, the code running in the TEE, by design, has unrestricted access to the memory of the untrusted OS and its applications. However, due to the isolation between these two environments, the TEE has very limited visibility into the untrusted environment’s security mechanisms (e.g., kernel vs. application memory).

In this paper, we introduce BOOMERANG, a class of vulnerabilities that arises due to this semantic separation between the TEE and the untrusted environment. These vulnerabilities permit untrusted user-level applications to read and write any memory location in the untrusted environment, including security-sensitive kernel memory, by leveraging the TEE’s privileged position to perform the operations on its behalf. BOOMERANG can be used to steal sensitive data from other applications, bypass security checks, or even gain full control of the untrusted OS.

To quantify the extent of this vulnerability, we developed an automated framework for detecting BOOMERANG bugs within the TEEs of popular mobile phones. Using this framework, we were able to confirm the existence of BOOMERANG on four different TEE platforms, affecting hundreds of millions of devices on the market today. Moreover, we confirmed that, in at least two instances, BOOMERANG could be leveraged to completely compromise the untrusted OS (i.e., Android). While the implications of these vulnerabilities are severe, defenses can be quickly implemented by vendors, and we are currently in contact with the affected TEE vendors to deploy adequate fixes. To this end, we evaluated the two most promising defense proposals and their inherent trade-offs. This analysis led to the proposal of a novel BOOMERANG defense, addressing the major shortcomings of the existing defenses with minimal performance overhead. Our findings have been reported to and verified by the corresponding vendors, who are currently in the process of creating security patches.

Aravind Machiry (UC Santa Barbara)
Eric Gustafson (UC Santa Barbara)
Chad Spensky (UC Santa Barbara)
Christopher Salls (UC Santa Barbara)
Nick Stephens (UC Santa Barbara)
Ruoyu Wang (UC Santa Barbara)
Antonio Bianchi (UC Santa Barbara)
Yung Ryn Choe (Sandia National Laboratories)
Christopher Kruegel (UC Santa Barbara)
Giovanni Vigna (UC Santa Barbara)

HOP: Hardware makes Obfuscation Practical

Program obfuscation is a central primitive in cryptography, and has important real-world applications in protecting software from IP theft. However, well-known results from the cryptographic literature have shown that software-only virtual black box (VBB) obfuscation of general programs is impossible. In this paper, we propose HOP, a system (with matching theoretic analysis) that achieves simulation-secure obfuscation for RAM programs, using secure hardware to circumvent previous impossibility results. To the best of our knowledge, HOP is the first implementation of a provably secure VBB obfuscation scheme in any model under any assumptions.

HOP trusts only a hardware single-chip processor. We present a theoretical model for our complete hardware design and prove its security in the UC framework. Our goal is both provable security and practicality. To this end, our theoretic analysis accounts for all optimizations used in our practical design, including the use of a hardware Oblivious RAM (ORAM), hardware scratchpad memories, instruction scheduling techniques, and context switching. We then detail a prototype hardware implementation of HOP. The complete design requires 72% of the area of a V7485t Field Programmable Gate Array (FPGA) chip. Evaluated on a variety of benchmarks, HOP achieves an overhead of 8×–76× relative to an insecure system. Compared to all prior (not implemented) work that strives to achieve obfuscation, HOP improves performance by more than three orders of magnitude. We view this as an important step towards deploying obfuscation technology in practice.

Kartik Nayak (Univ. of Maryland, College Park)
Christopher W. Fletcher (Univ. of Illinois, Urbana-Champaign)
Ling Ren (MIT)
Nishanth Chandran (Microsoft Research)
Satya Lokam (Microsoft Research)
Elaine Shi (Cornell University)
Vipul Goyal (Microsoft Research)

Panoply: Low-TCB Linux Applications With SGX Enclaves

Intel SGX, a new security capability in emerging CPUs, allows user-level application code to execute in hardware-isolated enclaves. Enclave memory is isolated from all other software on the system, even from the privileged OS or hypervisor. While a promising hardware-rooted building block, enclaves have severely limited capabilities, such as no native access to system calls and standard OS abstractions. These OS abstractions are used ubiquitously in real-world applications.

In this paper, we present a new system called PANOPLY which bridges the gap between the SGX-native abstractions and the standard OS abstractions which feature-rich, commodity Linux applications require. PANOPLY provides a new abstraction called a micro-container (or a “micron”), which is a unit of code and data isolated in SGX enclaves. Microns expose the standard POSIX abstractions to application logic, including access to filesystems, network, multi-threading, multi-processing and thread synchronization primitives. Further, PANOPLY enforces a strong integrity property for the inter-enclave interactions, ensuring that the execution of the application follows the legitimate control and data-flow even if the OS misbehaves. Thus, commodity Linux applications can enhance security by splitting their application logic in one or more microns, or by importing micron-libraries, with little effort. In contrast to previous systems that enable comparable richness, PANOPLY offers two orders of magnitude lower TCB (about 20 KLOC in total), more than half of which is boiler-plate and can be automatically verified in the future. We demonstrate how PANOPLY enables much stronger security in 4 real-world applications, including Tor, OpenSSL, and web services, which can base security on a hardware root of trust.

Shweta Shinde (National University of Singapore)
Dat Le Tien (National University of Singapore)
Shruti Tople (National University of Singapore)
Prateek Saxena (National University of Singapore)

Session 8: Cyberphysical Security

Session Chair:  Dongyan Xu

Hey, My Malware Knows Physics! Attacking PLCs with Physical Model Aware Rootkit

Trustworthy operation of industrial control systems (ICS) depends on secure code execution on the embedded programmable logic controllers (PLCs). The controllers monitor and control the underlying physical plants such as electric power grids and continuously report back the system status to human operators.

We present HARVEY, a PLC rootkit that implements a physics-aware stealthy attack against cyberphysical power grid control systems. HARVEY sits within the PLC’s firmware, below the control logic, and modifies control commands before they are sent out by the PLC’s output modules to the physical plant’s actuators. HARVEY replaces legitimate control commands with malicious, adversary-optimal commands to maximize the damage to the physical power equipment and cause large-scale failures. To ensure system safety, the operators observe the status of the power system by fetching system parameter values from PLC devices. To conceal the maliciously caused anomalous behavior from operators, HARVEY intercepts the sensor measurement inputs to the PLC device. HARVEY simulates the power system under the legitimate control commands (which were intercepted and replaced with malicious ones), and calculates and injects the sensor measurements that operators would expect to see. We implemented HARVEY on the widely used Allen-Bradley PLC and evaluated it on a real-world electric power grid test-bed. The results empirically demonstrate HARVEY’s deployment feasibility in practice today.

Luis Garcia (Rutgers University)
Ferdinand Brasser (Technische Universität Darmstadt)
Mehmet H. Cintuglu (Florida International University)
Ahmad-Reza Sadeghi (Technische Universität Darmstadt)
Osama Mohammed (Florida International University)
Saman A. Zonouz (Rutgers University)

ContexIoT: Towards Providing Contextual Integrity to Appified IoT Platforms

The Internet-of-Things (IoT) has quickly evolved to a new appified era where third-party developers can write apps for IoT platforms using programming frameworks. Like other appified platforms, e.g., the smartphone platform, the permission system plays an important role in platform security. However, design flaws in current IoT platform permission models have been reported recently, exposing users to significant harm such as break-ins and theft. To solve these problems, a new access control model is needed for both current and future IoT platforms. In this paper, we propose ContexIoT, a context-based permission system for appified IoT platforms that provides contextual integrity by supporting fine-grained context identification for sensitive actions, and runtime prompts with rich context information to help users perform effective access control. Context definition in ContexIoT is at the inter-procedural control and data flow levels, which we show to be more comprehensive than that of previous context-based permission systems for the smartphone platform. ContexIoT is designed to be backward compatible and thus can be directly adopted by current IoT platforms.
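
A context-sensitive permission check of this flavor can be sketched as follows (the data model and names below are invented for illustration, not ContexIoT's real API): a sensitive action is keyed not just by its name but by its full context, and an unseen context triggers a fresh runtime prompt.

```python
# Illustrative context-based permission check: the same action under a
# different trigger / control-flow / data-provenance context is a separate
# access-control decision.
from typing import NamedTuple

class Context(NamedTuple):
    trigger: str        # event that fired the handler, e.g. "motion_detected"
    call_path: tuple    # inter-procedural control-flow path to the action
    data_source: str    # provenance of the data flowing into the action

class PermissionStore:
    def __init__(self):
        self.granted = set()

    def check(self, action, ctx, prompt_user):
        """Allow a sensitive action only for (action, context) pairs the
        user has approved; unseen contexts cause a runtime prompt."""
        key = (action, ctx)
        if key in self.granted:
            return True
        if prompt_user(action, ctx):   # runtime prompt with rich context info
            self.granted.add(key)
            return True
        return False
```

Because the grant is cached per (action, context) pair, a benign, previously approved context never re-prompts, while an attack reaching the same action through a different context is caught.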

We prototype ContexIoT on the Samsung SmartThings platform, with an automatic app patching mechanism developed to support unmodified commodity SmartThings apps. To evaluate the system’s effectiveness, we perform the first extensive study of possible attacks on appified IoT platforms by reproducing reported IoT attacks and constructing new IoT attacks based on smartphone malware classes. We categorize these attacks based on lifecycle and adversary techniques, and build the first taxonomized IoT attack app dataset. Evaluating ContexIoT on this dataset, we find that it can effectively distinguish the attack context for all the tested apps. The performance evaluation on 283 commodity IoT apps shows that the app patching adds nearly negligible delay to the event triggering latency, and the permission request frequency is far below the threshold that is considered to risk user habituation or annoyance.

Yunhan Jack Jia (University of Michigan)
Qi Alfred Chen (University of Michigan)
Shiqi Wang (Shanghai Jiaotong University)
Amir Rahmati (University of Michigan)
Earlence Fernandes (University of Michigan)
Z. Morley Mao (University of Michigan)
Atul Prakash (University of Michigan)

FBS-Radar: Uncovering Fake Base Stations at Scale in the Wild

Base stations constitute the basic infrastructure of today’s cellular networks. Unfortunately, vulnerabilities in the GSM (2G) network protocol enable the creation of fake base stations (FBSes) that are not authorized by network operators. Criminal gangs are using FBSes to directly attack users by sending spam and fraud SMS messages, even if the users have access to 3G/4G networks. In this paper, we present the design, deployment, and evolution of an FBS detection system called FBS-Radar, based on crowdsourced data from nearly 100M users. In particular, we evaluate five different metrics for identifying FBSes in the wild, and find that FBSes can be precisely identified without sacrificing user privacy. Additionally, we present a novel method for accurately geolocating FBSes while incurring negligible impact on end-user devices. Our system protects users from millions of spam and fraud SMS messages per day, and has helped the authorities arrest hundreds of FBS operators.

Zhenhua Li (Tsinghua University)
Weiwei Wang (Baidu Mobile Security)
Christo Wilson (Northeastern University)
Jian Chen (Tsinghua University)
Chen Qian (UC Santa Cruz)
Taeho Jung (Illinois Institute of Technology)
Lan Zhang (Univ. of Science and Technology China)
Kebin Liu (Tsinghua University)
Xiangyang Li (Univ. of Science and Technology China)
Yunhao Liu (Tsinghua University)

Internet-scale Probing of CPS: Inference, Characterization and Orchestration Analysis

Although the security of Cyber-Physical Systems (CPS) has been recently receiving significant attention from the research community, there still exists a substantial lack of a comprehensive and holistic understanding of attackers’ malicious strategies, aims, and intentions. To this end, this paper uniquely exploits passive monitoring and analysis of a newly deployed network telescope IP address space in a first attempt to build broad notions of real CPS maliciousness. Specifically, we approach this problem by inferring, investigating, characterizing, and reporting large-scale probing activities that specifically target more than 20 diverse, heavily employed CPS protocols. To permit such analysis, we initially devise and evaluate a novel probabilistic model that aims at filtering noise that is embedded in network telescope traffic. Subsequently, we generate amalgamated statistics, inferences, and insights characterizing such inferred scanning activities in terms of their probe types, the distribution of their sources, and their packets’ headers, among numerous others, in addition to examining and visualizing the co-occurrence patterns of such events. Further, we propose and empirically evaluate an innovative hybrid approach rooted in time-series analysis and context-triggered piecewise hashing to infer, characterize, and cluster orchestrated and well-coordinated probing activities targeting CPS protocols, which are generated from Internet-scale unsolicited sources. Our analysis and evaluations, which draw upon extensive network telescope data observed over a recent one-month period, demonstrate a staggering 33 thousand probes towards numerous CPS protocols, the lack of interest in UDP-based CPS services, and the prevalence of probes towards the ICCP and Modbus protocols. Additionally, we infer that a considerable 74% of CPS probes were persistent throughout the entire analyzed period, targeting prominent protocols such as DNP3 and BACnet.
Further, we uncover close to 9 thousand large-scale, stealthy, previously undocumented orchestrated probing events targeting a number of such CPS protocols. We validate the various outcomes through cross-validation against publicly available threat repositories. We contend that the devised approaches, techniques, and methods provide a solid first step towards better comprehending real CPS unsolicited objectives and intents.

Claude Fachkha (New York University Abu Dhabi)
Elias Bou-Harb (Florida Atlantic University)
Anastasis Keliris (New York University Abu Dhabi)
Nasir Memon (New York University)
Mustaque Ahamad (Georgia Institute of Technology)

Wi-Fly?: Detecting Privacy Invasion Attacks by Consumer Drones

Drones are becoming increasingly popular for hobbyists and recreational use. But with this surge in popularity comes increased risk to privacy, as the technology makes it easy to spy on people in otherwise-private environments, such as an individual’s home. An attacker can fly a drone over fences and walls in order to observe the inside of a house without having physical access. Existing drone detection systems require specialist hardware and expensive deployment efforts, making them inaccessible to the general public.

In this work we present a drone detection system that requires minimal prior configuration and uses inexpensive commercial off-the-shelf (COTS) hardware to detect drones that are carrying out privacy invasion attacks. We use a model of the attack structure to derive statistical metrics for movement and proximity, which are then applied to received communications between a drone and its controller. We tested our system in real-world experiments with two popular consumer drone models mounting privacy invasion attacks using a range of flight patterns. We were able both to detect the presence of a drone and to identify which phase of the privacy attack was in progress. Even in our worst case, we detected an attack before the drone was within 48 m of its target.

Simon Birnbach (University of Oxford)
Richard Baker (University of Oxford)
Ivan Martinovic (University of Oxford)

Session 9: Attacks

Session Chair:  Wil Robertson

ASLR on the Line:  Practical Cache Attacks on the MMU

Address space layout randomization (ASLR) is an important first line of defense against memory corruption attacks and a building block for many modern countermeasures. Existing attacks against ASLR rely on software vulnerabilities and/or on repeated (and detectable) memory probing.

In this paper, we show that neither is a hard requirement and that ASLR is fundamentally insecure on modern cache-based architectures, making ASLR and caching conflicting requirements (ASLR⊕Cache, or simply AnC). To support this claim, we describe a new EVICT+TIME cache attack on the virtual address translation performed by the memory management unit (MMU) of modern processors. Our AnC attack relies on the property that the MMU’s page-table walks result in caching page-table pages in the shared last-level cache (LLC). As a result, an attacker can derandomize virtual addresses of a victim’s code and data by locating the cache lines that store the page-table entries used for address translation.

Relying only on basic memory accesses allows AnC to be implemented in JavaScript without any specific instructions or software features. We show our JavaScript implementation can break code and heap ASLR in two major browsers running on the latest Linux operating system with 28 bits of entropy in 150 seconds. We further verify that the AnC attack is applicable to every modern architecture that we tried, including Intel, ARM and AMD. Mitigating this attack without naively disabling caches is hard, since it targets the low-level operations of the MMU. We conclude that ASLR is fundamentally flawed in sandboxed environments such as JavaScript and future defenses should not rely on randomized virtual addresses as a building block.
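
The signal AnC exploits comes from the structure of the page-table walk itself. On x86-64 with 4 KiB pages, each translation level consumes a 9-bit slice of the virtual address, and since page-table entries are 8 bytes, each index selects one of 64 possible cache lines per table. The arithmetic can be sketched as follows (a standard x86-64 address breakdown, not code from the paper):

```python
# Derive the four page-table indices that a 48-bit virtual address walks
# through; the cache lines holding those entries are what AnC times.

def pt_indices(vaddr):
    """Return (PML4, PDPT, PD, PT) indices and the page offset."""
    offset = vaddr & 0xFFF            # bits 0-11: offset within the 4 KiB page
    pt     = (vaddr >> 12) & 0x1FF    # bits 12-20: page-table index
    pd     = (vaddr >> 21) & 0x1FF    # bits 21-29: page-directory index
    pdpt   = (vaddr >> 30) & 0x1FF    # bits 30-38: PDPT index
    pml4   = (vaddr >> 39) & 0x1FF    # bits 39-47: PML4 index
    return pml4, pdpt, pd, pt, offset

def cache_line_of_entry(index):
    """Each 64-byte cache line holds eight 8-byte page-table entries, so an
    observed line narrows the 9-bit index down to 3 unknown bits."""
    return index // 8
```

Observing which line of each page-table page is hot therefore recovers the high 6 bits of every 9-bit index, which is the derandomization signal the attack builds on.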

Ben Gras (Vrije Universiteit Amsterdam)
Kaveh Razavi (Vrije Universiteit Amsterdam)
Erik Bosman (Vrije Universiteit Amsterdam)
Herbert Bos (Vrije Universiteit Amsterdam)
Cristiano Giuffrida (Vrije Universiteit Amsterdam)

Unleashing Use-Before-Initialization Vulnerabilities in the Linux Kernel Using Targeted Stack Spraying

A common type of memory error in the Linux kernel is using uninitialized variables (uninitialized use). Uninitialized uses not only cause undefined behaviors but also pose a severe security risk if an attacker takes control of the uninitialized variables. However, reliably exploiting uninitialized uses on the kernel stack has been considered infeasible until now, since the code executed prior to triggering the vulnerability must leave an attacker-controlled pattern on the stack. Therefore, uninitialized uses are largely overlooked and regarded as undefined behaviors, rather than security vulnerabilities. In particular, full memory-safety techniques (e.g., SoftBound+CETS) exclude uninitialized use as a prevention target, and widely used systems such as OpenSSL even use uninitialized memory as a randomness source.

In this paper, we propose a fully automated targeted stack-spraying approach for the Linux kernel that reliably facilitates the exploitation of uninitialized uses. Our targeted stack-spraying includes two techniques: (1) a deterministic stack spraying technique that suitably combines tailored symbolic execution and guided fuzzing to identify kernel inputs that user-mode programs can use to deterministically guide kernel code paths and thereby leave attacker-controlled data on the kernel stack, and (2) an exhaustive memory spraying technique that uses memory occupation and pollution to reliably control a large region of the kernel stack. We show that our targeted stack-spraying approach allows attackers to reliably control more than 91% of the Linux kernel stack, which, in combination with uninitialized-use vulnerabilities, suffices for a privilege escalation attack. As a countermeasure, we propose a compiler-based mechanism that initializes potentially unsafe pointer-type fields with almost no performance overhead. Our results show that uninitialized use is a severe attack vector that can be readily exploited with targeted stack-spraying, so future memory-safety techniques should consider it a prevention target, and systems should not use uninitialized memory as a randomness source.
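
The core spraying idea can be modeled in a few lines (a deliberately simplified simulation, not kernel code): one "syscall" leaves attacker-chosen bytes deep on a reused stack, and a later "syscall" that reads an uninitialized local picks those bytes up.

```python
# Toy model of targeted stack spraying: stack memory is reused across calls,
# so uninitialized locals inherit whatever the previous call left behind.

STACK_SIZE = 4096
stack = bytearray(STACK_SIZE)   # simulated kernel stack, shared across "calls"

def spraying_syscall(pattern: bytes):
    """Attacker-guided code path that writes controlled data far down the
    stack; the bytes persist after the call 'returns'."""
    depth = STACK_SIZE - len(pattern)
    stack[depth:] = pattern

def vulnerable_syscall() -> bytes:
    """Reads an 8-byte local without initializing it (the uninitialized use):
    it simply sees whatever the last call left at that stack depth."""
    depth = STACK_SIZE - 8
    return bytes(stack[depth:depth + 8])
```

The hard part the paper automates, left out of this sketch, is finding inputs that deterministically steer real kernel code paths so the sprayed pattern lands exactly where the uninitialized variable will later live.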

Kangjie Lu (Georgia Institute of Technology)
Marie-Therese Walter (CISPA, Saarland University & Saarland Informatics Campus)
David Pfaff (CISPA, Saarland University & Saarland Informatics Campus)
Stefan Nürnberger (DFKI & CISPA, Saarland University & Saarland Informatics Campus)
Wenke Lee (Georgia Institute of Technology)
Michael Backes (CISPA, Saarland University & MPI-SWS & Saarland Informatics Campus)

Address Oblivious Code Reuse: On the Effectiveness of Leakage Resilient Diversity

Memory corruption vulnerabilities not only allow modification of control data and injection of malicious payloads; they also allow adversaries to reconnoiter a diversified program, customize a payload, and ultimately bypass code randomization defenses. In response, researchers have proposed and built various leakage-resilient defenses against code reuse. Leakage-resilient defenses use memory protection techniques to prevent adversaries from directly reading code as well as pointer indirection or encryption techniques to decouple code pointers from the randomized code layout, avoiding indirect leakage. In this paper, we show that although current code pointer protections do prevent leakage per se, they are fundamentally unable to stop code reuse. Specifically, we demonstrate a new class of attacks we call address-oblivious code reuse that bypasses state-of-the-art leakage-resilience techniques by profiling and reusing protected code pointers, without leaking the code layout. We show that an attacker can accurately identify protected code pointers of interest and mount code-reuse attacks at the abstraction level of pointers without requiring any knowledge of code addresses. We analyze the prevalence of opportunities for such attacks in popular code bases and build three real-world exploits against Nginx and Apache to demonstrate their practicality. We analyze recently proposed leakage resilient defenses and show that they are vulnerable to address oblivious code reuse. Our findings indicate that because of the prevalence of code pointers in realistic programs and the fundamental need to expose them to “read” operations (even indirectly), diversity defenses face a fundamental design challenge in mitigating such attacks.

Robert Rudd (MIT Lincoln Laboratory)
Richard Skowyra (MIT Lincoln Laboratory)
David Bigelow (MIT Lincoln Laboratory)
Veer Dedhia (MIT Lincoln Laboratory)
Thomas Hobson (MIT Lincoln Laboratory)
Stephen Crane (Immunant, Inc)
Christopher Liebchen (Technische Universität Darmstadt)
Per Larsen (UC Irvine and Immunant, Inc.)
Lucas Davi (Technische Universität Darmstadt)
Michael Franz (University of California, Irvine)
Ahmad-Reza Sadeghi (Technische Universität Darmstadt)
Hamed Okhravi (MIT Lincoln Laboratory)

An Evil Copy: How the Loader Betrays You

Dynamic loading is a core feature used on current systems to (i) enable modularity and reuse, (ii) reduce memory footprint by sharing code pages of libraries and executables among processes, and (iii) simplify update procedures by eliminating the need to recompile executables when a library is updated. The Executable and Linkable Format (ELF) is a generic specification that describes how executable programs are stitched together from object files produced from source code into libraries and executables. Programming languages allow fine-grained control over variables, including access and memory protections, so programmers may write defense mechanisms assuming that the permissions specified at the source and/or compiler level will hold at runtime.

Unfortunately, information about memory protection is lost during compilation. We identify one case that has significant security implications: when instantiating a process, constant external variables that are referenced in executables are forcefully relocated to a writable memory segment without warning. The loader trades security for compatibility due to the lack of memory protection information on the relocated external variables. We call this new attack vector COREV, for Copy Relocation Violation. An adversary may use a memory corruption vulnerability to modify such “read-only” constant variables, like vtables, function pointers, format strings, and file names, to bypass defenses (like FORTIFY_SOURCE or CFI) and to escalate privileges.
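
The loss of protection can be pictured with a small model (the class and function names below are invented for illustration; real loaders operate on ELF segments and relocation entries): a constant defined in a library is copy-relocated into the executable's writable data segment, so the read-only intent is silently dropped.

```python
# Toy model of a copy relocation violation: the library's copy of a constant
# stays read-only, but the relocated copy the program actually uses is
# writable, defeating the programmer's const assumption.

class Segment:
    def __init__(self, writable):
        self.writable = writable
        self.values = {}

    def write(self, name, value):
        if not self.writable:
            raise PermissionError(name)   # real hardware would fault here
        self.values[name] = value

def load(library_rodata, needs_copy_relocation):
    """Model the loader: constants referenced by the executable are copied
    into its writable segment, permissions are not carried over."""
    lib_ro = Segment(writable=False)
    lib_ro.values.update(library_rodata)
    exe_bss = Segment(writable=True)
    for name in needs_copy_relocation:
        exe_bss.values[name] = lib_ro.values[name]   # copy, protection lost
    return lib_ro, exe_bss
```

A write to the original library constant faults as expected, while the same write to the relocated copy succeeds, which is exactly the window a memory corruption bug can abuse.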

We have studied all Ubuntu 16.04 LTS packages and found that out of 54,045 packages, 4,570 packages have unexpected copy relocations that change read-only permissions to read-write, presenting new avenues for attack. The attack surface is broad with 29,817 libraries exporting relocatable read-only variables. The set of 6,399 programs with actual copy relocation violations includes ftp servers, apt-get, and gettext. We discuss the cause, effects, and a set of possible mitigation strategies for the COREV attack vector.

Xinyang Ge (The Pennsylvania State University)
Mathias Payer (Purdue University)
Trent Jaeger (The Pennsylvania State University)

Session 10: Software and System Security (Part II)

Session Chair:  Trent Jaeger

Stack Object Protection with Low Fat Pointers

Object bounds overflow errors are a common source of security vulnerabilities. In principle, bounds check instrumentation eliminates the problem, but this introduces high overheads and is further hampered by limited compatibility with un-instrumented code. On 64-bit systems, low-fat pointers are a recent scheme for implementing efficient and compatible bounds checking by transparently encoding meta information within the native pointer representation itself. However, low-fat pointers are traditionally used for heap objects only, where the allocator has sufficient control over object location, which is necessary for the encoding. This is a problem for stack allocation, where there exist strong constraints regarding the location of stack objects that are apparently incompatible with the low-fat pointer approach. To address this problem, we present an extension of low-fat pointers to stack objects by using a collection of techniques, such as pointer mirroring and memory aliasing, thereby allowing stack objects to enjoy bounds error protection from instrumented code. Our extension is compatible with common special uses of the stack, such as alloca, setjmp and longjmp, exceptions, and multi-threading, which rely on direct manipulation of the stack pointer. Our experiments show that we successfully extend the advantages of the low-fat pointer encoding to stack objects. The end result is a competitive bounds checking instrumentation for the stack and heap with low memory and runtime overheads, and high compatibility with un-instrumented legacy code.
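
The encoding trick behind low-fat pointers can be sketched with simple arithmetic (a simplified illustration, not the paper's exact layout): memory is partitioned into regions by allocation size, so a pointer's high bits determine its size class and its base falls out of a modulo operation, with no separate metadata lookup.

```python
# Illustrative low-fat pointer arithmetic: size and base are recovered from
# the pointer value alone, enabling cheap bounds checks.

REGION_BITS = 32                  # assumed: each size-class region spans 2**32 bytes
SIZES = {1: 16, 2: 32, 3: 64}     # region index -> object size (example classes)

def size_of(ptr):
    """The region (high bits) encodes the allocation size class."""
    return SIZES[ptr >> REGION_BITS]

def base_of(ptr):
    """Objects are size-aligned within their region, so rounding down
    to the size class recovers the object base."""
    size = size_of(ptr)
    return ptr - (ptr % size)

def bounds_check(ptr, access_len):
    """True iff [ptr, ptr + access_len) stays inside the enclosing object."""
    return ptr + access_len <= base_of(ptr) + size_of(ptr)
```

The stack extension described in the abstract exists precisely because stack objects cannot normally be placed at such size-aligned addresses in size-segregated regions; pointer mirroring and aliasing make them appear as if they were.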

Gregory J. Duck (National Univ. of Singapore)
Roland H. C. Yap (National Univ. of Singapore)
Lorenzo Cavallaro (Royal Holloway, Univ. of London)

VUzzer: Application-aware Evolutionary Fuzzing

Fuzzing is an effective software testing technique to find bugs. Given the size and complexity of real-world applications, modern fuzzers tend to be either scalable, but not effective in exploring bugs that lie deeper in the execution, or capable of penetrating deeper in the application, but not scalable.

In this paper, we present an application-aware evolutionary fuzzing strategy that does not require any prior knowledge of the application or input format. In order to maximize coverage and explore deeper paths, we leverage control- and data-flow features based on static and dynamic analysis to infer fundamental properties of the application. This enables much faster generation of interesting inputs compared to an application-agnostic approach. We implement our fuzzing strategy in VUzzer and evaluate it on three different datasets: DARPA Grand Challenge binaries (CGC), a set of real-world applications (binary input parsers), and the recently released LAVA dataset. On all of these datasets, VUzzer yields significantly better results than state-of-the-art fuzzers, by quickly finding several existing and new bugs.
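
The evolutionary loop at the heart of this strategy can be sketched in miniature (a generic illustration of fitness-guided fuzzing, not VUzzer's actual implementation; VUzzer's fitness additionally weights blocks by control- and data-flow analysis): inputs are mutated, and those exercising more of the program survive into the next generation.

```python
import random

# Minimal evolutionary fuzzing loop: coverage-like feedback selects inputs
# that reach deeper into the program under test.

def fitness(data, program):
    """`program` returns the set of basic-block ids an input exercises,
    a stand-in for the dynamic feedback a real fuzzer collects."""
    return len(program(data))

def mutate(data, rng):
    """Flip one random byte of the input."""
    i = rng.randrange(len(data))
    return data[:i] + bytes([rng.randrange(256)]) + data[i + 1:]

def fuzz(program, seed, generations=200, pop_size=8, rng=None):
    rng = rng or random.Random(0)
    population = [seed] * pop_size
    for _ in range(generations):
        children = [mutate(p, rng) for p in population]
        # keep the fittest pop_size inputs across parents and children
        population = sorted(population + children,
                           key=lambda d: fitness(d, program),
                           reverse=True)[:pop_size]
    return population[0]
```

Application-agnostic fuzzers differ from this sketch mainly in where the feedback comes from; VUzzer's contribution is inferring magic bytes and interesting offsets from static and dynamic analysis so that the mutation step is far less blind than the uniform byte flip above.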

Sanjay Rawat (Vrije Universiteit, Amsterdam, NL)
Vivek Jain (IIIT Hyderabad, India)
Ashish Kumar (IIIT Hyderabad, India)
Lucian Cojocar (Vrije Universiteit, Amsterdam, NL)
Cristiano Giuffrida (Vrije Universiteit, Amsterdam, NL)
Herbert Bos (Vrije Universiteit, Amsterdam, NL)

Self Destructing Exploit Executions via Input Perturbation

Malicious payload injection attacks have been a serious threat to software for decades. Unfortunately, protection against these attacks remains challenging due to the ever-increasing diversity and sophistication of payload injection and triggering mechanisms used by adversaries. In this paper, we develop A2C, a system that provides general protection against payload injection attacks. A2C is based on the observation that payloads are highly fragile and thus any mutation would likely break their functionality. Therefore, A2C mutates inputs from untrusted sources. Malicious payloads that reside in these inputs are hence mutated and broken. To ensure that the program continues to function correctly when benign inputs are provided, A2C divides the state space into exploitable and post-exploitable sub-spaces, where the latter is much larger than the former, and decodes the mutated values only when they are transmitted from the former to the latter. A2C does not rely on any knowledge of malicious payloads or their injection and triggering mechanisms. Hence, its protection is general. We evaluate A2C with 30 real-world applications, including Apache on a real-world workload, and our results show that A2C effectively prevents a variety of payload injection attacks on these programs with reasonably low overhead (6.94%).
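
The mutate-then-decode idea can be sketched as follows (a deliberately minimal model; the one-byte XOR key and function names are illustrative, A2C uses richer encodings and compiler-inserted decode points): untrusted bytes are mutated on entry, so an injected payload sits broken in memory, and only a legitimate transition out of the exploitable sub-space decodes it.

```python
# Illustrative input-perturbation scheme: payload bytes are useless to an
# exploit while they live in the vulnerable buffer.

KEY = 0x5A   # assumed one-byte mutation key, for illustration only

def mutate(untrusted: bytes) -> bytes:
    """Applied as soon as input crosses the trust boundary."""
    return bytes(b ^ KEY for b in untrusted)

def decode(mutated: bytes) -> bytes:
    """Applied only where data provably leaves the exploitable sub-space,
    so benign consumers still see the original input."""
    return bytes(b ^ KEY for b in mutated)
```

An attacker's shellcode-like bytes are thus scrambled at exactly the program points where a jump into them could occur, while every benign use of the input observes the correct values after decoding.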

Yonghwi Kwon (Purdue University)
Brendan Saltaformaggio (Purdue University)
I Luk Kim (Purdue University)
Kyu Hyung Lee (University of Georgia)
Xiangyu Zhang (Purdue University)
Dongyan Xu (Purdue University)

A Call to ARMs: Understanding the Costs and Benefits of JIT Spraying Mitigations

JIT spraying allows an attacker to subvert a Just-In-Time compiler, introducing instruction sequences useful to the attacker into executable regions of the victim program’s address space as a side effect of compiling seemingly innocuous code in a safe language like JavaScript.

We present new JIT spraying attacks against Google’s V8 and Mozilla’s SpiderMonkey JavaScript engines on ARM. The V8 attack is the first JIT spraying attack not to rely on instruction decoding ambiguity, and the SpiderMonkey attack uses the first ARM payload that executes unintended instructions derived from intended instruction bytes without resynchronizing to the intended instruction stream. We review the JIT spraying defenses proposed in the literature and their currently-deployed implementations and conclude that the current state of JIT spraying mitigation, which prioritizes low performance overhead, leaves many exploitable attacker options unchecked.

We perform an empirical evaluation of mitigations with low but non-zero overhead in a unified framework and find that full, robust implementations of diversification defenses can effectively mitigate the JIT spraying attacks in the literature as well as our new attacks, with a combined average overhead of 4.56% on x86-64 and 4.88% on ARM32.

Wilson Lian (UC San Diego)
Hovav Shacham (UC San Diego)
Stefan Savage (UC San Diego)
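The mechanism the abstract describes, attacker-chosen constants landing verbatim in executable memory, is easiest to see in the classic x86 formulation (the paper's new attacks target ARM and avoid this decoding-ambiguity trick). A small sketch, with the constants chosen purely for illustration:

```python
import struct

# Classic x86 JIT spraying illustration (not the paper's ARM attacks):
# a JIT compiling attacker-supplied JavaScript emits each 32-bit numeric
# constant as a little-endian immediate inside an instruction such as
# "mov eax, imm32", so the attacker controls bytes in executable memory.
constants = [0x90909090, 0x90909090]  # hypothetical attacker-chosen values

sprayed = b"".join(struct.pack("<I", c) for c in constants)

# If control flow lands one byte into the immediate, the CPU decodes the
# attacker's bytes as instructions: 0x90 is the x86 one-byte NOP opcode.
assert sprayed == b"\x90" * 8
```

The novelty claimed in the abstract is precisely that the V8 attack needs no such instruction-decoding ambiguity, and the SpiderMonkey payload executes unintended instructions without resynchronizing to the intended stream.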

Ramblr: Making Reassembly Great Again

Static binary rewriting has many important applications in reverse engineering, such as patching, code reuse, and instrumentation. Binary reassembling is an efficient solution for static binary rewriting. While a solution for the reassembly of binaries has been proposed, an evaluation on a real-world binary dataset shows that it suffers from several problems that lead to broken binaries. Those problems include incorrect symbolization of immediates, failure to identify symbolizable constants, lack of pointer safety checks, and other issues. Failure to address those problems makes the existing approach unsuitable for real-world binaries, especially those compiled with optimizations enabled.

In this paper, we present a new systematic approach for binary reassembling. Our new approach is implemented in a tool called Ramblr. We evaluate Ramblr on 106 real-world programs on Linux x86 and x86-64, and 143 programs collected from the Cyber Grand Challenge Qualification Event. All programs are compiled to binaries with a set of different compilation flags in order to cover as many real-world scenarios as possible. Ramblr successfully reassembles most of the binaries, which is an improvement over the state-of-the-art approach. It should be noted that our reassembling procedure yields no execution overhead and no size expansion.

Ruoyu Wang (UC Santa Barbara)
Yan Shoshitaishvili (UC Santa Barbara)
Antonio Bianchi (UC Santa Barbara)
Aravind Machiry (UC Santa Barbara)
John Grosen (UC Santa Barbara)
Paul Grosen (UC Santa Barbara)
Christopher Kruegel (UC Santa Barbara)
Giovanni Vigna (UC Santa Barbara)
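The symbolization problem the abstract highlights, deciding whether an immediate operand is a pointer that must be rewritten as a symbol or a plain constant that must be left alone, can be sketched with a naive address-range heuristic; the section ranges and label scheme below are hypothetical, and Ramblr's actual analyses go well beyond this rule:

```python
# Naive symbolization sketch (hypothetical, not Ramblr's algorithm):
# treat an immediate as a pointer iff it falls inside a mapped section.
# Real optimized binaries break this heuristic in both directions, which
# is why incorrect symbolization of immediates breaks reassembled binaries.
SECTIONS = [(0x400000, 0x401000), (0x601000, 0x602000)]  # hypothetical ranges

def symbolize(imm: int) -> str:
    if any(lo <= imm < hi for lo, hi in SECTIONS):
        return f"label_{imm:x}"   # emit a symbol so the reference survives relocation
    return hex(imm)               # keep as a literal constant

assert symbolize(0x400123) == "label_400123"  # looks like a .text pointer
assert symbolize(0x12345) == "0x12345"        # outside any section: a constant
```

An integer constant that merely happens to fall in a section range would be wrongly symbolized by this rule, and a pointer computed at runtime would be missed, which is exactly the class of errors the paper's pointer-safety checks and constant-identification analyses target.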


*** Final Agenda ***