Jairo Giraldo (University of Utah), Alvaro Cardenas (UC Santa Cruz), Murat Kantarcioglu (UT Dallas), Jonathan Katz (George Mason University)

Differential privacy has emerged over the last decade as a powerful tool for protecting sensitive information. Over the same period, there has been growing interest in adversarial classification, where an attacker who knows a classifier is trying to detect anomalies crafts examples designed to mislead that classifier.

Differential privacy and adversarial classification have previously been studied separately. In this paper, we study how a strategic attacker can leverage differential privacy to inject false data into a system, and we propose countermeasures against these novel attacks. We demonstrate the impact of our attacks and defenses on a real-world traffic estimation system and on a smart metering system.
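The core intuition behind such attacks can be illustrated with a small sketch. This is not the paper's actual attack or its systems; it is a minimal, hypothetical simulation assuming the standard Laplace mechanism: because differentially private reports are deliberately noised, an attacker can add a bias comparable to the noise scale so that each individual report still looks plausible, while the aggregate estimate shifts.

```python
import random
import math

def laplace_noise(scale):
    # Inverse-CDF sampling of a Laplace(0, scale) random variable.
    u = random.uniform(-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_report(value, epsilon, sensitivity=1.0):
    # Standard Laplace mechanism: noise scale = sensitivity / epsilon.
    return value + laplace_noise(sensitivity / epsilon)

random.seed(0)
epsilon = 0.5
true_readings = [10.0] * 1000  # hypothetical sensor values

# Honest participants report DP-noised values.
honest = [dp_report(v, epsilon) for v in true_readings]

# A strategic attacker injects a bias on the order of the noise scale,
# so each falsified report remains statistically unremarkable.
bias = 1.0 / epsilon
attacked = [dp_report(v + bias, epsilon) for v in true_readings]

# Individually the attacked reports pass naive checks,
# but the aggregate estimate is noticeably shifted.
print(sum(honest) / len(honest))
print(sum(attacked) / len(attacked))
```

The point of the sketch is that the privacy noise gives the attacker cover: a per-report anomaly detector tuned to the expected Laplace noise cannot distinguish a bias of roughly one noise scale, yet averaged over many reports the injected error dominates the estimate.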
