Diego Ortiz, Leilani Gilpin, Alvaro A. Cardenas (University of California, Santa Cruz)

Autonomous vehicles must operate in a complex environment with various social norms and expectations. While most of the work on securing autonomous vehicles has focused on safety, we argue that we also need to monitor for deviations from various societal “common sense” rules to identify attacks against autonomous systems. In this paper, we provide a first approach to encoding and understanding these common-sense driving behaviors by semi-automatically extracting rules from driving manuals. We encode our driving rules in a formal specification and make our rules available online for other researchers.
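To make the idea of a formally encoded common-sense driving rule concrete, the following is a minimal illustrative sketch only; the paper's actual specification format, rule set, and vehicle state representation are defined by the authors and are not reproduced here. The sketch assumes a hypothetical per-timestep state with speed and distance-to-stop-sign fields, and checks one example rule ("come to a full stop at a stop sign before driving past it") as a simple temporal property over a state trace.

from dataclasses import dataclass
from typing import List


@dataclass
class VehicleState:
    """Assumed per-timestep observation (fields are illustrative)."""
    speed_mps: float              # vehicle speed in meters per second
    dist_to_stop_sign_m: float    # distance to the nearest stop sign, in meters


def violates_stop_sign_rule(trace: List[VehicleState],
                            stop_zone_m: float = 3.0,
                            stop_speed_mps: float = 0.1) -> bool:
    """Return True if the vehicle passed a stop sign without ever stopping.

    The rule is treated as a temporal property: while the vehicle is within
    stop_zone_m of the sign, its speed must drop below stop_speed_mps at
    least once before the distance starts increasing again (i.e., before it
    has driven past the sign).
    """
    in_zone = False
    stopped_in_zone = False
    prev_dist = None
    for state in trace:
        if state.dist_to_stop_sign_m <= stop_zone_m:
            in_zone = True
            if state.speed_mps <= stop_speed_mps:
                stopped_in_zone = True
        # Distance increasing again after entering the zone => sign was passed.
        if in_zone and prev_dist is not None and state.dist_to_stop_sign_m > prev_dist:
            return not stopped_in_zone
        prev_dist = state.dist_to_stop_sign_m
    return False  # sign never passed within this trace


# Example: a trace where the vehicle rolls through the sign at ~2 m/s.
rolling_stop = [VehicleState(5.0, 10.0), VehicleState(2.0, 2.5),
                VehicleState(2.0, 1.0), VehicleState(3.0, 2.0)]
print(violates_stop_sign_rule(rolling_stop))  # True: no full stop in the zone

A monitor built from rules like this flags behavior that is legal for the low-level controller but violates societal expectations, which is the kind of deviation the paper proposes to watch for as a signal of attacks on the autonomous system.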
