Diego Ortiz, Leilani Gilpin, Alvaro A. Cardenas (University of California, Santa Cruz)

Autonomous vehicles must operate in a complex environment governed by social norms and expectations. While most work on securing autonomous vehicles has focused on safety, we argue that we also need to monitor for deviations from societal “common sense” rules to identify attacks against autonomous systems. In this paper, we present a first approach to encoding and understanding these common-sense driving behaviors by semi-automatically extracting rules from driving manuals. We encode the rules in a formal specification and make them available online for other researchers.
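The paper's actual specification language is not reproduced on this page. As a rough illustration only, assuming a hypothetical vehicle-state trace representation (the `State` fields and the 3-meter threshold below are invented for the sketch), a common-sense rule such as "come to a full stop at a stop sign" could be monitored as a predicate over traces:

```python
from dataclasses import dataclass

@dataclass
class State:
    speed_mps: float            # vehicle speed, meters/second
    dist_to_stop_sign_m: float  # distance to nearest stop sign (inf if none)

def rule_stop_at_stop_sign(trace):
    """Illustrative common-sense rule: whenever the vehicle passes within
    3 m of a stop sign, it must reach a full stop (speed == 0) at some
    point in that region. Vacuously satisfied if no stop sign is near."""
    near = [s for s in trace if s.dist_to_stop_sign_m <= 3.0]
    return (not near) or any(s.speed_mps == 0.0 for s in near)

# A trace that rolls through the stop sign violates the rule;
# a trace that halts near the sign satisfies it.
rolling  = [State(8.0, 20.0), State(3.0, 2.5), State(2.0, 1.0), State(5.0, float("inf"))]
stopping = [State(8.0, 20.0), State(1.0, 2.5), State(0.0, 1.0), State(5.0, float("inf"))]
```

A deviation from such a rule during operation would then flag a trace for inspection as a potential attack or fault, in the spirit of the monitoring the abstract describes.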
