Diego Ortiz, Leilani Gilpin, Alvaro A. Cardenas (University of California, Santa Cruz)

Autonomous vehicles must operate in a complex environment shaped by social norms and expectations. While most work on securing autonomous vehicles has focused on safety, we argue that we also need to monitor for deviations from societal “common sense” rules to identify attacks against autonomous systems. In this paper, we provide a first approach to encoding and understanding these common-sense driving behaviors by semi-automatically extracting rules from driving manuals. We encode the driving rules in a formal specification and make them available online for other researchers.
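To make the idea of encoding common-sense driving rules concrete, below is a minimal illustrative sketch in Python, assuming a simple predicate-based encoding with hypothetical rule names (`respect_speed_limit`, `stop_at_stop_sign`) and a toy `VehicleState`; the paper's actual formal specification and rule set may differ.

```python
# Illustrative sketch only: a toy predicate-based encoding of
# "common sense" driving rules, not the paper's actual formalism.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class VehicleState:
    speed_mph: float        # current speed
    speed_limit_mph: float  # posted limit at this location
    at_stop_sign: bool      # whether the vehicle is at a stop sign


@dataclass
class DrivingRule:
    name: str
    holds: Callable[[VehicleState], bool]  # predicate over a single state


# Hypothetical rules of the kind one might extract from a driving manual.
RULES: List[DrivingRule] = [
    DrivingRule("respect_speed_limit",
                lambda s: s.speed_mph <= s.speed_limit_mph),
    DrivingRule("stop_at_stop_sign",
                lambda s: not s.at_stop_sign or s.speed_mph == 0.0),
]


def check(state: VehicleState) -> List[str]:
    """Return the names of rules violated by the given state."""
    return [r.name for r in RULES if not r.holds(state)]


if __name__ == "__main__":
    # A state that rolls through a stop sign while speeding.
    s = VehicleState(speed_mph=38.0, speed_limit_mph=35.0, at_stop_sign=True)
    print(check(s))  # ['respect_speed_limit', 'stop_at_stop_sign']
```

A runtime monitor built this way flags states that violate any encoded rule, which is the kind of deviation the abstract suggests using to detect attacks against autonomous systems.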
