Laura Matzen, Michelle A Leger, Geoffrey Reedy (Sandia National Laboratories)

Binary reverse engineers combine automated and manual techniques to answer questions about software. However, when evaluating automated analysis results, they rarely have additional information to help them contextualize these results in the binary. We expect that humans could more readily understand the binary program and these analysis results if they had access to information usually kept internal to the analysis, like value-set analysis (VSA) information. However, these automated analyses often give up precision for scalability, and imprecise information might hinder human decision making.

To assess how the precision of VSA information affects human analysts, we designed a human study in which reverse engineers answered short information flow problems, determining whether code snippets would print sensitive information. We hypothesized that precise VSA information would help our participants analyze code faster and more accurately, and that imprecise VSA information would lead to slower, less accurate performance than no VSA information. We presented hand-crafted code snippets with precise, imprecise, or no VSA information in a blocked design, recording participants’ eye movements, response times, and accuracy while they analyzed the snippets. Our data showed that precise VSA information changed participants’ problem-solving strategies and supported faster, more accurate analyses. However, surprisingly, imprecise VSA information also led to increased accuracy relative to no VSA information, likely due to the extra time participants spent working through the code.
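As a rough illustration (not an actual study stimulus), a task of this kind might pair a short C snippet with VSA-style annotations giving the possible values of a variable, and ask the analyst whether the sensitive value can ever reach the output. The variable names, value sets, and comments below are hypothetical.

```c
/* Hypothetical information-flow task: can this snippet ever print `secret`? */
#include <stdio.h>

int main(void) {
    int secret = 1234;               /* sensitive value                          */
    int buffer[4] = {0, 0, 0, 0};    /* non-sensitive values                     */

    unsigned int sel = (unsigned int)getchar() % 4u;
    /* Precise VSA annotation:   sel in {0, 1, 2, 3}                             */
    /* Imprecise VSA annotation: sel in [0, 4294967295] (sound over-approximation) */

    if (sel < 4u) {
        printf("%d\n", buffer[sel]); /* only non-sensitive data flows here        */
    } else {
        printf("%d\n", secret);      /* dead code under the precise value set;
                                        the imprecise set cannot rule it out      */
    }
    return 0;
}
```

With the precise value set, the sensitive branch is clearly unreachable and the answer is immediate; with only the imprecise set, the analyst must reason through the modulo operation themselves, which is consistent with the extra analysis time (and, in this study, the unexpected accuracy gain) observed for imprecise VSA information.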
