Zenong Zhang (University of Texas at Dallas), George Klees (University of Maryland), Eric Wang (Poolesville High School), Michael Hicks (University of Maryland), Shiyi Wei (University of Texas at Dallas)

While many real-world programs ship with configurations that enable or disable functionality, fuzzers have mostly been applied to test a single configuration of these programs. In this work, we first conduct an empirical study to understand how program configurations affect fuzzing performance. We find that limiting a campaign to a single configuration can leave a significant amount of code uncovered. We also observe that different program configurations contribute differing amounts of code coverage, challenging the idea that each one can be efficiently fuzzed individually. Motivated by these two observations, we propose ConfigFuzz, which fuzzes configurations along with normal inputs. ConfigFuzz transforms the target program to encode its program options within part of the fuzzable input, so existing fuzzers' mutation operators can be reused to fuzz program configurations. We instantiate ConfigFuzz on three common, configurable fuzzing targets and integrate their execution into FuzzBench. In our preliminary evaluation, ConfigFuzz nearly always outperforms the baseline fuzzing of a single configuration, and on one target it also outperforms fuzzing a sequence of sampled configurations. However, we find that fuzzing a sequence of sampled configurations with shared seeds sometimes improves on ConfigFuzz. We propose hypotheses for this behavior and plan to use data visualization in the full evaluation to further understand and refine ConfigFuzz.
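To illustrate the transformation the abstract describes, the following is a minimal sketch of a libFuzzer-style harness in which a prefix of the fuzzed input is decoded into program options before the remaining bytes are handed to the target. The option names, byte layout, and process_input stub are hypothetical and stand in for ConfigFuzz's actual encoding, which is generated per target.

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical option struct; a real target would map bytes to its own flags. */
    struct options {
        int enable_feature_a;   /* e.g., a decoder toggle */
        int verbosity;          /* e.g., 0..3 */
    };

    /* Stub standing in for the target's real entry point, which would be
     * invoked with the decoded configuration plus the remaining input. */
    static void process_input(const struct options *opts,
                              const uint8_t *data, size_t size) {
        (void)opts; (void)data; (void)size;
    }

    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
        /* Reserve the first two bytes of the fuzzed input for the configuration,
         * so the fuzzer's ordinary byte mutations also mutate program options. */
        if (size < 2)
            return 0;

        struct options opts;
        opts.enable_feature_a = data[0] & 0x1;  /* bit 0 toggles a feature      */
        opts.verbosity        = data[1] % 4;    /* clamp to a valid value range */

        /* The remaining bytes are the ordinary program input. */
        process_input(&opts, data + 2, size - 2);
        return 0;
    }

Because the configuration lives in the input itself, an off-the-shelf fuzzer explores option combinations and file contents in the same campaign, with no fuzzer modifications required.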
