Alex Groce (Northern Arizona University), Goutamkumar Kalburgi (Northern Arizona University), Claire Le Goues (Carnegie Mellon University), Kush Jain (Carnegie Mellon University), Rahul Gopinath (Saarland University)

Most fuzzing efforts, very understandably, focus on fuzzing the program in which bugs are to be found. However, in this paper we propose that fuzzing programs “near” the System Under Test (SUT) can in fact improve the effectiveness of fuzzing, even if it means less time is spent fuzzing the actual target system. In particular, we claim that fault detection and code coverage can be improved by splitting fuzzing resources between the SUT and mutants of the SUT. Spending half of a fuzzing budget fuzzing mutants, and then using the seeds generated to fuzz the SUT, can allow a fuzzer to explore more behaviors than spending the entire fuzzing budget on the SUT. The approach works because fuzzing most mutants is “almost” fuzzing the SUT, but may change behavior in ways that allow a fuzzer to reach deeper program behaviors. Our preliminary results show that fuzzing mutants is trivial to implement, and provides clear, statistically significant benefits in terms of fault detection for a non-trivial benchmark program; these benefits are robust to a variety of detailed choices as to how to make use of mutants in fuzzing. The proposed approach has two additional important advantages: first, it is fuzzer-agnostic, applicable to any corpus-based fuzzer without requiring modification of the fuzzer; second, the fuzzing of mutants, in addition to aiding fuzzing the SUT, also gives developers insight into the mutation score of a fuzzing harness, which may help guide improvements to a project’s fuzzing approach.
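To make the split-budget workflow concrete, the sketch below shows one way a corpus-based fuzzer could be driven without modification: fuzz a set of pre-built mutant binaries for half of the budget, pool every input those runs generate, and then spend the remaining half fuzzing the real SUT seeded with that pooled corpus. This is not the paper's implementation; the AFL-style command line, the `mutants/` and `queue/` directory layout, the 50/50 split, and the even division of time across mutants are all illustrative assumptions.

```python
#!/usr/bin/env python3
"""Illustrative sketch of splitting a fuzzing budget between mutants and the SUT.

Assumptions (not from the paper): an AFL-style fuzzer invoked as
`afl-fuzz -i <in> -o <out> -- <binary> @@`, mutant binaries already compiled
into mutants/, and generated inputs stored under a queue/ directory in each
fuzzer output. Adjust paths and flags for the fuzzer actually in use.
"""
import glob
import os
import shutil
import subprocess

TOTAL_BUDGET_S = 4 * 3600                # whole fuzzing budget, in seconds
MUTANT_BUDGET_S = TOTAL_BUDGET_S // 2    # half the budget goes to mutants
SEEDS_DIR = "seeds"                      # initial seed corpus
MUTANTS = sorted(glob.glob("mutants/mutant_*"))  # pre-built mutant binaries
SUT = "./sut"                            # the real target binary


def fuzz(binary: str, in_dir: str, out_dir: str, seconds: int) -> None:
    """Run one AFL-style fuzzing session for a fixed wall-clock time."""
    os.makedirs(out_dir, exist_ok=True)
    subprocess.run(
        ["timeout", str(seconds),
         "afl-fuzz", "-i", in_dir, "-o", out_dir, "--", binary, "@@"],
        check=False,  # timeout(1) terminates afl-fuzz, so a non-zero exit is expected
    )


def collect_queue(out_dir: str, dest: str) -> None:
    """Copy every generated input (the fuzzer's queue entries) into a shared corpus."""
    for path in glob.glob(os.path.join(out_dir, "**", "queue", "*"), recursive=True):
        if os.path.isfile(path):
            shutil.copy(path, os.path.join(dest, f"{len(os.listdir(dest)):06d}"))


def main() -> None:
    merged = "merged_corpus"
    os.makedirs(merged, exist_ok=True)
    for seed in glob.glob(os.path.join(SEEDS_DIR, "*")):
        shutil.copy(seed, merged)

    # Phase 1: split half the budget evenly across the mutant binaries.
    per_mutant = MUTANT_BUDGET_S // max(len(MUTANTS), 1)
    for i, mutant in enumerate(MUTANTS):
        out = f"out_mutant_{i}"
        fuzz(mutant, SEEDS_DIR, out, per_mutant)
        collect_queue(out, merged)  # mutant runs contribute new seeds

    # Phase 2: spend the remaining budget on the real SUT, seeded with
    # everything discovered while fuzzing the mutants.
    fuzz(SUT, merged, "out_sut", TOTAL_BUDGET_S - MUTANT_BUDGET_S)


if __name__ == "__main__":
    main()
```

Because the driver only shells out to the fuzzer and shuffles corpus directories, the same pattern applies to any corpus-based fuzzer; as a side effect, the mutant runs that never produce a crash expose surviving mutants, giving a rough mutation score for the harness.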
