Patrick Jauernig (Technical University of Darmstadt), Domagoj Jakobovic (University of Zagreb, Croatia), Stjepan Picek (Radboud University and TU Delft), Emmanuel Stapf (Technical University of Darmstadt), Ahmad-Reza Sadeghi (Technical University of Darmstadt)

Fuzzing is an automated software testing technique that is broadly adopted in industry. A popular variant is mutation-based fuzzing, which discovers a large number of bugs in practice. While the research community has studied mutation-based fuzzing for years, the interactions of the algorithms within a fuzzer are highly complex and, together with the inherent randomness of every fuzzer instance, can lead to unpredictable effects. Most efforts to improve this fragile interaction have focused on optimizing seed scheduling. However, real-world results such as Google's FuzzBench show that these approaches do not consistently yield improvements in practice. Another way to improve the fuzzing process algorithmically is to optimize mutation scheduling. Unfortunately, existing mutation scheduling approaches have also fallen short, either failing to deliver real-world improvements or requiring too many user-controlled parameters whose configuration demands expert knowledge of the target program. This leaves the challenging problem of processing test cases cleverly while achieving a measurable improvement unsolved. We present DARWIN, a novel mutation scheduler and the first to show fuzzing improvements in a realistic scenario without introducing additional user-configurable parameters, opening this approach to the broad fuzzing community. DARWIN uses an Evolution Strategy to systematically optimize and adapt the probability distribution of the mutation operators during fuzzing. We implemented a prototype based on the popular general-purpose fuzzer AFL. DARWIN significantly outperforms the state-of-the-art mutation scheduler and the AFL baseline in our own coverage experiment, in FuzzBench, and in the MAGMA benchmark, where it finds 15 out of 21 bugs the fastest. Finally, DARWIN found 20 unique bugs (including one previously unknown bug), 66% more than AFL, in widely used real-world applications.
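
The abstract does not spell out DARWIN's concrete algorithm, but the core idea of adapting the mutation-operator probability distribution with an Evolution Strategy can be illustrated with a minimal sketch. The operator names, the (1+lambda)-style selection, and the fitness callback (standing in for a fuzzing round that reports coverage gained) are assumptions made for illustration only, not the paper's actual AFL-integrated implementation.

```python
import random

# Hypothetical operator names standing in for AFL's mutation operators.
OPERATORS = ["bitflip", "byteflip", "arith", "interesting", "havoc_insert", "havoc_delete"]


def normalize(weights):
    """Scale a weight vector so it forms a probability distribution."""
    total = sum(weights)
    return [w / total for w in weights]


def perturb(dist, sigma=0.1):
    """Apply Gaussian noise to each operator probability, then renormalize."""
    return normalize([max(1e-6, p + random.gauss(0.0, sigma)) for p in dist])


def evolve(fitness, generations=50, offspring=4):
    """(1+lambda)-style Evolution Strategy over the operator distribution.

    `fitness(dist)` is assumed to run one fuzzing round that samples mutation
    operators according to `dist` and returns the coverage it gained.
    """
    parent = normalize([1.0] * len(OPERATORS))
    parent_fitness = fitness(parent)
    for _ in range(generations):
        candidates = [perturb(parent) for _ in range(offspring)]
        scored = [(fitness(c), c) for c in candidates]
        best_fitness, best = max(scored, key=lambda pair: pair[0])
        if best_fitness >= parent_fitness:  # keep the better-performing distribution
            parent, parent_fitness = best, best_fitness
    return parent


if __name__ == "__main__":
    # Toy fitness: pretend one operator is the most productive for this target.
    def toy_fitness(dist):
        return dist[OPERATORS.index("havoc_insert")] + random.uniform(0.0, 0.05)

    print(evolve(toy_fitness))
```

In an actual integration, the toy fitness function would presumably be replaced by coverage feedback from the fuzzer, and the evolved distribution would bias which mutation operators the fuzzer applies to scheduled test cases.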
