Patrick Jauernig (Technical University of Darmstadt), Domagoj Jakobovic (University of Zagreb, Croatia), Stjepan Picek (Radboud University and TU Delft), Emmanuel Stapf (Technical University of Darmstadt), Ahmad-Reza Sadeghi (Technical University of Darmstadt)

Fuzzing is an automated software testing technique broadly adopted by the industry. A popular variant is mutation-based fuzzing, which discovers a large number of bugs in practice. While the research community has studied mutation-based fuzzing for years now, the interactions between the algorithms inside the fuzzer are highly complex and can, together with the randomness inherent in every fuzzer instance, lead to unpredictable effects. Most efforts to improve this fragile interaction have focused on optimizing seed scheduling. However, real-world results like Google's FuzzBench highlight that these approaches do not consistently show improvements in practice. Another way to improve the fuzzing process algorithmically is to optimize mutation scheduling. Unfortunately, existing mutation scheduling approaches have also failed to convince, either because they show no real-world improvements or because they introduce too many user-controlled parameters whose configuration requires expert knowledge of the target program. This leaves the challenging problem of cleverly processing test cases to achieve a measurable improvement unsolved. We present DARWIN, a novel mutation scheduler and the first to show fuzzing improvements in a realistic scenario without introducing additional user-configurable parameters, opening this approach to the broad fuzzing community. DARWIN uses an Evolution Strategy to systematically optimize and adapt the probability distribution of the mutation operators during fuzzing. We implemented a prototype based on the popular general-purpose fuzzer AFL. DARWIN significantly outperforms the state-of-the-art mutation scheduler and the AFL baseline in our own coverage experiments and in FuzzBench, and finds 15 out of 21 bugs the fastest in the MAGMA benchmark. Finally, DARWIN found 20 unique bugs (including one novel bug), 66% more than AFL, in widely-used real-world applications.
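
To illustrate the general idea of adapting a mutation-operator distribution with an Evolution Strategy, the following Python sketch shows a simple (1+1)-ES over a probability vector. It is not the authors' implementation: the operator names, the coverage_gain callback, and all parameters are illustrative assumptions, standing in for running the fuzzer for a fixed budget and measuring new coverage.

import random

# Hypothetical mutation operators, loosely inspired by AFL-style mutations;
# these names are placeholders, not AFL's actual identifiers.
OPERATORS = ["bitflip", "byteflip", "arith", "interesting", "insert", "delete"]

def normalize(weights):
    """Rescale a positive weight vector into a probability distribution."""
    total = sum(weights)
    return [w / total for w in weights]

def perturb(probs, sigma=0.05):
    """ES mutation: add Gaussian noise to each probability, then renormalize."""
    noisy = [max(1e-6, p + random.gauss(0.0, sigma)) for p in probs]
    return normalize(noisy)

def evolution_strategy(coverage_gain, generations=50):
    """(1+1)-ES over the mutation-operator probability distribution.

    coverage_gain(probs) is an assumed stand-in for fuzzing the target with
    the given operator distribution and reporting the coverage gained.
    """
    parent = normalize([1.0] * len(OPERATORS))   # start from a uniform distribution
    parent_fitness = coverage_gain(parent)
    for _ in range(generations):
        child = perturb(parent)
        child_fitness = coverage_gain(child)
        if child_fitness >= parent_fitness:       # keep whichever distribution did better
            parent, parent_fitness = child, child_fitness
    return dict(zip(OPERATORS, parent))

if __name__ == "__main__":
    # Toy fitness function: pretend arithmetic mutations are most productive here.
    toy = lambda probs: probs[OPERATORS.index("arith")] + random.random() * 0.01
    print(evolution_strategy(toy))

In a real fuzzer, the fitness evaluation would be driven by coverage feedback from executing mutated test cases, and the adapted distribution would steer which operator is applied next.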
