Liam Wachter (EPFL), Julian Gremminger (EPFL), Christian Wressnegger (Karlsruhe Institute of Technology (KIT)), Mathias Payer (EPFL), Flavio Toffalini (EPFL)

Web browsers are ubiquitous and execute untrusted JavaScript (JS) code. JS engines optimize frequently executed code through just-in-time (JIT) compilation. Subtly conflicting assumptions between optimizations frequently result in JS engine vulnerabilities. Attackers can take advantage of such diverging assumptions and use the flexibility of JS to craft exploits that produce a miscalculation, remove bounds checks in JIT-compiled code, and ultimately gain arbitrary code execution. Classical fuzzing approaches for JS engines only detect bugs if the engine crashes or a runtime assertion fails. Differential fuzzing can compare interpreted code against optimized JIT-compiled code to detect differences in execution. Recent approaches probe the execution states of JS programs through ad-hoc JS functions that read the value of variables at runtime. However, these approaches detect only a limited set of diverging executions, and the injected probes themselves inhibit optimizations during JIT compilation, thus leaving JS engines under-tested.
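
To illustrate the differential idea, the following is a minimal sketch of comparing interpreted against JIT-compiled execution in V8's d8 shell. The %-prefixed calls are V8-internal test hooks (available with --allow-natives-syntax); their use here is an assumption about the test setup, not a description of any specific fuzzer:

```js
// Minimal differential-execution sketch for V8's d8 shell, run with
// --allow-natives-syntax. The %-prefixed natives are V8 test hooks.
function f(a, i) {
  return a[i] + 1; // candidate for bounds-check elision in JIT code
}

const arr = [1, 2, 3];

// Reference result from the unoptimized (interpreted) tier.
%PrepareFunctionForOptimization(f);
const interpreted = f(arr, 1);

// Force JIT compilation, then re-execute the same call.
%OptimizeFunctionOnNextCall(f);
const optimized = f(arr, 1);

// A mismatch between the two tiers signals a miscompilation.
if (interpreted !== optimized) {
  throw new Error("interpreter and JIT disagree on f(arr, 1)");
}
```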

We propose DUMPLING, a differential fuzzer that compares the full state of optimized and unoptimized execution for arbitrary JS programs. Instead of instrumenting the JS input, DUMPLING instruments the JS engine itself, enabling deep and precise introspection. The resulting fine-grained execution states, coined (frame) dumps, are extracted at high frequency, even in the middle of JIT-compiled functions. DUMPLING finds eight new bugs in the thoroughly tested V8 engine, where previous differential fuzzing approaches struggled to discover new bugs. We received $11,000 from Google's Vulnerability Rewards Program for reporting the vulnerabilities found by DUMPLING.
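
As a purely illustrative sketch (the field names and comparison logic below are assumptions, not DUMPLING's actual format), a frame dump can be thought of as a snapshot of the live values at one point inside a function, and divergence detection reduces to comparing the dump streams of the two execution tiers:

```js
// Hypothetical shape of a frame dump (illustrative only): one snapshot
// of the live execution state at a sample point inside a function.
const exampleDump = {
  fn: "f",                  // function being executed
  offset: 42,               // position within the function's code
  locals: { i: 7, sum: 28 } // current values of local variables
};

// Differential check: the dump streams of the unoptimized and the
// JIT-compiled run must agree at every sample point.
function dumpsDiverge(unoptDumps, optDumps) {
  if (unoptDumps.length !== optDumps.length) return true;
  return unoptDumps.some(
    (d, k) => JSON.stringify(d) !== JSON.stringify(optDumps[k])
  );
}
```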
