Qi Ling (Purdue University), Yujun Liang (Tsinghua University), Yi Ren (Tsinghua University), Baris Kasikci (University of Washington and Google), Shuwen Deng (Tsinghua University)

Since their emergence in 2018, speculative execution attacks have proven difficult to fully prevent without substantial performance overhead. This is because most mitigations restrict speculative execution, which underpins many of the optimization techniques in modern processors. To address this, numerous scanners have been developed to identify vulnerable code snippets (speculative gadgets) within software applications, allowing mitigations to be applied selectively and thereby minimizing performance degradation.

In this paper, we show that existing speculative gadget scanners lack accuracy, often misclassifying gadgets due to limited modeling of timing properties. We instead identify another fundamental condition intrinsic to all speculative attacks: a timing requirement that takes the form of a race condition inside the gadget. Specifically, the attacker must win the race between the resolution of the speculated authorization and the leakage of the secret to successfully exploit the gadget. We therefore introduce GadgetMeter, a framework designed to quantitatively gauge the exploitability of speculative gadgets based on this timing property. We systematically explore the attacker's power to optimize the race condition inside gadgets (windowing power). A Directed Acyclic Instruction Graph is used to model timing conditions, and static analysis is combined with runtime testing to optimize attack patterns and quantify gadget vulnerability. We use GadgetMeter to evaluate gadgets in a wide range of software, including six real-world applications and the Linux kernel. Our results show that GadgetMeter can accurately identify exploitable speculative gadgets and quantify their vulnerability level, classifying 471 gadgets reported by prior works as unexploitable.
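
For illustration (this sketch is not taken from the paper), consider a minimal Spectre-v1-style gadget in C. The race condition described above is between the resolution of the speculated bounds check (the authorization) and the secret-dependent memory access that leaves a cache footprint (the leakage); an attacker with more windowing power can delay the check's resolution, widening the window in which the leaking access executes speculatively.

/* Illustrative Spectre-v1-style gadget (hypothetical example, not from the paper).
 * The exploit races the mispredicted bounds check against the
 * secret-dependent accesses below it. */
#include <stdint.h>
#include <stddef.h>

uint8_t array1[16];            /* attacker controls the index x into this array     */
size_t  array1_size = 16;      /* a slow (e.g., uncached) load of this value delays */
                               /* resolution of the bounds check below              */
uint8_t array2[256 * 64];      /* probe array used as a cache side channel          */

void victim(size_t x) {
    if (x < array1_size) {              /* speculated authorization                  */
        uint8_t secret = array1[x];     /* out-of-bounds read under misspeculation   */
        (void)array2[secret * 64];      /* secret leakage via cache state            */
    }
}

A gadget in which the check resolves quickly relative to the leaking accesses would be far harder to exploit; this is the kind of distinction a quantitative timing metric aims to capture.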
