Qi Ling (Purdue University), Yujun Liang (Tsinghua University), Yi Ren (Tsinghua University), Baris Kasikci (University of Washington and Google), Shuwen Deng (Tsinghua University)

Since their emergence in 2018, speculative execution attacks have proven difficult to fully prevent without substantial performance overhead. This is because most mitigations restrict speculation itself, which underpins many of modern processors' optimization techniques. To address this, numerous scanners have been developed to identify vulnerable code snippets (speculative gadgets) within software applications, allowing mitigations to be applied selectively and thereby minimizing performance degradation.

In this paper, we show that existing speculative gadget scanners lack accuracy, often misclassifying gadgets due to limited modeling of timing properties. Instead, we identify another fundamental condition intrinsic to all speculative attacks: the timing requirement, a race condition inside the gadget. Specifically, the attacker must win the race between speculated authorization and secret leakage to successfully exploit the gadget. Therefore, we introduce GadgetMeter, a framework designed to quantitatively gauge the exploitability of speculative gadgets based on their timing properties. We systematically explore the attacker's power to optimize the race condition inside gadgets (windowing power). A Directed Acyclic Instruction Graph is used to model timing conditions, and static analysis is combined with runtime testing to optimize attack patterns and quantify gadget vulnerability. We use GadgetMeter to evaluate gadgets in a wide range of software, including six real-world applications and the Linux kernel. Our results show that GadgetMeter can accurately identify exploitable speculative gadgets and quantify their vulnerability level, identifying 471 gadgets reported by prior works as unexploitable.
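
To make the race condition concrete, consider a classic Spectre-v1-style bounds-check-bypass gadget (an illustrative sketch, not an example from the paper; the array names and sizes below are hypothetical). The mispredicted bounds check plays the role of the speculated authorization, and the two dependent loads perform the secret leakage; the gadget is exploitable only if those loads complete before the branch resolves and the speculative work is squashed.

    /*
     * Illustrative sketch (not from the paper): a Spectre-v1-style gadget
     * showing the race condition described in the abstract. The names
     * array1, array2, array1_size, and probe are hypothetical.
     */
    #include <stdint.h>
    #include <stddef.h>

    uint8_t array1[16];
    uint8_t array2[256 * 512];
    size_t array1_size = 16;

    void victim(size_t x) {
        /* "Speculated authorization": the bounds check. If array1_size is
         * not cached, resolving this branch takes hundreds of cycles,
         * widening the speculation window. */
        if (x < array1_size) {
            /* "Secret leakage": during speculation, the out-of-bounds byte
             * is read and encoded into the cache state of array2. The attack
             * succeeds only if these two loads finish before the mispredicted
             * branch is squashed -- the race condition at issue. */
            uint8_t secret = array1[x];
            volatile uint8_t probe = array2[secret * 512];
            (void)probe;
        }
    }

An attacker who can, for instance, evict array1_size from the cache delays the authorization side of the race and widens the speculation window; this kind of attacker capability is what the abstract refers to as windowing power.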
