Brian Johannesmeyer (VU Amsterdam), Jakob Koschel (VU Amsterdam), Kaveh Razavi (ETH Zurich), Herbert Bos (VU Amsterdam), Cristiano Giuffrida (VU Amsterdam)

Due to the high cost of serializing instructions to mitigate Spectre-like attacks on mispredicted conditional branches (Spectre-PHT), developers of critical software such as the Linux kernel selectively apply such mitigations, via annotations, to code paths they assume to be dangerous under speculative execution. This approach leads to incomplete protection, as it applies mitigations only to easy-to-spot gadgets. Still, until now, this was sufficient, because existing gadget scanners (and kernel developers) are pattern-driven: they look for known exploit signatures and cannot detect more generic gadgets.
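To make the annotation-based approach concrete, the following is a minimal C sketch of a classic Spectre-PHT (bounds check bypass) gadget and the kind of manual mitigation the abstract refers to. The function and variable names (`lookup`, `table`, `probe_buf`) are illustrative assumptions; `array_index_nospec()` is the Linux kernel's index-masking helper from `<linux/nospec.h>`, which developers must remember to apply by hand to each path they believe is dangerous.

```c
/* Sketch of a Spectre-PHT gadget in kernel-style C (assumed names). */
#include <linux/nospec.h>

#define TABLE_SIZE 256

static unsigned char table[TABLE_SIZE];
static unsigned char probe_buf[256 * 512];   /* cache covert-channel buffer */

unsigned char lookup(unsigned long attacker_idx)
{
	if (attacker_idx < TABLE_SIZE) {
		/*
		 * Without the annotation below, a mispredicted branch lets the
		 * CPU transiently read table[attacker_idx] out of bounds and
		 * encode the value in the cache via probe_buf.
		 */
		attacker_idx = array_index_nospec(attacker_idx, TABLE_SIZE);
		return probe_buf[table[attacker_idx] * 512];
	}
	return 0;
}
```

Scanners that only match this well-known pattern (a bounds check followed by a dependent array access) miss gadgets that achieve the same effect through less obvious means.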

In this paper, we abandon pattern scanning for an approach that models the essential steps used in speculative execution attacks, allowing us to find more generic gadgets, well beyond the reach of existing scanners. In particular, we present Kasper, a speculative execution gadget scanner that uses taint analysis policies to model an attacker capable of exploiting arbitrary software/hardware vulnerabilities on a transient path to control data (e.g., through memory massaging or LVI), access secrets (e.g., through out-of-bounds or use-after-free accesses), and leak these secrets (e.g., through cache-based, MDS-based, or port contention-based covert channels).
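The sketch below illustrates the three attack steps this model captures, rather than any specific gadget from a real codebase; all names (`elem`, `transient_path`, `leak_buf`) are hypothetical and the comments mark where a taint policy would flag each step.

```c
/* Illustrative C sketch of the modeled attack steps (assumed names). */
struct elem {
	struct elem *next;
	unsigned long key;
};

static unsigned char leak_buf[256 * 4096];

void transient_path(struct elem *e, unsigned long user_req)
{
	/* Step 1: control. Under speculation, e may point to freed or
	 * attacker-massaged memory, so e->key is attacker-controlled. */
	unsigned long idx = e->key + user_req;

	/* Step 2: access. The controlled index is used for an out-of-bounds
	 * load, bringing a secret byte into a register. */
	unsigned char secret = ((unsigned char *)e)[idx];

	/* Step 3: leak. A secret-dependent load encodes the byte in the
	 * cache, observable (e.g., via FLUSH+RELOAD) after the transient
	 * path is squashed. */
	(void)*(volatile unsigned char *)&leak_buf[secret * 4096];
}
```

Because the policies track how attacker control, secret access, and transmission compose, a gadget is reported even when it does not match a known syntactic pattern such as the bounds-check-bypass shape above.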

Finally, where existing solutions target user programs, Kasper finds gadgets in the kernel, a higher-value attack target that is also more complicated to analyze. Even though the kernel is heavily hardened against transient execution attacks, Kasper finds 1379 gadgets that are not yet mitigated. We confirm our findings by demonstrating an end-to-end proof-of-concept exploit for one of the gadgets found by Kasper.
