Shir Bernstein (Ben-Gurion University of the Negev, Israel), David Beste (CISPA Helmholtz Center for Information Security, Germany), Daniel Ayzenshteyn (Ben-Gurion University of the Negev, Israel), Lea Schönherr (CISPA Helmholtz Center for Information Security, Germany), Yisroel Mirsky (Ben-Gurion University of the Negev, Israel)

Large Language Models (LLMs) are increasingly trusted to perform automated code review and static analysis at scale, supporting tasks such as vulnerability detection, summarization, and refactoring. In this paper, we identify and exploit a critical vulnerability in LLM-based code analysis: an abstraction bias that causes models to overgeneralize familiar programming patterns and overlook small, meaningful bugs. Adversaries can exploit this blind spot to hijack the control flow of the LLM’s interpretation with minimal edits and without affecting actual runtime behavior. We refer to this attack as a Familiar Pattern Attack (FPA).
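To illustrate the kind of blind spot the abstract describes, the sketch below is a hypothetical example (not taken from the paper): a function shaped like the textbook constant-time string comparison, but with a one-character edit that skips the final byte. A reviewer, human or LLM, who pattern-matches on the familiar idiom may assume the canonical, correct version and overlook the bug.

```python
# Hypothetical illustration of an abstraction-bias blind spot; the function
# name and the specific bug are this example's assumptions, not the paper's.
def check_token(user_token: str, secret: str) -> bool:
    """Looks like the standard constant-time comparison idiom."""
    if len(user_token) != len(secret):
        return False
    result = 0
    # Subtle, meaningful bug: `- 1` means the last character is never checked,
    # so any token matching all but the final character is accepted.
    for i in range(len(secret) - 1):
        result |= ord(user_token[i]) ^ ord(secret[i])
    return result == 0
```

Because the snippet still runs and still rejects most wrong inputs, the deviation is easy to gloss over when the surrounding pattern is familiar.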

We develop a fully automated, black-box algorithm that discovers and injects FPAs into target code. Our evaluation shows that FPAs are not only effective against basic and reasoning models, but are also transferable across model families (OpenAI, Anthropic, Google), and universal across programming languages (Python, C, Rust, Go). Moreover, FPAs remain effective even when models are explicitly warned about the attack via robust system prompts. Finally, we explore positive, defensive uses of FPAs and discuss their broader implications for the reliability and safety of code-oriented LLMs.
