Md Abdul Hannan (Colorado State University), Ronghao Ni (Carnegie Mellon University), Chi Zhang (Carnegie Mellon University), Limin Jia (Carnegie Mellon University), Ravi Mangal (Colorado State University), Corina S. Pasareanu (Carnegie Mellon University)

Large language models (LLMs) have demonstrated impressive capabilities across a wide range of coding tasks, including summarization, translation, completion, and code generation. Despite these advances, detecting code vulnerabilities remains a challenging problem for LLMs. In-context learning (ICL) has emerged as an effective mechanism for improving model performance by providing a small number of labeled examples within the prompt. Prior work has shown, however, that the effectiveness of ICL depends critically on how these few-shot examples are selected. In this paper, we study two intuitive criteria for selecting few-shot examples for ICL in the context of code vulnerability detection. The first criterion leverages model behavior by prioritizing samples on which the LLM consistently makes mistakes, motivated by the intuition that such samples can expose and correct systematic model weaknesses. The second criterion selects examples based on semantic similarity to the query program, using k-nearest-neighbor retrieval to identify relevant contexts.
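The second criterion above can be sketched in a few lines. This is a minimal, illustrative sketch only: the paper does not specify its embedding model or retrieval implementation, so the toy embedding vectors, the `knn_select` helper, and the example pool below are all hypothetical.

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def knn_select(query_vec, pool, k=2):
    # Return the k labeled examples whose (pre-computed) embeddings
    # are nearest to the query program's embedding.
    ranked = sorted(pool, key=lambda ex: cosine(query_vec, ex["vec"]),
                    reverse=True)
    return ranked[:k]

# Toy pool of labeled programs; vectors stand in for real code embeddings.
pool = [
    {"code": "eval(user_input)", "label": "vulnerable", "vec": [0.9, 0.1, 0.0]},
    {"code": "print('hello')",   "label": "safe",       "vec": [0.0, 1.0, 0.2]},
    {"code": "os.system(cmd)",   "label": "vulnerable", "vec": [0.8, 0.2, 0.1]},
]

# Embed the query program, retrieve its nearest neighbors, and format
# them as few-shot examples for the prompt.
query_vec = [0.85, 0.15, 0.05]
shots = knn_select(query_vec, pool, k=2)
prompt = "\n\n".join(f"Code: {s['code']}\nLabel: {s['label']}" for s in shots)
```

In a real pipeline, the embeddings would come from a code embedding model rather than hand-written vectors, and the retrieved examples would be prepended to the vulnerability-detection query before it is sent to the LLM.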

We conduct extensive evaluations using open-source LLMs and datasets spanning multiple programming languages. Our results show that for Python and JavaScript, careful selection of few-shot examples can lead to measurable performance improvements in vulnerability detection. In contrast, for C and C++ programs, few-shot example selection has limited impact, suggesting that more powerful but also more expensive approaches, such as retraining or fine-tuning, may be required to substantially improve model performance.
