Jie Lin (University of Central Florida), David Mohaisen (University of Central Florida)

Large Language Models (LLMs) have demonstrated strong potential in tasks such as code understanding and generation. This study evaluates several advanced LLMs, including LLaMA-2, CodeLLaMA, LLaMA-3, Mistral, Mixtral, Gemma, CodeGemma, Phi-2, Phi-3, and GPT-4, for vulnerability detection, primarily in Java, with additional tests in C/C++ to assess generalization. We move from a basic task of detecting vulnerabilities in positive samples only to a more challenging setting that mixes positive and negative samples, and we further evaluate the LLMs’ ability to identify specific vulnerability types. Performance is analyzed in terms of runtime and detection accuracy under zero-shot and few-shot prompting, using both custom and generic metrics. Key insights include the strong performance of models such as Gemma and LLaMA-2 in identifying vulnerabilities, although this success varies, with some configurations performing no better than random guessing. Performance also fluctuates significantly across programming languages and learning modes (zero- vs. few-shot). We further investigate the impact of model parameters, quantization methods, context window (CW) sizes, and architectural choices on vulnerability detection. While larger CW sizes consistently improve performance, the benefits of other parameters, such as quantization, are more limited. Overall, our findings underscore the potential of LLMs for automated vulnerability detection, the complex interplay of model parameters, and the current limitations across scenarios and configurations.
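As a concrete illustration of the zero-shot versus few-shot setup described above, the following Python sketch shows how prompts for binary vulnerability detection could be constructed and scored over labeled code snippets. This is a minimal sketch, not the authors' evaluation harness: the prompt templates, the YES/NO answer parsing, and the `query_model` callback (a hypothetical stand-in for whichever LLM backend is used, e.g., a local LLaMA-2 or Gemma deployment) are all illustrative assumptions. The paper additionally evaluates per-vulnerability-type identification, which this binary sketch does not cover.

```python
# Minimal sketch (not the paper's actual harness): zero-shot vs. few-shot prompting of an
# LLM for binary vulnerability detection, scored by accuracy over labeled snippets.
# `query_model` is a hypothetical placeholder for any chat/completion backend.

from typing import Callable, List, Optional, Tuple

ZERO_SHOT_TEMPLATE = (
    "You are a security auditor. Answer only YES or NO.\n"
    "Is the following Java code vulnerable?\n\n{code}\n"
)

FEW_SHOT_TEMPLATE = (
    "You are a security auditor. Answer only YES or NO.\n"
    "Example (vulnerable):\n{pos}\nAnswer: YES\n\n"
    "Example (not vulnerable):\n{neg}\nAnswer: NO\n\n"
    "Is the following Java code vulnerable?\n\n{code}\n"
)


def build_prompt(code: str, shots: Optional[Tuple[str, str]] = None) -> str:
    """Return a zero-shot prompt, or a few-shot prompt when example snippets are supplied."""
    if shots is None:
        return ZERO_SHOT_TEMPLATE.format(code=code)
    pos, neg = shots
    return FEW_SHOT_TEMPLATE.format(pos=pos, neg=neg, code=code)


def evaluate(samples: List[Tuple[str, bool]],
             query_model: Callable[[str], str],
             shots: Optional[Tuple[str, str]] = None) -> float:
    """Accuracy over (code, is_vulnerable) pairs; a YES in the model's reply counts as positive."""
    correct = 0
    for code, is_vulnerable in samples:
        reply = query_model(build_prompt(code, shots))
        predicted = "YES" in reply.upper()
        correct += int(predicted == is_vulnerable)
    return correct / len(samples) if samples else 0.0
```

For instance, calling `evaluate(samples, query_model)` would give a zero-shot accuracy, while passing a `(vulnerable_example, benign_example)` pair as `shots` would give the few-shot accuracy for the same model and dataset.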
