Weiheng Bai (University of Minnesota), Qiushi Wu (IBM Research), Kefu Wu, Kangjie Lu (University of Minnesota)

In recent years, large language models (LLMs) have been widely applied to security-related tasks such as security bug identification and patch analysis. Their effectiveness in these tasks often depends on how prompts are constructed. While state-of-the-art research has proposed several factors for building more effective prompts, the influence of prompt content on the accuracy and efficacy of LLMs in security tasks remains underexplored. To address this gap, we conduct a comprehensive experiment assessing various prompting methodologies in the context of security-related tasks. We employ diverse prompt structures and contents and evaluate their impact on LLM performance. Our findings suggest that appropriate modifications to prompt structure and content can significantly enhance LLM performance on specific security tasks, whereas improper prompting can markedly reduce it. This research not only deepens the understanding of how prompts influence LLMs but also serves as a guide for future work on prompt optimization for security tasks. Our code and dataset are available at Wayne-Bai/Prompt-Affection.
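The paper's actual prompt templates and dataset are in the Wayne-Bai/Prompt-Affection repository. As a minimal sketch of the kind of controlled variation the abstract describes, the snippet below builds prompt variants for a bug-identification task that differ in one structural element at a time (the role text, reasoning instruction, and code sample are illustrative assumptions, not the paper's actual templates):

```python
# Hypothetical sketch: assembling prompt variants for a security task,
# varying one structural element (role, reasoning request) at a time.
# All template text here is illustrative, not from the paper.

CODE_SNIPPET = """\
char buf[8];
strcpy(buf, user_input);  /* potential overflow */
"""

BASE_TASK = "Does the following C code contain a security bug? Answer yes or no."


def build_prompt(task, code, role=None, ask_reasoning=False):
    """Assemble a prompt from optional structural elements."""
    parts = []
    if role:
        parts.append(f"You are {role}.")  # persona/role element
    parts.append(task)                    # task description
    parts.append(f"```c\n{code}```")      # code under analysis
    if ask_reasoning:                     # chain-of-thought-style request
        parts.append("Explain your reasoning step by step before answering.")
    return "\n\n".join(parts)


# Variants differing in exactly one prompt element each.
variants = {
    "plain": build_prompt(BASE_TASK, CODE_SNIPPET),
    "with_role": build_prompt(
        BASE_TASK, CODE_SNIPPET, role="an experienced security auditor"
    ),
    "with_reasoning": build_prompt(BASE_TASK, CODE_SNIPPET, ask_reasoning=True),
}

for name, prompt in variants.items():
    print(f"--- {name} ({len(prompt)} chars) ---")
    print(prompt)
```

Each variant would then be sent to the LLM under evaluation and scored against ground-truth labels, so that performance differences can be attributed to the single element that changed.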
