Weiheng Bai (University of Minnesota), Qiushi Wu (IBM Research), Kefu Wu, Kangjie Lu (University of Minnesota)

In recent years, large language models (LLMs) have been widely used in security-related tasks such as security bug identification and patch analysis. The effectiveness of LLMs in these tasks is often influenced by how prompts are constructed. Some state-of-the-art research has proposed factors for building more effective prompts; however, the influence of prompt content on the accuracy and efficacy of LLMs in executing security tasks remains underexplored. Addressing this gap, our study conducts a comprehensive experiment assessing various prompting methodologies in the context of security-related tasks. We employ diverse prompt structures and contents and evaluate their impact on LLM performance. Our findings suggest that appropriately modifying prompt structures and content can significantly enhance the performance of LLMs on specific security tasks, whereas improper prompting methods can markedly reduce their effectiveness. This research not only contributes to the understanding of prompt influence on LLMs but also serves as a valuable guide for future studies on prompt optimization for security tasks. Our code and dataset are available at Wayne-Bai/Prompt-Affection.
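The abstract does not spell out the exact prompt variations studied. As an illustrative sketch only, the snippet below shows one way to assemble prompt variants that differ in structure and content (a role preamble, a bare task description, added domain context) for a security bug identification query, and to send each variant to an LLM. The template names, the `query_llm` helper, the model name, and the example vulnerable code are assumptions made for illustration, not the authors' templates; the actual prompts and evaluation scripts are in the Wayne-Bai/Prompt-Affection repository.

```python
# Illustrative sketch (not the authors' code): build prompt variants that
# differ in structure/content for a security bug identification task, then
# query an LLM with each variant via the OpenAI chat API.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical code under analysis (classic missing-bounds-check bug).
CODE_SNIPPET = """
void copy_name(char *dst, const char *src) {
    strcpy(dst, src);   /* no length check */
}
"""

# Prompt variants differing in structure and content; the study's real
# templates live in the Wayne-Bai/Prompt-Affection repository.
PROMPT_VARIANTS = {
    "task_only": (
        "Does the following C function contain a security bug? "
        "Answer yes or no and explain briefly.\n" + CODE_SNIPPET
    ),
    "role_plus_task": (
        "You are an experienced security auditor.\n"
        "Does the following C function contain a security bug? "
        "Answer yes or no and explain briefly.\n" + CODE_SNIPPET
    ),
    "task_plus_context": (
        "Does the following C function contain a security bug? "
        "Consider memory-safety issues such as buffer overflows.\n"
        + CODE_SNIPPET
    ),
}

def query_llm(prompt: str, model: str = "gpt-4o-mini") -> str:
    """Send a single prompt variant to the model and return its reply."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Compare how each prompt variant changes the model's answer.
    for name, prompt in PROMPT_VARIANTS.items():
        print(f"--- {name} ---")
        print(query_llm(prompt))
```

In a study like this, the replies from each variant would then be scored against ground-truth labels to quantify how structure and content choices affect accuracy.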
