Xiangpu Song (School of Cyber Science and Technology, Shandong University), Longjia Pei (School of Cyber Science and Technology, Shandong University), Jianliang Wu (Simon Fraser University), Yingpei Zeng (Hangzhou Dianzi University), Gaoshuo He (School of Cyber Science and Technology, Shandong University), Chaoshun Zuo (Independent Researcher), Xiaofeng Liu (School of Cyber Science and Technology, Shandong University), Qingchuan Zhao (City University of Hong Kong), Shanqing Guo (School of Cyber Science and Technology, Shandong University, Shandong Key Laboratory of Artificial Intelligence Security and State Key Laboratory of Cryptography and Digital Economy Security, Shandong University)

Network protocol implementations are expected to comply strictly with their specifications to ensure reliable and secure communication. However, the inherent ambiguity of natural-language specifications often leads developers to misinterpret them, causing protocol implementations to deviate from standard behaviors. These deviations result in subtle non-compliance bugs that can cause interoperability issues and critical security vulnerabilities. Unlike memory-corruption bugs, non-compliance bugs typically do not exhibit explicit error behaviors, so existing bug oracles are insufficient to detect them thoroughly. Moreover, existing approaches require heavy manual effort to verify findings and analyze root causes, severely limiting their scalability in practice.
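To make the notion of a silent non-compliance bug concrete, the following minimal sketch (a hypothetical toy message format, not taken from the paper or any real protocol) contrasts a buggy parser with a compliant one. The imagined specification rule is "the version field MUST be 1; otherwise the message MUST be rejected." The buggy variant omits the check, yet neither variant crashes, so a crash-based oracle observes no difference:

```python
# Hypothetical illustration: a 2-byte header (version, payload length)
# followed by the payload. The imagined spec rule says version MUST be 1.

def parse_message_buggy(data: bytes):
    version, length = data[0], data[1]
    # Non-compliance bug: the spec's "version MUST be 1" rule is never
    # enforced, so non-compliant messages are silently accepted.
    return {"version": version, "payload": data[2:2 + length]}

def parse_message_compliant(data: bytes):
    version, length = data[0], data[1]
    if version != 1:
        return None  # reject, as the (hypothetical) spec requires
    return {"version": version, "payload": data[2:2 + length]}

# Both run without error on a non-compliant input; only a spec-aware
# oracle can observe the deviation.
bad = bytes([2, 3]) + b"abc"          # version == 2 violates the rule
accepted = parse_message_buggy(bad)   # silently accepted
rejected = parse_message_compliant(bad)  # None
```

Because the buggy path neither crashes nor corrupts memory, detecting it requires an oracle that encodes the specification rule itself, which is the gap the paper targets.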

In this paper, we present ProtocolGuard, a novel framework that systematically detects non-compliance bugs by combining LLM-guided static analysis with fuzzing-based dynamic verification. ProtocolGuard first extracts normative rules from protocol specifications using a hybrid method and performs LLM-guided program slicing to extract the code slices relevant to each rule. It then leverages LLMs to detect semantic inconsistencies between these rules and the code logic, and dynamically verifies whether these bugs can be triggered. To facilitate bug verification, ProtocolGuard uses LLMs to automatically generate assertion statements and instruments the code to turn silent inconsistencies into observable assertion failures. With the help of LLMs, it then produces initial test cases that are more likely to trigger each bug during dynamic verification. Lastly, ProtocolGuard dynamically tests the instrumented code to confirm the identified bugs and generate proof-of-concept test cases. We implemented a prototype of ProtocolGuard and evaluated it on 11 widely used protocol implementations. ProtocolGuard discovered 158 non-compliance bugs with high accuracy; 70 of them have been confirmed, and the majority can be converted into assertions and dynamically verified. A comparison with existing state-of-the-art tools demonstrates that ProtocolGuard outperforms them in both precision and recall in bug detection.
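The assertion-instrumentation step can be sketched as follows. This is a hypothetical illustration of the general idea, not the paper's implementation: an assertion encoding a (made-up) normative rule, "the version field MUST be 1," is inserted where the parser handles that field, so a fuzzing loop surfaces the silent deviation as an observable assertion failure:

```python
import random

# Hypothetical instrumented parser: the assertion below plays the role of
# an LLM-generated check derived from the normative rule "version MUST be 1".
def parse_message_instrumented(data: bytes):
    version, length = data[0], data[1]
    # Instrumented oracle: a silent non-compliance now fails loudly.
    assert version == 1, "non-compliance: spec requires version == 1"
    return {"version": version, "payload": data[2:2 + length]}

# A tiny fuzzing loop standing in for dynamic verification: random inputs
# that violate the rule now trigger assertion failures, and each failing
# input is a candidate proof-of-concept test case.
random.seed(0)
poc_inputs = []
for _ in range(100):
    msg = bytes(random.randrange(256) for _ in range(4))
    try:
        parse_message_instrumented(msg)
    except AssertionError:
        poc_inputs.append(msg)
```

In the paper's actual pipeline, the assertions are generated by LLMs from extracted specification rules and the initial test cases are LLM-seeded rather than purely random; the sketch above only shows why instrumentation makes such bugs observable to a fuzzer.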
