Yunzhe Li (Shanghai Jiao Tong University), Jianan Wang (Shanghai Jiao Tong University), Hongzi Zhu (Shanghai Jiao Tong University), James Lin (Shanghai Jiao Tong University), Shan Chang (Donghua University), Minyi Guo (Shanghai Jiao Tong University)

Large Language Models (LLMs) have become foundational components in a wide range of applications, including natural language understanding and generation, embodied intelligence, and scientific discovery. As their computational requirements continue to grow, these models are increasingly deployed as cloud-based services, allowing users to access powerful LLMs via the Internet. However, this deployment model introduces a new class of threat: denial-of-service (DoS) attacks via unbounded reasoning, in which adversaries craft specially designed inputs that drive the model into excessively long or non-terminating generation loops. Such attacks can exhaust backend compute resources, degrading or denying service to legitimate users. To mitigate these risks, many LLM providers operate in a closed-source, black-box setting that obscures model internals. In this paper, we propose ThinkTrap, a novel input-space optimization framework for DoS attacks against LLM services that works even in black-box environments. The core idea of ThinkTrap is to first map discrete tokens into a continuous embedding space, then perform efficient black-box optimization in a low-dimensional subspace by exploiting input sparsity. The optimization seeks adversarial prompts that induce extended or non-terminating generation across several state-of-the-art LLMs, achieving DoS with minimal token overhead. We evaluate the proposed attack across multiple commercial, closed-source LLM services. Our results demonstrate that, even when operating well below the restrictive request-rate limits commonly enforced by these platforms (typically capped at ten requests per minute, 10 RPM), the attack can degrade service throughput to as low as 1% of its original capacity and, in some cases, induce complete service failure.
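The pipeline the abstract describes, mapping discrete tokens to continuous embeddings, searching in a low-dimensional subspace, and projecting candidates back to tokens for black-box scoring, can be illustrated with the following toy sketch. This is not the paper's actual ThinkTrap implementation; the embedding table, subspace basis, dimensions, and the `output_length` objective (a stand-in for querying a real LLM service and measuring generation length) are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB = 1000          # toy vocabulary size (assumption)
EMB_DIM = 64          # toy embedding dimension (assumption)
SUBSPACE = 8          # low-dimensional search subspace (assumption)
SUFFIX_LEN = 4        # number of adversarial tokens appended to the prompt

emb = rng.normal(size=(VOCAB, EMB_DIM))       # stand-in embedding table
basis = rng.normal(size=(SUBSPACE, EMB_DIM))  # random subspace basis

def nearest_tokens(vectors):
    """Project continuous embedding vectors back to discrete token ids."""
    dists = ((vectors[:, None, :] - emb[None, :, :]) ** 2).sum(axis=-1)
    return dists.argmin(axis=1)

def output_length(token_ids):
    """Black-box objective: a toy stand-in for querying the target
    service and measuring how many tokens it generates."""
    return float(np.sin(token_ids).sum() + len(token_ids))

def black_box_attack(iters=50, pop=8, step=0.5):
    """Random search over SUBSPACE-dimensional coordinates; each candidate
    is lifted to embedding space, snapped to tokens, and scored."""
    z = np.zeros((SUFFIX_LEN, SUBSPACE))
    best_ids = nearest_tokens(z @ basis)
    best_score = output_length(best_ids)
    for _ in range(iters):
        for _ in range(pop):
            cand = z + step * rng.normal(size=z.shape)
            ids = nearest_tokens(cand @ basis)
            score = output_length(ids)
            if score > best_score:
                best_score, best_ids, z = score, ids, cand
    return best_ids, best_score
```

The key property the sketch captures is that all gradient-free search happens in the small `SUBSPACE`-dimensional space rather than over the full vocabulary, so each query to the black-box objective remains cheap relative to exhaustive token search.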
