Yuhui Wang (Department of Electrical and Computer Engineering, University of Michigan-Dearborn), Xingqi Wu (Department of Electrical and Computer Engineering, University of Michigan-Dearborn), Junaid Farooq (Department of Electrical and Computer Engineering, University of Michigan-Dearborn), Juntao Chen (Department of Computer and Information Sciences, Fordham University)
Large language models (LLMs) are increasingly being integrated into Open Radio Access Network (O-RAN) control loops to enable intent driven automation for resource management and network slicing. However, deploying LLMs within the Near-Real-Time RAN Intelligent Controller (Near- RT RIC) introduces a new control plane vulnerability. Because LLM driven xApps process untrusted telemetry and shared state information, adversaries can exploit prompt injection attacks to manipulate control logic, resulting in unauthorized resource allocation and slice isolation violations. This paper presents PROMPTGUARD, a Zero Trust (ZT) prompting framework for securing LLM driven O-RAN control. PROMPTGUARD is realized as a semantic verification xApp that enforces continuous intent validation on all LLM bound inputs by treating every prompt as potentially adversarial. We implement PROMPTGUARD on the OpenAI Cellular (OAIC) platform and evaluate its effectiveness against multiple prompt injection attacks under strict latency constraints. Results show that PROMPTGUARD mitigates adversarial prompts with high accuracy while preserving the O-RAN latency requirements, establishing ZT prompting as a foundational security primitive for AI-native RANs.