Sri Hrushikesh Varma Bhupathiraju (University of Florida), Shaoyuan Xie (University of California, Irvine), Michael Clifford (Toyota InfoTech Labs), Qi Alfred Chen (University of California, Irvine), Takeshi Sugawara (The University of Electro-Communications), Sara Rampazzi (University of Florida)

Thermal cameras are increasingly considered a viable solution in autonomous systems to ensure perception in low-visibility conditions. Specialized optics and advanced signal processing are integrated into thermal-based perception pipelines of self-driving cars, robots, and drones to capture relative temperature changes and allow the detection of living beings and objects where conventional visible-light cameras struggle, such as during nighttime, fog, or heavy rain. However, it remains unclear whether the security and trustworthiness of thermal-based perception systems are comparable to those of conventional cameras. Our research exposes and mitigates three novel vulnerabilities in thermal image processing, specifically within equalization, calibration, and lensing mechanisms, that are inherent to thermal cameras. These vulnerabilities can be triggered by heat sources naturally present or maliciously placed in the environment, altering the perceived relative temperature, or generating time-controlled artifacts that can undermine the correct functioning of obstacle avoidance.
We systematically analyze vulnerabilities across three thermal cameras used in autonomous systems (FLIR Boson, InfiRay T2S, FPV XK-C130), assessing their impact on three fine-tuned thermal object detectors and two visible-thermal fusion models for autonomous driving.
Our results show a mean average precision drop of 50% in pedestrian detection and 45% in fusion models, caused by flaws in the equalization process. Real-world driving tests at speeds up to 40 km/h show pedestrian misdetection rates up to 100% and the creation of false obstacles with a 91% success rate, persisting minutes after the attack ends. To address these issues, we propose and evaluate three novel threat-aware signal processing algorithms that dynamically detect and suppress attacker-induced artifacts. Our findings shed light on the reliability of thermal-based perception and raise awareness of the limitations of this technology when used for obstacle avoidance.
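The equalization flaw described above can be illustrated with a minimal sketch. Thermal cameras map raw radiometric counts into a displayable range via automatic gain control (AGC); a simple linear min-max AGC is shown here with hypothetical count values, not the paper's actual pipeline or any specific camera's firmware. A small, very hot source injected into the frame stretches the dynamic range, compressing the few counts of contrast that separate a pedestrian from ambient background:

```python
import numpy as np

def agc_8bit(frame):
    """Linear min-max AGC: map raw radiometric counts to 8-bit intensities."""
    lo, hi = frame.min(), frame.max()
    return ((frame - lo) / max(hi - lo, 1) * 255).astype(np.uint8)

# Benign scene (hypothetical counts): pedestrian ~50 counts above ambient.
ambient, pedestrian = 7000, 7050
scene = np.full((64, 64), ambient, dtype=np.float64)
scene[20:40, 20:30] = pedestrian

benign = agc_8bit(scene)
contrast_benign = int(benign[25, 25]) - int(benign[0, 63])   # 255

# Attacked scene: a small injected heat source dominates the range,
# so the pedestrian collapses to near-background intensity.
attacked_scene = scene.copy()
attacked_scene[0:4, 0:4] = 9000  # attacker-placed hot spot
attacked = agc_8bit(attacked_scene)
contrast_attacked = int(attacked[25, 25]) - int(attacked[0, 63])  # 6
```

In the benign frame the pedestrian spans the full 8-bit range; with the hot spot present, the same 50-count difference maps to only a few gray levels, which is the contrast-suppression effect the detectors downstream are sensitive to.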
