Sri Hrushikesh Varma Bhupathiraju (University of Florida), Shaoyuan Xie (University of California, Irvine), Michael Clifford (Toyota InfoTech Labs), Qi Alfred Chen (University of California, Irvine), Takeshi Sugawara (The University of Electro-Communications), Sara Rampazzi (University of Florida)

Thermal cameras are increasingly considered a viable solution in autonomous systems to ensure perception in low-visibility conditions. Specialized optics and advanced signal processing are integrated into thermal-based perception pipelines of self-driving cars, robots, and drones to capture relative temperature changes and allow the detection of living beings and objects where conventional visible-light cameras struggle, such as during nighttime, fog, or heavy rain. However, it remains unclear whether the security and trustworthiness of thermal-based perception systems are comparable to those of conventional cameras. Our research exposes and mitigates three novel vulnerabilities in thermal image processing, specifically within equalization, calibration, and lensing mechanisms, that are inherent to thermal cameras. These vulnerabilities can be triggered by heat sources naturally present or maliciously placed in the environment, altering the perceived relative temperature, or generating time-controlled artifacts that can undermine the correct functioning of obstacle avoidance.
We systematically analyze vulnerabilities across three thermal cameras used in autonomous systems (FLIR Boson, InfiRay T2S, FPV XK-C130), assessing their impact on three fine-tuned thermal object detectors and two visible-thermal fusion models for autonomous driving.
Our results show a mean average precision drop of 50% in pedestrian detection and 45% in fusion models, caused by flaws in the equalization process. Real-world driving tests at speeds up to 40 km/h show pedestrian misdetection rates up to 100% and the creation of false obstacles with a 91% success rate, persisting for minutes after the attack ends. To address these issues, we propose and evaluate three novel threat-aware signal processing algorithms that dynamically detect and suppress attacker-induced artifacts. Our findings shed light on the reliability of thermal-based perception and raise awareness of the limitations of this technology when used for obstacle avoidance.
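The equalization flaw described above can be illustrated with a toy model: thermal cameras rescale raw high-bit-depth temperature readings to an 8-bit image using a global gain/equalization step, so a single strong heat source in the field of view can dominate the dynamic range and collapse the contrast of everything else. The sketch below uses a simple min-max normalization as a stand-in for the proprietary equalization pipelines of the cameras studied in the paper; the frame dimensions and raw count values are illustrative, not measured data.

```python
# Toy illustration of how one hot source can wash out scene contrast after
# global equalization. Min-max normalization here is a simplified stand-in
# for real thermal-camera AGC/equalization, not the actual pipeline.
import numpy as np

def equalize_8bit(frame_16bit):
    """Globally rescale a raw 16-bit thermal frame to 8-bit for detection."""
    lo, hi = int(frame_16bit.min()), int(frame_16bit.max())
    return ((frame_16bit.astype(np.float64) - lo) / max(hi - lo, 1) * 255).astype(np.uint8)

# Raw scene: uniform background with a pedestrian ~10 counts warmer.
scene = np.full((64, 64), 3000, dtype=np.uint16)
scene[20:40, 28:36] += 10                 # pedestrian region

clean = equalize_8bit(scene)

# Attacker places a strong heat source at the edge of the field of view.
attacked_scene = scene.copy()
attacked_scene[0:8, 0:8] = 6000           # hot spot dominates the range

attacked = equalize_8bit(attacked_scene)

# Pedestrian-to-background contrast in the equalized image collapses.
def contrast(img):
    return int(img[30, 30]) - int(img[5, 50])

print("contrast without hot source:", contrast(clean))     # 255
print("contrast with hot source:", contrast(attacked))     # 0
```

After the hot spot is added, the pedestrian's 10-count temperature difference maps to less than one 8-bit level, so a downstream detector sees an essentially flat region where the pedestrian stands.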
