Yuki Hayakawa (Keio University), Takami Sato (University of California, Irvine), Ryo Suzuki, Kazuma Ikeda, Ozora Sako, Rokuto Nagata (Keio University), Qi Alfred Chen (University of California, Irvine), Kentaro Yoshioka (Keio University)

LiDAR is a critical sensor for autonomous vehicles (AVs). Given its safety and security criticality, recent studies have actively investigated its security and warned of various LiDAR spoofing attacks, which can have critical safety consequences for AVs by injecting ghost objects into, or removing legitimate objects from, their detection. Pulse fingerprinting has been regarded as one of the most promising countermeasures against LiDAR spoofing attacks, and recent research demonstrates its high defense capability, especially against object removal attacks. In this WIP paper, we report our progress in further security analysis of pulse fingerprinting against LiDAR spoofing attacks. We design a novel adaptive attack strategy, the Adaptive High-Frequency Removal (A-HFR) attack, which is effective against a broader range of LiDARs than existing HFR attacks. We evaluate the A-HFR attack on three commercial LiDARs with pulse fingerprinting and find that it can successfully remove over 96% of the point cloud within a 20° horizontal and a 16° vertical angle. These findings indicate that current pulse fingerprinting techniques may not be sufficiently robust to thwart spoofing attacks, and thus that pulse fingerprinting in its current form may not be an ultimate countermeasure against LiDAR spoofing. We also discuss potential strategies to enhance the defensive efficacy of pulse fingerprinting against such attacks, and we finally outline our future plans.
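As a rough, hypothetical illustration of the reported attack footprint (not the authors' method or code), the sketch below tests whether a LiDAR return falls inside a 20° horizontal by 16° vertical angular window centered on an attacker-chosen direction; the function name, window-center parameters, and the toy point set are all assumptions made for this example.

```python
import math
import random

def in_angular_window(x, y, z, h_center=0.0, v_center=0.0,
                      h_width=20.0, v_width=16.0):
    """True if the point lies inside a horizontal-by-vertical angular
    window (in degrees) as seen from the sensor origin."""
    azimuth = math.degrees(math.atan2(y, x))                # horizontal angle
    elevation = math.degrees(math.atan2(z, math.hypot(x, y)))  # vertical angle
    return (abs(azimuth - h_center) <= h_width / 2 and
            abs(elevation - v_center) <= v_width / 2)

# Toy example: count how many random forward-facing returns fall inside
# the window; an attack removing 96% of the window's points would delete
# roughly 0.96 * hit of them.
random.seed(0)
pts = [(random.uniform(0.1, 10.0), random.uniform(-10.0, 10.0),
        random.uniform(-2.0, 2.0)) for _ in range(10_000)]
hit = sum(in_angular_window(*p) for p in pts)
```

A real evaluation would of course operate on captured point clouds and the spoofer's actual timing behavior; this only makes the geometry of the "20° × 16°" claim concrete.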
