Ryutaro Nishizaka, Yudai Fujiwara, Takuya Shimizu, Kazushi Kato, Yuichi Sugiyama (Ricerca Security, Inc.)

LLM agents that autonomously operate tools such as disassemblers and debuggers are increasingly used for reverse engineering. Designing LLM-resistant protections requires understanding their capability characteristics, yet prior work has not studied this systematically. We propose an analytical model linking a three-stage loop (Observe–Comprehend–Plan) to three categories of software protection (Concealment–Complication–Misdirection) and evaluate three LLM agents on 24 CTF reverse engineering tasks. By analyzing failure logs, we identify four weaknesses (training bias, over-trust in observations, context limitation, plan persistence) and show that different software protections disrupt different stages and expose different weaknesses. We also find that LLM agents often analyze assembly effectively without a decompiler, and that their strengths differ from those of human solvers depending on challenge characteristics.
