Yan Pang (University of Virginia), Wenlong Meng (University of Virginia), Xiaojing Liao (Indiana University Bloomington), Tianhao Wang (University of Virginia)

With the rapid development of large language models, the potential threat of their malicious use, particularly in generating phishing content, is becoming increasingly prevalent. Leveraging the capabilities of LLMs, malicious users can synthesize phishing emails that are free from spelling mistakes and other easily detectable features. Furthermore, such models can generate topic-specific phishing messages, tailoring content to the target domain and increasing the likelihood of success.

Detecting such content remains a significant challenge, as LLM-generated phishing emails often lack clear or distinguishable linguistic features. As a result, most existing semantic-level detection approaches struggle to identify them reliably. While certain LLM-based detection methods have shown promise, they suffer from high computational costs and are constrained by the performance of the underlying language model, making them impractical for large-scale deployment.

In this work, we aim to address this issue. We propose Paladin, which embeds trigger-tag associations into vanilla LLMs using various insertion strategies, turning them into instrumented LLMs. When an instrumented LLM generates content related to phishing, it automatically includes detectable tags, enabling easier identification. Based on the design of implicit and explicit triggers and tags, we consider four distinct scenarios in our work. We evaluate our method from three key perspectives: stealthiness, effectiveness, and robustness, and compare it with existing baseline methods. Experimental results show that our method outperforms the baselines, achieving over 90% detection accuracy across all scenarios.
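To make the detection side of this idea concrete, the following is a minimal sketch of how the explicit-tag scenario could be checked. The tag format, constant, and function names here are illustrative assumptions, not the actual trigger-tag design used by Paladin: the point is only that once an instrumented LLM weaves a known marker into phishing-related output, detection reduces to scanning for that marker.

```python
# Illustrative sketch only: the tag below is an assumed marker, not the
# tag design from the paper. In an explicit-tag scenario, an instrumented
# LLM embeds a known marker into phishing-related output, and a detector
# simply scans generated text for it.

# Hypothetical explicit tag: zero-width characters, invisible to readers
# but trivially machine-detectable.
EXPLICIT_TAG = "\u200b\u200c\u200b\u200d"

def contains_tag(text: str, tag: str = EXPLICIT_TAG) -> bool:
    """Return True if the generated text carries the embedded tag."""
    return tag in text

# Example usage: flag a generated email as output from an instrumented LLM.
email = "Dear customer, please verify your account" + EXPLICIT_TAG + " here."
if contains_tag(email):
    print("Tagged content detected: likely phishing output from an instrumented LLM.")
```

Implicit triggers and tags would require a learned or statistical detector rather than a string match, but the overall pipeline of generate-then-scan stays the same.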
