Zhibo Zhang (Fudan University), Lei Zhang (Fudan University), Zhangyue Zhang (Fudan University), Geng Hong (Fudan University), Yuan Zhang (Fudan University), Min Yang (Fudan University)

Dedicated URL shortening services (DUSSs) are designed to transform trusted long URLs into shortened links.
Since DUSSs are widely used by major corporations to better serve their large user bases (especially mobile users), cybercriminals attempt to exploit DUSSs to shorten their malicious links and abuse the inherited implicit trust, which this paper defines as the Misdirection Attack.
However, little effort has been made to systematically understand such attacks. To fill this research gap, we present the first systematic study of the Misdirection Attack on DUSSs, demystifying its attack surface, exploitable scope, and real-world security impact.

Our study reveals that real-world DUSSs commonly rely on custom URL checks, yet these checks rest on unreliable security assumptions about web domains and do not adhere to security standards.
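To illustrate the kind of unreliable domain assumption described above, the following minimal sketch (with a hypothetical allowlisted domain, not taken from the paper) contrasts a naive substring-based URL check with one that parses and compares the hostname:

```python
from urllib.parse import urlparse

TRUSTED_DOMAIN = "example.com"  # hypothetical allowlisted domain

def naive_check(url: str) -> bool:
    # Flawed: matches the trusted domain anywhere in the URL string,
    # so "https://example.com.evil.net/" and
    # "https://evil.net/?ref=example.com" both pass.
    return TRUSTED_DOMAIN in url

def hostname_check(url: str) -> bool:
    # Stricter: parse the URL and compare the actual hostname,
    # allowing only the domain itself or its true subdomains.
    host = urlparse(url).hostname or ""
    return host == TRUSTED_DOMAIN or host.endswith("." + TRUSTED_DOMAIN)

print(naive_check("https://example.com.evil.net/"))     # True  (bypassed)
print(hostname_check("https://example.com.evil.net/"))  # False (rejected)
print(hostname_check("https://sub.example.com/page"))   # True  (legitimate)
```

A checker built on the naive pattern would happily shorten an attacker-controlled URL whose hostname merely embeds the trusted domain, which is one way the implicit trust in a DUSS can be abused.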
We design and implement a novel tool, Ditto, for empirically studying vulnerable DUSSs from a mobile perspective.
Our large-scale study reveals that a quarter of the DUSSs are susceptible to the Misdirection Attack.
More importantly, we find that DUSSs hold implicit trust from both their users and domain-based checkers, extending the consequences of the attack to stealthy phishing and code injection on users' mobile phones.
We have responsibly reported all of our findings to the corporations behind the affected DUSSs and helped them fix their vulnerabilities.
