Duanyi Yao (Hong Kong University of Science and Technology), Songze Li (Southeast University), Xueluan Gong (Wuhan University), Sizai Hou (Hong Kong University of Science and Technology), Gaoning Pan (Hangzhou Dianzi University)

Vertical Federated Learning (VFL) is a collaborative learning paradigm designed for scenarios where multiple clients share disjoint features of the same set of data samples. Despite its wide range of applications, VFL is vulnerable to privacy leakage through data reconstruction attacks. These attacks generally fall into two categories: honest-but-curious (HBC) attacks, where adversaries steal data while adhering to the protocol, and malicious attacks, where adversaries breach the training protocol to cause significant data leakage. While most research has focused on HBC scenarios, the exploration of malicious attacks remains limited.
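For context, the sketch below shows a minimal two-client VFL pipeline in PyTorch: each client encodes its disjoint feature slice into an embedding, and a label-holding party fuses the embeddings and routes gradients back. The module names, dimensions, and two-client split are illustrative assumptions, not details from the paper.

```python
# Minimal VFL sketch (assumed setup): two clients hold disjoint feature
# slices of the same samples; a label-holding party trains the top model.
import torch
import torch.nn as nn

class BottomModel(nn.Module):
    """Client-side model: encodes a feature slice into an embedding."""
    def __init__(self, in_dim, emb_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(),
                                 nn.Linear(32, emb_dim))
    def forward(self, x):
        return self.net(x)

class TopModel(nn.Module):
    """Label-holder's model: fuses client embeddings and predicts labels."""
    def __init__(self, emb_dim=16, n_clients=2, n_classes=10):
        super().__init__()
        self.head = nn.Linear(emb_dim * n_clients, n_classes)
    def forward(self, embs):
        return self.head(torch.cat(embs, dim=1))

# One honest training step: clients upload embeddings, receive gradients back.
clients = [BottomModel(in_dim=8), BottomModel(in_dim=8)]
top = TopModel()
x_slices = [torch.randn(4, 8), torch.randn(4, 8)]  # disjoint feature splits
y = torch.randint(0, 10, (4,))

embs = [c(x) for c, x in zip(clients, x_slices)]
loss = nn.functional.cross_entropy(top(embs), y)
loss.backward()  # autograd routes gradients back through each bottom model
```

In this honest protocol, the gradients each client receives are derived from the shared task loss; the attacks discussed next replace them with adversarially crafted ones.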

Launching effective malicious attacks in VFL presents unique challenges: 1) given the distributed nature of clients' data features and models, each client rigorously guards its privacy and prohibits direct querying, complicating any attempt to steal data; 2) existing malicious attacks alter the underlying VFL training task and are hence easily detected by comparing the received gradients with those received in honest training. To overcome these challenges, we develop URVFL, a novel attack strategy that evades current detection mechanisms. The key idea is to integrate a discriminator with an auxiliary classifier that takes full advantage of the label information and generates malicious gradients for the victim clients: on one hand, label information helps to better characterize embeddings of samples from distinct classes, yielding improved reconstruction performance; on the other hand, computing malicious gradients with label information better mimics honest training, making the malicious gradients indistinguishable from honest ones and the attack far stealthier. Our comprehensive experiments demonstrate that URVFL significantly outperforms existing attacks and successfully circumvents state-of-the-art detection methods for malicious attacks. Additional ablation studies and evaluations against defenses further underscore the robustness and effectiveness of URVFL.
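The sketch below illustrates the key idea described above: a discriminator with an auxiliary classification head (in the spirit of AC-GAN) scores a victim's embedding, and the attacker backpropagates a label-aware loss to obtain the gradient sent back to the victim. The architecture, loss terms, and tensors such as `victim_emb` are assumptions made for illustration; the paper's actual design may differ.

```python
# Hedged sketch of a discriminator with an auxiliary classifier producing
# label-aware malicious gradients. All shapes and losses are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AuxDiscriminator(nn.Module):
    """Outputs (i) a real/fake score and (ii) class logits for an embedding."""
    def __init__(self, emb_dim=16, n_classes=10):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(emb_dim, 32), nn.ReLU())
        self.adv_head = nn.Linear(32, 1)          # real/fake discrimination
        self.cls_head = nn.Linear(32, n_classes)  # auxiliary classifier on labels
    def forward(self, e):
        h = self.trunk(e)
        return self.adv_head(h), self.cls_head(h)

disc = AuxDiscriminator()
victim_emb = torch.randn(4, 16, requires_grad=True)  # stand-in for victim upload
labels = torch.randint(0, 10, (4,))                  # labels held by the attacker

adv_score, cls_logits = disc(victim_emb)
# The classification term uses the labels and thus resembles an honest task
# loss, which is what makes the resulting gradients hard to distinguish from
# honest ones while still shaping embeddings for reconstruction.
loss = (F.binary_cross_entropy_with_logits(adv_score, torch.ones_like(adv_score))
        + F.cross_entropy(cls_logits, labels))
loss.backward()
malicious_grad = victim_emb.grad  # gradient the attacker returns to the victim
```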
