Yixiao Zheng (East China Normal University), Changzheng Wei (Digital Technologies, Ant Group), Xiaodong Qi (East China Normal University), Hanghang Wu (Digital Technologies, Ant Group), Yuhan Wu (East China Normal University), Li Lin (Digital Technologies, Ant Group), Tianmin Song (East China Normal University), Ying Yan (Digital Technologies, Ant Group), Yanqing Yang (East China Normal University), Zhao Zhang (East China Normal University), Cheqing Jin (East China Normal University), Aoying Zhou (East China Normal University)

In Vertical Federated Learning (VFL), prior work has primarily focused on protecting data privacy while overlooking the risk that participants may manipulate local model execution to mount integrity attacks. Integrating zero-knowledge proofs (ZKPs) into the training process can ensure that each party's computations are verifiable without revealing private data. However, directly encoding deep model training as a monolithic ZKP circuit is impractical for three reasons: (i) complex circuit design and high overhead from frequent parameter commitments, (ii) expensive proof generation for embeddings (the cross-party information interface), and (iii) synchronous proof generation that blocks iterative training rounds. To address these challenges, we present ZKSL, an efficient and asynchronous VFL framework that achieves verifiable training under a malicious threat model. ZKSL partitions deep neural networks into layer-wise circuits and generates their proofs in parallel, ensuring input–output consistency via Privacy-Commitment PLONK (PC-PLONK), a lightweight extension that supports low-cost, iteration-by-iteration parameter commitments. For embedding layers, ZKSL adopts a probabilistic verification technique that reduces proof complexity from $O(Nnd)$ to $O(nd)$. Furthermore, ZKSL incorporates an asynchronous compute–prove scheduling mechanism that decouples proof generation from training iterations, effectively mitigating pipeline stalls. Experimental results on DeepFM and CNN models show that ZKSL reduces proof generation time by up to 73% while maintaining 99.4% accuracy, demonstrating superior scalability and practicality for real-world federated learning.
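The following is a minimal sketch, not the authors' implementation, of the asynchronous compute–prove scheduling idea described in the abstract: proof generation for iteration t is handed to a background worker pool while training proceeds to iteration t+1, so proving is kept off the training loop's critical path. All names here (train_step, generate_layer_proofs, NUM_ITERS, NUM_LAYERS) are hypothetical placeholders, and hashing stands in for actual layer-wise ZKP generation.

```python
# Hypothetical sketch of asynchronous compute–prove scheduling (not ZKSL's code).
from concurrent.futures import ProcessPoolExecutor
import hashlib
import time

NUM_ITERS = 5    # assumed number of training iterations
NUM_LAYERS = 3   # assumed number of layer-wise circuits

def train_step(t):
    """Stand-in for one VFL training iteration; returns per-layer execution traces."""
    time.sleep(0.01)  # simulate forward/backward computation
    return [f"layer{l}-iter{t}".encode() for l in range(NUM_LAYERS)]

def generate_layer_proofs(t, traces):
    """Stand-in for layer-wise proof generation; hashing replaces real proving."""
    time.sleep(0.05)  # proving is the expensive step in practice
    return [hashlib.sha256(trace).hexdigest() for trace in traces]

if __name__ == "__main__":
    pending = []
    with ProcessPoolExecutor() as pool:
        for t in range(NUM_ITERS):
            traces = train_step(t)  # compute phase runs on the critical path
            # prove phase is submitted to background workers and does not block
            # the next training iteration
            pending.append(pool.submit(generate_layer_proofs, t, traces))
        proofs = [future.result() for future in pending]  # collect once training is done
    print(f"collected proofs for {len(proofs)} iterations")
```

The design choice the sketch illustrates is that each iteration only needs to record its traces synchronously; verification artifacts can be produced concurrently and collected later, which is what mitigates the pipeline stalls of synchronous proof generation.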
