Yixiao Zheng (East China Normal University), Changzheng Wei (Digital Technologies, Ant Group), Xiaodong Qi (East China Normal University), Hanghang Wu (Digital Technologies, Ant Group), Yuhan Wu (East China Normal University), Li Lin (Digital Technologies, Ant Group), Tianmin Song (East China Normal University), Ying Yan (Digital Technologies, Ant Group), Yanqing Yang (East China Normal University), Zhao Zhang (East China Normal University), Cheqing Jin (East China Normal University), Aoying Zhou (East China Normal University)

In Vertical Federated Learning (VFL), prior work has primarily focused on protecting data privacy, while overlooking the risk that participants may manipulate local model execution to mount integrity attacks. Integrating zero-knowledge proofs (ZKPs) into the training process can ensure that each party's computations are verifiable without revealing private data. However, directly encoding deep model training as a monolithic ZKP circuit is impractical due to: (i) complex circuit design and high overhead from frequent parameter commitments, (ii) expensive proof generation for embeddings (the cross-party information interface), and (iii) synchronous proof generation that blocks iterative training rounds. To address these challenges, we present ZKSL, an efficient and asynchronous VFL framework that achieves verifiable training under a malicious threat model. ZKSL partitions deep neural networks into layer-wise circuits and generates their proofs in parallel, ensuring input–output consistency via Privacy-Commitment PLONK (PC-PLONK), a lightweight extension that supports low-cost, iteration-by-iteration parameter commitments. For embedding layers, ZKSL adopts a probabilistic verification technique that reduces proof complexity from $O(Nnd)$ to $O(nd)$. Furthermore, ZKSL incorporates an asynchronous compute–prove scheduling mechanism to decouple proof generation from training iterations, effectively mitigating pipeline stalls. Experimental results on DeepFM and CNN models show that ZKSL reduces proof generation time by up to 73% while maintaining 99.4% accuracy, demonstrating superior scalability and practicality for real-world federated learning.
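The abstract does not spell out ZKSL's probabilistic verification protocol for embedding layers, but the complexity drop from $O(Nnd)$ (recomputing an $N \times n$ by $n \times d$ product) to roughly $O(nd)$ is characteristic of Freivalds-style random-projection checks. The sketch below illustrates that general idea only; the function names (`freivalds_check`, `matmul`, `vec_mat`) and the 0/1 challenge vector are illustrative assumptions, not details from the paper.

```python
import random

def matmul(A, B):
    """Naive matrix product; used here only to form an honest claim Y = X @ W."""
    n, k, m = len(A), len(B), len(B[0])
    return [[sum(A[i][t] * B[t][j] for t in range(k)) for j in range(m)]
            for i in range(n)]

def vec_mat(v, M):
    """Row-vector times matrix: costs O(rows * cols), no matrix-matrix product."""
    rows, cols = len(M), len(M[0])
    return [sum(v[i] * M[i][j] for i in range(rows)) for j in range(cols)]

def freivalds_check(X, W, Y, trials=20):
    """Probabilistically verify the claim Y == X @ W (Freivalds-style sketch).

    Each trial draws a random 0/1 vector r and checks
    r^T Y == (r^T X) W, costing O(Nd + Nn + nd) per trial instead of
    the O(Nnd) of recomputing X @ W. A false claim survives one trial
    with probability <= 1/2, so `trials` rounds leave soundness error
    <= 2**-trials.
    """
    N = len(X)
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(N)]
        lhs = vec_mat(r, Y)                 # r^T Y          : O(Nd)
        rhs = vec_mat(vec_mat(r, X), W)     # (r^T X) W      : O(Nn + nd)
        if lhs != rhs:
            return False
    return True
```

In a ZKP setting, such a randomized identity would typically be enforced inside the circuit with a verifier-supplied (e.g., Fiat–Shamir derived) challenge rather than local randomness; the Python version above only conveys the cost argument.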
