Yixiao Zheng (East China Normal University), Changzheng Wei (Digital Technologies, Ant Group), Xiaodong Qi (East China Normal University), Hanghang Wu (Digital Technologies, Ant Group), Yuhan Wu (East China Normal University), Li Lin (Digital Technologies, Ant Group), Tianmin Song (East China Normal University), Ying Yan (Digital Technologies, Ant Group), Yanqing Yang (East China Normal University), Zhao Zhang (East China Normal University), Cheqing Jin (East China Normal University), Aoying Zhou (East China Normal University)

In Vertical Federated Learning (VFL), prior work has primarily focused on protecting data privacy, while overlooking the risk that participants may manipulate local model execution to mount integrity attacks. Integrating zero-knowledge proofs (ZKPs) into the training process can ensure that each party's computations are verifiable without revealing private data. However, directly encoding deep model training as a monolithic ZKP circuit is impractical due to: (i) complex circuit design and high overhead from frequent parameter commitments, (ii) expensive proof generation for embeddings (the cross-party information interface), and (iii) synchronous proof generation that blocks iterative training rounds. To address these challenges, we present ZKSL, an efficient and asynchronous VFL framework that achieves verifiable training under a malicious threat model. ZKSL partitions deep neural networks into layer-wise circuits and generates their proofs in parallel, ensuring input–output consistency via Privacy-Commitment PLONK (PC-PLONK), a lightweight extension that supports low-cost, iteration-by-iteration parameter commitments. For embedding layers, ZKSL adopts a probabilistic verification technique that reduces proof complexity from O(Nnd) to O(nd). Furthermore, ZKSL incorporates an asynchronous compute–prove scheduling mechanism that decouples proof generation from training iterations, effectively mitigating pipeline stalls. Experimental results on DeepFM and CNN models show that ZKSL reduces proof generation time by up to 73% while maintaining 99.4% accuracy, demonstrating its scalability and practicality for real-world federated learning.
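The asynchronous compute–prove scheduling mentioned in the abstract amounts to taking proof generation off the training loop's critical path: each iteration records an execution trace, hands proof jobs to background workers, and immediately proceeds to the next iteration. The sketch below illustrates only that general idea; run_iteration, prove_layer, and the thread-pool layout are hypothetical placeholders for illustration and are not the paper's actual scheduler or the PC-PLONK API.

```python
# Minimal sketch of decoupled compute/prove scheduling (assumed structure,
# not the ZKSL implementation). Placeholders stand in for real primitives.
from concurrent.futures import ThreadPoolExecutor


def run_iteration(params, batch):
    """Placeholder for one local forward/backward pass.

    Returns updated parameters plus the execution trace (layer inputs,
    outputs, weights) that a prover would later consume.
    """
    trace = {"batch": batch, "params": params}  # stand-in for a real trace
    return params, trace


def prove_layer(trace):
    """Placeholder for generating the layer-wise proofs for one iteration."""
    return {"proof_for": trace}  # stand-in for a real proof object


def train_with_async_proofs(params, batches, prover_threads=4):
    """Training loop in which proof generation never blocks the next iteration."""
    pending = []
    with ThreadPoolExecutor(max_workers=prover_threads) as provers:
        for batch in batches:
            # Compute phase: advances immediately on the training side.
            params, trace = run_iteration(params, batch)
            # Prove phase: submitted to background workers, so iterations
            # are not stalled waiting for proofs to finish.
            pending.append(provers.submit(prove_layer, trace))
        proofs = [job.result() for job in pending]  # gather once training ends
    return params, proofs
```

In a real deployment the scheduler would presumably also bound the queue of pending proof jobs so that retained traces do not grow without limit, but that detail is not specified in the abstract.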
