Wenhao Wang (Yale University, IC3), Fangyan Shi (Tsinghua University), Dani Vilardell (Cornell University, IC3), Fan Zhang (Yale University, IC3)

Succinct Non-interactive Arguments of Knowledge (SNARKs) can enable efficient verification of computation in many applications. However, generating SNARK proofs for large-scale tasks, such as verifiable machine learning or virtual machines, remains computationally expensive. A promising approach is to distribute the proof generation workload across multiple workers. A practical distributed SNARK protocol should have three properties: horizontal scalability with low overhead (linear computation and logarithmic communication per worker), accountability (efficient detection of malicious workers), and a universal trusted setup independent of circuits and the number of workers. Existing protocols fail to achieve all these properties.

In this paper, we present Cirrus, the first distributed SNARK generation protocol achieving all three desirable properties at once. Our protocol builds on HyperPlonk (EUROCRYPT'23), inheriting its universal trusted setup. It achieves linear computation complexity for both workers and the coordinator, along with low communication overhead. To achieve accountability, we introduce a highly efficient accountability protocol to localize malicious workers. Additionally, we propose a hierarchical aggregation technique to further reduce the coordinator’s workload.
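The hierarchical aggregation idea can be illustrated abstractly: rather than the coordinator collecting every worker's partial result directly, partial results are combined pairwise in a tree, so each round halves the number of messages. The sketch below is purely illustrative and not Cirrus's actual protocol; the `combine` function and the integer stand-ins for per-worker partial proofs are hypothetical (Cirrus aggregates cryptographic proof messages, not integers).

```python
# Toy sketch of hierarchical (tree) aggregation of per-worker partials.
# `combine` is a hypothetical pairwise merge step; in a real distributed
# SNARK it would merge two workers' proof messages, not add integers.

def combine(left, right):
    return left + right

def hierarchical_aggregate(partials):
    """Reduce worker outputs pairwise over log2(n) rounds, so no single
    party ever processes more than half of the messages in one round."""
    layer = list(partials)
    while len(layer) > 1:
        nxt = []
        for i in range(0, len(layer) - 1, 2):
            nxt.append(combine(layer[i], layer[i + 1]))
        if len(layer) % 2:          # odd element passes through unchanged
            nxt.append(layer[-1])
        layer = nxt
    return layer[0]

# With 8 workers each contributing a partial value 1..8:
print(hierarchical_aggregate(range(1, 9)))  # -> 36
```

The tree shape is what reduces the coordinator's workload: with `n` workers, a flat scheme makes one party handle `n` messages, while the tree spreads the merging across `log2(n)` rounds.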

We implemented and evaluated Cirrus on machines with modest hardware. Our experiments show that Cirrus is highly scalable: it generates proofs for circuits with 33M gates in under 40 seconds using 32 8-core machines. Compared to the state-of-the-art accountable protocol Hekaton (CCS’24), Cirrus achieves over 7× faster proof generation for PLONK-friendly circuits such as the Pedersen hash. Our accountability protocol also efficiently identifies faulty workers within just 4 seconds, making Cirrus particularly suitable for decentralized and outsourced computation scenarios.
