Cheng Chu (Indiana University Bloomington), Qian Lou (University of Central Florida), Fan Chen (Indiana University Bloomington), Lei Jiang (Indiana University Bloomington)

Variational quantum algorithms (VQAs) have emerged as one of the most promising paradigms for achieving practical quantum advantage in the noisy intermediate-scale quantum (NISQ) era. To enhance the computational accuracy of VQAs on noisy hardware, zero-noise extrapolation (ZNE) has become a widely adopted and effective error mitigation technique. However, the growing reliance on ZNE also increases the importance of identifying potential adversarial exploits. We examine existing backdoor attacks and highlight why they struggle to compromise ZNE. Specifically, quantum backdoor attacks that modify circuit structures merely shift the ideal output without affecting the noise-dependent extrapolation process, leaving ZNE intact. Likewise, parameter-level backdoors that are trained without accounting for device-specific noise exhibit inconsistent behavior across different hardware platforms, resulting in unreliable or ineffective attacks. Building on these observations, we uncover a new class of backdoor vulnerabilities that specifically target the unique properties of ZNE.
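The extrapolation process referred to above can be sketched as follows. This is a minimal, generic ZNE illustration (not code from the paper): expectation values are measured at amplified noise scale factors and fitted, here with a simple linear (Richardson-style) model, to estimate the zero-noise value. The scale factors and expectation values are hypothetical.

```python
import numpy as np

# Hypothetical expectation values of an observable, measured at noise
# scale factors lambda = 1, 2, 3 (lambda = 1 is the device's native
# noise level; larger lambda means deliberately amplified noise).
scale_factors = np.array([1.0, 2.0, 3.0])
noisy_values = np.array([0.80, 0.65, 0.50])  # decays as noise grows

# Linear fit over the noise scale; the zero-noise estimate is the
# value of the fitted line at lambda = 0, i.e. the intercept.
slope, intercept = np.polyfit(scale_factors, noisy_values, deg=1)
zne_estimate = intercept
print(round(zne_estimate, 3))  # -> 0.95
```

Because the backdoors discussed above only shift the ideal output or behave inconsistently across devices, they leave this noise-versus-expectation-value relationship, and hence the fitted intercept, essentially unaffected.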

In this study, we propose QNBAD, a novel and stealthy backdoor attack targeting ZNE. QNBAD is carefully designed to preserve the correct functionality of variational quantum circuits on most devices. However, under a specific noise model, it leverages subtle interactions between quantum noise and circuit structure to systematically manipulate the sampled expectation values across different noise levels. This targeted perturbation corrupts the ZNE fitting process and leads to significantly biased final estimates. Compared to prior backdoor methods, QNBAD achieves substantially greater absolute error amplification, ranging from 1.68× to 11.7× across four platforms and six applications. Furthermore, it remains effective across a variety of fitting functions and ZNE variants.
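The corruption of the ZNE fitting process described above can be illustrated with a toy example. This sketch is not the paper's actual QNBAD construction; it only shows the general mechanism by which a perturbation that varies nonlinearly with the noise scale factor (all numbers hypothetical) biases a linear zero-noise fit.

```python
import numpy as np

scale_factors = np.array([1.0, 2.0, 3.0])
clean_values = np.array([0.80, 0.65, 0.50])  # honest circuit: ZNE -> 0.95

# Hypothetical backdoor effect: a perturbation that grows nonlinearly
# with the noise scale factor. A purely linear-in-lambda shift would
# extrapolate back to zero at lambda = 0, so a nonlinear dependence is
# what skews the fitted intercept.
trigger_bias = -0.05 * scale_factors**2
backdoored_values = clean_values + trigger_bias

clean_est = np.polyfit(scale_factors, clean_values, deg=1)[1]
attacked_est = np.polyfit(scale_factors, backdoored_values, deg=1)[1]
print(round(clean_est, 3), round(attacked_est, 3))  # -> 0.95 1.117
```

In this toy setting the attacked estimate lands far from the clean value even though each individual noisy measurement is only mildly perturbed, mirroring how noise-level-dependent manipulation of sampled expectation values can translate into a large bias in the final extrapolated result.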
