Minkyung Park (University of Texas at Dallas), Zelun Kong (University of Texas at Dallas), Dave (Jing) Tian (Purdue University), Z. Berkay Celik (Purdue University), Chung Hwan Kim (University of Texas at Dallas)

Deep neural networks (DNNs) are integral to modern computing, powering applications such as image recognition, natural language processing, and audio analysis. The architectures of these models (e.g., the number and types of layers) are considered valuable intellectual property due to the significant expertise and computational effort required for their design. Although trusted execution environments (TEEs) like Intel SGX have been adopted to safeguard these models, recent studies on model extraction attacks have shown that side-channel attacks (SCAs) can still be leveraged to extract the architectures of DNN models. However, many existing model extraction attacks either do not account for TEE protections or are limited to specific model types, reducing their real-world applicability.

In this paper, we introduce DNN Latency Sequencing (DLS), a novel model extraction attack framework that targets DNN architectures running within Intel SGX enclaves. DLS employs SGX-Step to single-step model execution and collect fine-grained latency traces, which are then analyzed at both the function and basic block levels to reconstruct the model architecture. Our key insight is that DNN architectures inherently influence execution behavior, enabling accurate reconstruction from latency patterns. We evaluate DLS on models built with three widely used deep learning libraries, Darknet, TensorFlow Lite, and ONNX Runtime, and show that it achieves architecture recovery accuracies of 97.3%, 96.4%, and 93.6%, respectively. We further demonstrate that DLS enables advanced attacks, highlighting its practicality and effectiveness.
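To illustrate the kind of trace analysis the abstract describes, the sketch below shows one plausible (not the authors') way to map single-stepped latency segments to layer types. The signature values, function names, and segmentation are hypothetical; a real attack would profile known layers offline and operate on SGX-Step traces.

```python
# Minimal sketch, assuming a latency trace collected with SGX-Step has already
# been segmented into per-layer windows. Signature numbers are placeholders.
import numpy as np

# Hypothetical per-layer signatures: (mean single-step latency, step count),
# gathered by profiling known layer types outside the enclave.
SIGNATURES = {
    "conv":    np.array([7200.0, 15000.0]),
    "pool":    np.array([6100.0,  2000.0]),
    "dense":   np.array([6900.0,  8000.0]),
    "softmax": np.array([5800.0,   300.0]),
}

def classify_segment(mean_latency: float, step_count: float) -> str:
    """Label one trace segment with the nearest known layer signature."""
    feature = np.array([mean_latency, step_count])
    return min(SIGNATURES, key=lambda k: np.linalg.norm(SIGNATURES[k] - feature))

def reconstruct_architecture(segments):
    """Map each (mean latency, step count) segment to a layer type, in order."""
    return [classify_segment(m, n) for m, n in segments]

# Example: a trace segmented into four windows.
print(reconstruct_architecture([(7150, 14800), (6050, 2100), (6950, 7900), (5790, 310)]))
```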
