Minkyung Park (University of Texas at Dallas), Zelun Kong (University of Texas at Dallas), Dave (Jing) Tian (Purdue University), Z. Berkay Celik (Purdue University), Chung Hwan Kim (University of Texas at Dallas)

Deep neural networks (DNNs) are integral to modern computing, powering applications such as image recognition, natural language processing, and audio analysis. The architectures of these models (e.g., the number and types of layers) are considered valuable intellectual property due to the significant expertise and computational effort required for their design. Although trusted execution environments (TEEs) like Intel SGX have been adopted to safeguard these models, recent studies on model extraction attacks have shown that side-channel attacks (SCAs) can still be leveraged to extract the architectures of DNN models. However, many existing model extraction attacks either do not account for TEE protections or are limited to specific model types, reducing their real-world applicability.

In this paper, we introduce DNN Latency Sequencing (DLS), a novel model extraction attack framework that targets DNN architectures running within Intel SGX enclaves. DLS employs SGX-Step to single-step model execution and collect fine-grained latency traces, which are then analyzed at both the function and basic block levels to reconstruct the model architecture. Our key insight is that DNN architectures inherently influence execution behavior, enabling accurate reconstruction from latency patterns. We evaluate DLS on models built with three widely used deep learning libraries, Darknet, TensorFlow Lite, and ONNX Runtime, and show that it achieves architecture recovery accuracies of 97.3%, 96.4%, and 93.6%, respectively. We further demonstrate that DLS enables advanced attacks, highlighting its practicality and effectiveness.
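The core idea, classifying each segment of a fine-grained latency trace against per-layer timing signatures to recover the layer sequence, can be illustrated with a minimal sketch. This is not the authors' implementation: the signature values, segment boundaries, and layer names below are hypothetical, and the real DLS attack operates on single-stepped SGX traces at function and basic-block granularity.

```python
# Hypothetical mean single-step latency (in cycles) per layer type.
# Real signatures would be profiled from the target deep learning library.
SIGNATURES = {
    "convolutional": 9000,
    "maxpool": 4000,
    "connected": 6500,
}

def classify_segment(latencies):
    """Map a segment of per-step latencies to the closest layer signature."""
    mean = sum(latencies) / len(latencies)
    return min(SIGNATURES, key=lambda t: abs(SIGNATURES[t] - mean))

def reconstruct(trace, boundaries):
    """Split a full trace at (assumed known) layer boundaries and
    classify each segment, yielding the recovered layer sequence."""
    layers = []
    start = 0
    for end in boundaries + [len(trace)]:
        layers.append(classify_segment(trace[start:end]))
        start = end
    return layers
```

In practice, the layer boundaries themselves must also be inferred from the trace (e.g., from control-flow patterns visible at basic-block granularity), which is a key part of what the paper's analysis contributes.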
