Heng Yin, Professor, Department of Computer Science and Engineering, University of California, Riverside

Deep learning, particularly Transformer-based models, has recently gained traction in binary analysis, showing promising outcomes. Despite numerous studies customizing these models for specific applications, the impact of such modifications on performance remains largely unexamined. Our study critically evaluates four custom Transformer models (jTrans, PalmTree, StateFormer, Trex) across various applications, revealing that, except for the Masked Language Model (MLM) task, additional pre-training tasks do not significantly enhance learning. Surprisingly, the original BERT model often outperforms these adaptations, indicating that complex modifications and new pre-training tasks may be superfluous. Our findings advocate focusing on fine-tuning, rather than architectural or task-related alterations, to improve model performance in binary analysis.
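To make the recommended approach concrete, the following is a minimal sketch (not from the talk) of fine-tuning an off-the-shelf MLM-pretrained BERT on a downstream binary-analysis task, with no custom pre-training objective or architectural change. The model name, label count, and the tokenized instruction sequence are illustrative assumptions, not artifacts of the study.

```python
# Minimal sketch: fine-tune a stock MLM-pretrained BERT for a binary-analysis
# classification task. All inputs below are hypothetical placeholders.
import torch
from transformers import BertTokenizerFast, BertForSequenceClassification

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# A disassembled function rendered as a whitespace-separated token sequence.
instructions = "push rbp mov rbp rsp sub rsp 0x20 call 0x401000 leave ret"
inputs = tokenizer(instructions, return_tensors="pt", truncation=True, max_length=128)
labels = torch.tensor([1])  # e.g., 1 = "matches the reference function"

# One standard fine-tuning step: no new pre-training task, no model surgery.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
outputs = model(**inputs, labels=labels)
outputs.loss.backward()
optimizer.step()
```

In practice this would run over a labeled dataset for several epochs; the point of the sketch is simply that the downstream adaptation happens entirely in fine-tuning.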

Speaker's Biography: Dr. Heng Yin is a Professor in the Department of Computer Science and Engineering at the University of California, Riverside. He obtained his PhD degree from the College of William and Mary in 2009. His research interests lie in computer security, with an emphasis on binary code analysis. His publications appear in top-tier technical conferences and journals, such as IEEE S&P, ACM CCS, USENIX Security, NDSS, ISSTA, ICSE, TSE, and TDSC. His research is sponsored by the National Science Foundation (NSF), the Defense Advanced Research Projects Agency (DARPA), the Air Force Office of Scientific Research (AFOSR), and the Office of Naval Research (ONR). In 2011, he received the prestigious NSF CAREER Award. He has also received the Google Security and Privacy Research Award, an Amazon Research Award, a DSN Distinguished Paper Award, and a RAID Best Paper Award.
