Ahmed Mostafa, Raisul Arefin Nahid, Samuel Mulder (Auburn University)

Tokenization is fundamental in assembly code analysis, impacting both intrinsic characteristics, such as vocabulary size and semantic coverage, and extrinsic performance on downstream tasks. Despite its significance, tokenization in the context of assembly code remains an underexplored area. This study addresses this gap by evaluating the intrinsic properties of Natural Language Processing (NLP) tokenization models and parameter choices, such as vocabulary size. We explore preprocessing customization options and pre-tokenization rules tailored to the unique characteristics of assembly code. Additionally, we assess their impact on downstream tasks such as function signature prediction, a critical problem in binary code analysis.
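As an illustration of the kind of assembly-oriented pre-tokenization customization described above, the following sketch trains a BPE tokenizer with the Hugging Face tokenizers library. The corpus file name, vocabulary size, and the specific splitting rules are illustrative assumptions for this example, not the exact configuration used in the study.

    from tokenizers import Tokenizer, models, trainers, pre_tokenizers

    # Illustrative setup: a BPE model whose pre-tokenizer splits assembly text
    # on whitespace and punctuation, so mnemonics, registers, immediates, and
    # memory-operand brackets become separate units before merges are learned.
    tokenizer = Tokenizer(models.BPE(unk_token="[UNK]"))
    tokenizer.pre_tokenizer = pre_tokenizers.Sequence([
        pre_tokenizers.WhitespaceSplit(),
        pre_tokenizers.Punctuation(),
    ])

    # Hypothetical corpus file: one disassembled instruction per line,
    # e.g. "mov rax, qword ptr [rbp - 0x8]". Vocabulary size is a free parameter.
    trainer = trainers.BpeTrainer(vocab_size=8192, special_tokens=["[UNK]", "[PAD]"])
    tokenizer.train(["asm_corpus.txt"], trainer)

    print(tokenizer.encode("lea rdi, [rip + 0x2f3a]").tokens)

Varying the pre-tokenization rules and vocab_size in such a setup is one way to probe the parameter choices the study evaluates.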
To this end, we conduct a thorough study of various tokenization models, systematically analyzing their efficiency in encoding assembly instructions and capturing semantic nuances. Through intrinsic evaluations, we compare tokenizers on tokenization efficiency, vocabulary compression, and representational fidelity for assembly code. Using state-of-the-art pre-trained models, namely the decoder-only Large Language Model (LLM) Llama 3.2, the encoder-only transformer BERT, and the encoder-decoder model BART, we evaluate the effectiveness of these tokenizers across multiple performance metrics. Preliminary findings indicate that tokenizer choice significantly influences downstream performance, with intrinsic metrics providing partial but incomplete predictability of extrinsic evaluation outcomes. These results reveal complex trade-offs between intrinsic tokenizer properties and their utility in practical assembly code tasks. Ultimately, this study provides valuable insights into optimizing tokenization models for low-level code analysis, contributing to the robustness and scalability of Natural Language Model (NLM)-based binary analysis workflows.
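To make the intrinsic comparison criteria above concrete, the sketch below computes two simple statistics over a held-out set of instructions: average tokens per instruction (a proxy for tokenization efficiency and vocabulary compression) and the unknown-token rate (a rough proxy for semantic coverage). The function and the metric definitions are illustrative assumptions, not the paper's exact evaluation protocol.

    def intrinsic_stats(tokenizer, instructions, unk_token="[UNK]"):
        """Illustrative intrinsic metrics for a trained tokenizer over a list
        of assembly instruction strings (not the paper's exact definitions)."""
        total_tokens, unk_tokens = 0, 0
        for ins in instructions:
            tokens = tokenizer.encode(ins).tokens
            total_tokens += len(tokens)
            unk_tokens += sum(1 for t in tokens if t == unk_token)
        n = max(len(instructions), 1)
        return {
            "avg_tokens_per_instruction": total_tokens / n,   # lower = better compression
            "unk_rate": unk_tokens / max(total_tokens, 1),    # lower = better coverage
        }

    # Hypothetical usage: compare candidate tokenizers on the same held-out set.
    # for name, tok in {"bpe_8k": bpe_tok, "wordpiece_8k": wp_tok}.items():
    #     print(name, intrinsic_stats(tok, held_out_instructions))

Statistics of this kind can then be correlated with downstream results, such as function signature prediction accuracy, to study how far intrinsic metrics predict extrinsic performance.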
