Luke Kurlandski (Rochester Institute of Technology), Harel Berger (Ariel University), Yin Pan (Rochester Institute of Technology), Matthew Wright (Rochester Institute of Technology)

Malware poses an increasing threat to critical computing infrastructure, driving demand for more advanced detection and analysis methods. Although raw-binary malware classifiers show promise, they remain limited in capability and struggle to model the very long byte sequences that executables comprise. Meanwhile, the rise of large language models (LLMs) in natural language processing demonstrates the power of massive, self-supervised models trained on heterogeneous datasets, which offer flexible representations for numerous downstream tasks. Their success is rooted in the size and quality of their training data, the expressiveness and scalability of their neural architectures, and their ability to learn from unlabeled data in a self-supervised manner.

In this work, we take the first steps toward developing large malware language models (LMLMs), the malware analog to LLMs. We tackle the core aspects of this objective, namely questions about data, models, pretraining, and finetuning. By pretraining a malware classification model with language modeling objectives, we improve downstream performance on a diverse set of practical malware classification tasks by 1.1% on average and by up to 28.6%, indicating that these models could serve as successors to raw-binary malware classifiers.
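To make the pretrain-then-finetune recipe concrete, the following is a minimal, hypothetical PyTorch sketch of the general approach: a small Transformer over raw bytes is first pretrained with a self-supervised next-byte (causal language modeling) objective on unlabeled binaries, and its encoder is then reused for supervised malware classification. The `ByteLM` model, its hyperparameters, and the pooling strategy are illustrative assumptions, not the architecture or objectives from the paper.

```python
import torch
import torch.nn as nn

VOCAB_SIZE = 256 + 1  # one token per byte value, plus a padding token
PAD_ID = 256

class ByteLM(nn.Module):
    """Tiny Transformer over raw bytes, with a language-modeling head for
    self-supervised pretraining and a classification head for finetuning.
    (Hypothetical stand-in for an LMLM, not the paper's architecture.)"""
    def __init__(self, d_model=128, n_heads=4, n_layers=2, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, d_model, padding_idx=PAD_ID)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.lm_head = nn.Linear(d_model, VOCAB_SIZE)   # next-byte prediction
        self.cls_head = nn.Linear(d_model, n_classes)   # malware classification

    def forward(self, x, causal=False):
        mask = None
        if causal:  # block attention to future bytes during LM pretraining
            L = x.size(1)
            mask = torch.triu(torch.full((L, L), float("-inf")), diagonal=1)
        return self.encoder(self.embed(x), mask=mask)

def pretrain_step(model, opt, byte_batch):
    """Self-supervised next-byte prediction on unlabeled binaries."""
    inputs, targets = byte_batch[:, :-1], byte_batch[:, 1:]
    logits = model.lm_head(model(inputs, causal=True))
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, VOCAB_SIZE), targets.reshape(-1),
        ignore_index=PAD_ID)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def finetune_step(model, opt, byte_batch, labels):
    """Supervised classification, reusing the pretrained encoder."""
    pooled = model(byte_batch).mean(dim=1)  # mean-pool token representations
    loss = nn.functional.cross_entropy(model.cls_head(pooled), labels)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

model = ByteLM()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
fake_bytes = torch.randint(0, 256, (8, 65))   # stand-in for real binaries
print("pretrain loss:", pretrain_step(model, opt, fake_bytes))
fake_labels = torch.randint(0, 2, (8,))
print("finetune loss:", finetune_step(model, opt, fake_bytes[:, :64], fake_labels))
```

In a realistic pipeline the encoder weights would be saved after pretraining on a large unlabeled corpus and loaded before finetuning on each labeled task; the toy driver above simply runs one step of each phase on random data to show the two objectives sharing one encoder.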
