Michael Pucher (University of Vienna), Christian Kudera (SBA Research), Georg Merzdovnik (SBA Research)

The complexity and functionality of malware are ever-increasing. Obfuscation is used to hide the malicious intent from virus scanners and to increase the time it takes to reverse engineer the binary. One way to minimize this effort is function clone detection: detecting whether a function is already known, or similar to an existing function, can reduce analysis effort. Outside of malware, the same function clone detection mechanism can be used to find vulnerable versions of functions in binaries, making it a powerful technique.

This work introduces a slim approach for the identification of obfuscated function clones, called OFCI, building on recent advances in machine-learning-based function clone detection. To tackle the issue of obfuscation, OFCI analyzes the effect of known function calls on function similarity. Furthermore, we investigate function similarity classification on code obfuscated through virtualization by applying function clone detection to execution traces. While this approach does not yet work adequately, it provides insight into potential future directions.

Using the ALBERT transformer, OFCI achieves an 83% model size reduction compared to state-of-the-art approaches, while causing only an average 7% decrease in the ROC-AUC scores of function pair similarity classification. However, the reduction in model size comes at the cost of precision for function clone search. We discuss the reasons for this as well as other pitfalls of building function similarity detection tooling.
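The ROC-AUC evaluation mentioned above can be illustrated with a minimal sketch. The embeddings, dimensions, and cosine-similarity scoring below are illustrative assumptions standing in for the transformer-based pipeline described in the paper, not the authors' actual implementation; the sketch only shows how function-pair similarity scores translate into a ROC-AUC figure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 8-dimensional embeddings standing in for transformer outputs
# (hypothetical; the real model produces learned function embeddings).
dim = 8
base = rng.normal(size=(10, dim))
clones = base + 0.1 * rng.normal(size=base.shape)  # positive pairs: perturbed copies
others = rng.normal(size=(10, dim))                # negative pairs: unrelated functions

def cosine(a, b):
    """Row-wise cosine similarity between two batches of embeddings."""
    return np.sum(a * b, axis=1) / (
        np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1)
    )

pos = cosine(base, clones)
neg = cosine(base, others)

# ROC-AUC equals the probability that a randomly chosen positive pair
# scores higher than a randomly chosen negative pair (Mann-Whitney U).
auc = np.mean(pos[:, None] > neg[None, :])
print(f"ROC-AUC: {auc:.2f}")
```

Reading the metric this way (probability of ranking a clone pair above a non-clone pair) also clarifies why a modest ROC-AUC drop can still hurt clone *search*: retrieval precision depends on the top-ranked candidates, which the pairwise ranking statistic averages over.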
