Jef Jacobs (DistriNet, KU Leuven), Jorn Lapon (DistriNet, KU Leuven), Vincent Naessens (DistriNet, KU Leuven)

Large Language Models (LLMs) are increasingly used as autonomous agents in domains such as cybersecurity and system administration. The performance of these agents depends heavily on their ability to interact effectively with operating systems, often through Bash commands. Current implementations primarily rely on proprietary cloud-based models, which raise privacy and data confidentiality concerns when deployed in real-world environments. Locally hosted open-source LLMs offer a promising alternative, but their performance for such tasks remains unclear.

This paper presents an empirical evaluation of 22 open-source language models (ranging from 1B to 32B parameters) on Natural Language–to–Bash (NL2Bash) translation tasks. We introduce an improved scoring system for assessing task success and analyze performance under 10 distinct prompting techniques. Our findings show that Qwen3 models achieve strong results on NL2Bash tasks, that role-play prompting significantly benefits most models, and that Chain-of-Thought and Retrieval-Augmented Generation (RAG) can, surprisingly, hurt local model performance if not carefully designed. We further observe that the impact of prompting strategies varies with model size.
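As an illustration of the kind of setup the abstract describes, the following is a minimal sketch (not the authors' code) of role-play prompting for NL2Bash, paired with a simplified token-level success score. The prompt wording and the F1-over-shell-tokens heuristic are assumptions for illustration; the paper's actual scoring system is more elaborate.

```python
# Hypothetical sketch of role-play prompting for NL2Bash translation.
# The prompt template and the scoring heuristic below are illustrative
# assumptions, not the paper's actual implementation.
import shlex


def build_roleplay_prompt(task: str) -> str:
    """Wrap a natural-language request in a role-play instruction."""
    return (
        "You are an expert Linux system administrator. "
        "Translate the following request into a single Bash command. "
        "Reply with the command only, no explanation.\n"
        f"Request: {task}"
    )


def token_f1(candidate: str, reference: str) -> float:
    """Simplified success score: F1 over shell tokens (illustrative only)."""
    cand, ref = shlex.split(candidate), shlex.split(reference)
    common = sum(min(cand.count(t), ref.count(t)) for t in set(cand))
    if not common:
        return 0.0
    p, r = common / len(cand), common / len(ref)
    return 2 * p * r / (p + r)


prompt = build_roleplay_prompt("list all files larger than 10 MB in /var/log")
# Order-insensitive token overlap: these two equivalent commands score 1.0.
score = token_f1("find /var/log -type f -size +10M",
                 "find /var/log -size +10M -type f")
```

In practice, the prompt string would be sent to a locally hosted model (e.g. via an OpenAI-compatible endpoint), and the returned command compared against a reference; exact-match and execution-based checks are common stricter alternatives to token overlap.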
