Technological advances relating to artificial intelligence (AI) and explainable AI (xAI) techniques have reached a stage of development that requires a better understanding of operational context. AI tools are largely viewed as black boxes, and some hesitation exists in employing them due to a lack of trust and transparency. xAI techniques aim to overcome these issues and to improve operators' efficiency and effectiveness, speeding up their work and enabling more consistent, informed decision making from AI outputs. Such efforts require not only robust and reliable models but also explanations that are relevant and understandable to end users, so that the tools assist users in achieving their goals, reduce bias, and improve trust in AI models. Cybersecurity operations settings represent one such context in which automation is vital for maintaining cyber defenses. AI models and xAI techniques were developed to aid analysts in identifying events and making decisions about flagged events (e.g., a network attack). We instrumented the tools used for cybersecurity operations to unobtrusively collect data and evaluate the effectiveness of the xAI tools. During a pilot deployment study, we found that the xAI tools, although intended to increase trust and improve efficiency, were not heavily used, nor did they improve analysts' decision accuracy. Critical lessons were learned that affect the utility and adoptability of the technology, including the need to consider end users, their workflows, their environments, and their propensity to trust xAI outputs.
