Megan Nyre-Yu (Sandia National Laboratories), Elizabeth S. Morris (Sandia National Laboratories), Blake Moss (Sandia National Laboratories), Charles Smutz (Sandia National Laboratories), Michael R. Smith (Sandia National Laboratories)

Technological advances relating to artificial intelligence (AI) and explainable AI (xAI) techniques are at a stage of development that requires better understanding of operational context. AI tools are primarily viewed as black boxes, and some hesitation exists in employing them due to a lack of trust and transparency. xAI technologies largely aim to overcome these issues to improve the operational efficiency and effectiveness of operators, speeding up the process and allowing for more consistent and informed decision making from AI outputs. Such efforts require not only robust and reliable models but also explanations that are relevant and understandable to end users in order to support user goals, reduce bias, and improve trust in AI models. Cybersecurity operations settings represent one such context, in which automation is vital for maintaining cyber defenses. AI models and xAI techniques were developed to aid analysts in identifying events and making decisions about flagged events (e.g., a network attack). We instrumented the tools used for cybersecurity operations to unobtrusively collect data and evaluate the effectiveness of the xAI tools. During a pilot study for deployment, we found that the xAI tools, while intended to increase trust and improve efficiency, were not heavily utilized, nor did they improve analyst decision accuracy. Critical lessons were learned that impact the utility and adoptability of the technology, including consideration of end users, their workflows, their environments, and their propensity to trust xAI outputs.
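As an illustration of the kind of unobtrusive data collection the abstract describes, the minimal sketch below logs how an analyst interacts with an xAI-assisted alert (whether the explanation was opened and what disposition was chosen). All names here (AnalystEvent, log_interaction, events.jsonl) are hypothetical; the paper does not detail its instrumentation at this level.

```python
"""Minimal sketch of unobtrusive interaction logging for an xAI-assisted
triage tool. All identifiers are hypothetical and for illustration only."""

import json
import time
from dataclasses import dataclass, asdict
from pathlib import Path

LOG_PATH = Path("events.jsonl")  # hypothetical append-only interaction log


@dataclass
class AnalystEvent:
    """One analyst interaction with a flagged event (e.g., a suspected network attack)."""
    alert_id: str             # identifier of the flagged event under review
    model_verdict: str        # AI model's label, e.g., "malicious" or "benign"
    explanation_viewed: bool  # whether the analyst opened the xAI explanation
    analyst_decision: str     # the analyst's final disposition
    timestamp: float          # wall-clock time of the decision


def log_interaction(event: AnalystEvent) -> None:
    """Append one interaction record without interrupting the analyst's workflow."""
    with LOG_PATH.open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(asdict(event)) + "\n")


# Example: record a decision where the explanation was available but not opened.
log_interaction(AnalystEvent(
    alert_id="alert-0042",
    model_verdict="malicious",
    explanation_viewed=False,
    analyst_decision="escalate",
    timestamp=time.time(),
))
```

Logging to an append-only file of JSON records is one way such instrumentation could avoid interfering with analyst workflows while still capturing whether explanations were consulted and how decisions were made.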
