Megan Nyre-Yu (Sandia National Laboratories), Elizabeth S. Morris (Sandia National Laboratories), Blake Moss (Sandia National Laboratories), Charles Smutz (Sandia National Laboratories), Michael R. Smith (Sandia National Laboratories)

Technological advances in artificial intelligence (AI) and explainable AI (xAI) have reached a stage of development that requires a better understanding of operational context. AI tools are often viewed as black boxes, and some hesitation exists in deploying them due to a lack of trust and transparency. xAI technologies largely aim to overcome these issues and to improve operators' efficiency and effectiveness by speeding up analysis and enabling more consistent, better-informed decision making from AI outputs. Such efforts require not only robust and reliable models but also explanations that are relevant and understandable to end users, so that the tools help achieve user goals, reduce bias, and improve trust in AI models. Cybersecurity operations are one such context, in which automation is vital for maintaining cyber defenses. AI models and xAI techniques were developed to aid analysts in identifying events and making decisions about flagged events (e.g., a network attack). We instrumented the tools used in cybersecurity operations to unobtrusively collect data and evaluate the effectiveness of the xAI tools. During a pilot study for deployment, we found that the xAI tools, while intended to increase trust and improve efficiency, were not utilized heavily, nor did they improve analysts' decision accuracy. We learned critical lessons that affect the utility and adoptability of the technology, including the need to consider end users, their workflows, their environments, and their propensity to trust xAI outputs.
