Lea Duesterwald (Carnegie Mellon University), Ian Yang (Carnegie Mellon University), Norman Sadeh (Carnegie Mellon University)

Human actions, or the lack thereof, contribute to a large majority of cybersecurity incidents. Traditionally, when looking for advice on cybersecurity questions, people have turned to search engines or social sites like Reddit. The rapid adoption of chatbot technologies offers a potentially more direct way of getting similar advice. Initial research suggests, however, that while chatbot answers to common cybersecurity questions tend to be fairly accurate, they may not be very effective, as they often fall short on other desired qualities such as understandability, actionability, or motivational power. Research in this area has thus far been limited to evaluations conducted by the researchers themselves on small sets of synthetic questions. This article reports on what we believe to be the first in situ evaluation of a cybersecurity Question Answering (QA) assistant. We also evaluate a prompt engineered to help the cybersecurity QA assistant generate more effective answers. The study involved a 10-day deployment of a cybersecurity QA assistant in the form of a Chrome extension. Collectively, participants (N=51) evaluated answers generated by the assistant to over 1,000 cybersecurity questions they submitted as part of their regular day-to-day activities. The results suggest that a majority of participants found the assistant useful and often took actions based on the answers they received. In particular, the study indicates that prompting improved the effectiveness of answers and, most notably, the likelihood that users would follow the assistant's recommendations (the fraction of participants who actually followed the advice was 0.514 with prompting vs. 0.402 without, p=4.61E-04), a measurable impact on people's actual behavior. We provide a detailed analysis of the data collected in this study, discuss its implications, and outline next steps in the development and deployment of effective cybersecurity QA assistants, which offer the promise of changing actual user behavior and reducing human-related security incidents.
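To make the prompting approach concrete, the sketch below shows one way an LLM-backed QA assistant could be steered toward the answer qualities the abstract highlights (understandability, actionability, motivational power). This is a minimal illustration only: the prompt wording, model name, and OpenAI-style API are assumptions for demonstration and are not the engineered prompt evaluated in the study.

```python
# Illustrative sketch only: the prompt text, model choice, and API usage below
# are assumptions, not the study's actual artifact.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A system prompt nudging answers toward understandability, actionability,
# and motivational power, in addition to accuracy.
SYSTEM_PROMPT = (
    "You are a cybersecurity assistant for non-expert users. "
    "Answer in plain language a layperson can understand, "
    "give concrete step-by-step actions the user can take right away, "
    "and briefly explain why taking those steps matters for their security."
)

def answer_security_question(question: str) -> str:
    """Return an answer to a user's cybersecurity question."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(answer_security_question(
        "Is it safe to reuse my email password on a shopping site?"
    ))
```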
