Monday, 26 February

  • 09:00 - 09:05
    Opening Remarks
    Aviary Ballroom
  • 09:05 - 10:00
    Test of Time Award with Keynote
    Aviary Ballroom
  • 10:00 - 10:20
    Morning Coffee Break
    Boardroom with Foyer
  • 10:20 - 12:00
    Paper Session 1
    Aviary Ballroom
    • Sijie Zhuo (University of Auckland), Robert Biddle (University of Auckland and Carleton University, Ottawa), Lucas Betts, Nalin Asanka Gamagedara Arachchilage, Yun Sing Koh, Danielle Lottridge, Giovanni Russello (University of Auckland)

      Phishing is the use of social engineering to deceive a person into sharing sensitive information or downloading malware. Research on phishing susceptibility has focused on personality traits, demographics, and design factors related to the presentation of phishing. There is very little research on how a person’s state of mind might affect the outcomes of phishing attacks. We conducted a scenario-based in-lab experiment with 26 participants to examine whether workload affects risky cybersecurity behaviours. Participants were tasked with managing 45 emails for 30 minutes, 4 of which were phishing emails. We found that, under high workload, participants had higher physiological arousal and longer fixations, and spent half as much time reading email as under low workload. There was no main effect of workload on phishing clicking; however, a post-hoc analysis revealed that, under high workload, participants were more likely to click on task-relevant phishing emails than on non-relevant ones, whereas there was no such difference under low workload. We discuss the implications of state of mind and attention for risky cybersecurity behaviour.

    • Michael Clark (Brigham Young University), Scott Ruoti (The University of Tennessee), Michael Mendoza (Imperial College London), Kent Seamons (Brigham Young University)

      Users struggle to select strong passwords. System-assigned passwords address this problem, but they can be difficult for users to memorize. While password managers can help store system-assigned passwords, there will always be passwords that a user needs to memorize, such as their password manager’s master password. As such, there is a critical need for research into helping users memorize system-assigned passwords. In this work, we compare three designs for password memorization aids inspired by the method of loci, or memory palace. Design One displays a two-dimensional scene with objects placed at arbitrary (and randomized) positions; Design Two fixes the objects’ positions within the scene; and Design Three displays the scene as a navigable, three-dimensional representation. In an A-B study of these designs, we find that, surprisingly, there is no statistically significant difference between the memorability of the three designs, nor between them and assigning users a passphrase to memorize, which we used as the control in this study. However, we find that when perfect recall failed, our designs helped users remember a greater portion of the encoded system-assigned password than did a passphrase, a property we refer to as durability. Our results indicate that there could be room for memorization aids that incorporate fuzzy or error-correcting authentication. Similarly, our results suggest that simple (i.e., cheap to develop) designs of this nature may be just as effective as more complicated, high-fidelity (i.e., expensive to develop) designs.

    • Adryana Hutchinson (The George Washington University), Jinwei Tang (Clark University), Adam Aviv (The George Washington University), Peter Story (Clark University)

      To protect their security, users are instructed to use unique passwords for all their accounts. Password managers make this possible, as they can generate, store, and autofill passwords within a user’s browser. Unfortunately, prior work has identified usability issues which may deter users from using password managers. In this paper, we measure the prevalence of usability issues affecting four popular password managers (Chrome, Safari, Bitwarden, and Keeper). We tested these password managers with their out-of-the-box settings on 60 randomly sampled websites. We show that users are likely to encounter issues using password managers during account registration and authentication. We found that usability issues were widespread, but varied by password manager. Common issues included password managers not prompting the user to generate passwords, autofilling web forms incorrectly or not at all, and generating passwords that were incompatible with websites’ password policies. We found that Chrome and Safari had fewer interaction issues than the other password managers we tested. We conclude by suggesting ways that websites and password managers can improve their compatibility with each other. For example, we recommend that password managers tailor their passwords to websites’ requirements (like Chrome and Safari), or adopt alphanumeric-only password generation by default (like Bitwarden).

    • Elina van Kempen, Zane Karl, Richard Deamicis, Qi Alfred Chen (UC Irvine)

      Biometric authentication systems, such as fingerprint scanning or facial recognition, are now commonplace and available on the majority of new smartphones and laptops. With the development of tablet and digital-pen systems, handwriting authentication is worth considering for deployment.

      In this paper, we evaluate the viability of using the dynamic properties of handwriting, provided by the Apple Pencil, to distinguish and authenticate individuals. Following a data collection phase involving 30 participants, we examined the accuracy of time-series classification models on different inputs and on text-independent versus text-dependent authentication, and we analyzed the effect of handwriting forgery. Additionally, participants completed a user survey to gather insight into the public reception of handwriting authentication. While the classification models proved highly accurate, above 99% in many cases, and participants had a generally positive view of handwriting authentication, the models were not always robust against forgeries, with forgery success rates of up to 21.3%. Overall, participants were positive about using handwriting authentication but showed some concern regarding its privacy and security impacts.

    • Hao-Ping (Hank) Lee (Carnegie Mellon University), Wei-Lun Kao (National Taiwan University), Hung-Jui Wang (National Taiwan University), Ruei-Che Chang (University of Michigan), Yi-Hao Peng (Carnegie Mellon University), Fu-Yin Cherng (National Chung Cheng University), Shang-Tse Chen (National Taiwan University)

      Audio CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) is an accessible alternative to the traditional CAPTCHA for people with visual impairments. However, the literature has found that audio CAPTCHA suffers from lower usability and lower security than its visual counterpart. In this paper, we propose AdvCAPTCHA, a novel audio CAPTCHA generated using adversarial machine learning techniques. By conducting studies with people with and without visual impairments, we show that AdvCAPTCHA can outperform the status quo audio CAPTCHA in security but not usability. We demonstrate the feasibility of using AdvCAPTCHA to detect malicious attacks. We also present an evaluation metric, thresholding, to quantify the trade-off between usability and security in audio CAPTCHA design. Finally, we discuss approaches to the real-world adoption of AdvCAPTCHA.

    • Filipo Sharevski (DePaul University), Mattia Mossano, Maxime Fabian Veit, Gunther Schiefer, Melanie Volkamer (Karlsruhe Institute of Technology)

      QR codes, designed for convenient access to links, have recently been appropriated as phishing attack vectors. As this type of phishing is relatively new and many aspects of the threat in real conditions are unknown, we conducted a study in naturalistic settings (n=42) to explore how people behave around QR codes that might contain phishing links. We found that 28 (67%) of our participants opened the link embedded in the QR code without inspecting the URL for potential phishing cues. As a pretext, we used a poster that invited people to scan a QR code and contribute to humanitarian aid. The pretext was persuasive enough that 22 (52%) of our participants indicated it was the main reason they scanned the QR code and accessed the embedded link in the first place. We used three link variants to test whether people are able to spot a potential phishing threat associated with the poster’s QR code (each participant scanned only one variant). In the variants where the link appeared legitimate or was obfuscated by a link-shortening service, only two out of 26 participants (8%) abandoned the URL when they saw the preview in the QR code scanner app. In the variant where the link explicitly contained the word “phish” in the domain name, this ratio rose to 7 out of 16 participants (44%). We use our findings to propose usable security interventions in QR code scanner apps intended to warn users about potential phishing links.

    • Asangi Jayatilaka (Centre for Research on Engineering Software Technologies (CREST), The University of Adelaide, School of Computing Technologies, RMIT University), Nalin Asanka Gamagedara Arachchilage (School of Computer Science, The University of Auckland), M. Ali Babar (Centre for Research on Engineering Software Technologies (CREST), The University of Adelaide)

      Despite technical and non-technical countermeasures, humans continue to be tricked by phishing emails. How users make email response decisions is a missing piece in the puzzle of why people still fall for phishing emails. We conducted an empirical study using a think-aloud method to investigate how people make ‘response decisions’ while reading emails. Grounded-theory analysis of the in-depth qualitative data enabled us to identify the different elements of email users’ decision-making that influence their email response decisions. Furthermore, we developed a theoretical model that explains how people could be driven to respond to emails, based on the identified elements of users’ email decision-making processes and the relationships uncovered from the data. The findings provide deeper insights into phishing email susceptibility arising from people’s email response decision-making behavior. We also discuss the implications of our findings for designers and researchers working on anti-phishing training, education, and awareness interventions.

  • 12:00 - 13:30
    Lunch
    Lawn
  • 13:30 - 15:10
    Paper Session 2
    Aviary Ballroom
    • This study delves into the utilization patterns, perceptions, and misconceptions surrounding Virtual Private Networks (VPNs) among users in Canada and Japan. We administered a comprehensive survey to 234 VPN users in these two countries, aiming to elucidate the motivations behind VPN usage, users’ comprehension of VPN functionality, and prevalent misconceptions. A distinctive feature of our research lies in its cross-cultural comparison, a departure from previous studies predominantly centered on users within a Western context. Our findings underscore noteworthy distinctions among participant groups. Specifically, Japanese users predominantly employ VPNs for security purposes, whereas Canadian users leverage VPNs for a more diverse array of services, encompassing privacy and access to region-specific content. Furthermore, disparities in VPN understanding emerged, with Canadians demonstrating a superior grasp of VPN applications despite limited technical knowledge, while Japanese participants exhibited a more profound understanding of VPNs, particularly in relation to encrypting transmitted traffic. Notably, both groups exhibited a constrained awareness regarding the data logging practices associated with VPNs. This research significantly contributes to the broader comprehension of VPN usage and sheds light on the cultural intricacies that shape VPN adoption and perceptions, offering valuable insights into the diverse motivations and behaviors of users in Canada and Japan.

    • Cem Topcuoglu (Northeastern University), Andrea Martinez (Florida International University), Abbas Acar (Florida International University), Selcuk Uluagac (Florida International University), Engin Kirda (Northeastern University)

      Operating Systems (OSs) play a crucial role in shaping user perceptions of security and privacy. Yet the distinct perceptions of different OS users have received limited attention from security researchers. The two most dominant operating systems today are MacOS and Microsoft Windows. Although both operating systems contain advanced cybersecurity features that have made it more difficult for attackers to launch attacks and compromise users, folk wisdom holds that MacOS is the more secure operating system of the two. However, this common belief about the two operating systems, as well as the mental models behind it, has not been studied yet.

      In this paper, by conducting detailed surveys with a large number of MacOS and Windows users (n = 208) on Amazon Mechanical Turk, we aim to understand the differences in perception between MacOS and Windows users concerning the cybersecurity and privacy of these operating systems. Our results confirm the folk wisdom and show that many Windows and MacOS users indeed perceive MacOS as a more secure and private operating system than Windows, basing their belief on reputation rather than on technical considerations. Additionally, we found that MacOS users often take fewer security measures, influenced by strong confidence in their system’s malware protection capabilities. Moreover, our analysis highlights the impact of an operating system’s reputation, and of the user’s primary OS, on perceptions of security and privacy. Finally, our qualitative analysis revealed many misconceptions, such as the belief that MacOS is malware-proof. Overall, our findings suggest the need for more focused security training and OS improvements, and provide evidence that understanding users’ mental models is vital for predicting new attack surfaces and proposing usable solutions.

    • Xinyao Ma, Ambarish Aniruddha Gurjar, Anesu Christopher Chaora, Tatiana R Ringenberg, L. Jean Camp (Luddy School of Informatics, Computing, and Engineering, Indiana University Bloomington)

      This study delves into the crucial role of developers in identifying privacy-sensitive information in code. The research is informed by diverse global data protection regulations, such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). It specifically investigates programmers’ ability to discern the sensitivity level of data processing in code, a task of growing importance given the increasing legislative demands for data privacy.

      We conducted an online card-sorting experiment to explore how participating programmers across a range of expertise levels perceive the sensitivity of variable names in code snippets. Our study evaluates the accuracy, feasibility, and reliability of our participating programmers in determining what constitutes a ‘sensitive’ variable. We further evaluate whether there is consensus among programmers, how their level of security knowledge influences any consensus, and whether any consensus or effect of expertise is consistent across different categories of variables. Our findings reveal a lack of consistency among participants regarding the sensitivity of processing different types of data, as indicated by snippets of code with distinct variable names. There remains a significant divergence in opinions, particularly among those with more technical expertise: as technical expertise increases, consensus decreases across the various categories of sensitive data. This study not only sheds light on the current state of programmers’ privacy awareness but also motivates the need for developing better industry practices and tools for automatically identifying sensitive data in code.

    • Tu Le (University of California, Irvine), Zixin Wang (Zhejiang University), Danny Yuxing Huang (New York University), Yaxing Yao (Virginia Tech), Yuan Tian (University of California, Los Angeles)

      Voice-controlled devices and their software component, known as a voice personal assistant (VPA), offer technological advancements that improve user experience. However, they come with privacy concerns, such as unintended recording of users’ private conversations. This data could potentially be stolen by adversaries or shared with third parties. Therefore, users need to be aware of these and other similar privacy risks presented by VPAs. In this paper, we first study, via an online survey of 100 users, how VPA users monitor the voice interactions recorded by their VPAs and what they expect from such monitoring. We find that even though users were aware that their VPAs held recordings of them, they initially thought reviewing the recordings was unnecessary. However, they were surprised both that unintended recordings occurred and that they could review them. When presented with the types of unintended recordings that might happen, more users wanted the option to review their interaction history. This indicates the importance of data transparency. We then build a browser extension that helps users monitor their voice interaction history and notifies them of unintended conversations recorded by their voice assistants. Our tool experiments with notifications via smart light devices in addition to the traditional push notification approach. Using our tool, we then interview 10 users to evaluate its usability and to further understand users’ perceptions of such unintended recordings. Our results show that unintended recordings could be common in the wild and that there is a need for a tool to help manage voice interaction recordings with VPAs. Smart light notifications are a potentially useful mechanism that should be adopted in addition to traditional push notifications.

    • Julie Haney, Clyburn Cunningham, Susanne Furman (National Institute of Standards and Technology)

      The “research-practice gap” can prevent valuable research insights from being applied in practice. While the gap has been studied in several fields, it is unclear whether prior findings and recommendations apply to human-centered cybersecurity (HCC), which may have its own challenges due to the unique characteristics of the cybersecurity field. Overcoming the gap in HCC is especially important given the large role of human behavior in cybersecurity outcomes. As a starting point for understanding this potential gap, we conducted a survey of 152 cybersecurity practitioners. We found that, while participants see the value in HCC insights and are eager to receive and integrate them, they experienced a number of challenges in doing so. We discuss the implications of our results, including how we extend prior research-practice work, suggestions for better supporting practitioners in integrating HCC into their work, and foundations for future work to explore meaningful solutions.

    • Yasmeen Abdrabou (Lancaster University), Mariam Hassib (Fortiss Research Institute of the Free State of Bavaria), Shuqin Hu (LMU Munich), Ken Pfeuffer (Aarhus University), Mohamed Khamis (University of Glasgow), Andreas Bulling (University of Stuttgart), Florian Alt (University of the Bundeswehr Munich)

      Existing gaze-based methods for user identification either require special-purpose visual stimuli or artificial gaze behaviour. Here, we explore how users can be differentiated by analysing natural gaze behaviour while they freely look at images. Our approach is based on the observation that looking at different images, for example, a picture from your last holiday, induces stronger emotional responses that are reflected in gaze behaviour and, hence, are unique to the person who experienced that situation. We collected gaze data in a remote study (N = 39) where participants looked at three image categories: personal images, other people’s images, and random images from the Internet. We demonstrate the potential of identifying different people using machine learning with an accuracy of 85%. The results pave the way towards a new class of authentication methods based solely on natural human gaze behaviour.

    • Imani N. S. Munyaka (University of California, San Diego), Daniel A Delgado, Juan Gilbert, Jaime Ruiz, Patrick Traynor (University of Florida)

      Telephone carriers and third-party developers have created technical solutions to detect and notify consumers of spam calls. The goal of this technology is to help users make decisions about incoming calls and reduce the negative effects of spam calls on finances and daily life. Although useful, this technology has varying accuracy due to technical limitations. In this study, we conduct design interviews, a call response diary study, and an MTurk survey (N=143) to explore the relationship between warning accuracy and callee decision-making for incoming calls. Our results suggest that previous call experience can lead to incomplete mental models of how Caller ID works. Additionally, we find that false alarms and missed detections do not impact call response but can influence user expectations of the call. Since adversaries can exploit mismatched expectations, we recommend warning design characteristics that align with user expectations under detection accuracy constraints.

  • 15:10 - 15:40
    Afternoon Coffee Break
    Boardroom with Foyer
  • 15:40 - 16:30
    Invited Talk (joint)
    Aviary Ballroom
  • 16:40 - 17:10
    Paper Session 3 (Vision)
    Aviary Ballroom
    • Arjun Arunasalam (Purdue University), Habiba Farrukh (University of California, Irvine), Eliz Tekcan (Purdue University), Z. Berkay Celik (Purdue University)

      Refugees form a vulnerable population due to their forced displacement, facing many challenges in the process, such as language barriers and financial hardship. Recent world events such as the Ukrainian and Afghan refugee crises have centered this population in online discourse, especially on social media, e.g., TikTok and Twitter. Although discourse can be benign, hateful and malicious discourse also emerges. Thus, refugees often become targets of toxic content, where malicious attackers post online hate targeting this population. Such online toxicity can vary in nature; e.g., toxicity can differ in scale (individual vs. group) and intent (embarrassment vs. harm), and the varying types of toxicity targeting refugees remain largely unexplored. We seek to understand the types of toxic content targeting refugees in online spaces. To do so, we carefully curate seed queries to collect a corpus of ∼3M Twitter posts targeting refugees. We semantically sample this corpus to produce an annotated dataset of 1,400 posts against refugees in seven different languages. We additionally use a deductive approach to qualitatively analyze the motivating sentiments (reasons) behind toxic posts. We discover that trolling and hate speech are the predominant toxic content targeting refugees. Furthermore, we uncover four main motivating sentiments (e.g., perceived ungratefulness, perceived fear of safety). Our findings synthesize important lessons for moderating toxic content, especially for vulnerable communities.

    • Sakuna Harinda Jayasundara, Nalin Asanka Gamagedara Arachchilage, Giovanni Russello (University of Auckland)

      Access control failures can cause data breaches, putting entire organizations at risk of financial loss and reputation damage. One of the main reasons for such failures is the mistakes system administrators make when they manually generate low-level access control policies directly from high-level requirement specifications. To help administrators in that policy generation process, previous research proposed graphical policy authoring tools and automated policy generation frameworks. In reality, however, those tools and frameworks are neither usable nor reliable enough to help administrators generate access control policies accurately while avoiding access control failures. As a solution, in this paper we present “AccessFormer”, a novel policy generation framework that improves both the usability and the reliability of access control policy generation. On the one hand, the proposed framework improves the reliability of policy generation by utilizing Language Models (LMs) to generate, verify, and refine access control policies, incorporating both the system’s and the administrator’s feedback. On the other hand, it improves the usability of policy generation through a policy authoring interface designed to help administrators understand policy generation mistakes and accurately provide feedback.

    • Tobias Länge (Karlsruhe Institute of Technology), Philipp Matheis (Karlsruhe Institute of Technology), Reyhan Düzgün (Ruhr University Bochum), Melanie Volkamer (Karlsruhe Institute of Technology), Peter Mayer (Karlsruhe Institute of Technology, University of Southern Denmark)

      Virtual reality (VR) is a growing technology with social, gaming, and commercial applications. Due to the sensitive data involved, these systems require secure authentication. Shoulder-surfing poses a particular threat because (1) interaction is mostly performed by means of visible gestures and (2) wearing the headset prevents users from noticing bystanders. In this paper, we analyze research proposing shoulder-surfing resistant schemes for VR and present new shoulder-surfing resistant authentication schemes. Furthermore, we conducted a user study and found that authenticating with our proposed schemes is efficient, with times as low as 5.1 seconds. This is faster than previous shoulder-surfing resistant VR schemes, while offering similar user satisfaction.

  • 17:10 - 17:15
    Closing Remarks
    Aviary Ballroom