Friday, 27 February

  • 07:30 - 09:00
    Breakfast
    Pacific Ballroom D
  • 09:00 - 09:10
    Opening Remarks
    Pacific Ballroom
  • 09:10 - 10:00
    Test-of-Time Award Keynote
    Pacific Ballroom
  • 10:00 - 10:20
    Morning Break
    Pacific Ballroom D
  • 10:20 - 12:00
    Paper Session 1: How People Experience Security and Privacy
    Pacific Ballroom
    • Pithayuth Charnsethikul (University of Southern California), Anushka Fattepurkar (University of Southern California), Dipsy Desai (University of Southern California), Gale Lucas (University of Southern California), Jelena Mirkovic (University of Southern California)

      We replicated the study by Mayer et al. [1] on password habits and password manager (PM) usage at a large private US university. We conducted an online survey (n=437) and found high awareness (96%) and usage (94%) of PMs, but limited use of password generation (26%) and substantial password reuse, with participants reusing more than half of their passwords. These findings are consistent with the original study. However, we found that participants were unlikely to adopt a free third-party PM offered by the university, contrary to the original findings. Extending the original study, we found that awareness of the free PM was low: only 35% knew about it, and its adoption was even lower, at just 15%. We also found that faculty had the strongest password habits, while students had the weakest. Based on our findings, we provide recommendations for increasing the use of password generation features, broadening adoption of an institution-provided PM, and guiding future replication efforts.

    • Rozalina Doneva (Karlsruhe Institute of Technology (KIT)), Anne Hennig (Karlsruhe Institute of Technology (KIT)), Peter Mayer (University of Southern Denmark (SDU))

      While passwordless authentication methods are on the rise, password-based authentication remains widely used in practice. In search of effective means to promote stronger password choices, we created six interactive password strength calculator designs and evaluated their effectiveness with respect to usability, emotional affect, password strength, and password length in an online survey with 89 participants. The results showed that while all six designs increased password strength and length compared to the control group, the differences were not statistically significant. Based on the mean values, fear-appeal nudges yielded passwords of similar strength to positive-feedback nudges. Still, positive-feedback nudges resulted in slightly longer passwords, breaking with the paradigm that only fear appeals effectively support the creation of secure passwords. Furthermore, designs with additional information and guidance yielded longer and stronger passwords than those without, although the differences were not statistically significant. However, designs with additional information and guidance exhibited significantly higher usability scores, indicating that providing guidance has the potential not only to enhance password security but also to improve usability.

    • Renascence Tarafder Prapty (University of California Irvine), Gene Tsudik (University of California Irvine)

      Multi-Factor Authentication (MFA) enhances login security by requiring users to present multiple authentication factors. MFA adoption has surged in recent years in response to the growing frequency, diversity, and sophistication of attacks. Duo is among the most popular MFA providers, used by thousands of organizations worldwide, including Fortune 500 companies and large educational institutions. However, its usability has not been investigated thoroughly or recently. Although prior work addressed technical challenges and user perceptions during initial implementation phases, it did not assess key usability metrics, such as average task completion time and System Usability Scale (SUS) scores. Moreover, relevant prior studies are outdated, having been conducted years ago when the entire MFA concept was relatively new and unfamiliar to the average user.

      Motivated by the above, we conducted a long-term and large-scale Duo usability study. This study took place at the University of California Irvine (UCI) over the course of the 2024-2025 academic year and involved 2,559 unique participants. Our analysis is based on a large set of authentication log files and a survey of 57 randomly selected participants. The study reveals that the average overhead of a Duo Push notification task is nearly 8 seconds, a duration described by participants as short to moderate. Several factors influence this overhead, including the time of day when the task was performed, the participant’s field of study, and their education/student level. The rate of authentication failures due to incomplete Duo tasks is 4.35%. Furthermore, 43.86% of survey respondents reported experiencing a Duo login failure at least once. The Duo SUS score is found to be 70, corresponding to a “Good” usability level: while participants generally find Duo easy to use, they also perceive it as annoying. On a positive note, Duo increases participants’ sense of security regarding their accounts. Finally, participants described commonly encountered issues and provided constructive suggestions for improvement.
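
      For reference, the SUS score reported above follows the standard System Usability Scale scoring procedure (ten 1-5 Likert items, rescaled to 0-100). The Python sketch below is illustrative only and is not part of the paper; the example responses are hypothetical.

        def sus_score(responses):
            """Standard SUS scoring: odd-numbered items contribute (rating - 1),
            even-numbered items contribute (5 - rating); the sum is multiplied
            by 2.5 to map the result onto a 0-100 scale."""
            assert len(responses) == 10, "SUS has exactly ten items"
            total = sum((r - 1) if (i % 2 == 0) else (5 - r)  # index 0, 2, ... are the odd-numbered items
                        for i, r in enumerate(responses))
            return total * 2.5

        # Hypothetical response set; a score near 70 sits in the "Good" band of
        # common adjective ratings, matching the level reported above.
        print(sus_score([4, 2, 4, 2, 4, 2, 4, 3, 4, 2]))  # 72.5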

    • Ece Gumusel (University of Illinois Urbana-Champaign), Yueru Yan (Indiana University Bloomington), Ege Otenen (Indiana University Bloomington)

      Interacting with Large Language Model (LLM) chatbots exposes users to new security and privacy challenges, yet little is known about how people perceive and manage these risks. While prior research has largely examined technical vulnerabilities, users’ perceptions of privacy—particularly in the United States, where regulatory protections are limited—remain underexplored. In this study, we surveyed 267 U.S.-based LLM users to understand their privacy perceptions, practices, and data-sharing preferences, and how demographics and prior LLM experience shape these behaviors. Results show low awareness of privacy policies, moderate concern over data handling, and reluctance to share sensitive information like social security or credit card numbers. Usage frequency and prior experience strongly influence comfort and control behaviors, while demographic factors shape disclosure patterns of certain personal data. These findings reveal privacy behaviors that diverge from traditional online practices and uncover nuanced trade-offs that could introduce security risks in LLM interactions. Building on these lessons, we provide actionable guidance for reducing user-related vulnerabilities and shaping effective policy and governance.

    • Mete Harun Akcay (Åbo Akademi University), Siddarth Prakash Rao (Nokia Bell Labs), Alexandros Bakas (Nokia Bell Labs), Buse Atli (Linköping University)

      User-generated content, such as photos, comprises the majority of online media content and drives engagement due to the human ability to process visual information quickly. Consequently, many online platforms are designed for sharing visual content, with billions of photos posted daily. However, photos often reveal more than intended through visible and contextual cues, leading to privacy risks. Previous studies typically treat privacy as a property of the entire image, overlooking individual objects that may carry varying privacy risks and influence how users perceive the image. We address this gap with a mixed-methods study (n = 92) to understand how users evaluate the privacy of images containing multiple sensitive objects. Our results reveal mental models and nuanced patterns that uncover how granular details, such as photo-capturing context and the co-presence of other objects, affect privacy perceptions. These novel insights could enable personalized, context-aware privacy protection designs on social media and future technologies.

    • Lily Klucinec (Carnegie Mellon University), Ellie Young (Carnegie Mellon University), Elijah Bouma-Sims (Carnegie Mellon University), Lorrie Faith Cranor (Carnegie Mellon University)

      Prior work has shown that teenagers engage with crypto assets such as Bitcoin, NFTs, and cryptocurrency futures. However, no human subjects research has investigated teens’ interactions with these assets. Building on prior research by Bouma-Sims et al. studying teenagers on Reddit, we surveyed 143 emerging adults aged 18-20 about their most notable positive or negative experiences and the harms they encountered while using crypto assets as minors. Our findings suggest that while minors were overwhelmingly motivated by profit and sometimes encouraged by family members to engage, crypto assets also filled a gap in internet payment systems, allowing minors to access digital goods without parental involvement. Engaging with crypto assets puts minors at risk for digital and financial harms they otherwise would not encounter, such as pump-and-dump scams and gambling losses. We discuss the difficulties of protecting minors from these harms in the greater landscape of crypto market regulation.

    • Filipo Sharevski (DePaul University), Jennifer Vander Loop (DePaul University), Sarah Ferguson (DePaul University), Viktorija Paneva (LMU Munich)

      For all the immersive potential offered by Virtual Reality (VR) headsets, the technology itself is also conducive to perceptual manipulations. Altering user perception in VR could negatively affect security behavior, as translating prior experiences into an immersive environment might introduce an atypical susceptibility to phishing. A case in point is the routine evaluation of potentially suspicious emails for links or attachments, a task people might perform proficiently in traditional interactive environments yet fall victim to phishing when doing so via a VR headset. To explore VR’s potential for such manipulative alterations, we devised a study exploring user assessment of and action on suspicious emails and warnings through VR headsets. A balanced set of Apple Vision Pro users (n=20) and Meta Quest 3 users (n=20) were invited to evaluate their own Gmail messages. Prior to doing so, we covertly sent a false-positive suspicious email – containing either a URL or an attachment – that carried a warning banner but was nonetheless legitimate. Our observations showed that two Apple Vision Pro participants clicked the link and one Meta Quest 3 participant opened the attachment. In all three cases, the susceptibility to phishing was due to the headsets’ hypersensitive click response and poor ergonomic precision during the email evaluation task. Although the perceptual manipulation in these cases could be deemed unintentional, we nonetheless provide evidence of VR’s potential to negatively affect users’ defenses against immersive manifestations of social engineering. Based on these findings and the participation experience, we offer recommendations for implementing suspicious email warnings tailored to VR environments.

  • 12:00 - 13:30
    Lunch
    Loma Vista Terrace and Harborside
  • 13:30 - 15:10
    Paper Session 2: Security and Privacy in Practice
    Pacific Ballroom
    • Tamara Bondar (Carleton University), Hala Assal (Carleton University)

      System administrators are primarily responsible for ensuring the security of their systems and services. While security is typically among their top considerations, they must also attend to various competing priorities. Through an interview study with 7 sysadmins and a large-scale survey study with 124 sysadmins in North America, this paper explores the factors influencing system administrators’ security vulnerability remediation decisions. In addition, we explore how the vulnerability creator (whether the sysadmin themself or another sysadmin) affects remediation decisions.

      Our findings reveal that remediation decisions are often complex and influenced by various factors, including vulnerability severity and the sysadmin’s skills and experience. The creator of the vulnerability had minimal effect on remediation decisions, as we found that sysadmins typically assume psychological ownership of and moral responsibility for their systems. Our participants recommended collaboration among sysadmins and with third-party vendors to facilitate vulnerability remediation.

    • Niklas Busch (CISPA Helmholtz Center for Information Security, Germany), Philip Klostermeyer (CISPA Helmholtz Center for Information Security, Germany), Jan H. Klemmer (CISPA Helmholtz Center for Information Security, Germany), Yasemin Acar (Paderborn University, Germany), Sascha Fahl (CISPA Helmholtz Center for Information Security, Germany)

      Hardening computer systems against cyberattacks is crucial for security. However, past incidents illustrated that many system operators struggle with effective system hardening. Hence, many computer systems and applications remain vulnerable to security threats. To date, the research community lacks a comprehensive understanding of system operators’ motivations, practices, and challenges related to system hardening. With a focus on practices and challenges, we qualitatively analyzed 316 Stack Exchange (SE) posts related to system hardening. We find that access control and deployment-related issues are the most challenging, and system operators suffer from misconceptions and unrealistic expectations. Most frequently, posts focused on operating systems and server applications. System operators were driven by the fear of their systems getting attacked or by compliance reasons. Finally, we discuss our research questions, make recommendations for future system hardening, and illustrate the implications of our work.

    • Marthin Toruan (Royal Melbourne Institute of Technology), R.D.N. Shakya (University of Moratuwa), Samuel Tseitkin (ExeQuantum), Raymond K. Zhao (ExeQuantum), Nalin Arachchilage (Royal Melbourne Institute of Technology)

      Advances in quantum computing increasingly threaten the security and privacy of data protected by current cryptosystems, particularly those relying on public-key cryptography. In response, the international cybersecurity community has prioritized the implementation of Post-Quantum Cryptography (PQC), a new cryptographic standard designed to resist quantum attacks while operating on classical computers. The National Institute of Standards and Technology (NIST) has already standardized several PQC algorithms and plans to deprecate classical asymmetric schemes, such as RSA and ECDSA, by 2035. Despite this urgency, PQC adoption remains slow, often due to limited developer expertise. Application Programming Interfaces (APIs) are intended to bridge this gap, yet prior research on classical security APIs demonstrates that poor usability of cryptographic APIs can lead developers to introduce vulnerabilities when implementing applications, a risk amplified by the novelty and complexity of PQC. To date, the usability of PQC APIs has not been systematically studied. This research presents an empirical evaluation of the usability of PQC APIs, observing how developers interact with APIs and documentation during software development tasks. The study identifies cognitive factors that influence developers’ performance when working with PQC primitives with minimal onboarding. The findings highlight opportunities across the PQC ecosystem to improve developer-facing guidance, terminology alignment, and workflow examples to better support non-specialists.
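
      To illustrate the kind of API interaction such a study observes, the sketch below shows a key-encapsulation round trip using the open-source liboqs-python bindings (the oqs module). This example is not taken from the paper; the algorithm name and its availability are assumptions that depend on the installed liboqs build.

        import oqs  # liboqs-python; assumed to be installed with an ML-KEM-capable build

        alg = "ML-KEM-768"  # hypothetical choice; older builds expose "Kyber768" instead

        # Receiver generates a keypair and publishes the public key.
        receiver = oqs.KeyEncapsulation(alg)
        public_key = receiver.generate_keypair()

        # Sender encapsulates a shared secret against the receiver's public key.
        sender = oqs.KeyEncapsulation(alg)
        ciphertext, secret_at_sender = sender.encap_secret(public_key)

        # Receiver decapsulates the ciphertext to recover the same shared secret.
        secret_at_receiver = receiver.decap_secret(ciphertext)
        assert secret_at_sender == secret_at_receiver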

    • Ravi Mahankali (University of Bristol), Jo Hallett (University of Bristol)

      What usability issues do developers using Differential Privacy libraries face? We analyzed 2,021 GitHub issues from 5 major Differential Privacy libraries, identifying usability problems such as API confusion, poor error feedback, and documentation gaps. Unlike other privacy-preserving technologies, such as cryptographic libraries, which struggle primarily with installation issues, Differential Privacy libraries face unique challenges. The main contributions of this work include: a comprehensive taxonomy of 14 distinct usability issue categories, identified through a systematic analysis of real-world developer experiences; empirical evidence that Differential Privacy libraries face different usability challenges than other privacy libraries, with API misuse dominating at 31.5% of all issues; and library-specific usability profiles revealing that specialized libraries (IBM DP and Google DP) show distinct patterns from general-purpose frameworks (PySyft), indicating the need for library-specific approaches to usability design.
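
      To make the parameters behind this API confusion concrete, here is a minimal, library-agnostic sketch of the Laplace mechanism for a counting query; it is not drawn from any of the five libraries studied, and the function name is illustrative.

        import numpy as np

        def dp_count(records, predicate, epsilon, rng=None):
            """Epsilon-differentially-private count via the Laplace mechanism.
            A counting query has sensitivity 1 (adding or removing one record
            changes the count by at most 1), so Laplace noise with scale
            1/epsilon suffices; `epsilon` is the privacy-budget knob that
            DP libraries typically expose."""
            rng = rng or np.random.default_rng()
            true_count = sum(1 for r in records if predicate(r))
            return true_count + rng.laplace(scale=1.0 / epsilon)

        # Example: a noisy count of ages over 40 under a privacy budget of 0.5.
        print(dp_count([23, 45, 67, 31, 52], lambda age: age > 40, epsilon=0.5))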

    • Masoumeh Shafieinejad (Vector Institute), Xi He (Vector Institute and University of Waterloo), Bailey Kacsmar (Amii & University of Alberta)

      Privacy is an instance of a social norm formed through legal, technical, and cultural dimensions. Institutions such as regulators, industry, and researchers act as societal agents that both influence and respond to evolving norms. Attempts to promote privacy must account for this complexity and the dynamic interactions among these actors. Privacy-enhancing technologies (PETs) are technical solutions that enable the development of systems that benefit society while ensuring the privacy of the individuals whose data is used. However, despite increased privacy challenges and a corresponding increase in new regulations across the globe, a low adoption rate of PETs persists. In this work, we investigate the factors influencing industry’s decision-making processes around PETs adoption, as well as the extent to which privacy regulations inspire such adoption, through a qualitative survey study with 22 industry participants from across Canada. Informed by the results of our analysis, we make recommendations for industry, researchers, and policymakers on how to support what each of them seeks from the others when attempting to improve digital privacy protections. By advancing our understanding of the challenges industry faces, we increase the effectiveness of future privacy research that aims to help overcome these issues.

    • Alexandra Xinran Li (Carnegie Mellon University), Tian Wang (University of Illinois Urbana-Champaign), Yu-Ju Yang (University of Illinois Urbana-Champaign), Miguel Rivera-Lanas (Carnegie Mellon University), Debeshi Ghosh (Carnegie Mellon University), Hana Habib (Carnegie Mellon University), Lorrie Cranor (Carnegie Mellon University), Norman Sadeh (Carnegie Mellon University)

      Privacy regulations impose requirements on data collection and use, including obligations to disclose practices and provide choices free of deceptive patterns, emphasizing user-centric notice and choice delivery. The UsersFirst framework introduces a threat taxonomy to guide organizations in identifying where notices and choices fail to adequately support users. This paper presents an experiment evaluating its effectiveness. Twenty-six participants with privacy expertise analyzed user-centric threats in one of two scenarios, either with or without the taxonomy. Our results show that participants using the taxonomy identified significantly more relevant threats: over twice as many in one scenario and 50% more in the other. While the UsersFirst threat taxonomy helped privacy analysts more effectively identify areas where privacy notices and choice mechanisms fall short, we also identified areas for possible improvements to the taxonomy. Finally, we demonstrate an approach to assessing privacy threat analysis tools that may be useful to other researchers.

    • Anxhela Maloku (Technical University of Munich), Alexandra Klymenko (Technical University of Munich), Stephen Meisenbacher (Technical University of Munich), Florian Matthes (Technical University of Munich)

      In the study of Human-Computer Interaction, privacy is often seen as a core issue, and it has been explored directly in connection with User Interface (UI) and User Experience (UX) design. We systematically investigate the key considerations and factors for privacy in UI/UX, drawing upon the extant literature and 15 semi-structured interviews with experts working in the field. These insights lead to the synthesis of 14 primary design considerations for privacy in UI/UX, as well as 14 key factors under four main axes affecting privacy work therein. From these findings, we produce our main research artifact, a UI/UX Privacy Pattern Catalog, which we validate in a series of two interactive workshops and one online survey with UI/UX practitioners. Our work not only systematizes a field growing in both attention and importance, but it also provides an actionable and expert-validated artifact to guide UI/UX designers in realizing privacy-preserving UI/UX design.

  • 15:10 - 15:40
    Afternoon Break
    Pacific Ballroom D
  • 15:40 - 17:25
    Paper Session 3: Emerging Platforms, Power, and Risk
    Pacific Ballroom
    • Ismat Jarin (University of California, Irvine), Olivia Figueira (University of California, Irvine), Yu Duan (University of California, Irvine), Tu Le (The University of Alabama), Athina Markopoulou (University of California, Irvine)

      Virtual reality (VR) platforms and apps collect users’ sensor data, including motion, facial, eye, and hand data, in abstracted form. These data may expose users to unique privacy risks without their knowledge or meaningful awareness, yet the extent of these risks remains understudied. To address this gap, we propose VR ProfiLens, a framework to study user profiling based on VR sensor data and the resulting privacy risks across consumer VR apps. To systematically study this problem, we first develop a taxonomy rooted in the CCPA’s definition of personal information and expand it across sensor groups, apps, and threat contexts to identify user attributes at risk. Then, we conduct a user study in which we collect VR sensor data from four sensor groups from real users interacting with 10 popular consumer VR apps, followed by a survey. We design and apply an analysis pipeline to demonstrate the feasibility of inferring user attributes from these data. Our results demonstrate that user attributes, including sensitive personal information, face a moderately high to high risk (with F1 scores of up to ∼90%) of being inferred from the abstracted sensor data. Through feature analysis, we further identify correlations among app groups and sensor groups in inferring user attributes. Our findings highlight risks to users, including privacy loss, tracking, targeted advertising, and safety threats. Finally, we discuss both design implications and regulatory recommendations to enhance transparency and better protect users’ privacy in VR.

    • Shijing He (King’s College London), Yaxiong Lei (University of St Andrews), Xiao Zhan (Universitat Politecnica de Valencia), Ruba Abu-Salma (King’s College London), Jose Such (INGENIO (CSIC-UPV))

      The growing adoption of AI-driven smart home devices has introduced new privacy risks for domestic workers (DWs), who are frequently monitored in employers’ homes while also using smart devices in their own households. We conducted semi-structured interviews with 18 UK-based DWs and performed a human-centered threat modeling analysis of their experiences through the lens of Communication Privacy Management (CPM). Our findings extend existing threat models beyond abstract adversaries and single-household contexts by showing how AI analytics, residual data logs, and cross-household data flows shaped the privacy risks faced by participants. In employer-controlled homes, AI-enabled features and opaque, agency-mediated employment arrangements intensified surveillance and constrained participants’ ability to negotiate privacy boundaries. In their own homes, participants had greater control as device owners but still faced challenges, including gendered administrative roles, opaque AI functionalities, and uncertainty around data retention. We synthesize these insights into a sociotechnical threat model that identifies DW agencies as institutional adversaries and maps AI-driven privacy risks across interconnected households, and we outline social and practical implications for strengthening DW privacy and agency.

    • Dev Vikesh Doshi (California State University San Marcos), Mehjabeen Tasnim (California State University San Marcos), Fernando Landeros (California State University San Marcos), Chinthagumpala Muni Venkatesh (California State University San Marcos), Daniel Timko (Emerging Threats Lab / Smishtank.com), Muhammad Lutfor Rahman (California State University San Marcos)

      Phishing attacks through text messages, also known as smishing, are a prevalent social engineering tactic in which attackers impersonate brands to deceive victims into providing personal information and/or money. While awareness campaigns and cyber education are key methods by which organizations help customers recognize smishing, the guidance itself varies widely. In this paper, we investigate the state of practice of how 149 well-known brands across 25 categories educate their customers about smishing and what smishing prevention and reporting advice they provide. After conducting a comprehensive content analysis of the brands, we identified significant gaps in the smishing-related information provided: only 46% of the 149 brands defined smishing, less than 1% offered a video tutorial on smishing, and only 50% provided instructions on how to report it. Our study highlights variation in terminology, prevention advice, and reporting mechanisms across industries, with some brands recommending potentially ineffective strategies such as “ignoring suspicious messages.” These findings establish a baseline for understanding the current state of industry smishing awareness advice and identify specific areas where standardization is needed. From our evaluation, we provide recommendations for how brands can offer streamlined smishing education to their customers for better awareness and protection against increasing smishing attacks.

    • Mohamed Moustafa Dawoud (University of California, Santa Cruz), Alejandro Cuevas (Princeton University), Ram Sundara Raman (University of California, Santa Cruz)

      Generative AI has enabled the large-scale production of photorealistic synthetic sexual imagery, yet prior work on non-consensual intimate imagery and deepfakes has focused mostly on underground forums and dedicated nudification tools. In this paper, we investigate whether these services have moved into mainstream gig marketplaces, where they benefit from larger user bases and higher trust.

      We present the first systematic study of sexually explicit AI generation services (often advertised as AI NSFW services) on a major freelance marketplace, Fiverr. We discover these listings by employing a range of sampling approaches, including keyword searches, sitemap analysis, and snowball sampling, and confirm that they are sexually explicit through an LLM classifier. Through this process we identify 593 AI-enabled NSFW gigs. We also collect a set of control groups from other AI and non-AI categories (n=1,028). We use an LLM to extract each gig’s risk indicators, advertised tools, platform targets, pricing, and seller attributes.

      Our results reveal a rapidly emerging market with new NSFW service freelancers joining at consistently higher rates than any other group we observed (74.9% of NSFW sellers joined in 2025). Within the NSFW segment, 82.8% expose deepfake-enabling features and 87.6% violate Fiverr’s policies on pornography and deepfakes. We also uncover a new type of service, not previously documented: custom sexually explicit LoRA/model training. Sellers disproportionately target downstream platforms such as OnlyFans (54.2%), Instagram (29.5%), and Fanvue (24.1%). For the usable security and privacy community, our results reframe abuse-enabling generative AI as a mainstream problem rather than a dark corner of the Internet.

    • Sicheng Jin (University of New South Wales), Rahat Masood (University of New South Wales), Jung-Sook Lee (University of New South Wales), Hye-Young (Helen) Paik (University of New South Wales)

      The integration of educational technology (edtech) into primary and secondary schools has substantially accelerated, making digital applications core components of modern learning environments. While ostensibly beneficial, these apps introduce substantial privacy and security risks for children, frequently through opaque data collection and sharing practices. However, existing research on children’s applications has predominantly relied on automated dynamic analysis tools, which fail to replicate authentic human behaviours, such as navigating parental gates, configuring privacy settings, or identifying specifically as a student or teacher. Furthermore, prior studies have largely overlooked the accessibility of privacy policies for non-legal experts and do not reflect the current practices of Australian education departments. This paper presents a comprehensive analysis of approximately 200 Android applications sourced from both Australian school recommendations and the Google Play Store’s “Kids” and “Educational” categories. Our methodology follows a three-step approach: (1) static analysis of application code; (2) dynamic analysis of live network traffic to observe real-world data transmissions; and (3) textual analysis of privacy policies to assess their readability and compare their disclosures against observed behaviour. The findings indicate that a substantial subset, 46% of apps, still engage in risky data practices, such as transmitting persistent identifiers not explicitly mentioned in their privacy policies. Additionally, these policies are typically written at a reading level above that of the average Australian parent. Our analysis shows that only 3% of privacy policies meet the threshold of being “fairly easy” to read, leaving most apps effectively inaccessible for parents. Policies rarely matched practice: only about 1 in 4 apps were fully consistent, while the remainder showed partial or conflicting disclosures, often omitting information about third-party recipients and the timing of collection. The vast majority (89.3%) of apps initiated outbound connections before any user activity in the app. These findings offer crucial insights for educators, parents, developers, and policymakers in Australia and abroad to make informed decisions about selecting apps for children and shaping appropriate policy frameworks for educational apps.

    • Sarah Tabassum (University of North Carolina at Charlotte, USA), Narges Zare (University of North Carolina at Charlotte, USA), Cori Faklaris (University of North Carolina at Charlotte, USA)

      In today’s digital world, migrants stay connected to family, institutions, and services across borders, but this reliance on digital communication also exposes them to unfamiliar risks when they enter new technological and cultural environments. Educational migrants (also known as international students) depend on online platforms to manage admission, housing, work, and everyday life in the United States. Yet this transition often introduces an unfamiliar and fragmented digital ecosystem where they encounter privacy and security threats such as phishing, identity fraud, and cross-channel scams. Existing security tools rarely consider the situated vulnerabilities of newcomers who must interpret these threats without local knowledge or culturally familiar cues. To investigate these challenges, we conducted participatory design sessions with 22 educational migrants from Global South countries studying in the United States. Using inductive open coding within a reflexive thematic analysis framework, we identified seven themes of desired features. Participants proposed a range of support mechanisms, including transparent reporting and verification workflows, scam filtering, migrant-focused scam databases, and university-integrated safety tools. Participants also mapped their concepts to high-level AI capabilities, emphasizing detection, identification, and interpretable explanations. Our findings highlight the need for transparent, culturally grounded, and context-aware digital safety supports for newcomers during their early experiences in the U.S. digital ecosystem.

    • Julie M. Haney (National Institute of Standards and Technology, Gaithersburg, Maryland), Shanee Dawkins (National Institute of Standards and Technology, Gaithersburg, Maryland), Sandra Spickard Prettyman (Cultural Catalyst LLC, Chicago), Mary F. Theofanos (National Institute of Standards and Technology, Gaithersburg, Maryland), Kristen K. Greene (National Institute of Standards and Technology, Gaithersburg, Maryland), Kristin L. Kelly Koskey (Cultural Catalyst LLC, Chicago), Jody L. Jacobs (National Institute of Standards and Technology, Gaithersburg, Maryland)

      End-to-end verifiable (E2EV) voting systems, which use cryptographic techniques, have been proposed as a way to increase voter trust and confidence in elections by providing the public with direct evidence of the integrity of election systems and outcomes. However, it is unclear whether the path to E2EV adoption for in-person elections in the United States is feasible given the confluence of factors impacting voter trust and technology adoption. Our research addresses this gap with a first-of-its-kind interview study with 33 election experts in four areas: accessibility, cybersecurity, usability, and general elections. We found that participants’ understanding of and opinions on E2EV diverged. While some lauded E2EV for increased security and transparency, others argued that it does not address major challenges to voter trust in U.S. elections and might actually have a negative impact due to its complexity and limitations. Overall, participants recognized that the feasibility of widescale E2EV adoption hinges not just on the strength and security of the technology but also on consideration of the people and process issues surrounding it. Based on our results, we offer suggestions for future work towards informing decisions about whether to adopt E2EV systems more widely.

    • Khalid Alasiri (School of Computing and Augmented Intelligence, Arizona State University), Rakibul Hasan (School of Computing and Augmented Intelligence, Arizona State University)

      Understanding how psychological traits shape the attack strategies of cyber attackers is critical for developing proactive defenses. This paper presents an early-stage study using a controlled, multi-stage Capture-the-Flag (CTF) environment designed to elicit behavioral expressions of persistence, resilience, risk-taking, and openness to experience. Participants complete validated personality inventories before engaging in a cyberattack task within a simulated but realistic environment that mimics a corporate network. That environment contains both real and deceptive vulnerabilities that attackers can exploit to escalate their privileges and access resources in the system. During that time, system logs, continuously captured screenshots, and think-aloud data will capture their actions and strategies. From these data, behavioral indicators, such as retries, strategic pivots, early high-risk actions, and exploration breadth, will be extracted and used to predict traits. The larger goal is to automatically anticipate attackers’ future actions and proactively deploy defense mechanisms at run time. As a vision-track contribution, this work establishes a methodological foundation for profiling attackers through behavioral telemetry, supporting the future development of human-aware, proactive cyber defense strategies.

  • 17:25 - 17:30
    Closing Remarks
    Pacific Ballroom