All times in Pacific Standard Time (PST).

Monday, 27 February

  • 09:00 - 09:15
    Opening Remarks
  • 09:15 - 09:45
    Test of Time Award with Keynote by Mary Ellen Zurko
  • 09:45 - 10:15
    Session 1 (AR/VR)
    • Stefany Cruz (Northwestern University), Logan Danek (Northwestern University), Shinan Liu (University of Chicago), Christopher Kraemer (Georgia Institute of Technology), Zixin Wang (Zhejiang University), Nick Feamster (University of Chicago), Danny Yuxing Huang (New York University), Yaxing Yao (University of Maryland), Josiah Hester (Georgia Institute of Technology)

      Users face various privacy risks in smart homes, yet there are limited ways for them to learn the details of such risks, such as the data practices of smart home devices and their data flows. In this paper, we present Privacy Plumber, a system that enables a user to inspect and explore the privacy “leaks” in their home using an augmented reality tool. Privacy Plumber allows the user to learn and understand the volume of data leaving the home and how that data may affect their privacy, in the same physical context as the devices in question, because the privacy leaks are visualized with augmented reality. Privacy Plumber uses ARP spoofing to gather aggregate network traffic information and presents it through an overlay on top of the device in a smartphone app. The increased transparency aims to help users make privacy decisions and mend potential privacy leaks, for example by instructing Privacy Plumber which devices to block and on what schedule (e.g., turning off Alexa while sleeping). Our initial user study with six participants demonstrates participants’ increased awareness of privacy leaks in smart devices, which in turn informs their privacy decisions (e.g., which devices to block).
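
      The traffic-interception mechanism lends itself to a short illustration. Below is a minimal, hypothetical Python/Scapy sketch of ARP-spoofing-based traffic metering, not the paper’s implementation; the addresses, the 30-second capture window, and the per-destination byte counts are assumptions for illustration.

        # Hypothetical sketch: ARP-spoof a smart device so its traffic flows
        # through this machine, then count outbound bytes per destination.
        # Requires root privileges; IP forwarding must be enabled on the host
        # so the device keeps connectivity, and in practice the spoofed ARP
        # reply must be re-sent periodically.
        from collections import Counter
        from scapy.all import ARP, IP, send, sniff

        DEVICE_IP, DEVICE_MAC = "192.168.1.50", "aa:bb:cc:dd:ee:ff"  # assumed
        GATEWAY_IP = "192.168.1.1"                                   # assumed

        # op=2 is an ARP "is-at" reply: tell the device we are the gateway.
        send(ARP(op=2, pdst=DEVICE_IP, hwdst=DEVICE_MAC, psrc=GATEWAY_IP),
             verbose=False)

        bytes_out = Counter()

        def meter(pkt):
            # Aggregate outbound traffic volume per destination host.
            if IP in pkt and pkt[IP].src == DEVICE_IP:
                bytes_out[pkt[IP].dst] += len(pkt)

        sniff(filter=f"host {DEVICE_IP}", prn=meter, timeout=30)
        for dst, n in bytes_out.most_common(5):
            print(f"{dst}: {n} bytes leaving the home")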

    • Shady Mansour (LMU Munich), Pascal Knierim (Universität Innsbruck), Joseph O’Hagan (University of Glasgow), Florian Alt (University of the Bundeswehr Munich), Florian Mathis (University of Glasgow)

      VR Head-Mounted Displays (HMDs) provide unlimited and personalized virtual workspaces and will enable working anytime and anywhere. However, if HMDs are to become ubiquitous, VR users are at risk of being observed, which can threaten their privacy. We examine six Bystander Awareness Notification Systems (BANS) to enhance VR users’ bystander awareness whilst immersed in VR. In a user study (N=28), we explore how future HMDs equipped with BANS might enable users to maintain their privacy while contributing towards enjoyable and productive travels. Results indicate that BANS increase VR users’ bystander awareness without affecting presence and productivity. Users prefer BANS that extract and present the most details of reality to facilitate their bystander awareness. We conclude by synthesizing four recommendations, such as providing VR users with control over BANS and considering how VR users can best transition between realities, to inform the design of privacy-preserving HMDs.

  • 10:15 - 10:45
    Coffee Break
  • 10:45 - 12:00
    Session 2 (User Attitudes and Behaviors)
    • Yasmeen Abdrabou (University of the Bundeswehr Munich), Elisaveta Karypidou (LMU Munich), Florian Alt (University of the Bundeswehr Munich), Mariam Hassib (University of the Bundeswehr Munich)

      We propose an approach to identify users’ exposure to fake news from their gaze and mouse movement behavior. Our approach is meant to enable interventions that make users aware they are engaging with fake news when they are not consciously aware of it. Our work is motivated by the rapid spread of fake news on the web (in particular, on social media) and the difficulty and effort required to identify fake content, either technically or by means of a human fact checker. To this end, we conducted a remote online study (N = 54) in which participants were exposed to real and fake social media posts while their mouse and gaze movements were recorded. We identify the most predictive gaze and mouse movement features and show that fake news can be predicted with 68.4% accuracy from users’ gaze and mouse movement behavior. We conclude by discussing the implications of using behavioral features for mitigating the spread of fake news on social media.
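
      As a rough illustration of the classification step, the sketch below trains an off-the-shelf classifier on behavioral features; the feature names and synthetic data are stand-ins, and the paper’s actual features and pipeline are not reproduced here.

        # Minimal sketch (not the authors' pipeline): predict fake-news
        # exposure from gaze/mouse features with a standard classifier.
        # Feature columns and data are fabricated for illustration.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        # Assumed columns: fixation_duration, saccade_count, mouse_speed, hover_time
        X = rng.normal(size=(540, 4))
        y = rng.integers(0, 2, size=540)  # 1 = fake post viewed, 0 = real

        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        scores = cross_val_score(clf, X, y, cv=5)
        print(f"mean CV accuracy: {scores.mean():.3f}")  # paper reports 68.4%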

    • Nikolas Pilavakis, Adam Jenkins, Nadin Kökciyan, Kami Vaniea (University of Edinburgh)

      When people identify potentially malicious phishing emails, one option they have is to contact a help desk to report them and receive guidance. While a great deal of effort goes into helping people identify such emails and encouraging users to report them, relatively little is understood about what people say or ask when contacting a help desk about such emails. In this work, we qualitatively analyze a random sample of 270 help desk phishing tickets collected over nine months. We find that when reporting or asking about phishing emails, users often discuss evidence they have observed or gathered, potential impacts they have identified, actions they have or have not taken, and questions they have. Some users also provide clear arguments both about why the email really is phishing and why the organization needs to take action on it.

    • Florian Lachner, Minzhe Yuan Chen Cheng, Theodore Olsauskas-Warren (Google)

      Online behavioral advertising is a double-edged sword. While relevant display ads are generally considered useful, opaque tracking based on third-party cookies has reached unfettered sprawl and is deemed privacy-intrusive. However, existing ways of preserving privacy do not sufficiently balance the needs of both users and the ecosystem. In this work, we evaluate alternative browser controls: we leverage the idea of inferring interests on users’ devices and design novel browser controls to manage these interests. Through a mixed-methods approach, we studied how users feel about this design. First, we conducted pilot interviews with 9 participants to test two design directions. Second, we ran a survey with 2,552 respondents to measure how our final design compares with current cookie settings. Respondents reported a significantly higher level of perceived privacy and feeling of control when introduced to the concept of locally inferred interests with an option for removal.
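
      The concept of locally inferred, user-removable interests can be sketched as a small data structure; everything below (the domain-to-topic map, class, and methods) is hypothetical and only illustrates the control model the study tested, not any browser’s actual design.

        # Illustrative sketch: interests inferred locally from visited
        # domains, with a user-facing removal control. Nothing is uploaded.
        from collections import Counter

        TOPIC_MAP = {"espn.com": "Sports", "allrecipes.com": "Cooking",
                     "webmd.com": "Health"}  # assumed mapping

        class LocalInterests:
            def __init__(self):
                self.counts = Counter()   # stays on the device
                self.removed = set()      # topics the user has deleted

            def observe(self, domain):
                topic = TOPIC_MAP.get(domain)
                if topic and topic not in self.removed:
                    self.counts[topic] += 1

            def remove(self, topic):
                # User control: forget the topic and stop inferring it.
                self.removed.add(topic)
                self.counts.pop(topic, None)

        interests = LocalInterests()
        for d in ["espn.com", "espn.com", "webmd.com"]:
            interests.observe(d)
        interests.remove("Health")
        print(interests.counts)  # Counter({'Sports': 2})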

    • This paper explores how cultural factors impact the password-sharing attitudes and practices of young Bangladeshi adults. We conducted semi-structured interviews with 24 Bangladeshi participants aged between 18 and 39 about how, why, and with whom they share passwords. Using Grounded Theory, we identified three stages of password sharing (motivations, expectations, and problems) and three cultural factors (gender identity, collectivist social norms, and religious identity) that impact password sharing in Bangladesh. We found that password sharing is pervasive and deeply affected by Bangladeshi culture and identity. Young adults’ motivations and expectations for password sharing were complex and nuanced, and were often poorly served by the tools and accounts they were attempting to share. We found that Bangladeshi culture creates a situation in which password sharing is inevitable, but where individuals are inconvenienced and sometimes endangered by the action.

    • Gokul Jayakrishnan, Vijayanand Banahatti, Sachin Lodha (TCS Research, Tata Consultancy Services Ltd.)

      The pandemic changed the global enterprise working model. Work from home became the norm, and so did the associated security risks. The new workspace posed new dangers, such as insecure networks and a lack of organizational supervision at home. Failing to adhere to strict security practices in the comfort of home could result in the leakage of confidential information, so employees’ security awareness plays a major role in this new setting. In this paper, we present ‘Secure Workspace’, a serious game set in a simulated home workspace that we used to gauge enterprise employees’ awareness of secure practices. Our game was well received and played by over 36,000 participants. Based on the participants’ performance, we present insights on their awareness, and an advisory to help reduce the number of security violations while working from home.

  • 12:00 - 13:45
    Lunch
  • 13:45 - 14:45
    Session 3 (Vision Track)
    • Nyteisha Bookert, Mohd Anwar (North Carolina Agricultural and Technical State University)

      Patient-generated health data is growing at an unparalleled rate due to advancing technologies (e.g., the Internet of Medical Things, 5G, artificial intelligence) and increased consumer transactions. The influx of data has offered life-altering solutions, but the growth has also created significant privacy challenges. A central theme in mitigating these risks is promoting transparency and notifying stakeholders of data practices through privacy policies. However, natural language privacy policies have several limitations: they are difficult for users to understand, lengthy, and sometimes carry conflicting requirements. Yet they remain the de facto standard for informing users of privacy practices and of how organizations follow privacy regulations. We developed an automated process to evaluate the appropriateness of combining machine learning and custom named entity recognition techniques for extracting IoMT-relevant privacy factors from the privacy policies of IoMT devices: we employed these techniques to automatically analyze a corpus of policies and specifications and extract privacy-related information for each IoMT device. Based on this natural language analysis of policies, we provide fine-grained annotations that can help reduce the manual and tedious process of policy analysis and aid privacy engineers and policymakers in developing suitable privacy policies.
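
      As an illustration of the custom named entity recognition step, the sketch below uses spaCy’s rule-based EntityRuler; the labels and patterns are illustrative guesses at IoMT-relevant privacy factors, not the paper’s model or annotation scheme.

        # Minimal sketch of rule-based custom NER over privacy-policy text.
        # Labels and patterns are hypothetical examples for illustration.
        import spacy

        nlp = spacy.blank("en")
        ruler = nlp.add_pipe("entity_ruler")
        ruler.add_patterns([
            {"label": "DATA_TYPE", "pattern": "heart rate"},
            {"label": "DATA_TYPE", "pattern": "blood glucose"},
            {"label": "RECIPIENT", "pattern": "third parties"},
            # Token-level pattern: a number followed by "days".
            {"label": "RETENTION", "pattern": [{"LIKE_NUM": True},
                                               {"LOWER": "days"}]},
        ])

        doc = nlp("We may share your heart rate with third parties for 30 days.")
        for ent in doc.ents:
            print(ent.text, ent.label_)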

    • Muhammad Hassan, Mahnoor Jameel, Masooda Bashir (University of Illinois at Urbana-Champaign)

      Social network platforms are now widely used as a mode of communication globally due to their popularity and ease of use. Among the various content-sharing capabilities made available via these applications, link sharing is a common activity among social media users. While this feature provides desired functionality for platform users, link sharing enables attackers to exploit vulnerabilities and compromise users’ devices. Attackers can exploit this content-sharing feature by posting malicious/harmful URLs or deceptive posts and messages intended to hide a dangerous link. However, it is not clear how the most common social media applications monitor and/or filter their users’ sharing of malicious URLs or links through their platforms. To investigate this security vulnerability, we designed an exploratory study to examine the performance of the top five Android social media applications when it comes to malicious link sharing. The aim was to determine if the selected applications had any filtering or defenses against malicious URL sharing. Our results show that most of the selected social media applications did not have an effective defense against the posting and spreading of malicious URLs. While our results are exploratory, we believe our study demonstrates the presence of a vital security vulnerability that malicious attackers or unaware users can use to spread harmful links. In addition, our findings can be used to improve our understanding of link-based attacks as well as the design of security measures that take usability into account.
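
      For context, one way a platform (or a study) might vet a shared link is to query a URL-reputation service. The sketch below uses the public Google Safe Browsing v4 Lookup API; the request shape and test URL follow its public documentation as best understood here, and are not taken from the paper.

        # Hedged sketch: check a URL against the Safe Browsing v4 Lookup API.
        import requests

        API_KEY = "YOUR_API_KEY"  # placeholder
        ENDPOINT = ("https://safebrowsing.googleapis.com/v4/"
                    f"threatMatches:find?key={API_KEY}")

        def is_flagged(url: str) -> bool:
            body = {
                "client": {"clientId": "demo", "clientVersion": "0.1"},
                "threatInfo": {
                    "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING"],
                    "platformTypes": ["ANY_PLATFORM"],
                    "threatEntryTypes": ["URL"],
                    "threatEntries": [{"url": url}],
                },
            }
            resp = requests.post(ENDPOINT, json=body, timeout=10)
            resp.raise_for_status()
            return bool(resp.json().get("matches"))  # empty body = no match

        # Google's documented test URL for the malware threat type:
        print(is_flagged("http://malware.testing.google.test/testing/malware/"))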

    • Kaustav Bhattacharjee, Aritra Dasgupta (New Jersey Institute of Technology)

      The open data ecosystem is susceptible to vulnerabilities due to disclosure risks. Though datasets are anonymized during release, the prevalence of the release-and-forget model leaves data defenders blind to privacy issues that arise after a dataset’s release. One such issue is the disclosure risk posed by newly released datasets, which may compromise the privacy of the data subjects of earlier anonymous open datasets. In this paper, we first examine some of these pitfalls through examples we observed during a red teaming exercise and then envision other possible vulnerabilities in this context. We also discuss proactive risk monitoring, including developing a collection of highly susceptible open datasets and a visual analytic workflow that empowers data defenders to undertake dynamic risk calibration strategies.
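
      The linkage risk described above is easy to demonstrate: joining a newly released dataset to an “anonymized” one on shared quasi-identifiers can single out individuals. The sketch below uses fabricated data and column names for illustration.

        # Toy linkage attack: merge a new named release with an anonymous
        # dataset on quasi-identifiers (zip, age). All data is made up.
        import pandas as pd

        anon = pd.DataFrame({"zip": ["07102", "07102", "10001"],
                             "age": [34, 51, 34],
                             "diagnosis": ["A", "B", "C"]})  # "anonymous"
        new = pd.DataFrame({"name": ["Alice", "Bob"],
                            "zip": ["07102", "10001"],
                            "age": [34, 34]})                # later release

        linked = new.merge(anon, on=["zip", "age"])
        # A (zip, age) pair unique in the anonymous data re-identifies a subject.
        uniques = anon.groupby(["zip", "age"]).size()
        linked["reidentified"] = [uniques[(z, a)] == 1
                                  for z, a in zip(linked["zip"], linked["age"])]
        print(linked)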

    • Nick Ceccio, Naman Gupta, Majed Almansoori, Rahul Chatterjee (University of Wisconsin-Madison)

      Intimate partner violence (IPV) is a prevalent societal issue that affects many people globally. Unfortunately, abusers rely on technology to spy on their partners. Prior work shows that victims and advocates fail to combat and prevent technology-enabled stalking due to their limited technical background. However, not much is known about why victims and advocates struggle to combat technology-enabled stalking despite the ease of finding resources online. To answer this question, we plan to conduct a mixed-method, between-group study exploring the smartphone usage patterns and internet search behavior of participants helping a friend combat technology-enabled spying. We expect tech-savvy participants to be more effective and time-efficient in finding and disabling stalking methods than non-tech-savvy participants.

  • 15:00 - 15:30
    Coffee Break
  • 15:30 - 16:30
    Session 4 (Privacy and Security Tools)
    • Jacob Abbott (Indiana University), Jayati Dev (Indiana University), DongInn Kim (Indiana University), Shakthidhar Reddy Gopavaram (Indiana University), Meera Iyer (Indiana University), Shivani Sadam (Indiana University), Shrirang Mare (Western Washington University), Tatiana Ringenberg (Purdue University), Vafa Andalibi (Indiana University), and L. Jean Camp (Indiana University)

      In the last decade, integration of Internet of Things (IoT) ecosystems has increased exponentially, and our understanding of human behavior when interacting with multiple smart devices in an IoT ecosystem must keep pace. To better understand users’ perceptions and use of an in-home IoT ecosystem over time, we implemented an ecosystem in participants’ homes so that we could both test previous findings about individual devices and identify differences that arise in the context of a home with multiple IoT devices. Specifically, we recruited eight participants from separate households who installed identical IoT configurations, and we interviewed each participant over five weeks. We included an Android dashboard to provide device control and data transparency. We detail the semi-structured interviews, comparing user perceptions of which devices are classified as IoT, the perceived sustainability of IoT devices, interactions with and desires for dashboard information, and current notification preferences and mitigation strategies. We discuss the factors participants identified as relevant to their personal experiences with IoT devices and contribute recommendations for dashboard designs and control mechanisms for IoT devices. We note that participants uniformly had a more expansive definition of IoT than that found in much of the previous literature, implying that our understanding of perceptions of in-home IoT may be informed by previous research on security systems, wearables, watches, and phones. We identify where our results reify findings from studies of those devices.

    • Shikun Zhang, Norman Sadeh (Carnegie Mellon University)

      Inspired by earlier academic research, iOS app privacy labels and the recent Google Play data safety labels have been introduced as a way to systematically present users with concise summaries of an app’s data practices. Yet, little research has been conducted to determine how well today’s mobile app privacy labels address people’s actual privacy concerns or questions. We analyze a crowd-sourced corpus of privacy questions collected from mobile app users to determine to what extent these labels actually address users’ privacy concerns and questions. While there are differences between iOS labels and Google Play labels, our results indicate that a substantial percentage of people’s privacy questions are not answered, or are only partially addressed, in today’s labels. Findings from this work not only shed light on the additional fields that would need to be included in mobile app privacy labels but can also help inform refinements to existing labels to better address users’ typical privacy questions.

    • Philipp Markert (Ruhr University Bochum), Andrick Adhikari (University of Denver), Sanchari Das (University of Denver)

      Websites are used regularly in our day-to-day lives, yet research has shown that it is challenging for many users to use them securely, most prominently due to weak passwords through which they access their accounts. At the same time, many services employ lax security measures, making their users even more prone to account compromise, with little to no means of remediating compromised accounts. Remediating a compromised account requires users to complete a series of steps, ideally all provided and explained by the service. However, for U.S.-based websites, prior research has shown that the advice provided by many services is often incomplete. To further understand the underlying issue and its implications, this paper reports on the first transcontinental analysis of account remediation procedures, covering the 50 most popular websites in 30 countries, 6 each in Africa, the Americas, Asia, Europe, and Oceania. The analysis is based on 5 steps websites need to provide advice for: compromise discovery, account recovery, access limitation, service restoration, and prevention. We find that the lack of advice prior work identified for U.S. websites also holds across continents, with the presence of advice ranging from 37% to 77% on average. Additionally, we identified considerable differences when comparing countries and continents, with countries in Africa and Oceania significantly more affected by the lack of advice. To address this, we suggest providing publicly available and easy-to-follow remediation advice for users, and guidance for website providers so they can provide all the necessary information.
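
      A hedged sketch of how the five-step coding might be tallied appears below; the websites and codings are fabricated, and only the five step names come from the paper.

        # Toy tally of per-site advice coverage across the paper's 5 steps.
        STEPS = ["discovery", "recovery", "limitation",
                 "restoration", "prevention"]

        # website -> steps for which advice was found (fabricated examples)
        coded = {
            "example-bank.com": {"recovery", "prevention"},
            "example-shop.com": {"discovery", "recovery", "restoration"},
            "example-mail.com": set(STEPS),
        }

        for site, found in coded.items():
            presence = 100 * len(found) / len(STEPS)
            print(f"{site}: advice present for {presence:.0f}% of steps")

        avg = 100 * sum(len(f) for f in coded.values()) / (len(STEPS) * len(coded))
        print(f"average presence: {avg:.0f}%")  # paper: 37%-77% across countries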

    • Jens Christian Dalgaard, Niek A. Janssen, Oksana Kulyk, Carsten Schürmann (IT University of Copenhagen)

      Cybersecurity concerns are growing across different sectors globally, yet security education remains a challenge. Many current proposals suffer from drawbacks such as failing to engage users or to provide them with actionable guidelines on how to protect their security assets in practice. In this work, we propose an approach to designing security trainings from an adversarial perspective, in which the audience learns about the specific methods attackers can use to break into IT systems. We design a platform based on our proposed approach and evaluate it in an empirical study (N = 34), showing promising results in terms of motivating users to follow security policies.