Ruizhe Wang (University of Waterloo), Roberta De Viti (MPI-SWS), Aarushi Dubey (University of Washington), Elissa Redmiles (Georgetown University)

The voluntary donation of private health information for altruistic purposes, such as supporting research advancements, is a common practice. However, concerns about data misuse and leakage may deter people from donating their information. Privacy Enhancing Technologies (PETs) aim to alleviate these concerns and, in turn, enable safe and private data sharing. This study conducts a vignette survey (N = 494) with participants recruited from Prolific to examine the willingness of US-based people to donate medical data for developing new treatments under four general guarantees offered across PETs: data expiration, anonymization, purpose restriction, and access control. The study explores two mechanisms for verifying these guarantees (self-auditing and expert auditing) and controls for confounds, including demographics and the type of data collector (for-profit vs. non-profit institutions).

Our findings reveal that respondents hold such high a priori expectations of privacy from non-profit entities that explicitly outlining privacy protections has little impact on their overall perceptions. In contrast, offering privacy guarantees elevates respondents' expectations of privacy for for-profit entities, bringing them nearly in line with those for non-profit organizations. Further, while the technical community has suggested audits as a mechanism to increase trust in PET guarantees, we observe limited effect from transparency about such audits. We emphasize the risks associated with these findings and underscore the critical need for future interdisciplinary research to bridge the gap between the technical community's and end-users' perceptions of the effectiveness of auditing PETs.
