Ece Gumusel (University of Illinois Urbana-Champaign), Yueru Yan (Indiana University Bloomington), Ege Otenen (Indiana University Bloomington)

Interacting with Large Language Model (LLM) chatbots exposes users to new security and privacy challenges, yet little is known about how people perceive and manage these risks. While prior research has largely examined technical vulnerabilities, users’ perceptions of privacy—particularly in the United States, where regulatory protections are limited—remain underexplored. In this study, we surveyed 267 U.S.-based LLM users to understand their privacy perceptions, practices, and data-sharing preferences, and how demographics and prior LLM experience shape these behaviors. Results show low awareness of privacy policies, moderate concern over data handling, and reluctance to share sensitive information such as Social Security or credit card numbers. Usage frequency and prior experience strongly influence comfort and control behaviors, while demographic factors shape disclosure patterns for certain types of personal data. These findings reveal privacy behaviors that diverge from traditional online practices and uncover nuanced trade-offs that could introduce security risks in LLM interactions. Building on these lessons, we provide actionable guidance for reducing user-related vulnerabilities and shaping effective policy and governance.
