Ece Gumusel (University of Illinois Urbana-Champaign), Yueru Yan (Indiana University Bloomington), Ege Otenen (Indiana University Bloomington)
Interacting with Large Language Model (LLM) chatbots exposes users to new security and privacy challenges, yet little is known about how people perceive and manage these risks. While prior research has largely examined technical vulnerabilities, users' perceptions of privacy remain underexplored, particularly in the United States, where regulatory protections are limited. In this study, we surveyed 267 U.S.-based LLM users to understand their privacy perceptions, practices, and data-sharing preferences, and how demographics and prior LLM experience shape these behaviors. Results show low awareness of privacy policies, moderate concern over data handling, and reluctance to share sensitive information such as Social Security or credit card numbers. Usage frequency and prior experience strongly influence comfort and control behaviors, while demographic factors shape the disclosure of certain types of personal data. These findings reveal privacy behaviors that diverge from traditional online practices and uncover nuanced trade-offs that could introduce security risks in LLM interactions. Drawing on these findings, we provide actionable guidance for reducing user-related vulnerabilities and shaping effective policy and governance.