Hai Huang (Tsinghua University), Jiaming Mu (Tsinghua University), Neil Zhenqiang Gong (Duke University), Qi Li (Tsinghua University), Bin Liu (West Virginia University), Mingwei Xu (Tsinghua University)

Recommender systems play a crucial role in helping users find information of interest in various web services such as Amazon, YouTube, and Google News. A variety of recommender systems, ranging from neighborhood-based, association-rule-based, and matrix-factorization-based to deep learning based ones, have been developed and deployed in industry. Among them, deep learning based recommender systems have become increasingly popular due to their superior performance.

In this work, we conduct the first systematic study on data poisoning attacks to deep learning based recommender systems. An attacker's goal is to manipulate a recommender system such that attacker-chosen target items are recommended to many users. To achieve this goal, our attack injects fake users with carefully crafted ratings into a recommender system. Specifically, we formulate our attack as an optimization problem: the injected ratings should maximize the number of normal users to whom the target items are recommended. However, solving this optimization problem is challenging because it is a non-convex integer program. To address the challenge, we develop multiple techniques to solve the optimization problem approximately. Our experimental results on three real-world datasets, including both small and large ones, show that our attack is effective and outperforms existing attacks. Moreover, we attempt to detect fake users via statistical analysis of the rating patterns of normal and fake users. Our results show that our attack remains effective and outperforms existing attacks even when such a detector is deployed.
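To make the formulation concrete, below is a minimal PyTorch sketch of the general relaxation idea, not the paper's actual algorithm: the fake users' integer ratings are relaxed to continuous variables, a small matrix-factorization surrogate (an assumption here; the paper targets deep learning based recommenders) is trained inside the autodiff graph via a few unrolled gradient steps, and the fake ratings are updated by gradient descent on a hinge loss that pushes the target item into normal users' top recommendations, then rounded back to valid integer ratings. All dimensions, learning rates, and loss choices are illustrative placeholders.

```python
# Hypothetical sketch of poisoning-as-optimization; not the paper's code.
import torch

torch.manual_seed(0)
n_users, n_items, n_fake, dim = 100, 50, 5, 16
target = 7                      # attacker-chosen target item
r_max = 5.0                     # maximum rating value

# Placeholder "normal" ratings; 0 means unrated.
R_norm = (torch.rand(n_users, n_items) * (r_max + 1)).floor().clamp(max=r_max)
mask_norm = (R_norm > 0).float()

# Continuous relaxation of the fake users' integer ratings; the integer
# constraint is restored by rounding after optimization.
R_fake = torch.full((n_fake, n_items), r_max / 2).requires_grad_(True)

# Fixed random initialization for the surrogate's latent factors.
U0 = torch.randn(n_users + n_fake, dim) * 0.1
V0 = torch.randn(n_items, dim) * 0.1

def mf_loss(U, V, R, mask):
    # Masked squared error of a matrix-factorization surrogate.
    return (mask * (U @ V.T - R) ** 2).sum() / mask.sum()

outer = torch.optim.Adam([R_fake], lr=0.2)
for it in range(20):
    outer.zero_grad()
    R_f = R_fake.clamp(0, r_max)
    R_all = torch.cat([R_norm, R_f])
    mask_all = torch.cat([mask_norm, torch.ones_like(R_f)])

    # Unroll a few inner training steps with create_graph=True so that
    # gradients flow from the trained surrogate back to the fake ratings.
    Uc = U0.clone().requires_grad_(True)
    Vc = V0.clone().requires_grad_(True)
    for _ in range(30):
        inner = mf_loss(Uc, Vc, R_all, mask_all)
        gU, gV = torch.autograd.grad(inner, (Uc, Vc), create_graph=True)
        Uc, Vc = Uc - 0.5 * gU, Vc - 0.5 * gV

    # Attack loss: hinge that pushes the target item's predicted score
    # above each normal user's current best-scoring item.
    scores = Uc[:n_users] @ Vc.T
    not_target = torch.ones(n_items, dtype=torch.bool)
    not_target[target] = False
    best_other = scores[:, not_target].max(dim=1).values
    atk = torch.relu(best_other - scores[:, target] + 0.1).mean()
    atk.backward()
    outer.step()

# Round back to valid integer ratings before injecting the fake users.
fake_ratings = R_fake.detach().clamp(0, r_max).round()
print(fake_ratings[:, target])   # crafted ratings on the target item
```

In the paper's setting the inner model is a deep neural recommender, and the authors develop their own approximation techniques; the unrolled-gradient trick above is just one generic way to make the bilevel poisoning problem differentiable.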

View More Papers

Short Paper: Declarative Demand-Driven Reverse Engineering

Yihao Sun, Jeffrey Ching, Kristopher Micinski (Department of Electrical Engineering and Computer Science, Syracuse University)

Demo #9: Attacking Multi-Sensor Fusion based Localization in High-Level...

Junjie Shen, Jun Yeon Won, Zeyuan Chen and Qi Alfred Chen (UC Irvine)

Доверя́й, но проверя́й: SFI safety for native-compiled Wasm

Evan Johnson (University of California San Diego), David Thien (University of California San Diego), Yousef Alhessi (University of California San Diego), Shravan Narayan (University Of California San Diego), Fraser Brown (Stanford University), Sorin Lerner (University of California San Diego), Tyler McMullen (Fastly Labs), Stefan Savage (University of California San Diego), Deian Stefan (University of California…

icLibFuzzer: Isolated-context libFuzzer for Improving Fuzzer Comparability

Yu-Chuan Liang, Hsu-Chun Hsiao (National Taiwan University)
