Yang Zhang (CISPA Helmholtz Center for Information Security), Mathias Humbert (armasuisse Science and Technology), Bartlomiej Surma (CISPA Helmholtz Center for Information Security), Praveen Manoharan (CISPA Helmholtz Center for Information Security), Jilles Vreeken (CISPA Helmholtz Center for Information Security), Michael Backes (CISPA Helmholtz Center for Information Security)

Social graphs derived from online social interactions contain a wealth of information that is nowadays extensively used by both industry and academia. However, as social graphs contain sensitive information, they need to be properly anonymized before release. Most existing graph anonymization mechanisms rely on perturbing the original graph's edge set. In this paper, we identify a fundamental weakness of these mechanisms: they neglect the strong structural proximity between friends in social graphs and thus add implausible fake edges during anonymization.
To exploit this weakness, we first propose a metric that quantifies an edge's plausibility by relying on graph embedding. Extensive experiments on three real-life social network datasets demonstrate that our plausibility metric very effectively differentiates fake edges from original edges, with AUC values above 0.95 in most cases. We then rely on a Gaussian mixture model to automatically derive a threshold on the edge plausibility values for deciding whether an edge is fake, which enables us to recover the original graph from the anonymized graph to a large extent. We further demonstrate that our graph recovery attack jeopardizes the privacy guarantees provided by the considered graph anonymization mechanisms.
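To make the attack pipeline concrete, the sketch below follows the same high-level recipe under illustrative assumptions: it embeds the nodes of the anonymized graph with a simple spectral embedding, scores every edge by the cosine similarity of its endpoints' embeddings, and fits a two-component Gaussian mixture to the scores, flagging edges assigned to the lower-mean component as likely fake. The embedding method, similarity measure, and all parameters here are stand-ins, not the paper's exact choices.

# Minimal sketch (illustrative, not the paper's exact method).
# Dependencies: numpy, networkx, scikit-learn.
import numpy as np
import networkx as nx
from sklearn.mixture import GaussianMixture

def spectral_embedding(graph, dim=16):
    # Embed each node with its coordinates in the leading eigenvectors
    # of the adjacency matrix (a simple stand-in for learned embeddings).
    nodes = list(graph.nodes())
    adj = nx.to_numpy_array(graph, nodelist=nodes)
    vals, vecs = np.linalg.eigh(adj)
    top = np.argsort(-np.abs(vals))[:dim]
    return {n: vecs[i, top] for i, n in enumerate(nodes)}

def edge_plausibility(graph, emb):
    # Plausibility of an edge = cosine similarity of its endpoint embeddings.
    scores = {}
    for u, v in graph.edges():
        a, b = emb[u], emb[v]
        scores[(u, v)] = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    return scores

def flag_fake_edges(scores):
    # Fit a two-component Gaussian mixture on the plausibility values and
    # flag edges assigned to the lower-mean component as likely fake.
    edges = list(scores)
    x = np.array([scores[e] for e in edges]).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(x)
    fake_comp = int(np.argmin(gmm.means_.ravel()))
    labels = gmm.predict(x)
    return [e for e, lab in zip(edges, labels) if lab == fake_comp]

An adversary would run such a procedure on a released anonymized graph and delete the flagged edges to approximate the original edge set.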
To mitigate this vulnerability, we propose a method to generate fake yet plausible edges given the graph structure and incorporate it into the existing anonymization mechanisms. Our evaluation demonstrates that the enhanced mechanisms decrease the chances of graph recovery, reduce the success rate of graph de-anonymization by up to 30%, and provide even better utility than the existing anonymization mechanisms.
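As a rough illustration of how plausible fake edges could be selected (again under assumptions, reusing spectral_embedding from the sketch above), one can rank non-adjacent node pairs by their embedding similarity and let the anonymization mechanism draw its fake edges from the top of that ranking instead of picking them arbitrarily:

import itertools
import numpy as np

def plausible_fake_edges(graph, emb, k):
    # Rank all non-adjacent node pairs by the cosine similarity of their
    # embeddings and return the k most plausible candidate fake edges.
    # Exhaustive enumeration is quadratic: fine for a sketch, not for large graphs.
    candidates = []
    for u, v in itertools.combinations(graph.nodes(), 2):
        if graph.has_edge(u, v):
            continue
        a, b = emb[u], emb[v]
        sim = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
        candidates.append((sim, u, v))
    candidates.sort(key=lambda t: -t[0])
    return [(u, v) for _, u, v in candidates[:k]]

The exact candidate-generation and sampling strategy of the enhanced mechanisms is a design choice of the paper; the ranking above only conveys the underlying intuition.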
