Harry Halpin (Nym Technologies)

With the ascendance of artificial intelligence (AI), one of the largest problems facing privacy-enhancing technologies (PETs) is how they can successfully counteract the large-scale surveillance required to collect the data and metadata needed to train AI models. While there has been a flurry of research into the foundations of AI, the field of privacy-enhancing technologies still appears to be a grab bag of techniques without an overarching theoretical foundation. However, we point to a potential unification of AI and PETs via the concepts of signal and noise, as formalized by information-theoretic metrics like entropy. We give an overview of the concept of entropy ("noise") and its applications in both AI and PETs. For example, mixnets can be thought of as noise-generating networks, and so as the inverse of neural networks. We then defend the use of entropy as a metric to compare different PETs with one another, as well as to compare PETs with AI systems.
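To make the entropy metric concrete, here is a minimal sketch of how Shannon entropy is used as an anonymity measure in the mixnet literature: the attacker's uncertainty is the entropy of their probability distribution over possible senders of a message. The distributions below are illustrative assumptions, not data from the paper.

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A uniform distribution over n candidate senders yields log2(n) bits,
# the maximum anonymity a mixnet can provide for an anonymity set of size n.
uniform = [1 / 8] * 8

# If the attacker has partial information, the distribution is skewed
# and the entropy (effective anonymity) drops below log2(n).
skewed = [0.5, 0.25] + [0.25 / 6] * 6

print(shannon_entropy(uniform))  # 3.0 bits for 8 equally likely senders
print(shannon_entropy(skewed))   # ~2.15 bits: partial deanonymization
```

The same quantity appears on the AI side as the entropy of a model's predictive distribution, which is what allows a single information-theoretic scale to compare noise added by a PET against signal extracted by a model.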
