The transformative power of Artificial Intelligence (AI) is reshaping industries, from healthcare and finance to transportation and entertainment, ushering in a new era of innovation and efficiency. However, these rapid advancements raise escalating concerns about AI security, privacy, and safety. In particular, AI systems face critical vulnerabilities and risks, including the leakage of training data, adversarial examples against machine learning models, and backdoor attacks.
Meanwhile, Confidential Computing has seen significant growth in recent years. This technology safeguards data in use by running computation inside an isolated, encrypted environment known as a Trusted Execution Environment (TEE). Such a design is particularly beneficial for AI applications in which the model, training data, or inference tasks contain sensitive or private information.
Following the release of NVIDIA’s H100 this year, deploying ML systems that require GPU support inside TEEs has become practical in real-world settings. How to leverage TEEs to address concerns in AI security, privacy, and safety has emerged as a hot topic; it not only demands in-depth discussion but also represents a promising new research direction. However, there is limited interaction between practitioners and researchers in Confidential Computing and AI security; the two fields currently operate largely in isolation. Through this workshop, we aim to bridge this gap by bringing together practitioners and researchers from both domains. With this call for papers, we hope to advance the use of TEEs in addressing AI security challenges and to explore new issues arising at the convergence of these two fields.