Workshop on the Safety and Explainability of Large Models Optimization and Deployment (MLSafety) 2025
Co-located with NDSS Symposium 2025, San Diego, CA
The widespread application of artificial intelligence technologies across various fields has become a critical driving force for modern societal development, especially deep neural networks and large language models. However, these models, with their large numbers of parameters, are challenging to deploy on devices with limited resources, particularly distributed edge devices. In recent years, researchers have proposed various optimization techniques, such as model distillation, pruning, and compression, aiming to reduce computational resource consumption while preserving system performance. However, the optimization process raises safety concerns. Model compression and simplification may introduce novel vulnerabilities, making the models more susceptible to attacks. While these optimizations reduce computational resource consumption, they can also obscure the decision-making process, complicating the interpretation of model behaviours. Thus, our main goal is to ensure the safety, robustness, and explainability of large models while improving computational efficiency and resource utilization.
To address these challenges, this NDSS Symposium Workshop gathers researchers and experts from areas including, but not limited to, model optimization, safety, and explainability. Through interdisciplinary exchange and collaboration, we aim to find innovative solutions that balance performance optimization, safety assurance, and explainability. We hope this workshop will become an important platform for promoting continued research and development of AI technologies, further advancing work on model optimization with respect to safety and explainability.
Submissions
The call for papers is open until 15 December 2024.
Leadership
Organizing Committee, Technical Program Committee, and Steering Committee.