Call for Papers: Workshop on the Safety and Explainability of Large Models Optimization and Deployment (MLSafety) 2025
The call for papers is now open.
The widespread application of artificial intelligence technologies, especially deep neural networks and large language models, has become a critical driving force for modern societal development. However, the large number of parameters in these models makes them difficult to deploy on resource-constrained devices, particularly distributed edge devices. In recent years, researchers have proposed various optimization techniques, such as model distillation, pruning, and compression, aiming to reduce computational resource consumption while preserving system performance. However, the optimization process raises safety concerns. Model compression and simplification may introduce novel vulnerabilities, making models more susceptible to attacks. While optimization reduces computational resource consumption, it also alters the decision-making process, complicating the interpretation of model behaviours. Thus, our main goal is to ensure the safety, robustness, and explainability of large models while improving computational efficiency and resource utilization.
To address these challenges, our goal is to gather researchers and experts at this NDSS Symposium Workshop from areas including, but not limited to, model optimization, safety, and explainability. We aim to find innovative solutions that balance performance optimization, safety assurance, and explainability through interdisciplinary exchange and collaboration. We hope this workshop will become an important platform for promoting the continued research and development of AI technologies, further advancing research on model optimization with respect to safety and explainability.
Submission Guidelines for Papers
We accept (1) regular papers up to 8 pages, (2) short papers or work-in-progress (WIP) papers up to 5 pages, and (3) poster papers up to 1 page. All submissions should be in the double-column NDSS format, including both references and appendices. Additionally, we welcome Systematization of Knowledge (SoK) papers, which can be up to 12 pages in length, excluding references and clearly marked appendices. Please note that reviewers are not obligated to read the appendices or any supplementary material provided. Authors must adhere to the NDSS format without altering the font size or margins. For regular papers, concise submissions will not be at a disadvantage. As such, we encourage authors to submit papers that reflect the depth and breadth of their research contribution, without undue length.
Papers should be set to US letter size (not A4). Use a two-column layout with each column not exceeding 9.25 in. in height and 3.5 in. in width. The text should be in Times font. Font size should be 10-point or larger and line spacing should be 11-point or larger. Authors are required to use NDSS templates for their submissions.
All submissions must be in Portable Document Format (.pdf). Ensure that any special fonts, images, and figures are correctly rendered. When printed in black and white using Adobe Reader, all components should be clear and legible. All submissions should be anonymized for the review process.
Special Categories: If your paper falls under the Short/WIP/SoK/poster category, please prefix your title with “Short:”, “WIP:”, “SoK:”, or “Poster:” respectively.
The submission portal for papers is: https://sellmod24.hotcrp.com/
All accepted submissions will be presented at the workshop and included in the NDSS workshop proceedings. One author of each accepted paper is required to attend the workshop and present the paper for it to be included in the proceedings.
For any questions, please contact one of the workshop organizers at [email protected].
Areas of Interest
- Techniques and safety evaluations of model distillation
- Applications of adversarial training in model optimization
- Safety benchmarking of model pruning and compression techniques
- Privacy protection and data safety in AI models
- Explainability in model optimization
- Robustness and safety of large models
- Deployment strategies for models in resource-constrained scenarios
- Adversarial attacks and defense mechanisms in large models
- Design and optimization of lightweight models
- Balancing performance and safety in model optimization techniques
- Evaluation methods for ensuring model safety and explainability
- Cross-disciplinary research on model optimization and safety issues
- Model optimization and safety challenges in multimodal data
- Structured pruning and safety of large models
- Explainability-based debugging and optimization of large models
- Explainability and decision transparency of large model compression
Important Dates
- Paper Submission Deadline: 15 December 2024 Anywhere-on-earth (AoE)
- Author Notification: 26 January 2025 Anywhere-on-earth (AoE)
- Camera Ready Deadline: 5 February 2025 Anywhere-on-earth (AoE)
- Workshop Date: 28 February 2025, co-located with NDSS Symposium 2025