Pluralistic Alignment
@ ICML 2026 Workshop
July 10 or 11, 2026.
COEX Convention & Exhibition Center, Seoul, South Korea.
Pluralistic AI: Aligning with the Diversity of Human Values
Welcome to the Pluralistic Alignment Workshop! Aligning AI systems with human preferences and societal values has become a critical challenge as these technologies grow more powerful and pervasive. However, current AI alignment methods have proven insufficient for capturing the full spectrum of complex—and often conflicting—real-world values held across diverse populations. This workshop addresses this gap by examining how to integrate diverse perspectives, values, and expertise into pluralistic AI alignment frameworks. We will explore novel approaches to multi-objective alignment, drawing inspiration from established governance mechanisms and consensus-building practices to navigate the value conflicts inherent in pluralistic societies. The workshop will cover technical innovations in preference elicitation and dataset collection, algorithm development for multi-stakeholder optimization, and the design of human-AI interaction workflows that authentically reflect pluralistic values across diverse communities. By convening researchers, practitioners, and domain experts from AI safety, political philosophy, social science, and human-computer interaction, this workshop aims to foster interdisciplinary collaboration that advances both the theoretical foundations and practical implementation of pluralistic AI alignment.
Stay tuned by following us on Twitter @pluralistic_ai.
Our workshop aims to bring together researchers with diverse scientific backgrounds, including (but not limited to) machine learning, human-computer interaction, philosophy, and policy studies. More broadly, our workshop lies at the intersection of computer and social sciences. We welcome all interested researchers to discuss the aspects of pluralistic AI, from its definition to the technical pipeline to broad deployment and social acceptance.
We invite submissions that discuss the technical, philosophical, and societal aspects of pluralistic AI. We provide a non-exhaustive list of topics we hope to cover below, and we also welcome any submissions broadly relevant to pluralistic alignment.
Submission Instructions
We invite authors to submit anonymized papers of at least 4 pages and up to 8 pages of content, plus unlimited pages of references and appendices. Submissions must follow the ICML 2026 template; checklists are not required. Reviews will be double-blind, with at least three reviewers assigned to each paper to ensure a thorough evaluation process.
We welcome various types of papers, including works in progress, position papers, policy papers, and academic papers. All accepted papers will be available on the workshop website but are considered non-archival.
All deadlines are 11:59 pm UTC-12 (“Anywhere on Earth”). (T) indicates a tentative date; we will remove this label and update the dates once they are finalized.
| Date | Milestone |
|---|---|
| March 22, 2026 | Call for Workshop Papers |
| May 3, 2026 | Paper Submission Deadline |
| May 31, 2026 (T) | Notification of Acceptance |
| June 10, 2026 (T) | Camera-Ready Version Due |
| July 10 or 11, 2026 | Workshop Date |
Please email pluralistic-alignment@googlegroups.com if you have any questions.