Pluralistic Alignment

@ ICML 2026 Workshop

July 10 or 11, 2026.
COEX Convention & Exhibition Center, Seoul, South Korea.

Pluralistic AI: Aligning with the Diversity of Human Values


Call for Papers

Welcome to the Pluralistic Alignment Workshop! Aligning AI systems with human preferences and societal values has become a critical challenge as these technologies grow more powerful and pervasive. However, current AI alignment methods have proven insufficient for capturing the full spectrum of complex—and often conflicting—real-world values held across diverse populations. This workshop addresses this gap by examining how to integrate diverse perspectives, values, and expertise into pluralistic AI alignment frameworks. We will explore novel approaches to multi-objective alignment, drawing inspiration from established governance mechanisms and consensus-building practices to navigate the value conflicts inherent in pluralistic societies. The workshop will cover technical innovations in preference elicitation and dataset collection, algorithm development for multi-stakeholder optimization, and the design of human-AI interaction workflows that authentically reflect pluralistic values across diverse communities. By convening researchers, practitioners, and domain experts from AI safety, political philosophy, social science, and human-computer interaction, this workshop aims to foster interdisciplinary collaboration that advances both the theoretical foundations and practical implementation of pluralistic AI alignment.

Stay tuned by following us on Twitter @pluralistic_ai.

Our workshop aims to bring together researchers with diverse scientific backgrounds, including (but not limited to) machine learning, human-computer interaction, philosophy, and policy studies. More broadly, our workshop lies at the intersection of the computer and social sciences. We welcome all interested researchers to discuss all aspects of pluralistic AI, from its definition to its technical pipeline to broad deployment and social acceptance.

We invite submissions that discuss the technical, philosophical, and societal aspects of pluralistic AI. We provide a non-exhaustive list of topics we hope to cover below, and we also welcome any submissions broadly relevant to pluralistic alignment.

  • Philosophy:
    • Definitions and frameworks for Pluralistic Alignment
    • Ethical considerations in aligning AI with diverse human values
  • Machine learning:
    • Methods for pluralistic ML training and learning algorithms
    • Methods for handling annotation disagreements
    • Evaluation metrics and datasets suitable for pluralistic AI
  • Human-computer interaction:
    • Designing human-AI interaction that reflects diverse user experiences and values
    • Integrating existing surveys on human values into AI design
    • Navigating privacy challenges in pluralistic AI systems
  • Social sciences:
    • Methods for achieving consensus and different forms of aggregation
    • Assessment and measurement of the social impact of pluralistic AI
    • Dealing with pluralistic AI representing values that are offensive to some cultural groups
  • Policy studies:
    • Policy and laws for the deployment of pluralistic AI
    • Democratic processes for incorporating diverse values into AI systems on a broad scale
  • Applications:
    • Case studies in areas such as hate speech mitigation and public health

Submission Instructions

We invite authors to submit anonymized papers with at least 4 and at most 8 pages of content, plus unlimited pages for references and appendices. Submissions must follow the ICML 2026 template; checklists are not required. Reviews will be double-blind, with at least three reviewers assigned to each paper to ensure a thorough evaluation.

We welcome various types of papers, including works in progress, position papers, policy papers, and academic papers. All accepted papers will be made available on the workshop website but are considered non-archival.

Important Dates

All deadlines are 11:59 pm UTC-12 (“Anywhere on Earth”). (T) indicates a tentative date; we will remove this label and update the dates once they are finalized.

March 22, 2026 Call for Workshop Papers
May 3, 2026 Paper Submission Deadline
May 31, 2026 (T) Notification of Acceptance
June 10, 2026 (T) Camera-Ready Version Due
July 10 or 11, 2026 Workshop Date

Organization

Organizing Committee

JinYeong Bak, Sungkyunkwan University
Yohan Jo, Seoul National University
Ruyuan Wan, Pennsylvania State University
Liwei Jiang, University of Washington
Maarten Sap, CMU LTI
Dongyeop Kang, University of Minnesota
Taylor Sorensen, University of Washington
Kshitish Ghate, University of Washington
Amy Zhang, University of Washington

Contact us

Please email pluralistic-alignment@googlegroups.com if you have any questions.