Re-Align: Workshop on Representational Alignment

The question of what makes a good representation in machine learning can be addressed in several ways: by evaluating downstream behavior, by inspecting internal representations, or by characterizing a system's inductive biases. Each of these methodologies involves measuring the alignment of an artificial intelligence system to a ground-truth system (usually a human or a population of humans) at some level of analysis (be it behavior, internal representation, or something in between). However, despite this shared goal, the machine learning, neuroscience, and cognitive science communities that study alignment between artificial and biological intelligence systems currently lack a shared framework for conveying insights across methodologies and disciplines.
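As one concrete illustration of measuring alignment at the level of internal representations, the sketch below computes linear centered kernel alignment (CKA) between two systems' responses to a shared stimulus set. This is not part of the workshop materials: the array shapes and names are hypothetical, and CKA is only one of several alignment measures discussed in the literature.

```python
# Minimal sketch (hypothetical example, not from the workshop materials):
# linear centered kernel alignment (CKA) between two sets of representations
# of the same stimuli, e.g. a model layer and human neural/behavioral data.
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear CKA between X (n_stimuli x d1) and Y (n_stimuli x d2).

    Rows are the two systems' responses to the same n stimuli. Returns a
    value in [0, 1]; 1 indicates identical representational geometry.
    """
    # Center each feature dimension across stimuli.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # Compare the cross-stimulus similarity structure of the two systems.
    numerator = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    denominator = np.linalg.norm(X.T @ X, ord="fro") * np.linalg.norm(Y.T @ Y, ord="fro")
    return float(numerator / denominator)

# Hypothetical usage: 100 shared stimuli, model activations vs. human data.
rng = np.random.default_rng(0)
model_acts = rng.normal(size=(100, 512))  # hypothetical model representations
human_data = rng.normal(size=(100, 64))   # hypothetical human measurements
print(f"Linear CKA: {linear_cka(model_acts, human_data):.3f}")
```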

This workshop aims to bridge this gap by defining, evaluating, and understanding the implications of representational alignment between biological and artificial systems. We invite researchers across the machine learning, neuroscience, and cognitive science communities to contribute to this discussion through invited talks, contributed papers, and structured discussions that address questions such as:

  • How can we measure representational alignment among biological and artificial intelligence (AI) systems?
  • Can representational alignment tell us whether AI systems use the same strategies as humans to solve tasks?
  • What are the consequences (positive, neutral, and negative) of representational alignment?
  • How does representational alignment connect to behavioral alignment and value alignment, as understood in AI safety, interpretability, and explainability?
  • How can we increase (or decrease) representational alignment of an AI system?
  • How does the degree of representational alignment between two systems impact their ability to compete, cooperate, and communicate?

While the focus of the workshop will generally be on the representational alignment of models with humans, we also welcome submissions regarding representational alignment in other settings (e.g., alignment of models with other models).

To facilitate discussion during the workshop, the organizers have prepared a reference paper highlighting key issues and publications within the topic of representational alignment. A concrete goal of the workshop is to expand this paper with any new insights generated during the workshop. The paper is available on arXiv.


If you are interested in participating, please let us know below!


For more details: https://representational-alignment.github.io/
Name *
Email *
Would you like us to contact you with updates about the workshop? *
Organization *
How would you be interested in participating in this workshop? (Select all that apply) *
Do you plan to attend ICLR 2024? *
Is there anything else you would like to share with us?