Learning Disentangled Representations:
from Perception to Control
NIPS 2017 Workshop
Long Beach Convention Center, CA
December 9, 2017
An important facet of human experience is our ability to break down what we observe and interact with along characteristic lines. Visual scenes consist of separate objects, which may have different poses and identities within their category. In natural language, the syntax and semantics of a sentence can often be separated from one another. In planning and cognition, plans can be broken down into immediate and long-term goals. Inspired by this, much research in deep representation learning has gone into finding disentangled factors of variation. However, this research often lacks a clear definition of what disentangling is, or a clear connection to work in other branches of machine learning, neuroscience, or cognitive science. In this workshop we intend to bring a wide swathe of scientists studying disentangled representations under one roof, to work towards a unified view of the problem of disentangling.
The workshop will address these issues through three focal questions:
- What is disentangling? Are disentangled representations just the same as statistically independent representations, or is there something more? How does disentangling relate to interpretability? Can we define what it means to separate style and content, or is human judgement the final arbiter? Are disentangled representations the same as equivariant representations? (Standard readings of the independence and equivariance notions are sketched after this list.)
- How can disentangled representations be discovered? What is the current state of the art in learning disentangled representations? What are the cognitive and neural underpinnings of disentangled representations in animals and humans? Most work in disentangling has focused on perception, but we will encourage dialogue with researchers in natural language processing and reinforcement learning, as well as with neuroscientists and cognitive scientists.
- Why do we care about disentangling? What are the downstream tasks that can benefit from disentangled representations? Does the downstream task determine which factors are relevant to disentangle? What does disentangling buy us in terms of improved prediction or behavior in intelligent agents?
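As a point of reference for the first set of questions, the two notions being contrasted are usually formalized as follows (a sketch of textbook definitions, not the workshop's position; here $f$ denotes an encoder, $z = f(x)$ the representation, $G$ a group of transformations of the input, and $\rho$ an action of $G$ on the representation space):

$$p(z_1, \dots, z_d) = \prod_{i=1}^{d} p(z_i) \qquad \text{(statistically independent factors)}$$

$$f(g \cdot x) = \rho(g)\, f(x) \quad \text{for all } g \in G \qquad \text{(equivariant representation)}$$

Whether either condition captures what practitioners mean by "disentangled" is precisely one of the questions the workshop aims to discuss.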
Organizers
- Diane Bouchacourt - Oxford / Facebook AI Research
- Emily Denton - New York University
- Tejas Kulkarni - DeepMind
- Honglak Lee - Google / U. Michigan
- Siddharth N - Oxford
- David Pfau - DeepMind
- Josh Tenenbaum - MIT
Invited Speakers
- Yoshua Bengio - UMontreal
- Finale Doshi-Velez - Harvard
- Ahmed Elgammal - Rutgers
- Irina Higgins - DeepMind
- Pushmeet Kohli - DeepMind
- Doina Precup - McGill/DeepMind
- Stefano Soatto - UCLA
- Doris Tsao - Caltech
Schedule
This workshop is co-located with NIPS 2017 and will take place on Saturday, December 9, 2017, in Room 203 of the Long Beach Convention Center.
8:30 - 9:00 Set up Posters & Welcome: Josh Tenenbaum
9:00 - 9:30 Stefano Soatto - Emergence of Invariance and Disentangling in Deep Representations
9:30 - 10:00 Irina Higgins - Unsupervised Disentangling or How to Transfer Skills and Imagine Things
10:00 - 10:30 Finale Doshi-Velez - Counterfactually-Faithful Explanation: An Application for Disentangled Representations
10:30 - 11:00 Poster Session & Break
11:00 - 11:30 Doris Tsao - The Neural Code for Visual Objects
11:30 - 12:15 Poster Spotlights (4 minutes each):
- Chris Burgess - Understanding Disentangling in beta-VAE
- Abhishek Kumar - Variational Inference of Disentangled Latents from Unlabeled Observations
- Sergey Tulyakov - On Disentangling Motion and Content for Video Generation
- Valentin Thomas - Disentangling the independently controllable factors of variation by interacting with the world
- Charlie Nash - The Multi-Entity Variational Autoencoder
- Giambattista Parascandolo - Learning Independent Causal Mechanisms
- Cian Eastwood - A Framework for the Quantitative Evaluation of Disentangled Representations
- Hyunjik Kim - Disentangling by Factorising
12:15 - 14:00 Lunch Break
14:00 - 14:30 Doina Precup - Learning independently controllable features for temporal abstraction
14:30 - 15:00 Pushmeet Kohli - Exploring the different paths to achieving disentangled representations
15:00 - 15:30 Poster Session & Break
15:30 - 16:00 Yoshua Bengio - Priors to help automatically discover and disentangle explanatory factors
16:00 - 16:30 Ahmed Elgammal - Generalized Separation of Style and Content on Manifolds: The role of Homeomorphism
16:30 - 17:00 Final Poster Break
17:00 - 18:00 Panel discussion