On the structure of cyclic linear disentangled representations
Painter, Matthew, Prugel-Bennett, Adam and Hare, Jonathon (2020) On the structure of cyclic linear disentangled representations. NeurIPS2020 workshop on Interpretable Inductive Biases and Physically Structured Learning, Virtual, Vancouver, Canada. 12 Dec 2020. 4 pp.
Record type: Conference or Workshop Item (Paper)
Abstract
Disentanglement has seen much work recently for its interpretable properties and the ease with which it can be induced in the latent representations of variational auto-encoders. As a concept, disentanglement has proven hard to define precisely, with many interpretations leading to different metrics which do not necessarily agree. Higgins et al. [2018] offer a precise definition of a linear disentangled representation which is grounded in the symmetries of the data. In this work we focus on cyclic symmetry structure. We examine how VAE posterior distributions are affected by different observations of the same problem and find that cyclic structure is encouraged even when it is not explicitly observed. We then find that better prior distributions, found via normalising flows, result in faster convergence and lower encoding costs than the standard Gaussian. We also find that linear disentangled representations can be distinguished from standard ones solely through disentanglement metric scores, possibly due to their highly structured posteriors. Finally, we find preliminary evidence that linear disentangled representations offer better data efficiency than standard disentangled representations.
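As an illustration of the symmetry structure the abstract refers to, the following is a minimal sketch, not taken from the paper: the function name, the choice of a two-dimensional subspace, and the group order N are assumptions made purely for illustration. It shows the kind of linear action of a cyclic group C_N on a dedicated latent subspace (a rotation by 2*pi/N) that underlies the linear disentangled representations of Higgins et al. [2018].

    import numpy as np

    def cyclic_action(z, k, N, dims=(0, 1)):
        # Hypothetical sketch: apply the k-th element of the cyclic group C_N
        # to latent vector z by rotating the two coordinates in `dims` through
        # an angle of 2*pi*k/N. All other latent dimensions are left untouched,
        # which is what makes the action disentangled with respect to this
        # symmetry.
        theta = 2.0 * np.pi * k / N
        R = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
        z = z.copy()
        z[list(dims)] = R @ z[list(dims)]
        return z

    # Example: one step of a 4-element cyclic symmetry acting on a 6-D latent code.
    z = np.arange(6, dtype=float)
    z_next = cyclic_action(z, k=1, N=4)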
Text: linear_workshop - Accepted Manuscript
More information
Published date: 15 December 2020
Venue - Dates: NeurIPS2020 workshop on Interpretable Inductive Biases and Physically Structured Learning, Virtual, Vancouver, Canada, 2020-12-12 - 2020-12-12
Keywords: Representation Learning
Identifiers
Local EPrints ID: 452448
URI: http://eprints.soton.ac.uk/id/eprint/452448
PURE UUID: aba93cd0-21a2-4e3a-a079-8f1da6f53a88
Catalogue record
Date deposited: 11 Dec 2021 06:49
Last modified: 17 Mar 2024 03:05
Contributors
Author: Matthew Painter
Author: Adam Prugel-Bennett
Author: Jonathon Hare