University of Southampton Institutional Repository

Linear disentangled representations and unsupervised action estimation

Painter, Matthew
69f9be70-3b73-4c81-99d8-e6ce57d2f1e1
Hare, Jonathon
65ba2cda-eaaf-4767-a325-cd845504e5a9
Prugel-Bennett, Adam
b107a151-1751-4d8b-b8db-2c395ac4e14e

Painter, Matthew, Hare, Jonathon and Prugel-Bennett, Adam (2020) Linear disentangled representations and unsupervised action estimation. Thirty-fourth Conference on Neural Information Processing Systems, 06 - 12 Dec 2020. (In Press)

Record type: Conference or Workshop Item (Paper)

Abstract

Disentangled representation learning has seen a surge in interest over recent times, generally focusing on new models to optimise one of many disparate disentanglement metrics. It was only with Symmetry Based Disentangled Representation Learning that a robust mathematical framework was introduced to define precisely what is meant by a “linear disentangled representation”. This framework determines that such representations would depend on a particular decomposition of the symmetry group acting on the data, showing that actions would manifest through irreducible group representations acting on independent representational subspaces. Caselles-Dupré et al. [2019] subsequently proposed the first model to induce and demonstrate a linear disentangled representation in a VAE model. In this work we empirically show that linear disentangled representations are not present in standard VAE models and that they instead require altering the loss landscape to induce them. We proceed to show that such representations are a desirable property with regard to classical disentanglement metrics. Finally we propose a method to induce irreducible representations which forgoes the need for labelled action sequences, as was required by prior work. We explore a number of properties of this method, including the ability to learn from action sequences without knowledge of intermediate states and robustness under visual noise. We also demonstrate that it can successfully learn 4 different symmetries directly from pixels.
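The structure the abstract describes — each symmetry acting through an irreducible group representation on its own latent subspace, leaving the others fixed — can be illustrated with a minimal numpy sketch. This is not the authors' code; it simply shows, under the common assumption of cyclic symmetries (whose real irreducible representations are 2-D rotations), what "independent representational subspaces" means for a 4-dimensional latent space:

```python
import numpy as np

def rotation(theta):
    """2x2 rotation matrix: a real irreducible representation of a cyclic group element."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def action(z, which, theta):
    """Apply symmetry `which` to latent z: rotate its 2-D subspace, identity elsewhere."""
    z = z.copy()
    s = slice(2 * which, 2 * which + 2)
    z[s] = rotation(theta) @ z[s]
    return z

# Latent space of dimension 4 = two independent 2-D subspaces,
# one per symmetry (e.g. horizontal vs. vertical translation on a torus).
z = np.array([1.0, 0.0, 1.0, 0.0])
z1 = action(z, 0, np.pi / 2)      # first symmetry acts...
assert np.allclose(z1[2:], z[2:]) # ...dims 2-3 (the other subspace) are untouched
```

In a linear disentangled representation, composing the same action twice corresponds to multiplying the matrices, so `action(action(z, 0, t), 0, t)` equals `action(z, 0, 2 * t)` — the group structure of the data's symmetries is mirrored exactly in the latent space.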

Text
representations - Version of Record
Available under License Other.
Download (328kB)

More information

Accepted/In Press date: 6 October 2020
Venue - Dates: Thirty-fourth Conference on Neural Information Processing Systems, 2020-12-06 - 2020-12-12

Identifiers

Local EPrints ID: 444801
URI: http://eprints.soton.ac.uk/id/eprint/444801
PURE UUID: e1740561-a0aa-4c39-88e3-544e79820670
ORCID for Jonathon Hare: orcid.org/0000-0003-2921-4283

Catalogue record

Date deposited: 05 Nov 2020 17:31
Last modified: 18 Feb 2021 17:07

Contributors

Author: Matthew Painter
Author: Jonathon Hare
Author: Adam Prugel-Bennett



