University of Southampton Institutional Repository

Compositing foreground and background using variational autoencoders


Prugel-Bennett, Adam, Zeng, Zezhen and Hare, Jonathon (2022) Compositing foreground and background using variational autoencoders. El Yacoubi, Mounîm, Granger, Eric, Yuen, Pong Chi, Pal, Umapada and Vincent, Nicole (eds.) In Pattern Recognition and Artificial Intelligence: Third International Conference, ICPRAI 2022, Paris, France, June 1–3, 2022, Proceedings, Part I. vol. 13363, Springer Cham, pp. 553-566. (doi:10.1007/978-3-031-09037-0_45).

Record type: Conference or Workshop Item (Paper)

Abstract

We consider the problem of composing images by combining an arbitrary foreground object with some background. To achieve this we use a factorized latent space, introducing a model called the “Background and Foreground VAE” (BFVAE) that can combine an arbitrary foreground and background from an image dataset to generate unseen images. To enhance the quality of the generated images we also propose a mixed VAE-GAN model called the “Latent Space Renderer-GAN” (LSR-GAN), which substantially reduces the blurriness of images generated by BFVAE.
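The record does not include code, but the core idea described in the abstract, a VAE whose latent space is factorized into separate background and foreground parts that can be recombined at generation time, can be illustrated with a short sketch. The PyTorch snippet below is a hypothetical illustration under assumed layer sizes and a 64×64 input resolution; it is not the authors' BFVAE implementation, and the FactorizedVAE and composite names are invented for this example.

    # Minimal sketch (not the authors' code): a VAE with a factorized latent
    # space, where background and foreground latents are produced by separate
    # heads and concatenated before decoding. All sizes are assumptions.
    import torch
    import torch.nn as nn

    class FactorizedVAE(nn.Module):
        def __init__(self, z_bg=16, z_fg=16):
            super().__init__()
            # Shared convolutional trunk for 64x64 RGB images.
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),    # 64 -> 32
                nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),   # 32 -> 16
                nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(),  # 16 -> 8
                nn.Flatten(),
            )
            # Separate heads give the background and foreground posterior
            # parameters (mean and log-variance each).
            self.head_bg = nn.Linear(128 * 8 * 8, 2 * z_bg)
            self.head_fg = nn.Linear(128 * 8 * 8, 2 * z_fg)
            self.decoder = nn.Sequential(
                nn.Linear(z_bg + z_fg, 128 * 8 * 8), nn.ReLU(),
                nn.Unflatten(1, (128, 8, 8)),
                nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),
                nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
                nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Sigmoid(),
            )

        def reparameterize(self, mu, logvar):
            std = torch.exp(0.5 * logvar)
            return mu + std * torch.randn_like(std)

        def forward(self, x):
            h = self.encoder(x)
            mu_bg, logvar_bg = self.head_bg(h).chunk(2, dim=1)
            mu_fg, logvar_fg = self.head_fg(h).chunk(2, dim=1)
            z_bg = self.reparameterize(mu_bg, logvar_bg)
            z_fg = self.reparameterize(mu_fg, logvar_fg)
            recon = self.decoder(torch.cat([z_bg, z_fg], dim=1))
            return recon, (mu_bg, logvar_bg), (mu_fg, logvar_fg)

    def composite(model, x_bg_source, x_fg_source):
        """Compose an unseen image: background latent from one image,
        foreground latent from another (posterior means, no sampling)."""
        with torch.no_grad():
            z_bg, _ = model.head_bg(model.encoder(x_bg_source)).chunk(2, dim=1)
            z_fg, _ = model.head_fg(model.encoder(x_fg_source)).chunk(2, dim=1)
            return model.decoder(torch.cat([z_bg, z_fg], dim=1))

In this reading, compositing amounts to swapping latents between two encoded images before decoding; the LSR-GAN stage mentioned in the abstract would act as an additional adversarially trained renderer over such latents to reduce blurriness, and is not sketched here.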

Text
sub_32 - Accepted Manuscript (1 MB)

More information

Accepted/In Press date: 15 March 2022
Published date: 2 June 2022
Additional Information: Publisher Copyright: © 2022, Springer Nature Switzerland AG.
Keywords: Disentanglement, Representation learning, VAE

Identifiers

Local EPrints ID: 468256
URI: http://eprints.soton.ac.uk/id/eprint/468256
ISSN: 0302-9743
PURE UUID: fa9ded67-5f4b-45e8-b029-2360bc82c650
ORCID for Jonathon Hare: orcid.org/0000-0003-2921-4283

Catalogue record

Date deposited: 09 Aug 2022 16:33
Last modified: 17 Mar 2024 07:24


Contributors

Author: Adam Prugel-Bennett
Author: Zezhen Zeng
Author: Jonathon Hare
Editor: Mounîm El Yacoubi
Editor: Eric Granger
Editor: Pong Chi Yuen
Editor: Umapada Pal
Editor: Nicole Vincent



