University of Southampton Institutional Repository

An image-computable model of human visual shape similarity

Morgenstern, Yaniv, Hartmann, Frieder, Schmidt, Filipp, Tiedemann, Henning, Prokott, Eugen, Maiello, Guido and Fleming, Roland W. (2021) An image-computable model of human visual shape similarity. PLoS Computational Biology, 17 (6), 1-34, [1008981]. (doi:10.1371/journal.pcbi.1008981).

Record type: Article

Abstract

Shape is a defining feature of objects, and human observers can effortlessly compare shapes to determine how similar they are. Yet, to date, no image-computable model can predict how visually similar or different shapes appear. Such a model would be an invaluable tool for neuroscientists and could provide insights into computations underlying human shape perception. To address this need, we developed a model (‘ShapeComp’), based on over 100 shape features (e.g., area, compactness, Fourier descriptors). When trained to capture the variance in a database of >25,000 animal silhouettes, ShapeComp accurately predicts human shape similarity judgments between pairs of shapes without fitting any parameters to human data. To test the model, we created carefully selected arrays of complex novel shapes using a Generative Adversarial Network trained on the animal silhouettes, which we presented to observers in a wide range of tasks. Our findings show that incorporating multiple ShapeComp dimensions facilitates the prediction of human shape similarity across a small number of shapes, and also captures much of the variance in the multiple arrangements of many shapes. ShapeComp outperforms both conventional pixel-based metrics and state-of-the-art convolutional neural networks, and can also be used to generate perceptually uniform stimulus sets, making it a powerful tool for investigating shape and object representations in the human brain.
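The abstract describes the approach only at a high level. As a purely illustrative sketch (not the authors' implementation), the Python snippet below computes a few classic contour descriptors of the kind the abstract mentions (area, compactness, Fourier descriptors) and compares two silhouettes by the distance between their feature vectors. All function names, the choice of descriptors, and the unweighted distance are hypothetical; the published model combines over 100 features and normalizes them before comparison.

import numpy as np

def shape_descriptors(contour, n_fourier=8):
    """Compute a small set of illustrative shape descriptors from a closed contour.

    `contour` is an (N, 2) array of (x, y) boundary points ordered along the
    outline. Area, compactness, and Fourier amplitudes are examples of the kind
    of features ShapeComp combines; this sketch is not the published feature set.
    """
    x, y = contour[:, 0], contour[:, 1]

    # Polygon area via the shoelace formula.
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

    # Perimeter: summed length of boundary segments, closing the contour.
    closed = np.vstack([contour, contour[:1]])
    perimeter = np.sum(np.hypot(*np.diff(closed, axis=0).T))

    # Compactness: equals 1 for a circle, smaller for elongated or irregular shapes.
    compactness = 4.0 * np.pi * area / perimeter**2

    # Fourier descriptors: amplitudes of low-frequency components of the complex
    # boundary, normalized to be invariant to translation and scale.
    z = (x + 1j * y) - np.mean(x + 1j * y)
    coeffs = np.fft.fft(z)
    fourier = np.abs(coeffs[1:n_fourier + 1]) / (np.abs(coeffs[1]) + 1e-12)

    # In practice the features would be normalized (e.g., z-scored across a shape
    # database) before being combined; here they are simply concatenated.
    return np.concatenate([[area, compactness], fourier])

def shape_distance(contour_a, contour_b):
    """Euclidean distance between two shapes' feature vectors -- a stand-in for a
    model similarity metric that would weight many more features."""
    return np.linalg.norm(shape_descriptors(contour_a) - shape_descriptors(contour_b))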

Text
journal.pcbi.1008981 - Version of Record
Available under License Creative Commons Attribution.
Download (4MB)

More information

Accepted/In Press date: 19 April 2021
e-pub ahead of print date: 1 June 2021
Additional Information: Funding Information: This research was funded by the DFG funded Collaborative Research Center “Cardinal Mechanisms of Perception” (222641018–SFB/TRR 135 TP C1) and the ERC Consolidator award “SHAPE” (ERC-CoG-2015-682859). G.M. was supported by a Marie-Skłodowska-Curie Actions Individual Fellowship (H2020-MSCA-IF-2017: ‘VisualGrasping’ Project ID: 793660). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Publisher Copyright: © 2021 Morgenstern et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Identifiers

Local EPrints ID: 484858
URI: http://eprints.soton.ac.uk/id/eprint/484858
ISSN: 1553-734X
PURE UUID: 8b7d934b-b6a5-4f0b-b164-bb921e382554
ORCID for Guido Maiello: orcid.org/0000-0001-6625-2583

Catalogue record

Date deposited: 23 Nov 2023 17:45
Last modified: 18 Mar 2024 04:11

Contributors

Author: Yaniv Morgenstern
Author: Frieder Hartmann
Author: Filipp Schmidt
Author: Henning Tiedemann
Author: Eugen Prokott
Author: Guido Maiello
Author: Roland W. Fleming
