University of Southampton Institutional Repository

A space-variant model for motion interpretation across the visual field

Chessa, Manuela, Maiello, Guido, Bex, Peter J. and Solari, Fabio (2016) A space-variant model for motion interpretation across the visual field. Journal of Vision, 16 (2), 1-24. (doi:10.1167/16.2.12).

Record type: Article

Abstract

We implement a neural model for the estimation of the focus of radial motion (FRM) at different retinal locations and assess the model by comparing its performance with the precision with which human observers estimate the FRM in naturalistic motion stimuli. The model describes the deep hierarchy of the first stages of the dorsal visual pathway and is space variant, since it takes into account the retino-cortical transformation of the primate visual system through log-polar mapping. The log-polar transform of the retinal image is the input to the cortical motion-estimation stage, where optic flow is computed by a three-layer neural population. The sensitivity to complex motion patterns that has been found in area MST is modeled through a population of adaptive templates. The first-order description of cortical optic flow is derived from the responses of the adaptive templates. Information about self-motion (e.g., direction of heading) is estimated by combining the first-order descriptors computed in the cortical domain. The model's performance at FRM estimation as a function of retinal eccentricity neatly maps onto data from human observers. By employing equivalent-noise analysis we observe that the loss in FRM accuracy for both model and human observers is attributable to a decrease in the efficiency with which motion information is pooled with increasing retinal eccentricity. The decrease in sampling efficiency is in turn attributable to receptive-field sizes that increase with retinal eccentricity, driven by the lossy log-polar mapping that projects the retinal image onto primary visual areas. We further show that the model is able to estimate the direction of heading in real-world scenes, thus validating the model's potential application to neuromimetic robotic architectures. More broadly, we provide a framework in which to model complex motion integration across the visual field in real-world scenes.
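The retino-cortical transformation at the heart of the model is the log-polar mapping, which samples the retinal image on rings whose radii grow exponentially with eccentricity. The following minimal NumPy sketch illustrates the idea only; it is not the authors' implementation, and the grid parameters (n_rings, n_wedges, rho0) are illustrative assumptions.

import numpy as np

def logpolar_map(image, n_rings=64, n_wedges=128, rho0=1.0):
    """Resample a square retinal image into log-polar (cortical) coordinates.

    Output rows index log-eccentricity (rings), columns index polar angle
    (wedges). rho0 sets the smallest mapped eccentricity; all parameters
    here are illustrative assumptions, not values from the paper.
    """
    h, w = image.shape
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0      # fixation at image center
    rho_max = min(cx, cy)                      # largest mapped eccentricity
    # Log-spaced eccentricities, uniformly spaced polar angles
    rho = np.exp(np.linspace(np.log(rho0), np.log(rho_max), n_rings))
    theta = np.linspace(0.0, 2.0 * np.pi, n_wedges, endpoint=False)
    R, T = np.meshgrid(rho, theta, indexing="ij")
    # Back-project each cortical sample to its retinal location
    x = np.clip(np.round(cx + R * np.cos(T)).astype(int), 0, w - 1)
    y = np.clip(np.round(cy + R * np.sin(T)).astype(int), 0, h - 1)
    return image[y, x]                         # nearest-neighbor sampling

# Usage: map a synthetic 256x256 retinal image onto a 64x128 cortical grid
retina = np.random.rand(256, 256)
cortex = logpolar_map(retina)
print(cortex.shape)  # (64, 128)

Because each output row covers an exponentially wider retinal annulus, receptive fields effectively grow with eccentricity, which is the property the abstract links to the eccentricity-dependent loss in sampling efficiency.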

This record has no associated files available for download.

More information

Published date: 31 August 2016
Keywords: Dead-leaves stimuli, Equivalent-noise analysis, Focus-of-radial-motion estimation, Space-variant processing, V1-MT-MST neural model
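For readers unfamiliar with the equivalent-noise analysis listed among the keywords: in the standard linear-amplifier formulation (stated here for reference; the paper's exact parameterization may differ), observed thresholds decompose as

\sigma_{\mathrm{obs}}^{2} = \frac{\sigma_{\mathrm{int}}^{2} + \sigma_{\mathrm{ext}}^{2}}{n_{\mathrm{samp}}}

where \sigma_{\mathrm{obs}} is the observer's (or model's) discrimination threshold, \sigma_{\mathrm{int}} the internal equivalent noise, \sigma_{\mathrm{ext}} the externally added stimulus noise, and n_{\mathrm{samp}} the sampling efficiency (the number of effectively pooled samples). In these terms, the abstract attributes the loss in FRM accuracy at larger eccentricities to a drop in n_{\mathrm{samp}} rather than a rise in \sigma_{\mathrm{int}}.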

Identifiers

Local EPrints ID: 485062
URI: http://eprints.soton.ac.uk/id/eprint/485062
ISSN: 1534-7362
PURE UUID: 1757b3f3-4f24-4f46-9f4e-b99d2891faa4
ORCID for Guido Maiello: orcid.org/0000-0001-6625-2583

Catalogue record

Date deposited: 28 Nov 2023 18:05
Last modified: 18 Mar 2024 04:11

Contributors

Author: Manuela Chessa
Author: Guido Maiello
Author: Peter J. Bex
Author: Fabio Solari
