Combining cues and recalibrating priors for accurate perception
Kerrigan, Iona
September 2012
Adams, Wendy J.
Graf, Erich W.
Kerrigan, Iona (2012) Combining cues and recalibrating priors for accurate perception. University of Southampton, Psychology, Doctoral Thesis, 142pp.
Record type: Thesis (Doctoral)
Abstract
Our senses allow us to identify objects, materials and events in the world around us, enabling us to interact effectively with our surroundings. However, perception is inherently ambiguous, in that each set of sensory data could have resulted from an infinite number of world states. To find statistically optimal solutions, the brain uses the available sensory data together with its prior knowledge or experience. In addition, when multiple cues are available they may be: (i) combined to improve precision through noise reduction; and/or (ii) recalibrated to improve accuracy through bias reduction. This thesis investigates cue combination, learning and recalibration through a series of four studies, using the Bayesian framework to model cues and their interactions. The first study finds that haptic cues to material properties are combined with visual cues to affect estimates of object gloss. It also investigates how the binocular disparity of specular highlights affects gloss estimates. This is extended in the second study, which finds that the human visual system does not employ a full geometric model of specular highlight disparity when making shape and gloss estimates. The third study replicates previous findings that auditory and visual cues to temporal events are optimally combined in adults, and extends them by demonstrating that children also combine these cues optimally. Both adults' and children's bimodal percepts are shown to be well predicted by a 'coupling prior' model of optimal partial cue combination. The fourth study finds that the visual system can learn and invoke two context-specific priors for illumination direction, using haptic shape cues to provide calibratory feedback during training. It also demonstrates that colour can be learnt as a contextual cue. The results of all these studies are considered in the context of existing work, and ideas for future research are discussed.
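For reference, the 'optimal combination' described above follows the standard maximum-likelihood cue-combination rule, in which each cue is weighted by its reliability (inverse variance). The following is an illustrative sketch only, using generic symbols not taken from the thesis: for two unbiased single-cue estimates \(\hat{s}_1\) and \(\hat{s}_2\) with noise variances \(\sigma_1^2\) and \(\sigma_2^2\), the combined estimate and its variance are

\[
\hat{s}_{12} = w_1 \hat{s}_1 + w_2 \hat{s}_2, \qquad
w_i = \frac{1/\sigma_i^2}{1/\sigma_1^2 + 1/\sigma_2^2}, \qquad
\sigma_{12}^2 = \frac{\sigma_1^2 \sigma_2^2}{\sigma_1^2 + \sigma_2^2} \le \min(\sigma_1^2, \sigma_2^2).
\]

The reduced variance of the combined estimate is the 'improved precision through noise reduction' referred to above; in general terms, the coupling-prior model relaxes the assumption of complete fusion by placing a prior over the discrepancy between the two cues, so that combination can be partial rather than mandatory.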
Text: IonaKerriganThesisCorrectedFinalElectronicVersion.pdf - Other
More information
Published date: September 2012
Organisations: University of Southampton, Psychology
Identifiers
Local EPrints ID: 360202
URI: http://eprints.soton.ac.uk/id/eprint/360202
PURE UUID: d751ebfa-0484-43c8-b82a-ffe4726eb3bd
Catalogue record
Date deposited: 06 Jan 2014 11:55
Last modified: 15 Mar 2024 03:19
Contributors
Author: Iona Kerrigan