University of Southampton Institutional Repository

Visually adaptive virtual sound imaging using loudspeakers


Mannerheim, P.V.H. (2008) Visually adaptive virtual sound imaging using loudspeakers. University of Southampton, Institute of Sound and Vibration Research, Doctoral Thesis, 262pp.

Record type: Thesis (Doctoral)

Abstract

Advances in computer technology and low-cost cameras open up new possibilities for three-dimensional (3D) sound reproduction. The problem is to update the audio signal processing scheme for a moving listener, so that the listener perceives only the intended virtual sound image. The performance of the audio signal processing scheme is limited by the condition number of the associated inversion problem. The condition number as a function of frequency is examined for different listener positions and rotations using an analytical model. The resulting size of the "operational area" with listener head tracking is illustrated for different loudspeaker configuration geometries, together with related cross-over design techniques. An objective evaluation of cross-talk cancellation effectiveness is presented for different filter lengths and for asymmetric and symmetric listener positions. The benefit of using an adaptive system rather than a static system is also illustrated. The measurement of arguably the most comprehensive KEMAR database of head-related transfer functions yet available is presented. A complete database of head-related transfer functions measured without the pinna is also presented, providing a starting point for future modelling of pinna responses. The update of the audio signal processing scheme is initiated by a visual tracking system that performs head tracking without requiring the listener to wear any sensors. The problem of updating the filters without any audible change is solved by using either a very fine mesh of inverse filters or filter commutation techniques.
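
As an illustration of the kind of analytical model referred to above, the following Python sketch computes the condition number of a 2x2 free-field plant matrix (two loudspeakers to the listener's two ears) as a function of frequency. The geometry (a 5-degree loudspeaker span at 1 m, 0.18 m ear spacing) and the point-source model are illustrative assumptions, not values taken from the thesis.

import numpy as np

C_SOUND = 343.0  # speed of sound (m/s)

def plant_matrix(f, speaker_pos, ear_pos):
    """Free-field point-source transfer matrix C[m, n] from speaker n to ear m."""
    k = 2.0 * np.pi * f / C_SOUND  # wavenumber
    C = np.zeros((len(ear_pos), len(speaker_pos)), dtype=complex)
    for m, e in enumerate(ear_pos):
        for n, s in enumerate(speaker_pos):
            r = np.linalg.norm(np.asarray(e) - np.asarray(s))  # speaker-to-ear distance
            C[m, n] = np.exp(-1j * k * r) / (4.0 * np.pi * r)  # monopole Green's function
    return C

# Illustrative geometry: closely spaced loudspeakers (+/- 5 degrees at 1 m)
# and a centred listener with 0.18 m ear spacing.
span = np.radians(5.0)
speakers = [(-np.sin(span), 1.0), (np.sin(span), 1.0)]
ears = [(-0.09, 0.0), (0.09, 0.0)]

freqs = np.linspace(100.0, 10000.0, 200)
kappa = [np.linalg.cond(plant_matrix(f, speakers, ears)) for f in freqs]

for f, c in zip(freqs[::40], kappa[::40]):
    print(f"{f:7.0f} Hz   condition number ~ {c:8.1f}")

A large condition number at a given frequency indicates that the inversion, and hence the cross-talk cancellation, is ill-conditioned there, which is why the loudspeaker span and the listener's position and rotation matter.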
The filter update techniques are evaluated with subjective experiments and have proven effective both in an anechoic chamber and in a listening room, which supports the implementation of virtual sound imaging systems under realistic conditions. A visually adaptive virtual sound imaging system is then designed and implemented, and the system is evaluated with respect to filter update rates and cross-talk cancellation effectiveness.
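
The filter commutation idea mentioned above can be sketched as a cross-fade between the outputs of the outgoing and incoming inverse filters when the tracker reports a new head position. The function name, ramp length, and test signals below are hypothetical illustrations under that assumption, not the thesis implementation.

import numpy as np
from scipy.signal import lfilter

def commutate(x, h_old, h_new, ramp_len=512):
    """Cross-fade from the output of FIR filter h_old to that of h_new."""
    y_old = lfilter(h_old, [1.0], x)
    y_new = lfilter(h_new, [1.0], x)
    fade = np.clip(np.arange(len(x)) / float(ramp_len), 0.0, 1.0)  # 0 -> 1 ramp
    return (1.0 - fade) * y_old + fade * y_new

# Usage: switch between two arbitrary FIR filters on a noise signal.
rng = np.random.default_rng(0)
x = rng.standard_normal(4096)
h_old = rng.standard_normal(256) * 0.01
h_new = rng.standard_normal(256) * 0.01
y = commutate(x, h_old, h_new)

Because both filter outputs are blended over a short ramp rather than switched abruptly, the coefficient change does not produce a discontinuity in the loudspeaker signals, which is the audible-artefact problem the update techniques address.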

Text: P2435.pdf (7MB)

More information

Published date: February 2008
Organisations: University of Southampton

Identifiers

Local EPrints ID: 157423
URI: http://eprints.soton.ac.uk/id/eprint/157423
PURE UUID: 6eb3ea0c-f1dc-412f-bb62-586cdae1e01a
ORCID for P.A. Nelson: orcid.org/0000-0002-9563-3235

Catalogue record

Date deposited: 07 Jun 2010 14:04
Last modified: 14 Mar 2024 02:32

Contributors

Author: P.V.H. Mannerheim
Thesis advisor: P.A. Nelson

