Analyzing auditory representations for sound classification with self-organizing neural networks


Spevak, Christian and Polfreman, Richard (2000) Analyzing auditory representations for sound classification with self-organizing neural networks. In Rochesso and Signoretto (eds.) Proceedings of the COST G-6 Conference on Digital Audio Effects (DAFX-00), 7-9 December 2000, pp. 119-124.


Full text not available from this repository.

Description/Abstract

Three different auditory representations—Lyon’s cochlear model, Patterson’s gammatone filterbank combined with Meddis’ inner hair cell model, and mel-frequency cepstral coefficients (MFCCs)—are analyzed in connection with self-organizing maps to evaluate their suitability for a perceptually justified classification of sounds. The self-organizing maps are trained with a uniform set of test sounds preprocessed by the auditory representations. The structure of the resulting feature maps and the trajectories of the individual sounds are visualized and compared to one another. While the MFCC representation proved very efficient, the gammatone model produced the most convincing results.
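The pipeline described in the abstract—training a self-organizing map on preprocessed feature frames and then tracing each sound's trajectory of best-matching units across the map—can be sketched as follows. This is a minimal illustration in NumPy, not the authors' implementation; the map size, learning-rate schedule, and neighbourhood parameters are illustrative assumptions.

```python
import numpy as np

def train_som(data, rows=8, cols=8, iters=500, lr0=0.5, sigma0=3.0, seed=0):
    """Train a small self-organizing map on feature vectors (one per frame).

    data: array of shape (n_frames, n_features), e.g. MFCC or
    gammatone/hair-cell frames (parameters here are illustrative).
    """
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    weights = rng.random((rows, cols, dim))
    # Grid coordinates, used to compute each unit's distance to the BMU.
    yy, xx = np.mgrid[0:rows, 0:cols]
    grid = np.stack([yy, xx], axis=-1).astype(float)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        # Best-matching unit: the node whose weight vector is closest to x.
        d = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(d), d.shape)
        # Linearly decaying learning rate and neighbourhood radius.
        frac = t / iters
        lr = lr0 * (1.0 - frac)
        sigma = sigma0 * (1.0 - frac) + 0.5
        # Gaussian neighbourhood around the BMU on the map grid.
        dist2 = np.sum((grid - np.array(bmu)) ** 2, axis=-1)
        h = np.exp(-dist2 / (2.0 * sigma ** 2))[..., None]
        weights += lr * h * (x - weights)
    return weights

def trajectory(weights, frames):
    """Map each feature frame to its BMU, giving the sound's map trajectory."""
    path = []
    for x in frames:
        d = np.linalg.norm(weights - x, axis=-1)
        path.append(np.unravel_index(np.argmin(d), d.shape))
    return path
```

Comparing such trajectories across representations is one way to visualize how differently each auditory front end lays the same sounds out on the map.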

Item Type: Conference or Workshop Item (Paper)
Venue - Dates: COST G-6 Conference on Digital Audio Effects (DAFX-00), 2000-12-07 - 2000-12-09

ePrint ID: 67374
Date: December 2000 (Published)
Date Deposited: 29 Sep 2009
Last Modified: 18 Apr 2017 21:26
URI: http://eprints.soton.ac.uk/id/eprint/67374
