On automatic emotion classification using acoustic features


Hassan, Ali (2012) On automatic emotion classification using acoustic features. University of Southampton, Faculty of Physical and Applied Sciences, Doctoral Thesis, 204 pp.

Description/Abstract

In this thesis, we describe extensive experiments on the classification of emotions from speech using acoustic features. This area of research has important applications in human-computer interaction. We thoroughly review the current literature and present results on several contemporary emotional speech databases. The principal focus is on creating a large set of acoustic features descriptive of different emotional states, and on selecting a subset of the best-performing features using feature selection methods. We examine several traditional feature selection methods and propose a novel scheme that employs a preferential Borda voting strategy for ranking features. The comparative results show that our proposed scheme strikes a balance between accurate but computationally intensive wrapper methods and less accurate but computationally cheaper filter methods.

Using the selected features, we test several schemes for extending binary classifiers to multiclass classification. Some of these classifiers form serial combinations of binary classifiers, while others use a hierarchical structure. We describe a new hierarchical classification scheme, which we call Data-Driven Dimensional Emotion Classification (3DEC), whose decision hierarchy is based on non-metric multidimensional scaling (NMDS) of the data. This method of building a hierarchy over the emotion classes gives significant improvements over the other methods tested.
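The preferential Borda voting idea can be sketched as follows (the function name and the toy rankings below are our own illustration, not taken from the thesis): each selection criterion produces a ranking of the candidate features, and Borda points are summed across rankers to give a consensus ordering.

```python
def borda_rank(rankings):
    """Aggregate several feature rankings by Borda counting.

    rankings: list of orderings of the same feature indices,
    each from best to worst. Returns the features sorted by
    total Borda score (ties broken by feature index).
    """
    n = len(rankings[0])
    scores = {f: 0 for f in rankings[0]}
    for ranking in rankings:
        for pos, f in enumerate(ranking):
            scores[f] += n - 1 - pos  # top-ranked feature earns n-1 points
    return sorted(scores, key=lambda f: (-scores[f], f))

# Three hypothetical rankers (e.g. different filter criteria) over four features
consensus = borda_rank([[0, 1, 2, 3], [1, 0, 3, 2], [0, 2, 1, 3]])
print(consensus)  # [0, 1, 2, 3]
```

A subset of the best-performing features is then taken from the front of the consensus list, avoiding a full wrapper search over feature subsets.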

The NMDS representation of emotional speech data can be interpreted in terms of the well-known valence-arousal model of emotion. We find that this model does not give a particularly good fit to the data: although the arousal dimension can be identified easily, valence is not well represented in the transformed data. From the recognition results on these two dimensions, we conclude that the valence and arousal dimensions are not orthogonal to each other.

In the last part of this thesis, we address the difficult but important problem of improving the generalisation of speech emotion recognition (SER) systems across different speakers and recording environments, a topic that has been largely overlooked in current research. First, we apply the traditional methods used in automatic speech recognition (ASR) systems to improve the generalisation of SER in intra- and inter-database emotion classification; these methods do improve the average accuracy of the emotion classifier. We identify the differences between training and test data, due to speakers and acoustic environments, as a covariate shift. This shift is minimised by using importance weighting algorithms from the emerging field of transfer learning, which guide the learning algorithm towards the training data that best represents the test data. Our results show that importance weighting algorithms can be used to minimise the differences between training and test data. We also test the effectiveness of importance weighting algorithms on inter-database and cross-lingual emotion recognition, and from these results we draw conclusions about the universal nature of emotions across different languages.
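The covariate-shift correction can be illustrated with a minimal density-ratio sketch (the 1-D Gaussian density model and all names below are our simplifying assumptions; the thesis draws on importance-weighting algorithms from the transfer-learning literature): each training point x receives the weight p_test(x) / p_train(x), so training data that resembles the test conditions dominates learning.

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Density of a 1-D Gaussian with mean mu and std sigma at x."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def importance_weights(train, test):
    """Weight each training point by p_test(x) / p_train(x).

    Both densities are modelled here as 1-D Gaussians fitted by
    maximum likelihood -- a deliberately crude stand-in for the
    density-ratio estimators used in practice.
    """
    def fit(xs):
        mu = sum(xs) / len(xs)
        var = sum((x - mu) ** 2 for x in xs) / len(xs)
        sd = math.sqrt(var)
        return mu, sd if sd > 0 else 1e-9  # guard against degenerate samples

    mu_tr, sd_tr = fit(train)
    mu_te, sd_te = fit(test)
    return [gaussian_pdf(x, mu_te, sd_te) / gaussian_pdf(x, mu_tr, sd_tr)
            for x in train]

# Training points far from the test distribution get small weights
train = [0.0, 1.0, 2.0, 3.0]
test = [2.0, 2.5, 3.0, 3.5]
weights = importance_weights(train, test)
```

These weights would then multiply each training example's contribution to the classifier's loss, steering the model towards the acoustic conditions of the test set.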

Item Type: Thesis (Doctoral)
Subjects: B Philosophy. Psychology. Religion > BF Psychology
P Language and Literature > P Philology. Linguistics
Q Science > QC Physics
Divisions: Faculty of Physical Sciences and Engineering > Electronics and Computer Science
ePrint ID: 340672
Date Deposited: 13 Aug 2012 16:42
Last Modified: 27 Mar 2014 20:23
Further Information: Google Scholar
URI: http://eprints.soton.ac.uk/id/eprint/340672
