Implicit Feature Selection with the Value Difference Metric
Pages: 450-454
Payne, Terry R. and Edwards, Peter (1998) Implicit Feature Selection with the Value Difference Metric. European Conference on Artificial Intelligence, Brighton, United Kingdom.
Record type: Conference or Workshop Item (Paper)
Abstract
The nearest neighbour paradigm provides an effective approach to supervised learning. However, it is especially susceptible to the presence of irrelevant attributes. Whilst many approaches have been proposed that select only the most relevant attributes within a data set, these approaches involve pre-processing the data in some way, and can often be computationally complex. The Value Difference Metric (VDM) is a symbolic distance metric used by a number of different nearest neighbour learning algorithms. This paper demonstrates how the VDM can be used to reduce the impact of irrelevant attributes on classification accuracy without the need for pre-processing the data. We illustrate how this metric uses simple probabilistic techniques to weight features in the instance space, and then apply this weighting technique to an alternative symbolic distance metric. The resulting distance metrics are compared in terms of classification accuracy on a number of real-world and artificial data sets.
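For readers unfamiliar with the metric, the sketch below illustrates the kind of computation the abstract describes: per-attribute distances are derived from class-conditional value frequencies, so attributes whose values do not discriminate between classes contribute little to the overall distance. This is a minimal illustration of a standard (unweighted) VDM formulation, not necessarily the exact variant evaluated in the paper; the function names and toy data are invented for the example.

```python
from collections import defaultdict

def fit_value_class_counts(instances, labels):
    """Count how often each (attribute, value) pair co-occurs with each class label."""
    n_attrs = len(instances[0])
    counts = [defaultdict(lambda: defaultdict(int)) for _ in range(n_attrs)]  # counts[a][value][class]
    totals = [defaultdict(int) for _ in range(n_attrs)]                       # totals[a][value]
    for x, y in zip(instances, labels):
        for a, v in enumerate(x):
            counts[a][v][y] += 1
            totals[a][v] += 1
    return counts, totals

def vdm_distance(x1, x2, counts, totals, classes, q=2):
    """Sum, over attributes and classes, of |P(c | a=v1) - P(c | a=v2)|^q.
    Values of an irrelevant attribute have near-identical class distributions,
    so that attribute contributes almost nothing to the distance."""
    dist = 0.0
    for a, (v1, v2) in enumerate(zip(x1, x2)):
        for c in classes:
            p1 = counts[a][v1][c] / totals[a][v1] if totals[a][v1] else 0.0
            p2 = counts[a][v2][c] / totals[a][v2] if totals[a][v2] else 0.0
            dist += abs(p1 - p2) ** q
    return dist

# Toy data (invented): attribute 0 predicts the class, attribute 1 is irrelevant.
X = [("red", "a"), ("red", "b"), ("blue", "a"), ("blue", "b")]
y = ["pos", "pos", "neg", "neg"]
counts, totals = fit_value_class_counts(X, y)
print(vdm_distance(("red", "a"), ("blue", "a"), counts, totals, {"pos", "neg"}))  # 2.0: the relevant attribute dominates
print(vdm_distance(("red", "a"), ("red", "b"), counts, totals, {"pos", "neg"}))   # 0.0: the irrelevant attribute adds nothing
```

Because an irrelevant attribute's values have roughly the same class distribution, their pairwise differences are near zero, which is the implicit down-weighting of irrelevant features the abstract refers to.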
More information
Published date: 1998
Venue - Dates: European Conference on Artificial Intelligence, Brighton, United Kingdom, 1998-01-01
Organisations: Electronics & Computer Science
Identifiers
Local EPrints ID: 262997
URI: http://eprints.soton.ac.uk/id/eprint/262997
PURE UUID: 41123119-5afa-46b5-b85e-787c2ee12d19
Catalogue record
Date deposited: 20 Sep 2006
Last modified: 14 Mar 2024 07:23
Contributors
Author: Terry R. Payne
Author: Peter Edwards