Efficient sparse kernel feature extraction based on partial least squares
Dhanjal, Charanpal
Gunn, Steve
Shawe-Taylor, John
August 2009
Dhanjal, Charanpal, Gunn, Steve and Shawe-Taylor, John (2009) Efficient sparse kernel feature extraction based on partial least squares. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31 (8), 1347-1361. (doi:10.1109/TPAMI.2008.171)
Abstract
The presence of irrelevant features in training data is a significant obstacle for many machine learning tasks. One approach to this problem is to extract appropriate features and, often, one selects a feature extraction method based on the inference algorithm. Here, we formalize a general framework for feature extraction, based on partial least squares, in which one can select a user-defined criterion to compute projection directions. The framework draws together a number of existing results and provides additional insights into several popular feature extraction methods. Two new sparse kernel feature extraction methods are derived under the framework, called sparse maximal alignment (SMA) and sparse maximal covariance (SMC), respectively. Key advantages of these approaches include simple implementation and a training time which scales linearly in the number of examples. Furthermore, one can project a new test example using only k kernel evaluations, where k is the output dimensionality. Computational results on several real-world data sets show that SMA and SMC extract features which are as predictive as those found using other popular feature extraction methods. Additionally, on large text retrieval and face detection data sets, they produce features which match the performance of the original ones in conjunction with a support vector machine.
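The abstract's key computational claims — greedy selection of sparse projection directions and test-time projection using only k kernel evaluations — can be illustrated with a short sketch. This is not the paper's exact SMC algorithm; it is a simplified greedy scheme (hypothetical helper names, Gaussian kernel, covariance-based selection with label deflation) written only to show the shape of the idea.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise Gaussian kernel between rows of X and rows of Y
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def greedy_sparse_covariance(K, y, k):
    """Illustrative SMC-style selection: greedily pick k training indices
    whose (centred) kernel column best covaries with the deflated labels.
    Each chosen index contributes one sparse projection direction."""
    n = K.shape[0]
    r = y.astype(float) - y.mean()       # residual label vector
    chosen = []
    for _ in range(k):
        Kc = K - K.mean(axis=0)          # centre kernel columns
        cov = np.abs(Kc.T @ r) / n       # covariance with residual
        cov[chosen] = -np.inf            # never reselect an index
        i = int(np.argmax(cov))
        chosen.append(i)
        u = Kc[:, i]                     # deflate: remove the part of r
        denom = u @ u                    # explained by the chosen column
        if denom > 0:
            r = r - (u @ r) / denom * u
    return chosen

# Toy data: two Gaussian blobs with +/-1 labels
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (20, 3)), rng.normal(1, 1, (20, 3))])
y = np.array([-1] * 20 + [1] * 20)
K = rbf_kernel(X, X, gamma=0.5)
idx = greedy_sparse_covariance(K, y, k=3)

# Projecting a new example touches only the k selected training points,
# i.e. k kernel evaluations -- the sparsity property the abstract describes.
x_new = rng.normal(0, 1, (1, 3))
features = rbf_kernel(x_new, X[idx], gamma=0.5)   # shape (1, k)
```

Because each direction is supported on a single training example here, the projection cost at test time is k kernel evaluations regardless of the training set size; the paper's methods achieve the same property with sparsity constraints inside the PLS framework.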
More information
e-pub ahead of print date: 27 June 2008
Published date: August 2009
Organisations:
Electronic & Software Systems
Identifiers
Local EPrints ID: 267141
URI: http://eprints.soton.ac.uk/id/eprint/267141
PURE UUID: 5559a9ab-1997-488d-874d-ba3b6faabcc1
Catalogue record
Date deposited: 26 Feb 2009 17:33
Last modified: 14 Mar 2024 08:43