Sparse Feature Extraction using Generalised Partial Least Squares
Dhanjal, Charanpal, Gunn, Steve R. and Shawe-Taylor, John (2006) Sparse Feature Extraction using Generalised Partial Least Squares. IEEE International Workshop on Machine Learning for Signal Processing, Maynooth, Ireland, pp. 27-32.
Record type: Conference or Workshop Item (Paper)
Abstract
We describe a general framework for feature extraction based on the deflation scheme used in Partial Least Squares (PLS). The framework provides many desirable properties, such as conjugacy and efficient computation of the resulting features. When the projection vectors are constrained in a certain way, the resulting features have dual representations. Using the framework, we derive two new sparse feature extraction algorithms, Sparse Maximal Covariance (SMC) and Sparse Maximal Alignment (SMA). These algorithms produce features which are competitive with those extracted by Kernel Boosting, Boosted Latent Features (BLF) and sparse kernel PLS on several UCI datasets. Furthermore, the sparse algorithms are shown to improve the performance of an SVM on a sample of the Reuters Corpus Volume 1 dataset.
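As a rough illustration of the deflation scheme the abstract refers to, the sketch below (Python with NumPy) greedily extracts features from a centred data matrix: each step picks a direction that maximises covariance with the labels, sparsifies it by simple hard thresholding, and then deflates the data matrix before extracting the next feature. The thresholding rule, the function name sparse_pls_deflation, and all parameter choices are illustrative assumptions, not the paper's SMC or SMA procedures.

import numpy as np

def sparse_pls_deflation(X, y, n_components=2, n_nonzero=5):
    """Illustrative sketch of PLS-style deflation with sparse,
    covariance-maximising directions. The hard-thresholding step is an
    assumption for illustration, not the SMC/SMA rule from the paper."""
    X = X - X.mean(axis=0)           # centre the data
    y = y - y.mean()
    Xk = X.copy()
    directions, features = [], []
    for _ in range(n_components):
        # Direction maximising covariance with the labels: w proportional to X^T y
        w = Xk.T @ y
        # Sparsify: keep only the n_nonzero largest-magnitude entries (assumed rule)
        w[np.argsort(np.abs(w))[:-n_nonzero]] = 0.0
        if np.linalg.norm(w) == 0:
            break
        w /= np.linalg.norm(w)
        t = Xk @ w                    # extracted feature (score vector)
        # PLS deflation: remove the component of Xk explained by t
        Xk = Xk - np.outer(t, t @ Xk) / (t @ t)
        directions.append(w)
        features.append(t)
    return np.array(directions).T, np.array(features).T

# Toy usage on synthetic data
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 20))
y = np.sign(X[:, 0] + 0.5 * X[:, 3] + 0.1 * rng.standard_normal(50))
W, T = sparse_pls_deflation(X, y, n_components=2, n_nonzero=5)
print(W.shape, T.shape)               # (20, 2) (50, 2)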
More information
Published date: 2006
Additional Information: Event Dates: September 2006
Venue - Dates: IEEE International Workshop on Machine Learning for Signal Processing, Maynooth, Ireland, 2006-08-31
Keywords: machine learning, kernel methods, feature extraction, partial least squares (PLS)
Organisations: Electronic & Software Systems
Identifiers
Local EPrints ID: 263467
URI: http://eprints.soton.ac.uk/id/eprint/263467
PURE UUID: 26bfdff1-2a95-42a3-ac5e-9e8a2c4afa4c
Catalogue record
Date deposited: 16 Feb 2007
Last modified: 08 Jan 2022 14:48
Contributors
Author: Charanpal Dhanjal
Author: Steve R. Gunn
Author: John Shawe-Taylor