Robust Bounds on Generalization from the Margin Distribution
Shawe-Taylor, J. and Cristianini, N. (1998) Robust Bounds on Generalization from the Margin Distribution.
Record type: Monograph (Project Report)
Abstract
A number of results have bounded the generalization of a classifier in terms of its margin on the training points. There has been some debate about whether the minimum margin is the best summary of the distribution of training set margin values with which to estimate generalization. Freund and Schapire have shown how a different function of the margin distribution can be used to bound the number of mistakes of an on-line learning algorithm for a perceptron, as well as to give an expected error bound. We show that a slight generalization of their construction can be used to give a PAC-style bound on the tail of the distribution of the generalization errors that arise from a given sample size. Algorithms arising from the approach are related to those of Cortes and Vapnik. We generalise the basic result to function classes with bounded fat-shattering dimension and to the 1-norm of the slack variables, which gives rise to Vapnik's box constraint algorithm. We also extend the results to the regression case and obtain bounds on the probability that a randomly chosen test point will have error greater than a given value. The bounds apply to the $\epsilon$-insensitive loss function proposed by Vapnik for Support Vector Machine regression. A special case of this bound gives a bound on the probabilities in terms of the least squares error on the training set, showing a quadratic decline in probability with margin.
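As background for the quantities the abstract refers to, the following is a minimal sketch in standard notation (not taken from the report itself, and possibly differing from the authors' notation). The 1-norm penalty on the slack variables $\xi_i$ corresponds to the soft-margin formulation of Cortes and Vapnik,
\[
\min_{w,\,b,\,\xi}\ \frac{1}{2}\|w\|^{2} + C\sum_{i=1}^{m}\xi_i
\qquad\text{subject to}\qquad
y_i\bigl(\langle w, x_i\rangle + b\bigr) \ge 1 - \xi_i,\quad \xi_i \ge 0,
\]
whose dual imposes the box constraint $0 \le \alpha_i \le C$ on each Lagrange multiplier $\alpha_i$; each slack variable $\xi_i$ measures the amount by which the point $(x_i, y_i)$ falls short of the target margin. For regression, Vapnik's $\epsilon$-insensitive loss is
\[
L_\epsilon\bigl(y, f(x)\bigr) = \max\bigl(0,\ \lvert y - f(x)\rvert - \epsilon\bigr),
\]
so that predictions within $\epsilon$ of the target incur no loss.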
Text: NeuroCOLT_Technical_Report_1998_029.pdf - Other
More information
Published date: 1998
Organisations: Electronics & Computer Science
Identifiers
Local EPrints ID: 259747
URI: http://eprints.soton.ac.uk/id/eprint/259747
PURE UUID: 36ce2454-dfd9-4ca2-bfc6-9f740475807f
Catalogue record
Date deposited: 12 Aug 2004
Last modified: 14 Mar 2024 06:28
Contributors
Author: J. Shawe-Taylor (c32d0ee4-b422-491f-8c28-78663851d6db)
Author: N. Cristianini (00885da7-7833-4f0c-b8a0-3f385d89f642)