Comparing the Bayes and Typicalness Frameworks
Melluish, T., Saunders, C., Nouretdinov, I. and Vovk, V. (2001) Comparing the Bayes and Typicalness Frameworks.
Record type: Monograph (Project Report)
Abstract
When correct priors are known, Bayesian algorithms give optimal decisions, and accurate confidence values for predictions can be obtained. If the prior is incorrect, however, these confidence values have no theoretical basis, even though the algorithms' predictive performance may be good. There also exist many successful learning algorithms which depend only on the iid assumption; often, however, they produce no confidence values for their predictions. Bayesian frameworks are often applied to these algorithms in order to obtain such values, but they can rely on unjustified priors. In this paper we outline the typicalness framework, which can be used in conjunction with many other machine learning algorithms. The framework provides confidence information based only on the standard iid assumption and so is much more robust to different underlying data distributions. We show how the framework can be applied to existing algorithms, and we present experimental results which show that the typicalness approach performs close to Bayes when the prior is known to be correct. Unlike Bayes, however, the method still gives accurate confidence values even when different data distributions are considered.
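The record itself gives no implementation detail, but the abstract's central idea (a confidence value derived from a typicalness p-value under the iid assumption alone, with no prior) can be sketched briefly. The following Python sketch is illustrative only: the nearest-neighbour nonconformity measure, the function names and the toy data are assumptions made for exposition, not taken from the paper.

import numpy as np

def typicalness_p_value(scores):
    # Typicalness p-value of the last example: the fraction of examples
    # in the extended sequence that are at least as nonconforming.
    return float(np.mean(scores >= scores[-1]))

def nn_nonconformity(X, y):
    # Nearest-neighbour nonconformity: distance to the nearest example
    # with the same label divided by distance to the nearest example
    # with a different label (larger = more atypical).
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # ignore self-distances
    scores = np.empty(len(y))
    for i in range(len(y)):
        same = d[i, y == y[i]].min()
        diff = d[i, y != y[i]].min() if (y != y[i]).any() else np.inf
        scores[i] = same / diff
    return scores

def conformal_region(X_train, y_train, x_new, labels, epsilon):
    # Transductive prediction: try every candidate label for x_new and
    # keep those whose typicalness p-value exceeds the significance
    # level epsilon. Validity relies only on the iid assumption.
    region = []
    for lab in labels:
        X = np.vstack([X_train, x_new])
        y = np.append(y_train, lab)
        if typicalness_p_value(nn_nonconformity(X, y)) > epsilon:
            region.append(lab)
    return region

# Tiny illustration: points near 0 labelled 0, points near 1 labelled 1.
X = np.array([[0.0], [0.1], [1.0], [1.1]])
y = np.array([0, 0, 1, 1])
print(conformal_region(X, y, np.array([0.05]), labels=[0, 1], epsilon=0.25))
# -> [0]; only label 0 is typical for a point near the 0-cluster.

With only five examples the smallest attainable p-value is 1/5, so the significance level above is deliberately coarse; in practice the same scheme is applied to much larger samples, with nonconformity measures derived from the underlying learning algorithm.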
Text: TypBayes_TECHREP.pdf (Other)
Text: TypBayes_ECML01 (Other)
More information
Published date: 2001
Organisations:
Electronics & Computer Science
Identifiers
Local EPrints ID: 258965
URI: http://eprints.soton.ac.uk/id/eprint/258965
PURE UUID: c6f2f5d7-f008-4b0d-9617-d3a4dfd40341
Catalogue record
Date deposited: 03 Mar 2004
Last modified: 14 Mar 2024 06:16
Contributors
Author: T. Melluish
Author: C. Saunders
Author: I. Nouretdinov
Author: V. Vovk