Comparing the Bayes and Typicalness Frameworks


Melluish, T., Saunders, C., Nouretdinov, I. and Vovk, V. (2001) Comparing the Bayes and Typicalness Frameworks.

Description/Abstract

When correct priors are known, Bayesian algorithms give optimal decisions, and accurate confidence values for predictions can be obtained. If the prior is incorrect, however, these confidence values have no theoretical basis, even though the algorithms' predictive performance may be good. There also exist many successful learning algorithms which depend only on the iid assumption; often, however, they produce no confidence values for their predictions. Bayesian frameworks are often applied to these algorithms in order to obtain such values, but they can rely on unjustified priors. In this paper we outline the typicalness framework, which can be used in conjunction with many other machine learning algorithms. The framework provides confidence information based only on the standard iid assumption and so is much more robust to different underlying data distributions. We show how the framework can be applied to existing algorithms. We also present experimental results which show that the typicalness approach performs close to Bayes when the prior is known to be correct. Unlike Bayes, however, the method still gives accurate confidence values even when different data distributions are considered.
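
To make the idea concrete, the sketch below shows one way a typicalness p-value might be computed under the iid assumption alone: the new example is added to the training set under a hypothesised label, every example is assigned a nonconformity score, and the p-value is the fraction of examples at least as nonconforming as the new one. The function name, the 1-nearest-neighbour nonconformity measure, and the toy data are illustrative assumptions for this summary, not the report's exact construction.

import numpy as np

def typicalness_p_value(train_x, train_y, x_new, y_hyp):
    """Illustrative typicalness p-value for a hypothesised label y_hyp.

    Nonconformity score (one common, hypothetical choice): distance to the
    nearest example with the same label divided by distance to the nearest
    example with a different label.
    """
    xs = np.vstack([train_x, [x_new]])
    ys = np.append(train_y, y_hyp)
    n = len(ys)
    scores = np.empty(n)
    for i in range(n):
        d = np.linalg.norm(xs - xs[i], axis=1)
        d[i] = np.inf                       # ignore the point's distance to itself
        same = d[ys == ys[i]].min()         # nearest neighbour with the same label
        diff = d[ys != ys[i]].min()         # nearest neighbour with a different label
        scores[i] = same / diff             # large score = atypical for its label
    # Fraction of examples at least as nonconforming as the new one.
    return float(np.mean(scores >= scores[-1]))

# Toy usage (hypothetical data): label 0 fits the new point, label 1 does not.
X = np.array([[0.0], [0.1], [1.0], [1.1]])
y = np.array([0, 0, 1, 1])
print(typicalness_p_value(X, y, [0.05], 0))  # relatively high p-value: typical
print(typicalness_p_value(X, y, [0.05], 1))  # relatively low p-value: atypical

Because the p-value depends only on exchangeability of the examples, its validity does not rest on any choice of prior; the nonconformity measure affects efficiency, not validity.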

Item Type: Monograph (Technical Report)
Related URLs:
Divisions: Faculty of Physical Sciences and Engineering > Electronics and Computer Science
ePrint ID: 258965
Date Deposited: 03 Mar 2004
Last Modified: 27 Mar 2014 20:01
Further Information: Google Scholar
URI: http://eprints.soton.ac.uk/id/eprint/258965
