Regularisation theory applied to neurofuzzy modelling.
A desirable property of any empirical model is the ability to generalise well throughout the model's input space. Recent work has produced neurofuzzy model construction algorithms that identify neurofuzzy models from available empirical data and expert knowledge. By matching the model's structure to the underlying process represented by the data, these algorithms produce parsimonious models. Such parsimonious models do generalise better, but the structural symmetry enforced by the need for model transparency, together with the often sparse distribution of real data, still leaves them prone to poor generalisation. This report reviews and develops regularisation techniques that can be applied to identified neurofuzzy models to improve their ability to generalise. Essentially, regularisation places a prior probability distribution on the weight values, which in turn constrains the model output. One of the major overheads in performing regularisation is deciding how much regularisation to apply. This report gives close attention to that problem, considering techniques such as cross-validation and Bayesian methods. The results of this work favour the Bayesian method, which produces models that generalise well for both noisy and sparse training sets. Regularised models appear to perform sensibly throughout their input space. To give extra information about whether, for a given input, the output is inferred from the data or from the regulariser, error bars are derived for these models. The described methods are applied to conventional lattice-based neurofuzzy models, and also to the more parsimonious additive and multiplicative models, giving rise to local regularisation.
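As a rough illustration of the idea in the abstract, the sketch below applies weight-decay (ridge) regularisation, equivalent to a zero-mean Gaussian prior on the weights, to a linear-in-the-parameters basis-function model of the kind a neurofuzzy network reduces to, and selects the amount of regularisation by generalised cross-validation. All names, data, and the GCV criterion here are illustrative assumptions, not taken from the report itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy design matrix of Gaussian basis functions, standing in for the
# fuzzy membership-function outputs of a neurofuzzy model.
x = np.linspace(0.0, 1.0, 30)
centres = np.linspace(0.0, 1.0, 12)
Phi = np.exp(-((x[:, None] - centres[None, :]) ** 2) / (2 * 0.05**2))
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(x.size)

def ridge_weights(Phi, y, lam):
    """Solve (Phi^T Phi + lam*I) w = Phi^T y: a Gaussian prior on the weights."""
    A = Phi.T @ Phi + lam * np.eye(Phi.shape[1])
    return np.linalg.solve(A, Phi.T @ y)

def gcv_score(Phi, y, lam):
    """Generalised cross-validation score for one regularisation level."""
    n = Phi.shape[0]
    H = Phi @ np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T)
    resid = y - H @ y
    return n * (resid @ resid) / (n - np.trace(H)) ** 2

# Choose how much regularisation to apply by minimising the GCV score.
lams = np.logspace(-6, 2, 50)
best_lam = min(lams, key=lambda lam: gcv_score(Phi, y, lam))
w = ridge_weights(Phi, y, best_lam)
```

A Bayesian alternative, which the report's results favour, would instead maximise the marginal likelihood (evidence) over the regularisation parameter rather than a cross-validation score; the resulting posterior over the weights also yields the error bars mentioned in the abstract.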