Parsimonious Neurofuzzy Modelling

Bossley, K.M., Brown, M. and Harris, C.J. (1995) Parsimonious Neurofuzzy Modelling s.n.

Modelling has become an invaluable tool in many areas of research, particularly in the control community, where it is termed system identification. System identification is the process of identifying a model of an unknown process, for the purpose of predicting and/or gaining insight into the behaviour of that process. Because many real processes are inherently complex (i.e. multivariate, nonlinear and time varying), conventional modelling techniques have proved too restrictive, and in these instances more sophisticated (intelligent) modelling techniques are required. Recently, parallels have been drawn between neural networks, with their ability to learn to universally approximate any continuous nonlinear multivariate function, and fuzzy systems, with their transparent reasoning by a series of linguistic rules. This has led to the development of neurofuzzy systems, which combine the desirable attributes of both paradigms and hence produce a technique ideal for modelling. A neurofuzzy system is a fuzzy system defined on a neural-network-type structure, yielding a fuzzy system to which thorough mathematical analysis can be applied. For this reason neurofuzzy systems have become an attractive and powerful modelling technique, combining the well-established learning techniques of associative memory networks (AMNs) with the transparency of fuzzy systems. However, the true modelling capability of any given model depends heavily on its structure, and hence an important task (arguably the most important) is structure identification. Assuming the availability of empirical input/output data, known as the training set, a construction algorithm can be used to build a model that fits these data.
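The linear-in-weights character of such models can be sketched with a minimal one-input example (this is an illustration, not the report's construction algorithm): triangular fuzzy membership functions form a partition of unity over the input, their firing strengths act as basis functions, and the rule weights follow from least squares on a hypothetical training set.

```python
import numpy as np

def tri_memberships(x, centres):
    """Triangular membership functions on uniformly spaced centres.

    At any point in range, the (at most two) active fuzzy sets have
    memberships summing to one -- a partition of unity, as in B-spline
    associative memory networks.
    """
    h = centres[1] - centres[0]                      # uniform spacing
    return np.clip(1.0 - np.abs(x[:, None] - centres[None, :]) / h, 0.0, 1.0)

# Hypothetical unknown process to identify: y = sin(x), observed with noise.
rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, np.pi, 200)
y_train = np.sin(x_train) + 0.05 * rng.standard_normal(200)

centres = np.linspace(0.0, np.pi, 7)                 # 7 fuzzy sets on the input
A = tri_memberships(x_train, centres)                # rule firing strengths

# The model output A @ w is linear in the weights (rule confidences),
# so structure fixed, the weights follow from ordinary least squares.
w, *_ = np.linalg.lstsq(A, y_train, rcond=None)

x_test = np.linspace(0.1, np.pi - 0.1, 50)
y_hat = tri_memberships(x_test, centres) @ w         # close to sin(x_test)
```

Because the model is linear in its weights, the nonlinearity is carried entirely by the membership functions; this is what makes the thorough mathematical analysis mentioned above tractable.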
Due to the fuzzy system equivalence of these models, neurofuzzy model identification is amenable to another method: a priori knowledge of the process can be represented by a series of fuzzy rules, which can then be used either as the system model or to initialise a construction algorithm. However, as the dimension of the problem increases, the size of the neurofuzzy model, and hence the size of an adequate training set, grows exponentially. Because of this phenomenon, known as the curse of dimensionality, the use of conventional neurofuzzy models on high-dimensional problems (> 4 inputs) is impractical. Hence, during model construction (in all dimensions) the following fundamental principles must be employed. Principle of data reduction: the smallest number of input variables should be used to explain the maximum amount of information. Principle of network parsimony: the best models are obtained using the simplest acceptable structures, containing the smallest number of adjustable parameters. Parsimony is obtained by exploiting structural redundancy in conventional systems, which is achieved by employing alternative neurofuzzy representations; a brief introduction to such representations is given in this paper. Another important issue when constructing models from empirical data is the quality of the data. Ideally the data should be well distributed and noise free, two unrealistic demands (especially in high dimensions). A common approach to these problems in both the statistical and neural network communities is regularisation, which penalises the weight training cost function in order to control superfluous parameters. Even in a parsimonious model, the poor quality of the training set may leave some weights poorly identified, producing models that generalise badly; regularisation attempts to control these parameters. This issue is also briefly addressed in this paper.
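Both phenomena can be illustrated numerically (a sketch under assumed numbers, not the report's own experiments): the rule count of a complete lattice rule base grows as m^n, and a ridge (zero-order weight) penalty, one common form of regularisation, shrinks a weight that the data barely excite.

```python
import numpy as np

# Curse of dimensionality: a complete lattice rule base with m fuzzy sets
# per input contains m**n rules for n inputs.
m = 7
for n in (1, 2, 4, 6):
    print(f"{n} inputs -> {m ** n} rules")   # 7, 49, 2401, 117649

# Regularisation sketch on a hypothetical design matrix A: column 4
# corresponds to a basis function barely excited by the poorly
# distributed training data, so its weight is poorly identified by
# plain least squares.
rng = np.random.default_rng(1)
A = rng.uniform(size=(30, 5))
A[:, 4] *= 1e-4                       # almost inactive basis function
y = A[:, :4].sum(axis=1) + 0.01 * rng.standard_normal(30)

w_ls, *_ = np.linalg.lstsq(A, y, rcond=None)

lam = 1e-3                            # assumed regularisation parameter
w_ridge = np.linalg.solve(A.T @ A + lam * np.eye(5), A.T @ y)

# The penalty lam * ||w||^2 shrinks the superfluous weight towards zero,
# while the well-excited weights stay near their true value of one.
print(abs(w_ls[4]), abs(w_ridge[4]))
```

The ridge penalty is only one choice of regulariser; the point is that penalising the weight cost function controls parameters the training set cannot identify, which is how regularisation improves generalisation here.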

Item Type: Monograph (Project Report)
Additional Information: 1995/6 Research Journal. Address: Department of Electronics and Computer Science
Organisations: Southampton Wireless Group
ePrint ID: 250101
Date Deposited: 04 May 1999
Last Modified: 18 Apr 2017 00:24