Mills, D.J. and Harris, C.J. (1995) *Neurofuzzy modelling and control of a six degree of freedom AUV*. s.n.

## Abstract

Underwater vehicles are employed to perform many tasks: structure inspection, recovery, geographic surveys, etc., but most in current operation require constant human supervision. In an attempt to make these vehicles more autonomous, this research project aims to develop a multivariable, neurofuzzy network controller for application to an Autonomous Underwater Vehicle (AUV). The vehicle being studied is Ocean Voyager, an AUV based at Florida Atlantic University. Ocean Voyager is torpedo-shaped, 22 feet long and 13 inches in diameter, with a displacement of 2700 lb. The design speed is 10.1 ft/s, with an overall cruise time of 2 hours. A thruster provides the forward propulsion, and the vehicle's motion is controlled by two surfaces, the stern plane and the rudder. A simulation of the vehicle dynamics has been made available to the ISIS research group, but for the purposes of this project it has been treated as a black box. The simulation uses the standard submarine equations of motion defined by the David Taylor Naval Ship Research and Development Centre (DTNSRDC). Before attempting any control, the problem of modelling the AUV dynamics was studied. The AUV is a nonlinear, multivariable, dynamic process, which means that to produce an accurate approximation the selected model must not only be nonlinear but also capable of dealing with high-dimensional inputs. The required nonlinearities can be provided by a neurofuzzy system. Neurofuzzy systems combine the positive attributes of a neural network and a fuzzy system. Neural networks have become popular largely because of their ability to approximate any continuous nonlinear function using only the information contained in a set of input/output training pairs. The properties of a neural network are determined by its structure and by the learning rule used to adapt the weights.
Common examples include the multilayer perceptron and the radial basis function network, but more recently B-spline networks have been suggested because of their superior numerical properties. The B-spline network belongs to the class of Associative Memory Networks (AMNs), as it generalises locally: similar inputs map to similar outputs, whereas dissimilar inputs map to independent outputs. B-spline networks also have the advantage that only the output-layer weights are adapted, which means that well-established linear training algorithms with provable behavioural characteristics can be employed. However, a major criticism of most neural networks is their opaque structure: the stored information cannot be easily interpreted by the designer. A fuzzy system generally consists of a rule base composed of vague production rules such as IF (error is small) THEN (output is small). The rules are linguistic representations, and because the information can be easily interpreted by the designer, the system is said to be transparent. The power of a fuzzy system lies in the way these production rules are given a precise mathematical meaning, so that the resulting system can generalise to produce an appropriate output for previously unseen inputs. However, fuzzy systems have a serious drawback in many applications: their rules are often very difficult, or even impossible, to determine. This has motivated the development of adaptive fuzzy systems, which adjust their rule-base parameters via heuristic training rules about which little can be proved. Recently the similarities between neural networks and fuzzy systems have been noted, allowing the positive attributes of both approaches to be combined. The result is termed a neurofuzzy system, since it combines the well-established modelling and learning capabilities of neural networks with the transparent knowledge representation of fuzzy systems.
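The two properties claimed for B-spline networks, local generalisation and an output that is linear in the adapted weights, can be illustrated with a minimal sketch. The knot spacing, centres, and weight values below are invented for illustration and are not taken from the paper; order-2 (triangular) B-spline bases are used because they double as fuzzy membership functions that sum to one across the input axis.

```python
# Illustrative 1-D B-spline neurofuzzy network (order-2, i.e. triangular bases).
# All numeric values here are hypothetical, chosen only to show the structure.
def triangular_basis(x: float, centre: float, width: float) -> float:
    """Order-2 B-spline basis: peaks at `centre`, zero beyond +/- width."""
    return max(0.0, 1.0 - abs(x - centre) / width)

# Centres on a uniform knot lattice over [0, 1]; adjacent bases overlap so
# the memberships form a partition of unity (a fuzzy partition of the axis).
centres = [0.0, 0.25, 0.5, 0.75, 1.0]
weights = [0.0, 0.2, 0.9, 0.4, 0.1]   # rule consequents: the only trained parameters

def network_output(x: float) -> float:
    """Output is linear in the weights: a weighted sum of the active bases."""
    return sum(w * triangular_basis(x, c, 0.25) for c, w in zip(centres, weights))
```

Because each basis has compact support, only the weights of the few rules active near an input are updated during training, which is the local-generalisation property the abstract describes.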
In particular, it has been shown that B-spline networks and certain forms of fuzzy system are equivalent. Unfortunately, these conventional lattice-based neurofuzzy systems are limited to problems involving only a small number of inputs. Modelling in high dimensions is difficult because of a phenomenon called the curse of dimensionality, a term first introduced by Bellman in 1961, which has plagued researchers from many disciplines ever since. The curse can be explained by considering the number of data points needed to maintain the same sampling density as the input dimension rises. For example, if the domain of interest is the unit hypercube D = [0,1]^{n} and the required resolution is 0.1, then a minimum of 10^{n} observations must be stored. Obtaining and storing such a large training set quickly becomes infeasible for n > 4. There have been many attempts to alleviate the curse of dimensionality, and in particular much progress has been made in the field of statistics. A straightforward and popular approach is to exploit structural information that is known (either *a priori* or discovered during the training process) about the form of the desired function. For example, an ANalysis Of VAriance (ANOVA) decomposition of an n-dimensional function f(x) is expressed as: f(x) = f_{0} + Σ_{i} f_{i}(x_{i}) + Σ_{i<j} f_{ij}(x_{i}, x_{j}) + ..., where f_{0} is the bias and the remaining terms represent the combinations of univariate, bivariate, etc. subfunctions that additively decompose the function f. An ANOVA decomposition describes the relationship between the different input variables, but it is only useful if the interactions involving more than, say, four inputs are identically zero. This constraint could limit the potential for applying an ANOVA representation, but often an adequate approximation is still obtained. The question must be asked: are all the nonlinear features important, or will a model composed of simple subfunctions suffice?
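The sampling argument above can be made concrete: at a fixed grid resolution of 0.1 on the unit hypercube, the minimum number of observations is 10^{n}, which outruns any practical training set within a few dimensions. A small check of this arithmetic (the resolution comes from the text; the range of dimensions is chosen for illustration):

```python
# Curse of dimensionality: observations needed for a uniform grid on
# the unit hypercube D = [0, 1]^n at a fixed resolution.
def samples_required(n_dims: int, resolution: float = 0.1) -> int:
    """Minimum observations for a uniform grid at the given resolution."""
    per_axis = round(1.0 / resolution)   # 10 cells per axis at resolution 0.1
    return per_axis ** n_dims

for n in range(1, 7):
    print(f"n = {n}: {samples_required(n):>10,} observations")
```

At n = 4 the grid already needs 10,000 points, and each further input multiplies the requirement by ten, matching the abstract's claim that the approach is infeasible for n > 4.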
An immediate advantage of the ANOVA representation is that each subfunction can be a lattice-based neurofuzzy system, so network transparency is retained. Also, the output is a linear function of the concatenated weight vectors of the subfunctions, which means the training algorithms derived for conventional neurofuzzy systems still apply. The potential reduction in the number of fuzzy rules can be illustrated by considering the approximation of Powell's function, a system with four inputs and one output. This function is an ideal candidate for an ANOVA decomposition since it contains four two-dimensional subfunctions. If seven basis functions are used on each axis, the ANOVA system uses approximately 200 rules, whereas a conventional neurofuzzy system uses over 2000, the majority of which are redundant.
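With seven basis functions per axis, the rule counts quoted above follow directly: a conventional lattice over all four inputs needs 7^4 rules, while an additive model built from four two-dimensional subfunctions needs only 4 × 7^2. A quick check of the arithmetic (the counts of basis functions and subfunctions come from the text):

```python
# Rule counts for approximating Powell's function (4 inputs, 1 output),
# assuming 7 basis functions on each input axis, as stated in the abstract.
BASIS_PER_AXIS = 7

# Conventional lattice neurofuzzy system: one rule per cell of the 4-D lattice.
conventional_rules = BASIS_PER_AXIS ** 4

# ANOVA decomposition into four two-dimensional subfunctions:
# each subfunction carries its own 2-D lattice of rules.
anova_rules = 4 * BASIS_PER_AXIS ** 2

print(f"conventional: {conventional_rules} rules")  # over 2000
print(f"ANOVA:        {anova_rules} rules")         # approximately 200
```

This reproduces the abstract's figures: 2401 rules for the full lattice against 196 for the ANOVA system, a reduction of more than a factor of ten.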

Full text not available from this repository.
