Robustness analysis of an adjoint optimal iterative learning controller with experimental verification
A new modification to the steepest-descent algorithm for discrete-time iterative learning control is developed for plant models with multiplicative uncertainty. A theoretical analysis of the algorithm shows that if a tuning parameter is selected to be sufficiently large, the algorithm results in monotonic convergence provided the plant uncertainty satisfies a positivity condition. This is a major improvement over the standard version of the algorithm, which lacks a mechanism for balancing convergence speed against robustness. The proposed algorithm has been investigated experimentally on an industrial gantry robot and found to display a high degree of robustness to both plant modelling error and initial state error. The algorithm also maintains excellent tracking performance over the long term, as demonstrated by experimental tests of up to 4000 iterations. To further examine robustness, the plant has been approximated by simple models, including one consisting of an integrator and a gain. A simple tuning rule for this reduced model is proposed, which generates a stable system with a good rate of convergence. The robustness properties of the steepest-descent algorithm have then been experimentally verified.
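The flavour of the algorithm the abstract describes can be sketched in lifted (trial-to-trial) form. The sketch below is illustrative only: the plant is the abstract's reduced integrator-plus-gain model, the update direction is the adjoint (transpose of the lifted plant matrix) applied to the tracking error, and the damped step size with weight `w` is an assumed stand-in for the paper's tuning parameter, not the exact form from the paper. All numerical values (`N`, `Ts`, `K`, `w`, the sinusoidal reference) are arbitrary choices for the demonstration.

```python
import numpy as np

# Reduced plant from the abstract: an integrator with a gain,
# discretised with sample time Ts over an N-sample trial.
# The lifted matrix G maps the whole input sequence to the whole
# output sequence, so the adjoint (gradient) direction for a
# tracking error e is G.T @ e.
N, Ts, K = 50, 0.01, 2.0                     # horizon, sample time, gain (illustrative)
G = K * Ts * np.tril(np.ones((N, N)))        # lifted integrator-plus-gain model

r = np.sin(np.linspace(0.0, 2.0 * np.pi, N)) # reference trajectory (illustrative)
u = np.zeros(N)                              # input for the first trial

w = 1.0                                      # tuning weight: larger w slows convergence
                                             # but (per the paper's analysis) buys
                                             # robustness; exact form here is assumed
for k in range(200):                         # ILC trials
    e = r - G @ u                            # tracking error on this trial
    g = G.T @ e                              # adjoint / steepest-descent direction
    # Exact line minimisation of the next-trial error, damped by w.
    # With w = 0 this reduces to the undamped steepest-descent step.
    beta = (g @ g) / (np.linalg.norm(G @ g) ** 2 + w * (g @ g))
    u = u + beta * g                         # update stored input for the next trial
```

Because the step direction is the true gradient of the squared tracking error and `beta` is positive, the error norm decreases monotonically from trial to trial for this nominal model; the paper's contribution is showing that a sufficiently large tuning parameter preserves this monotonicity under multiplicative plant uncertainty satisfying a positivity condition.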
Pages: 1089-1113
Ratcliffe, J D
Hatonen, J J
Lewin, P L
Rogers, E
Owens, D H
Ratcliffe, J D, Hatonen, J J, Lewin, P L, Rogers, E and Owens, D H (2008) Robustness analysis of an adjoint optimal iterative learning controller with experimental verification. International Journal of Robust and Nonlinear Control, 18 (10), 1089-1113.
Text: robust_adjoint.pdf - Version of Record (restricted to registered users)
More information
Published date: 1 July 2008
Organisations: EEE, Southampton Wireless Group
Identifiers
Local EPrints ID: 264555
URI: http://eprints.soton.ac.uk/id/eprint/264555
ISSN: 1049-8923
PURE UUID: d5de1501-61a6-4de8-98d4-dcac066630f8
Catalogue record
Date deposited: 21 Sep 2007
Last modified: 15 Mar 2024 02:43
Contributors
Authors: J D Ratcliffe, J J Hatonen, P L Lewin, E Rogers and D H Owens