Model-free predictive optimal iterative learning control using reinforcement learning
Zhang, Yueqing, Chu, Bing and Shu, Zhan
(2022)
Model-free predictive optimal iterative learning control using reinforcement learning.
In 2022 American Control Conference (ACC), vol. 2022-June, pp. 3279-3284.
IEEE.
(doi:10.23919/ACC53348.2022.9867561).
Record type: Conference or Workshop Item (Paper)
Abstract
Iterative learning control (ILC) is a high-performance control design method for systems working in a repetitive manner and has seen many applications in practice. Predictive optimal ILC, a well-known design algorithm, updates the input for the next trial by optimising a performance index defined over (predicted) future trials and has many appealing convergence properties, e.g. a guarantee of monotonic error norm convergence. This is achieved, however, using a system model, which can be difficult or expensive to obtain in practice. To address this problem, this paper develops a model-free predictive optimal ILC algorithm using recent developments in reinforcement learning. The algorithm can learn the predictive optimal ILC controller without using any system model. We provide a rigorous convergence proof of the developed algorithm, which is generally not trivial for reinforcement learning-based control design. A numerical example is presented to demonstrate the effectiveness of the proposed algorithm.
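To fix notation for the setting the abstract describes, the sketch below implements the one-trial-ahead special case of the predictive-optimal cost (i.e. norm-optimal ILC) for a lifted linear plant. Everything here, including the example plant G, the trial length N, and the weights Q and R, is an illustrative assumption and is not taken from the paper; the paper's contribution is precisely to learn this type of controller without access to the model G.

```python
import numpy as np

# Minimal sketch of model-BASED norm-optimal ILC, the one-trial-ahead
# special case of the predictive-optimal performance index mentioned in
# the abstract. All plant data and weights below are assumptions made
# for illustration only.

N = 50  # trial length (samples per trial), assumed

# Lifted plant y = G u, with G built from the impulse response of an
# assumed stable SISO example system.
markov = 0.5 ** np.arange(N)
G = np.array([[markov[i - j] if i >= j else 0.0
               for j in range(N)] for i in range(N)])

r = np.sin(np.linspace(0, 2 * np.pi, N))  # reference tracked each trial
Q = np.eye(N)                             # error weighting
R = 0.1 * np.eye(N)                       # input-change weighting

# Minimising J = ||e_{k+1}||_Q^2 + ||u_{k+1} - u_k||_R^2 over u_{k+1}
# gives the update u_{k+1} = u_k + (G^T Q G + R)^{-1} G^T Q e_k, which
# yields monotonic convergence of the error norm.
L = np.linalg.solve(G.T @ Q @ G + R, G.T @ Q)

u = np.zeros(N)
for trial in range(20):
    e = r - G @ u          # trial error
    u = u + L @ e          # learning update for the next trial
    print(f"trial {trial:2d}: ||e|| = {np.linalg.norm(e):.4f}")
```

Note that the learning gain L above is computed directly from the model G; in the model-free setting of the paper, the corresponding controller is instead learned from trial data via reinforcement learning, with a convergence proof provided.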
Text: Model-free_Predictive_Optimal_Iterative_Learning_Control_using_Reinforcement_Learning - Version of Record (Restricted to Repository staff only)
More information
e-pub ahead of print date: 8 June 2022
Published date: 5 September 2022
Additional Information:
Funding Information:
This work was partially supported by the ZZU-Southampton Collaborative Research Project 16306/01 and the China Scholarship Council (CSC). Yueqing Zhang and Bing Chu are with the Faculty of Engineering and Physical Sciences, University of Southampton, SO17 1BJ Southampton, UK ({yz3n17, b.chu}@soton.ac.uk). Zhan Shu is with the Department of Electrical and Computer Engineering, University of Alberta, Edmonton, T6G 1H9, Canada (zshu1@ualberta.ca).
Publisher Copyright:
© 2022 American Automatic Control Council.
Venue - Dates:
2022 American Control Conference, ACC 2022, Atlanta, United States, 2022-06-08 - 2022-06-10
Identifiers
Local EPrints ID: 471590
URI: http://eprints.soton.ac.uk/id/eprint/471590
ISSN: 0743-1619
PURE UUID: 15c68da4-7df7-4023-8305-5392e79d8c8c
Catalogue record
Date deposited: 14 Nov 2022 17:41
Last modified: 18 Mar 2024 03:21
Contributors
Author:
Yueqing Zhang
Author:
Bing Chu
Author:
Zhan Shu