University of Southampton Institutional Repository

Iterative learning control: from model-based to reinforcement learning

Zhang, Yueqing (2023) Iterative learning control: from model-based to reinforcement learning. University of Southampton, Doctoral Thesis, 136pp.

Record type: Thesis (Doctoral)

Abstract

High-performance control systems that repeat the same task have a wide range of applications. Iterative learning control (ILC) is a control method that enables such systems to achieve high-performance tracking by updating the input using data from previous trials. Depending on whether a system model is required, ILC algorithms can be divided into model-based and model-free algorithms. Model-based ILC uses a model of the system dynamics (which need not be accurate) to update the input, whereas model-free ILC uses only input-output data. However, model-free ILC techniques typically converge more slowly than model-based ones, while still achieving high performance. Notably, the idea of adapting actions based on past information is also at the core of reinforcement learning (RL), and RL provides a number of model-free methods for determining an optimal action. Despite belonging to different subject areas, RL and ILC share many similarities. This motivates the first in-depth study of the relationship between ILC and RL from the viewpoint of high-performance tracking problems, and the proposal of several novel model-free ILC designs. The thesis starts with a quantitative comparison of ILC and RL techniques, in both model-based and model-free settings, from a control perspective. ILC is shown to be more data-efficient when model information is unavailable, which suggests that exploiting the structure of the problem can improve performance and opens the door to RL-based ILC (RILC) algorithms. Policy gradient methods and Q-learning from reinforcement learning are then used to develop new model-free ILC algorithms. The proposed algorithms achieve high-performance tracking without any model information, and under certain conditions their convergence is comparable to that of their model-based counterparts. Moreover, these algorithms are shown to have the potential to handle nonlinear dynamics. Numerical simulations demonstrate the effectiveness of the proposed designs.
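
To make the two update philosophies in the abstract concrete, here is a minimal sketch, not taken from the thesis: the scalar plant, the gains beta, alpha and sigma, the trial length, and the iteration counts are all hypothetical choices for illustration. It contrasts a model-based gradient ILC law, u_{k+1} = u_k + beta * G^T e_k, which needs the lifted plant matrix G, with a model-free update in the spirit of policy-gradient RL that estimates the same gradient purely from measured costs.

```python
import numpy as np

# Minimal sketch contrasting model-based and model-free ILC on a toy plant.
# All quantities (plant parameters, gains, trial counts) are hypothetical
# illustrations, not taken from the thesis.

# Toy SISO plant x(t+1) = a*x(t) + b*u(t), y(t) = c*x(t), zero initial state.
a, b, c = 0.8, 1.0, 1.0
N = 50                                    # samples per trial
r = np.sin(2 * np.pi * np.arange(N) / N)  # reference, the same on every trial

# Lifted description y = G u, with G[i, j] = c * a**(i-j) * b for i >= j.
G = np.zeros((N, N))
for i in range(N):
    for j in range(i + 1):
        G[i, j] = c * a ** (i - j) * b

def trial(u):
    """Run one trial of the repeated task and return the output sequence."""
    return G @ u

# 1) Model-based gradient ILC: u_{k+1} = u_k + beta * G^T * e_k.
#    The update uses the plant model through G.T.
u = np.zeros(N)
beta = 0.05
for k in range(500):
    e = r - trial(u)
    u = u + beta * (G.T @ e)
print("model-based error after 500 trials:", np.linalg.norm(r - trial(u)))

# 2) Model-free update in the spirit of policy-gradient RL: estimate the
#    gradient of J(u) = ||r - G u||^2 with a two-point random-perturbation
#    scheme that only needs measured costs (each step runs two trials).
rng = np.random.default_rng(0)
u = np.zeros(N)
alpha, sigma = 5e-4, 0.1
for k in range(20000):
    d = rng.standard_normal(N)
    J_plus = np.sum((r - trial(u + sigma * d)) ** 2)
    J_minus = np.sum((r - trial(u - sigma * d)) ** 2)
    u = u - alpha * (J_plus - J_minus) / (2 * sigma) * d
print("model-free error after 20000 steps:", np.linalg.norm(r - trial(u)))
```

Even with only two trials per step, the model-free scheme needs orders of magnitude more experimental data than the model-based law to reach a comparable error, which mirrors the data-efficiency comparison made in the abstract.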

Text: Yueqing_Zhang_Thesis - Version of Record. Restricted to Repository staff only until 9 September 2024. Available under the University of Southampton Thesis Licence.
Text: Final-thesis-submission-Examination-Miss-Yueqing-Zhang. Restricted to Repository staff only.

More information

Published date: September 2023

Identifiers

Local EPrints ID: 481975
URI: http://eprints.soton.ac.uk/id/eprint/481975
PURE UUID: 107fb734-abe1-4490-8b5f-506c4403c1cf
ORCID for Yueqing Zhang: orcid.org/0000-0003-2304-6151
ORCID for Bing Chu: orcid.org/0000-0002-2711-8717

Catalogue record

Date deposited: 14 Sep 2023 16:45
Last modified: 18 Mar 2024 03:21

Contributors

Author: Yueqing Zhang
Thesis advisor: Bing Chu

