A Reinforcement Learning Approach to On-line Optimal Control
An, P.E., Aslam-Mir, S., Brown, M. and Harris, C.J. (1994) A Reinforcement Learning Approach to On-line Optimal Control. Int. Conf. on Neural Networks, pp. 2465-2471.
Full text not available from this repository.
This paper presents a hybrid control architecture for on-line optimal control. In this architecture, the control law is dynamically scheduled between a reinforcement controller and a stabilizing controller, so that the closed-loop performance is smoothly transformed from a reactive behavior to a predictive one. The reinforcement controller is based on a modified Q-learning technique and comprises two components: a policy function and a Q function. The policy function is incorporated explicitly to bypass the minimum operator normally required for selecting actions and updating the Q function. The architecture is then applied to a repetitive operation on a second-order linear time-variant plant with a nonlinear control structure. In this operation, the reinforcement signals are based on set-point errors, and the reinforcement controller is generalized using second-order B-spline networks. This example illustrates how, for a non-optimally tuned stabilizing controller, the closed-loop performance can be bootstrapped through reinforcement learning. Results show that the set-point performance of the hybrid controller improves over that of the fixed-structure controller by discovering better control strategies which compensate for the non-optimal gains and the nonlinear control structure.
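Since the full text is unavailable here, the core idea of the modified Q-learning technique can only be sketched under assumptions. The snippet below illustrates, on a hypothetical toy set-point task, how maintaining an explicit policy function lets the Q update bootstrap from Q(s', pi(s')) rather than applying the min/max operator over actions; the task, state space, and all names are illustrative, not the paper's actual formulation.

```python
import random

random.seed(0)

# Toy 1-D set-point task (an assumption for illustration): the agent moves
# along states 0..5 and is rewarded for reaching the set point at state 5.
ACTIONS = [-1, +1]          # move left / move right
GOAL, N_STATES = 5, 6
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
policy = {s: random.choice(ACTIONS) for s in range(N_STATES)}

def step(s, a):
    """One plant step: clip to the track, reward from set-point error."""
    s2 = max(0, min(N_STATES - 1, s + a))
    r = 1.0 if s2 == GOAL else -0.1
    return s2, r

for episode in range(200):
    s = 0
    for _ in range(20):
        # Explore around the stored policy (epsilon-greedy).
        a = policy[s] if random.random() > EPS else random.choice(ACTIONS)
        s2, r = step(s, a)
        # The explicit policy function supplies the bootstrap action,
        # replacing the min/max-over-actions operator of plain Q-learning.
        target = r + GAMMA * Q[(s2, policy[s2])]
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        # Greedy policy improvement for the visited state only.
        policy[s] = max(ACTIONS, key=lambda act: Q[(s, act)])
        s = s2
        if s == GOAL:
            break

print([policy[s] for s in range(GOAL)])
```

In the paper this controller is further generalized with B-spline networks over a continuous state space and blended with a stabilizing controller; the tabular version above only shows the role of the explicit policy in the update.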
|Item Type:||Conference or Workshop Item (UNSPECIFIED)|
|Additional Information:||Organisation: IEEE Address: Orlando, Fl|
|Divisions:||Faculty of Physical Sciences and Engineering > Electronics and Computer Science > Southampton Wireless Group|
|Date Deposited:||04 May 1999|
|Last Modified:||27 Mar 2014 19:51|
|ISI Citation Count:||0|