An automated signalized junction controller that learns strategies by temporal difference reinforcement learning
Box, S. and Waterson, B.
(2013)
An automated signalized junction controller that learns strategies by temporal difference reinforcement learning.
Engineering Applications of Artificial Intelligence, 26 (1), 652-659.
(doi:10.1016/j.engappai.2012.02.013).
Abstract
This paper shows how temporal difference learning can be used to build a signalized junction controller that will learn its own strategies through experience. Simulation tests detailed here show that the learned strategies can have high performance. This work builds upon previous work in which a neural-network-based junction controller that can learn strategies from a human expert was developed. In the simulations presented, vehicles are assumed to be broadcasting their position over WiFi, giving the junction controller rich information. The vehicles' position data are pre-processed to describe a simplified state. The state-space is classified into regions associated with junction control decisions using a neural network. This classification is the strategy and is parameterized by the weights of the neural network. The weights can be learned either through supervised learning with a human trainer or reinforcement learning by temporal difference (TD). Tests on a model of an isolated T junction show an average delay of 14.12s and 14.36s respectively for the human-trained and TD-trained networks. Tests on a model of a pair of closely spaced junctions show 17.44s and 20.82s respectively. Both methods of training produced strategies that were approximately equivalent in their equitable treatment of vehicles, defined here as the variance over the journey time distributions.
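The TD update at the heart of the approach described above can be illustrated with a minimal sketch. Everything here is an illustrative assumption rather than the authors' implementation: the junction state is reduced to a small feature vector (e.g. queue lengths per arm), a linear value estimate stands in for the paper's neural network, and reward is taken as negative vehicle delay.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions (not the paper's actual setup):
# state = small feature vector, e.g. queue length on each arm of a T junction;
# V(s) = w . s is a linear stand-in for the paper's neural network.
N_FEATURES = 3   # one feature per junction arm (assumed)
ALPHA = 0.05     # learning rate
GAMMA = 0.95     # discount factor

w = np.zeros(N_FEATURES)

def value(state, w):
    """Linear value estimate V(s) = w . s."""
    return w @ state

def td0_update(w, state, reward, next_state):
    """One TD(0) step: w += alpha * (r + gamma*V(s') - V(s)) * grad_w V(s)."""
    td_error = reward + GAMMA * value(next_state, w) - value(state, w)
    return w + ALPHA * td_error * state  # grad_w V(s) = s for a linear model

# Drive the update with synthetic transitions: queues shrink over time,
# and reward is the negative total queue length (a stand-in for delay).
for _ in range(2000):
    s = rng.random(N_FEATURES)
    s_next = 0.9 * s          # queues shrink: a better state
    r = -s.sum()              # negative delay as reward
    w = td0_update(w, s, r, s_next)

print(w)  # learned weights: longer queues map to lower value
```

The learned weights end up negative, i.e. states with longer queues are valued lower, which is the ordering a delay-minimizing signal controller needs when choosing among control decisions.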
Text: tdpaper2012 (1).pdf (Author's Original)
More information
e-pub ahead of print date: 17 March 2012
Published date: January 2013
Organisations:
Transportation Group
Identifiers
Local EPrints ID: 336298
URI: http://eprints.soton.ac.uk/id/eprint/336298
ISSN: 0952-1976
PURE UUID: a98f336e-921a-4d51-b130-196455eb805d
Catalogue record
Date deposited: 21 Mar 2012 11:35
Last modified: 15 Mar 2024 02:58
Contributors
Author:
S. Box
Author:
B. Waterson