Balancing EV demand at charging stations using multi-agent reinforcement learning
Coulson, Rory
Shafipour, Elnaz
Stein, Sebastian
Buermann, Jan
Sharkh, Suleiman
Cruden, Andrew
April 2024
Coulson, Rory, Shafipour, Elnaz, Stein, Sebastian, Buermann, Jan, Sharkh, Suleiman and Cruden, Andrew
(2024)
Balancing EV demand at charging stations using multi-agent reinforcement learning.
The 37th International Electric Vehicle Symposium & Exhibition: Electric Waves to Future Mobility, COEX, Seoul, Republic of Korea, 23-26 Apr 2024.
Record type: Conference or Workshop Item (Paper)
Abstract
This paper proposes a multi-agent reinforcement learning (MARL) demand-balancing system that optimises the routing of electric vehicles (EVs) to charging stations in order to reduce queuing time. Agents are trained and tested in simulations built on the SUMO traffic simulator, and experiments with Q-learning, PPO and DQN are conducted to determine a suitable algorithm. The approaches were run on multiple test road networks and on a real-world Berlin network with ten charging stations to validate the findings. Varied learning strategies are also explored to determine appropriate behaviour patterns between the agents, including competitive learning, cooperative learning, and a mix of the two. The most promising implementation, a cooperative DQN applied to the Berlin network, achieved an 88.09% reduction in mean wait time compared with a greedy approach. These findings demonstrate the potential practical benefits of applying MARL systems to real-world environments.
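To illustrate the kind of learning the abstract describes, the sketch below shows a single tabular Q-learning agent choosing among charging stations, with reward equal to the negative observed queue length. This is a minimal, hypothetical illustration, not the authors' implementation: the station count, mean queue lengths, and hyperparameters are all assumptions, and the full paper's multi-agent DQN setting is reduced here to a single-state, single-agent case.

```python
import random

random.seed(0)

N_STATIONS = 3
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # assumed hyperparameters

# Single decision state; actions are station indices.
q_values = [0.0] * N_STATIONS

# Hypothetical mean queue lengths (cars waiting) per station.
mean_queue = [8, 3, 5]

def observed_wait(station):
    """Noisy queue length observed on arrival at the chosen station."""
    return max(0, mean_queue[station] + random.randint(-2, 2))

for episode in range(2000):
    # Epsilon-greedy selection over charging stations.
    if random.random() < EPSILON:
        action = random.randrange(N_STATIONS)
    else:
        action = max(range(N_STATIONS), key=lambda a: q_values[a])

    reward = -observed_wait(action)  # shorter queue => higher reward

    # One-step Q-learning update; with a single state, the bootstrapped
    # next-state value is just the best current Q-value.
    best_next = max(q_values)
    q_values[action] += ALPHA * (reward + GAMMA * best_next - q_values[action])

best_station = max(range(N_STATIONS), key=lambda a: q_values[a])
print("Learned Q-values:", [round(q, 2) for q in q_values])
print("Preferred station:", best_station)
```

Under these assumed queue lengths the agent learns to prefer the least congested station (index 1); the paper's actual system extends this idea with multiple interacting agents, DQN function approximation, and SUMO-simulated traffic.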
More information
Published date: April 2024
Identifiers
Local EPrints ID: 487939
URI: http://eprints.soton.ac.uk/id/eprint/487939
PURE UUID: e25e455d-164b-423e-8605-f03411caae66
Catalogue record
Date deposited: 11 Mar 2024 17:37
Last modified: 01 Oct 2024 01:46