Accelerating deep reinforcement learning with the aid of partial model: Energy-efficient predictive video streaming
Liu, Dong, Zhao, Jianyu, Yang, Chenyang and Hanzo, Lajos
(2021)
Accelerating deep reinforcement learning with the aid of partial model: Energy-efficient predictive video streaming.
IEEE Transactions on Wireless Communications, 20 (6), 3734-3748, [9339899].
(doi:10.1109/TWC.2021.3053319).
Abstract
Predictive power allocation is conceived for energy-efficient video streaming over mobile networks using deep reinforcement learning. The goal is to minimize the accumulated energy consumption of each base station over a complete video streaming session under the constraint of avoiding video playback interruptions. To handle the continuous state and action spaces, we resort to the deep deterministic policy gradient (DDPG) algorithm for solving the formulated problem. In contrast to previous predictive power allocation policies that first predict future information from historical data and then optimize the power allocation based on the predicted information, the proposed policy operates in an online, end-to-end manner. By judiciously designing the action and state to depend only on the slowly varying average channel gains, we reduce the signaling overhead between the edge server and the base stations and make it easier to learn a good policy. To further avoid playback interruptions throughout the learning process and improve the convergence speed, we exploit the partially known model of the system dynamics by integrating the concepts of the safety layer, post-decision state, and virtual experiences into the basic DDPG algorithm. Our simulation results show that the proposed policies converge to the optimal policy derived under perfect large-scale channel prediction, and that they outperform the first-predict-then-optimize policy in the presence of prediction errors. By harnessing the partially known model, the convergence speed can be dramatically improved.
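For intuition only, the safety-layer idea mentioned above can be sketched as projecting the DDPG actor's transmit-power action onto the set of powers that keep the playback buffer from emptying, using the known part of the system dynamics (the buffer evolution). The sketch below is a hypothetical illustration, not the paper's actual formulation: the constants, the log-rate model, and all function names are assumptions introduced here.

```python
# Illustrative sketch of a safety layer for stall-free power allocation.
# Assumed (hypothetical) model: in each slot of duration TAU, a BS with
# average channel gain g and power p delivers TAU*W*log2(1 + g*p/NOISE) bits,
# and the playback buffer drains at V_PLAY bit/s.
import numpy as np

TAU = 1.0          # slot duration [s] (assumed)
W = 1e6            # bandwidth [Hz] (assumed)
NOISE = 1e-9       # noise power [W] (assumed)
P_MAX = 10.0       # maximum transmit power [W] (assumed)
V_PLAY = 2e6       # video playback rate [bit/s] (assumed)

def delivered_bits(p, g):
    """Bits delivered in one slot at power p and average channel gain g."""
    return TAU * W * np.log2(1.0 + g * p / NOISE)

def safety_layer(p_actor, g, buffer_bits):
    """Project the actor's action onto the stall-free set implied by the
    known buffer dynamics: b' = b + delivered - V_PLAY*TAU >= 0."""
    deficit = V_PLAY * TAU - buffer_bits      # bits still needed this slot
    if deficit <= 0.0:
        p_min = 0.0                           # buffer alone covers playback
    else:
        # Invert the rate model: smallest power delivering `deficit` bits.
        p_min = (2.0 ** (deficit / (TAU * W)) - 1.0) * NOISE / g
    return float(np.clip(max(p_actor, p_min), 0.0, P_MAX))

# Toy rollout: a stand-in for the exploratory DDPG actor proposes powers;
# the safety layer keeps the buffer non-negative throughout learning.
rng = np.random.default_rng(0)
b = 0.0
for t in range(5):
    g = 10 ** rng.uniform(-8, -6)             # random average channel gain
    p_raw = rng.uniform(0.0, P_MAX)           # exploratory action
    p = safety_layer(p_raw, g, b)
    b = max(b + delivered_bits(p, g) - V_PLAY * TAU, 0.0)
    print(f"slot {t}: p_raw={p_raw:.2f} W -> p_safe={p:.2f} W, buffer={b:.0f} bits")
```

Here the exploratory action p_raw is replaced by the nearest stall-free power, so playback interruptions are avoided even while the policy is still learning; this captures the role the abstract assigns to the safety layer, without reproducing the paper's full state and action design.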
Text: accepted_TWC2021 - Accepted Manuscript
More information
Accepted/In Press date: 16 January 2021
e-pub ahead of print date: 28 January 2021
Published date: 28 January 2021
Additional Information:
Funding Information:
Manuscript received November 5, 2020; revised January 15, 2021; accepted January 15, 2021. Date of publication January 28, 2021; date of current version June 10, 2021. This work was supported by the National Natural Science Foundation of China (NSFC) under Grant 61731002. The work of Lajos Hanzo was supported in part by the Engineering and Physical Sciences Research Council Projects under Grant EP/N004558/1, Grant EP/P034284/1, and Grant EP/P003990/1 (COALESCE), in part by the Royal Society’s Global Challenges Research Fund Grant, and in part by the European Research Council’s Advanced Fellow Grant QuantCom. This article was presented in part at the IEEE Globecom 2019 [1]. The associate editor coordinating the review of this article and approving it for publication was D. Niyato. (Corresponding author: Lajos Hanzo.) Dong Liu and Lajos Hanzo are with the School of Electronics and Computer Science, University of Southampton, Southampton SO17 1BJ, U.K. (e-mail: d.liu@soton.ac.uk; lh@ecs.soton.ac.uk).
Publisher Copyright:
© 2021 IEEE.
Keywords:
constraint, convergence speed, deep reinforcement learning, energy efficiency, video streaming
Identifiers
Local EPrints ID: 446613
URI: http://eprints.soton.ac.uk/id/eprint/446613
ISSN: 1536-1276
PURE UUID: f6d19ae3-55e6-4eb3-8cfb-b0ada6ebe0c9
Catalogue record
Date deposited: 16 Feb 2021 17:31
Last modified: 18 Mar 2024 02:36