University of Southampton Institutional Repository

Accelerating deep reinforcement learning with the aid of partial model: Energy-efficient predictive video streaming

Liu, Dong
889643f2-afeb-4479-bd41-3ccedd53d89d
Zhao, Jianyu
81697e20-3ea5-4990-a598-845ecce83137
Yang, Chenyang
d42a57f7-0b91-408e-97dc-e7ce7b92d000
Hanzo, Lajos
66e7266f-3066-4fc0-8391-e000acce71a1

Liu, Dong, Zhao, Jianyu, Yang, Chenyang and Hanzo, Lajos (2021) Accelerating deep reinforcement learning with the aid of partial model: Energy-efficient predictive video streaming. IEEE Transactions on Wireless Communications. (doi:10.1109/TWC.2021.3053319).

Record type: Article

Abstract

Predictive power allocation is conceived for energy-efficient video streaming over mobile networks using deep reinforcement learning. The goal is to minimize the accumulated energy consumption of each base station over a complete video streaming session, subject to the constraint of avoiding video playback interruptions. To handle the continuous state and action spaces, we resort to the deep deterministic policy gradient (DDPG) algorithm for solving the formulated problem. In contrast to previous predictive power allocation policies, which first predict future information from historical data and then optimize the power allocation based on the predicted information, the proposed policy operates in an online, end-to-end manner. By judiciously designing the action and state to depend only on slowly varying average channel gains, we reduce the signaling overhead between the edge server and the base stations and make it easier to learn a good policy. To avoid playback interruptions throughout the learning process and to improve the convergence speed, we exploit the partially known model of the system dynamics by integrating the concepts of a safety layer, post-decision state, and virtual experiences into the basic DDPG algorithm. Our simulation results show that the proposed policies converge to the optimal policy derived under perfect large-scale channel prediction and outperform the first-predict-then-optimize policy in the presence of prediction errors. By harnessing the partially known model, the convergence speed can be dramatically improved.
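The safety-layer idea mentioned in the abstract can be illustrated with a minimal sketch: the actor's raw action is projected onto the set of actions that keep the playback buffer from underflowing during the next decision interval. The function name, the linearized buffer dynamics, and all parameters below are illustrative assumptions, not the paper's actual formulation, which solves a constrained projection derived from the partially known system model.

```python
import numpy as np

def safety_layer(action, buffer_level, playback_rate, dt=1.0,
                 a_min=0.0, a_max=1.0):
    """Project a raw DDPG action onto a (hypothetical) safe set.

    Illustrative assumptions: the action is a normalized transmit-resource
    fraction, delivered video content grows linearly with the action over
    the interval dt, and the playback buffer must stay non-negative:
        buffer_level + action * dt - playback_rate * dt >= 0.
    """
    # Smallest action that keeps the buffer non-negative after dt seconds.
    a_safe = max((playback_rate * dt - buffer_level) / dt, a_min)
    # Clip the raw actor output into [a_safe, a_max]; if the raw action is
    # already safe it passes through unchanged.
    return float(np.clip(action, a_safe, a_max))

# Nearly empty buffer: the raw action 0.1 is unsafe and gets raised to 0.5.
print(safety_layer(0.1, buffer_level=0.0, playback_rate=0.5))  # 0.5
# Comfortable buffer: the raw action 0.9 is already safe and is unchanged.
print(safety_layer(0.9, buffer_level=2.0, playback_rate=0.5))  # 0.9
```

Because the projection is applied both during learning and at deployment, exploration noise can never drive the system into a playback stall, which is the role the safety layer plays in the constrained DDPG training described above.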

Text
accepted_TWC2021 - Accepted Manuscript

More information

Accepted/In Press date: 16 January 2021
e-pub ahead of print date: 28 January 2021
Keywords: deep reinforcement learning, convergence speed, constraint, energy efficiency, video streaming

Identifiers

Local EPrints ID: 446613
URI: http://eprints.soton.ac.uk/id/eprint/446613
ISSN: 1536-1276
PURE UUID: f6d19ae3-55e6-4eb3-8cfb-b0ada6ebe0c9
ORCID for Dong Liu: orcid.org/0000-0002-0619-1480
ORCID for Lajos Hanzo: orcid.org/0000-0002-2636-5214

Catalogue record

Date deposited: 16 Feb 2021 17:31
Last modified: 18 Feb 2021 17:40


Contributors

Author: Dong Liu
Author: Jianyu Zhao
Author: Chenyang Yang
Author: Lajos Hanzo

