Client-master multiagent deep reinforcement learning for task offloading in mobile edge computing
Gebrekidan, Tesfay Zemuy
Stein, Sebastian
Norman, Tim
Gebrekidan, Tesfay Zemuy, Stein, Sebastian and Norman, Tim
(2025)
Client-master multiagent deep reinforcement learning for task offloading in mobile edge computing.
ACM Transactions on Autonomous and Adaptive Systems.
(doi:10.1145/3768579).
(In Press)
Abstract
As mobile applications grow in complexity, there is an increasing need to perform computationally intensive tasks. However, user devices (UDs), such as tablets and smartphones, have limited capacity to carry out the required computations. Task offloading in mobile edge computing (MEC) is a strategy that meets this demand by distributing tasks between UDs and servers. Deep reinforcement learning (DRL) is a promising solution for this strategy because it can adapt to dynamic changes and minimize online computational complexity. However, various types of continuous and discrete resource constraints on UDs and MEC servers pose challenges to the design of an efficient DRL algorithm. Existing DRL-based task-offloading algorithms focus on the constraints of the UDs, assuming that sufficient resources are available on the server. Moreover, existing multiagent DRL (MADRL)-based task-offloading algorithms use homogeneous agents and handle homogeneous constraints only as a penalty in their reward functions. We propose a novel Client-Master MADRL (CMMADRL) algorithm for task offloading in MEC that uses client agents at the UDs to decide on their resource requirements and a master agent at the server to make a combinatorial action selection based on the decisions of the UDs. CMMADRL is shown to achieve up to a 59% improvement in performance over existing benchmark and heuristic algorithms.
Text: CMMADRL_TAAS_24_0278_R1 - Accepted Manuscript
More information
Accepted/In Press date: 12 September 2025
Identifiers
Local EPrints ID: 506187
URI: http://eprints.soton.ac.uk/id/eprint/506187
ISSN: 1556-4665
PURE UUID: 55271700-216a-402e-ab77-9335223a6787
Catalogue record
Date deposited: 29 Oct 2025 17:45
Last modified: 30 Oct 2025 02:46
Contributors
Author:
Tesfay Zemuy Gebrekidan
Author:
Sebastian Stein
Author:
Tim Norman