3D spatial information compression based deep reinforcement learning for UAV path planning in unknown environments
In the past decade, unmanned aerial vehicle (UAV) technology has developed rapidly, and the flexibility and low cost of UAVs make them attractive in many applications. Path planning is crucial in most of these applications, and path planning for UAVs in unknown, complex 3D environments has become an urgent challenge. In this paper, we model the unknown 3D environment as a partially observable Markov decision process (POMDP) and derive the Bellman equation without introducing the belief state (BS) distribution. More explicitly, we use an independent emulator to model the environmental observation history and obtain an approximate BS distribution of the state through Monte Carlo simulation in the emulator, which eliminates the need for explicit BS calculation and thereby improves training efficiency and path planning performance. Additionally, we propose a three-dimensional spatial information compression (3DSIC) algorithm for continuous POMDP environments that compresses 3D environmental information into 2D, greatly reducing the search space of the path planning algorithms. The simulation results show that our proposed 3D spatial information compression based deep deterministic policy gradient (3DSIC-DDPG) algorithm improves training efficiency by 95.9% compared to the traditional DDPG algorithm in unknown 3D environments. Additionally, combining 3DSIC with the fast recurrent stochastic value gradient (FRSVG) algorithm, which can be considered the state-of-the-art planning algorithm for UAVs, yields 95% higher efficiency than FRSVG without 3DSIC in unknown environments.
Wang, Zhipeng
Ng, Soon Xin
El-Hajjar, Mohammed
Wang, Zhipeng, Ng, Soon Xin and El-Hajjar, Mohammed
(2025)
3D spatial information compression based deep reinforcement learning for UAV path planning in unknown environments.
IEEE Open Journal of Vehicular Technology, 6, 2662-2676.
(doi:10.1109/OJVT.2025.3611507).
Text: paper - Accepted Manuscript
Text: 3D_Spatial_Information_Compression_Based_Deep_Reinforcement_Learning_for_UAV_Path_Planning_in_Unknown_Environments - Version of Record
More information
Accepted/In Press date: 14 September 2025
Published date: 18 September 2025
Identifiers
Local EPrints ID: 506086
URI: http://eprints.soton.ac.uk/id/eprint/506086
ISSN: 2644-1330
PURE UUID: e5a523d8-39a8-4b3f-96ed-60edbe42d297
Catalogue record
Date deposited: 28 Oct 2025 18:21
Last modified: 29 Oct 2025 03:02