University of Southampton Institutional Repository

Hybrid reinforcement learning for STAR-RISs: a coupled phase-shift model based beamformer

Zhong, Ruikang
c3d6c901-2c48-499e-aa13-b0e3c5b079f6
Liu, Yuanwei
98a4d25f-4867-4d8b-9ae0-940d3009e6e1
Mu, Xidong
ec46d072-3870-47df-92f1-367d874034b4
Chen, Yue
5f67ded3-ff93-4bf9-9f3b-17f68457de81
Wang, Xianbin
3997525e-7cd8-4964-8b17-527894204ff1
Hanzo, Lajos
66e7266f-3066-4fc0-8391-e000acce71a1

Zhong, Ruikang, Liu, Yuanwei, Mu, Xidong, Chen, Yue, Wang, Xianbin and Hanzo, Lajos (2022) Hybrid reinforcement learning for STAR-RISs: a coupled phase-shift model based beamformer. IEEE Journal on Selected Areas in Communications. (In Press)

Record type: Article

Abstract

A simultaneous transmitting and reflecting reconfigurable intelligent surface (STAR-RIS) assisted multi-user downlink multiple-input single-output (MISO) communication system is investigated. In contrast to the existing ideal STAR-RIS model, which assumes independent transmission and reflection phase-shift control, a practical coupled phase-shift model is considered. A joint active and passive beamforming optimization problem is then formulated for minimizing the long-term transmission power consumption, subject to the coupled phase-shift constraint and a minimum data rate constraint. Despite the coupled nature of the phase-shift model, the formulated problem can be solved by invoking a hybrid continuous and discrete phase-shift control policy. Inspired by this observation, a pair of hybrid reinforcement learning (RL) algorithms is proposed, namely the hybrid deep deterministic policy gradient (hybrid DDPG) algorithm and the joint DDPG and deep Q-network (DDPG-DQN) based algorithm. The hybrid DDPG algorithm controls the associated high-dimensional continuous and discrete actions by relying on a hybrid action mapping. By contrast, the joint DDPG-DQN algorithm constructs two Markov decision processes (MDPs) relying on an inner and an outer environment, thereby amalgamating the two agents to accomplish a joint hybrid control. Simulation results demonstrate that the STAR-RIS outperforms conventional RISs in terms of energy consumption. Furthermore, both proposed algorithms outperform the baseline DDPG algorithm, and the joint DDPG-DQN algorithm achieves superior performance, albeit at an increased computational complexity.
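
To make the hybrid action mapping mentioned in the abstract more concrete, the following Python sketch illustrates one plausible way a single continuous actor output (as produced by a DDPG-style agent) could be split into continuous transmission phase shifts and discrete reflection offsets that satisfy a coupled phase-shift constraint. This is an illustrative assumption, not the authors' implementation: the element count M, the offset set {pi/2, 3*pi/2}, and the function hybrid_action_mapping are all hypothetical.

# Minimal sketch (assumed, not the paper's code): a hybrid action mapping that
# splits one continuous actor output into continuous and discretized controls.
import numpy as np

M = 8                                                    # STAR-RIS elements (assumed)
DISCRETE_PHASES = np.array([0.5 * np.pi, 1.5 * np.pi])   # assumed coupled-offset choices

def hybrid_action_mapping(raw_action: np.ndarray):
    """Map a length-2M continuous actor output to:
       - continuous transmission phase shifts in [0, 2*pi)
       - discrete reflection offsets drawn from DISCRETE_PHASES."""
    assert raw_action.shape == (2 * M,)
    cont_part, disc_part = raw_action[:M], raw_action[M:]

    # Continuous branch: squash raw values into [0, 2*pi).
    theta_t = (np.tanh(cont_part) + 1.0) * np.pi

    # Discrete branch: map each raw value to one of the allowed offsets.
    idx = (np.tanh(disc_part) > 0.0).astype(int)
    offset = DISCRETE_PHASES[idx]

    # Coupled model (assumed form): reflection phase = transmission phase + offset.
    theta_r = np.mod(theta_t + offset, 2.0 * np.pi)
    return theta_t, theta_r

# Usage: feed the raw output of a DDPG actor network through the mapping.
rng = np.random.default_rng(0)
theta_t, theta_r = hybrid_action_mapping(rng.standard_normal(2 * M))
print(theta_t.round(2), theta_r.round(2))

The design intent of such a mapping is that one continuous policy can emit all controls, while the quantization step keeps the discrete reflection offsets on their admissible set; the joint DDPG-DQN alternative described in the abstract instead assigns the discrete choices to a separate DQN agent operating on its own MDP.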

Text
Hybrid Reinforcement Learning for STAR-RISs A Coupled Phase-Shift Model Based Beamformer - Accepted Manuscript
Restricted to Repository staff only until 15 August 2024.

More information

Accepted/In Press date: 15 April 2022

Identifiers

Local EPrints ID: 468518
URI: http://eprints.soton.ac.uk/id/eprint/468518
ISSN: 1558-0008
PURE UUID: 6acf077d-cbf5-4e86-863f-e63e6f1d60b2
ORCID for Lajos Hanzo: orcid.org/0000-0002-2636-5214

Catalogue record

Date deposited: 17 Aug 2022 16:31
Last modified: 17 Mar 2024 02:35


Contributors

Author: Ruikang Zhong
Author: Yuanwei Liu
Author: Xidong Mu
Author: Yue Chen
Author: Xianbin Wang
Author: Lajos Hanzo



