Explainable and robust artificial intelligence for trustworthy resource management in 6G networks
Khan, Nasir, Coleri, Sinem, Abdallah, Asmaa, Celik, Abdulkadir and Eltawil, Ahmed M. (2024) Explainable and robust artificial intelligence for trustworthy resource management in 6G networks. IEEE Communications Magazine, 62 (4), 50-56. (doi:10.1109/MCOM.001.2300172)
Abstract
Artificial intelligence (AI) is expected to be an integral part of radio resource management (RRM) in sixth-generation (6G) networks. However, complex deep learning (DL) models are opaque, lacking both explainability and robustness, which poses a significant hindrance to their adoption in practice. Furthermore, wireless communication experts and stakeholders, concerned about potential vulnerabilities such as data privacy issues or biased decision-making, are reluctant to fully embrace these AI technologies. To this end, this article sheds light on the importance and means of achieving explainability and robustness toward trustworthy AI-based RRM solutions for 6G networks. We outline a range of explainable and robust AI techniques, covering feature visualization and attribution, model simplification and interpretability, model compression, and sensitivity analysis, and explain how they can be leveraged for RRM. Two case studies demonstrate the application of explainability and robustness in wireless network design. The first exploits explainable AI methods to simplify the model by reducing the input size of deep reinforcement learning agents, enabling scalable RRM of vehicular networks. The second highlights the importance of providing interpretable explanations of credible and confident decisions in a DL-based beam alignment solution for massive multiple-input multiple-output systems. Analysis of these cases yields a generic explainability pipeline and a credibility assessment tool for checking model robustness, both applicable to any pre-trained DL-based RRM method. Overall, the proposed framework offers a promising avenue for improving the practicality and trustworthiness of AI-empowered RRM.
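To make the two case studies concrete, the following is a minimal, hypothetical sketch (not the authors' implementation): gradient-based feature attribution ranks and prunes the inputs of a pre-trained policy network, and a simple predictive-entropy check flags low-credibility decisions. The network architecture, dimensions, and entropy threshold are illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical pre-trained DRL policy: maps N_IN state features to N_OUT actions.
N_IN, N_OUT = 32, 8
policy = nn.Sequential(nn.Linear(N_IN, 64), nn.ReLU(), nn.Linear(64, N_OUT))

# Feature attribution via input-gradient saliency: average |d(max logit)/d(input)|
# over a batch of observed states gives one importance score per input feature.
states = torch.randn(256, N_IN, requires_grad=True)
policy(states).max(dim=1).values.sum().backward()
importance = states.grad.abs().mean(dim=0)

# Model simplification: retain only the top-k most influential inputs.
k = 16
keep = sorted(importance.topk(k).indices.tolist())
print("Retained feature indices:", keep)

# Credibility check: flag decisions whose predictive entropy exceeds an
# assumed threshold (here, half the maximum possible entropy, log(N_OUT)).
with torch.no_grad():
    probs = torch.softmax(policy(states.detach()), dim=1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
credible = entropy < 0.5 * torch.log(torch.tensor(float(N_OUT)))
print(f"Fraction of credible decisions: {credible.float().mean().item():.0%}")
```

In practice, the retained feature indices would define a smaller observation space on which the agent is retrained, and the credibility threshold would be calibrated on held-out data rather than fixed a priori.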
More information
Published date: 1 April 2024
Additional Information:
Publisher Copyright:
© 2024 IEEE.
Identifiers
Local EPrints ID: 504504
URI: http://eprints.soton.ac.uk/id/eprint/504504
ISSN: 0163-6804
PURE UUID: b6876b99-05f4-4e00-8268-a53dc164246e
Catalogue record
Date deposited: 10 Sep 2025 15:43
Last modified: 11 Sep 2025 03:49
Contributors
Author: Nasir Khan
Author: Sinem Coleri
Author: Asmaa Abdallah
Author: Abdulkadir Celik
Author: Ahmed M. Eltawil