University of Southampton Institutional Repository

Trust in AI systems for project and risk management: evaluating the role of transparency, reputation, technical competence, and reliability


Hsu, Ming-Wei, Dacre, Nicholas and Senyo, P.K. (2023) Trust in AI systems for project and risk management: evaluating the role of transparency, reputation, technical competence, and reliability. In Operational Research Society.

Record type: Conference or Workshop Item (Paper)

Abstract

Machine learning algorithms are often perceived as opaque, undermining user trust in AI systems. Explainable AI (XAI) seeks to mitigate this by enhancing transparency through clear explanations of algorithmic predictions. Furthermore, the reliability of these systems can be improved by integrating predictions from multiple algorithms. This study developed a hybrid project prediction system that merges XAI with several machine learning algorithms, thus fostering trust among project professionals and decision-makers. The theoretical model, grounded in literature on technology adoption, inter-organisational relationships, and XAI, examines four key trust factors: transparency, reputation, technical competence, and reliability. Employing a survey experiment and structural equation modelling, the research provides a nuanced understanding of how these factors influence trust in AI applications within project and risk management, contributing significantly to both academic literature and practical implementations.
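To make the hybrid approach concrete, below is a minimal sketch of the two ideas described in the abstract: combining predictions from several machine learning algorithms for reliability, and attaching a transparency layer that reports which inputs drive a prediction. The dataset, the chosen models, and the use of permutation importance as the explanation method are illustrative assumptions, not details of the authors' actual system.

# A minimal sketch, assuming synthetic data in place of real project records.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Stand-in for project outcome data (features -> success/failure label).
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Hybrid prediction: soft-vote across multiple algorithms to stabilise output.
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(random_state=0)),
    ],
    voting="soft",
)
ensemble.fit(X_train, y_train)

# Transparency layer: rank features by their effect on held-out accuracy.
result = permutation_importance(ensemble, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")

Soft voting averages the class probabilities of the constituent models, which is one common way to integrate predictions from multiple algorithms; the permutation-importance report stands in for the richer per-prediction explanations an XAI method would provide.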

This record has no associated files available for download.

More information

Published date: 12 September 2023
Additional Information: This study is particularly relevant for project professionals and researchers interested in integrating AI within project and risk management frameworks. It sheds light on how trust in autonomous systems can be enhanced through factors such as transparency, technical competence, reputation, and reliability. Employing a hybrid system that combines Explainable AI (XAI) with multiple machine learning algorithms, the research offers a theoretical model for evaluating these essential trust factors. These insights support informed decision-making and deepen understanding of the strategic application of AI technologies in project management.
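As an illustration of how the four trust factors might be related to trust with structural equation modelling, the sketch below specifies a simple path model using the semopy library. The survey data are simulated and the path weights are arbitrary assumptions; the authors' actual measurement model and estimation choices are not described in this record.

# A hedged sketch, not the authors' analysis: simulated composite scores
# stand in for real survey responses.
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "Transparency": rng.normal(size=n),
    "Reputation": rng.normal(size=n),
    "Competence": rng.normal(size=n),
    "Reliability": rng.normal(size=n),
})
# Simulate trust as a weighted combination plus noise (weights are assumed).
df["Trust"] = (0.4 * df["Transparency"] + 0.2 * df["Reputation"]
               + 0.2 * df["Competence"] + 0.3 * df["Reliability"]
               + rng.normal(scale=0.5, size=n))

# Regress trust on the four factors using lavaan-style model syntax.
model = semopy.Model("Trust ~ Transparency + Reputation + Competence + Reliability")
model.fit(df)
print(model.inspect())  # path estimates for each trust factor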
Keywords: Trustworthiness, AI, Explainable AI, Project Management, Machine Learning, Transparency

Identifiers

Local EPrints ID: 492469
URI: http://eprints.soton.ac.uk/id/eprint/492469
PURE UUID: 371ea666-9a41-4aac-97e9-d7f383a64c8e
ORCID for Nicholas Dacre: orcid.org/0000-0002-9667-9331

Catalogue record

Date deposited: 29 Jul 2024 16:57
Last modified: 08 Nov 2024 02:56

Contributors

Author: Ming-Wei Hsu
Author: Nicholas Dacre (ORCID: 0000-0002-9667-9331)
Author: P.K. Senyo
