University of Southampton Institutional Repository

Explainable recommendation; when design meets trust calibration


Naiseh, Mohammad, Al-Thani, Dena, Jiang, Nan and Ali, Raian (2021) Explainable recommendation; when design meets trust calibration. World Wide Web, 24 (5), 1857-1884. (doi:10.1007/s11280-021-00916-0).

Record type: Article

Abstract

Human-AI collaborative decision-making tools are increasingly being applied in critical domains such as healthcare. However, these tools are often seen as closed and opaque to human decision-makers. An essential requirement for their success is the ability to provide explanations about themselves that are understandable and meaningful to the users. While explanations generally have positive connotations, studies have shown that users' interaction and engagement with these explanations can introduce trust calibration errors, such as facilitating irrational or less thoughtful agreement or disagreement with the AI recommendation. In this paper, we explore how to support trust calibration through explanation interaction design. Our research method included two main phases. We first conducted a think-aloud study with 16 participants, aiming to reveal the main trust calibration errors concerning explainability in Human-AI collaborative decision-making tools. We then conducted two co-design sessions with eight participants to identify design principles and techniques for explanations that help trust calibration. As a conclusion of our research, we provide five design principles: design for engagement, challenging habitual actions, attention guidance, friction, and support for training and learning. Our findings are meant to pave the way towards a more integrated framework for designing explanations with trust calibration as a primary goal.

Text
Naiseh2021_Article_ExplainableRecommendationWhenD - Version of Record
Available under License Creative Commons Attribution.
Download (1MB)

More information

Accepted/In Press date: 22 June 2021
e-pub ahead of print date: 2 August 2021
Published date: September 2021
Keywords: Explainable AI, Trust, Trust Calibration, User Centric AI

Identifiers

Local EPrints ID: 455517
URI: http://eprints.soton.ac.uk/id/eprint/455517
ISSN: 1386-145X
PURE UUID: 44049067-ddb2-4f42-9872-e4a79348529b
ORCID for Mohammad Naiseh: orcid.org/0000-0002-4927-5086

Catalogue record

Date deposited: 24 Mar 2022 17:33
Last modified: 18 Mar 2024 04:02


Contributors

Author: Mohammad Naiseh
Author: Dena Al-Thani
Author: Nan Jiang
Author: Raian Ali


