University of Southampton Institutional Repository

Explainable Recommendations in Intelligent Systems: Delivery Methods, Modalities and Risks




Naiseh, Mohammad, Jiang, Nan, Ma, Jianbing and Ali, Raian (2020) Explainable Recommendations in Intelligent Systems: Delivery Methods, Modalities and Risks. In: Dalpiaz, Fabiano, Zdravkovic, Jelena and Loucopoulos, Pericles (eds.) Research Challenges in Information Science. RCIS 2020. LNBIP, vol. 385. Springer, pp. 212-228. (doi:10.1007/978-3-030-50316-1_13).

Record type: Conference or Workshop Item (Paper)

Abstract

With the increase in data volume, velocity and variety, intelligent human-agent systems have become popular and have been adopted in different application domains, including critical and sensitive areas such as health and security. Humans' trust, consent and receptiveness to recommendations are the main requirements for the success of such services. Recently, the demand for explaining recommendations to humans has increased, both from the humans interacting with these systems, so that they can make informed decisions, and from owners and system managers, to increase transparency and, consequently, trust and user retention. Existing systematic reviews in the area of explainable recommendations have focused on the goals of providing explanations, their presentation and their informational content. In this paper, we review the literature with a focus on two user experience facets of explanations: delivery methods and modalities. We then focus on the risks of explanations to both user experience and decision-making. Our review revealed that the delivery of explanations to end-users is mostly designed to accompany the recommendation, in push and pull styles, while archiving explanations for later accountability and traceability is still limited. We also found that the emphasis has mainly been on the benefits of recommendations, while risks and potential concerns, such as over-reliance on machines, are still a new area to explore.

Text
Explainable_Recommendations_in_IntelligentSystems__Delivery_Methods__Modalities_andRisks29032020 - Accepted Manuscript

More information

Published date: 2020
Venue - Dates: 14th International Conference on Research Challenges in Information Sciences, RCIS 2020, Limassol, Cyprus, 2020-09-23 - 2020-09-25
Keywords: Explainable artificial intelligence, Explainable recommendations, Human factors in information systems, User-centred design

Identifiers

Local EPrints ID: 455670
URI: http://eprints.soton.ac.uk/id/eprint/455670
ISSN: 1865-1348
PURE UUID: 8156d99d-de33-47e5-8679-98a9ca4f54a8
ORCID for Mohammad Naiseh: orcid.org/0000-0002-4927-5086

Catalogue record

Date deposited: 30 Mar 2022 16:43
Last modified: 23 Jul 2022 02:29

Contributors

Author: Mohammad Naiseh
Author: Nan Jiang
Author: Jianbing Ma
Author: Raian Ali
Editor: Fabiano Dalpiaz
Editor: Jelena Zdravkovic
Editor: Pericles Loucopoulos

