Objective metrics for human-subjects evaluation in explainable reinforcement learning
Abstract
Explanation is a fundamentally human process. Understanding the goal and audience of an explanation is vital, yet existing work on explainable reinforcement learning (XRL) routinely does not consult humans in its evaluations. Even when it does, it typically resorts to subjective metrics, such as confidence or understanding, which can only tell researchers about users' opinions, not the practical effectiveness of explanations for a given problem. This paper calls on researchers to evaluate explanations with objective human metrics based on observable and actionable behaviour, in order to build more reproducible, comparable, and epistemically grounded research. To this end, we curate, describe, and compare several objective evaluation methodologies for applying explanations to debugging agent behaviour and supporting human-agent teaming, illustrating the proposed methods with a novel grid-based environment. We discuss how subjective and objective metrics complement each other to provide holistic validation, and how future work should adopt standardised benchmarks to enable greater comparability across studies.
Text: 2501.19256v1 (Author's Original)
More information
Published date: 31 January 2025
Keywords:
cs.AI, cs.HC, cs.RO
Identifiers
Local EPrints ID: 503222
URI: http://eprints.soton.ac.uk/id/eprint/503222
PURE UUID: e65ac7c8-5e3e-4869-a64d-05c8ee50ba59
Catalogue record
Date deposited: 24 Jul 2025 16:39
Last modified: 25 Jul 2025 02:02
Contributors
Author:
Balint Gyevnar
Author:
Mark Towers