Non-Markovian reward modelling from trajectory labels via interpretable multiple instance learning
Neural Information Processing Systems Foundation
Early, Joseph
Bewley, Tom
Evers, Christine
Ramchurn, Sarvapali
2022
Early, Joseph, Bewley, Tom, Evers, Christine and Ramchurn, Sarvapali (2022) Non-Markovian reward modelling from trajectory labels via interpretable multiple instance learning. In Koyejo, S., Mohamed, S., Agarwal, A., Belgrave, D., Cho, K. and Oh, A. (eds.) Advances in Neural Information Processing Systems 35 - 36th Conference on Neural Information Processing Systems, NeurIPS 2022. vol. 35, Neural Information Processing Systems Foundation. 12 pp.
Record type: Conference or Workshop Item (Paper)
Abstract
We generalise the problem of reward modelling (RM) for reinforcement learning (RL) to handle non-Markovian rewards. Existing work assumes that human evaluators observe each step in a trajectory independently when providing feedback on agent behaviour. In this work, we remove this assumption, extending RM to capture temporal dependencies in human assessment of trajectories. We show how RM can be approached as a multiple instance learning (MIL) problem, where trajectories are treated as bags with return labels, and steps within the trajectories are instances with unseen reward labels. We go on to develop new MIL models that are able to capture the time dependencies in labelled trajectories. We demonstrate on a range of RL tasks that our novel MIL models can reconstruct reward functions to a high level of accuracy, and can be used to train high-performing agent policies.
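To illustrate the MIL framing described in the abstract, the following minimal sketch (not the authors' implementation; all names and hyperparameters are illustrative assumptions) treats each trajectory as a bag labelled only with its return, and uses a recurrent encoder over the steps to predict the unseen per-step rewards, which are summed and regressed against the bag-level return label.

# Minimal sketch of non-Markovian reward modelling as MIL (assumed architecture).
import torch
import torch.nn as nn

class TrajectoryRewardModel(nn.Module):
    def __init__(self, obs_dim: int, hidden_dim: int = 64):
        super().__init__()
        # Recurrent encoder captures temporal (non-Markovian) dependencies across steps.
        self.encoder = nn.LSTM(obs_dim, hidden_dim, batch_first=True)
        # Per-step reward head maps each hidden state to a scalar reward (the instance label).
        self.reward_head = nn.Linear(hidden_dim, 1)

    def forward(self, trajectory: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        # trajectory: (batch, steps, obs_dim)
        hidden, _ = self.encoder(trajectory)
        step_rewards = self.reward_head(hidden).squeeze(-1)   # (batch, steps)
        predicted_return = step_rewards.sum(dim=1)            # bag-level prediction
        return step_rewards, predicted_return

# Training loop sketch: only trajectory-level return labels are observed.
model = TrajectoryRewardModel(obs_dim=4)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
trajectories = torch.randn(8, 20, 4)   # 8 bags of 20 steps each (toy data)
returns = torch.randn(8)               # bag labels; true per-step rewards are unseen

for _ in range(100):
    _, predicted_return = model(trajectories)
    loss = nn.functional.mse_loss(predicted_return, returns)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()

The per-step outputs recovered this way can then be inspected or used as a reward signal when training an agent policy, which is the use case the paper evaluates.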
This record has no associated files available for download.
More information
Published date: 2022
Additional Information:
Funding Information:
This work was funded by the AXA Research Fund, the UKRI Trustworthy Autonomous Systems Hub (EP/V00784X/1), and an EPSRC/Thales industrial CASE award. We would also like to thank the Universities of Southampton and Bristol, as well as the Alan Turing Institute, for their support. The authors acknowledge the use of the IRIDIS (Southampton) and BlueCrystal (Bristol) high-performance computing facilities and associated support services in the completion of this work.
Venue - Dates:
36th Conference on Neural Information Processing Systems, NeurIPS 2022, New Orleans, United States, 2022-11-28 - 2022-12-09
Identifiers
Local EPrints ID: 483696
URI: http://eprints.soton.ac.uk/id/eprint/483696
ISSN: 1049-5258
PURE UUID: 7ab08b1f-4215-4e50-ad87-dac7b6d30602
Catalogue record
Date deposited: 03 Nov 2023 17:54
Last modified: 07 Jun 2024 01:57
Contributors
Author: Joseph Early
Author: Tom Bewley
Author: Christine Evers
Author: Sarvapali Ramchurn
Editor: S. Koyejo
Editor: S. Mohamed
Editor: A. Agarwal
Editor: D. Belgrave
Editor: K. Cho
Editor: A. Oh