University of Southampton Institutional Repository

Safe Reward Learning from Human Preferences and Justifications


Kazantzidis, Ilias, Norman, Timothy, Du, Yali and Freeman, Chris (2026) Safe Reward Learning from Human Preferences and Justifications. 18th International Conference on Agents and Artificial Intelligence, ICAART 2026, Marbella, Spain, 05 - 08 Mar 2026. 15 pp. (doi:10.5220/0014408700004052).

Record type: Conference or Workshop Item (Paper)

Abstract

We address the problem of learning safe autonomous agent behaviour when the dynamics and reward functions are unknown, so traditional Reinforcement Learning is impossible. We present DROPJ, a human-centred algorithm that maximises safety during both training and deployment. We first learn a world model (a learned simulation) from a set of past real-world trajectories. A user then plays the game in this simulation to generate several informative virtual trajectories. From these, we extract pairs of trajectory segments and present them to the user to elicit a preference over each pair and the reason (justification) for that preference. With this feedback, a reward model is trained, which is then used to deploy the agent with Model Predictive Control. We find that generating trajectories from user trials significantly reduces the computational cost of training and significantly improves performance during deployment. In that context, we show that using preferences rather than other types of feedback substantially improves performance. We further demonstrate that justifications associated with safety requirements result in safer policies.
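
The pairwise feedback described above (a preference over two trajectory segments) is commonly fit with a Bradley-Terry style objective, in which the probability of preferring one segment is a logistic function of the difference between the segments' predicted returns. The sketch below is a minimal, generic illustration of that step only; it is not the paper's DROPJ implementation, the network sizes and segment encoding are assumptions, and the justification channel and the downstream Model Predictive Control deployment are not modelled.

# Minimal, hypothetical sketch of reward learning from pairwise segment
# preferences (Bradley-Terry style). Architecture and data shapes are
# assumptions for illustration, not taken from the paper.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Maps a single (observation, action) pair to a scalar reward."""
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1)).squeeze(-1)

def segment_return(model, segment):
    """Sum predicted rewards over a segment of (obs, act) steps."""
    obs, act = segment  # each tensor has shape (T, dim)
    return model(obs, act).sum()

def preference_loss(model, seg_a, seg_b, pref):
    """Bradley-Terry loss: pref is 1.0 if segment A was preferred, 0.0 if B."""
    logit = segment_return(model, seg_a) - segment_return(model, seg_b)
    return nn.functional.binary_cross_entropy_with_logits(
        logit.unsqueeze(0), torch.tensor([pref])
    )

# Usage sketch: one gradient step on a single labelled pair of segments.
obs_dim, act_dim, T = 4, 1, 20
model = RewardModel(obs_dim, act_dim)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
seg_a = (torch.randn(T, obs_dim), torch.randn(T, act_dim))
seg_b = (torch.randn(T, obs_dim), torch.randn(T, act_dim))
loss = preference_loss(model, seg_a, seg_b, pref=1.0)
opt.zero_grad(); loss.backward(); opt.step()

A reward model trained this way could then be queried by a planner (e.g. Model Predictive Control over the learned world model) at deployment time, as the abstract describes; that step is omitted here.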

Text
Safe_Reward_Learning_from_Human_Preferences_and_Justifications - Accepted Manuscript
Download (3MB)

More information

Published date: 2026
Venue - Dates: 18th International Conference on Agents and Artificial Intelligence, ICAART 2026, Marbella, Spain, 2026-03-05 - 2026-03-08
Keywords: Safe Learning from Human Preferences, Safe Reinforcement Learning, Human-Agent Interaction, Human-Robot Interaction, Human-in-the-Loop Machine Learning, Learning Human Values and Preferences

Identifiers

Local EPrints ID: 510205
URI: http://eprints.soton.ac.uk/id/eprint/510205
PURE UUID: bc4e0907-6170-4b6a-9b19-1963741ac0d4
ORCID for Ilias Kazantzidis: orcid.org/0000-0002-1127-3843
ORCID for Timothy Norman: orcid.org/0000-0002-6387-4034
ORCID for Chris Freeman: orcid.org/0000-0003-0305-9246

Catalogue record

Date deposited: 20 Mar 2026 17:46
Last modified: 21 Mar 2026 03:19

Contributors

Author: Ilias Kazantzidis
Author: Timothy Norman
Author: Yali Du
Author: Chris Freeman
