HAVA: hybrid approach to value-alignment through reward weighing for reinforcement learning
Varys, Kryspin, Cerutti, Federico, Sobey, Adam and Norman, Tim (2024) HAVA: hybrid approach to value-alignment through reward weighing for reinforcement learning. In Proceedings of the 24th International Conference on Autonomous Agents and Multiagent Systems. International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS). 9 pp. (In Press)
Record type: Conference or Workshop Item (Paper)
Abstract
Our society is governed by a set of norms which together bring about the values we cherish, such as safety, fairness or trustworthiness. The goal of value alignment is to create agents that not only carry out their tasks but, through their behaviours, also promote these values. Many of these norms are written down as laws or rules (legal/safety norms), but even more remain unwritten (social norms). The techniques used to represent these norms also differ: legal/safety norms are often represented explicitly, for example in some logical language, while social norms are typically learned and remain hidden in the parameter space of a neural network. The literature lacks approaches that combine these different norm representations in a single algorithm. We propose a novel method that integrates both kinds of norms into the reinforcement learning process. Our method monitors the agent's compliance with the given norms and summarizes it in a quantity we call the agent's reputation. This quantity is used to weigh the received rewards, motivating the agent to become value-aligned. We carry out a series of experiments, including a continuous state space traffic problem, to demonstrate the importance of written and unwritten norms and to show how our method can find value-aligned policies. Furthermore, we carry out ablations to demonstrate why combining these two groups of norms is better than using either separately.
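To make the mechanism described above concrete, here is a minimal, self-contained Python sketch of reputation-weighted reward shaping in the spirit of the abstract. The speed-limit rule, the placeholder social-norm score, and the exponential-moving-average reputation update are all illustrative assumptions for exposition, not the paper's actual definitions.

import math  # standard library only; no external dependencies assumed

SPEED_LIMIT = 30.0  # assumed written (legal/safety) norm for illustration

def written_norm_ok(speed: float) -> float:
    """Explicit rule check: 1.0 if compliant, 0.0 if violated."""
    return 1.0 if speed <= SPEED_LIMIT else 0.0

def social_norm_score(speed: float) -> float:
    """Stand-in for a learned social-norm model (e.g. a neural network);
    here a smooth placeholder that favours keeping up with traffic flow."""
    return max(0.0, min(1.0, speed / SPEED_LIMIT))

def update_reputation(reputation: float, compliance: float, decay: float = 0.9) -> float:
    """Summarize recent norm compliance into one reputation scalar in [0, 1]
    via an exponential moving average (an assumed update rule)."""
    return decay * reputation + (1.0 - decay) * compliance

reputation = 1.0
for speed, reward in [(25.0, 1.0), (40.0, 1.0), (28.0, 1.0)]:
    # Combine the written and unwritten norm signals into one compliance score.
    compliance = min(written_norm_ok(speed), social_norm_score(speed))
    reputation = update_reputation(reputation, compliance)
    # Reputation weighs the raw environment reward before the agent sees it.
    shaped_reward = reputation * reward
    print(f"speed={speed:4.1f}  reputation={reputation:.3f}  shaped_reward={shaped_reward:.3f}")

Under this kind of shaping, a single violation lowers the reputation and thereby discounts subsequent rewards until compliant behaviour rebuilds it; this is one plausible way to operationalize the paper's idea of motivating value alignment through reputation, not a reconstruction of its method.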
Text: paper_aamas_HAVA (3) - Accepted Manuscript
More information
Accepted/In Press date: 19 December 2024
Keywords: Reinforcement Learning, Reward Shaping, Value Alignment
Identifiers
Local EPrints ID: 499182
URI: http://eprints.soton.ac.uk/id/eprint/499182
PURE UUID: a0249157-bb96-4c48-a4af-b6497947f81b
Catalogue record
Date deposited: 11 Mar 2025 17:39
Last modified: 03 May 2025 01:50
Contributors
Author: Kryspin Varys
Author: Federico Cerutti
Author: Adam Sobey
Author: Tim Norman