Severity-sensitive norm-governed multi-agent planning
Gasparini, Luca, Norman, Timothy and Kollingbaum, Martin J. (2018) Severity-sensitive norm-governed multi-agent planning. Autonomous Agents and Multi-Agent Systems, 32 (1), 26-58. (doi:10.1007/s10458-017-9372-x).
Abstract
In making practical decisions, agents are expected to comply with ideals of behaviour, or norms. In reality, it may not be possible for an individual, or a team of agents, to be fully compliant; actual behaviour often differs from the ideal. The question we address in this paper is how we can design agents that select collective strategies to avoid the more critical failures (norm violations) and to mitigate the effects of violations that do occur. We model the normative requirements of a system through contrary-to-duty obligations and violation severity levels, and propose a novel multi-agent planning mechanism based on Decentralised POMDPs that uses a qualitative reward function to capture levels of compliance: N-Dec-POMDPs. We develop mechanisms for solving this type of multi-agent planning problem and show, through empirical analysis, that the joint policies generated are as good as those produced by existing methods, but with significant reductions in execution time.
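The abstract describes the core technical idea only at a high level: a Dec-POMDP-style planning model whose numeric reward is replaced by a qualitative measure of norm compliance ranked by violation severity. The paper's actual N-Dec-POMDP formalisation is not reproduced in this record, so the Python sketch below is purely illustrative; every name in it (NDecPOMDPSketch, compare_by_severity, the example violations) is an assumption, and the lexicographic severity ordering is only one plausible reading of "levels of compliance".

    from dataclasses import dataclass
    from typing import Callable, FrozenSet, Tuple

    State = str
    JointAction = Tuple[str, ...]
    Violation = Tuple[str, int]  # (norm identifier, severity level; 0 = most severe)

    @dataclass(frozen=True)
    class NDecPOMDPSketch:
        # A Dec-POMDP-like tuple in which the numeric reward function is replaced
        # by a qualitative account of which norms a joint action violates.
        agents: Tuple[str, ...]
        states: FrozenSet[State]
        joint_actions: FrozenSet[JointAction]
        transition: Callable[[State, JointAction, State], float]         # P(s' | s, a)
        violations: Callable[[State, JointAction], FrozenSet[Violation]]  # replaces R(s, a)
        n_severity_levels: int = 3

    def compare_by_severity(p1: FrozenSet[Violation], p2: FrozenSet[Violation], n_levels: int) -> int:
        # Lexicographic comparison of violation profiles: fewer violations at the
        # most severe level wins; ties are broken at progressively milder levels.
        # Returns -1 if p1 is preferred, 1 if p2 is preferred, 0 if equivalent.
        for level in range(n_levels):
            c1 = sum(1 for _, sev in p1 if sev == level)
            c2 = sum(1 for _, sev in p2 if sev == level)
            if c1 != c2:
                return -1 if c1 < c2 else 1
        return 0

    if __name__ == "__main__":
        # Two hypothetical outcomes of candidate joint policies, summarised by
        # the violations they incur (names invented for illustration).
        profile_a = frozenset({("late_report", 1)})
        profile_b = frozenset({("no_rescue", 0), ("late_report", 1)})
        print(compare_by_severity(profile_a, profile_b, n_levels=3))  # -1: profile_a preferred

A planner built on a model like this would compare candidate joint policies by their expected violation profiles rather than by a scalar expected reward, matching the abstract's goal of preferring strategies that avoid the more severe violations; the concrete solution mechanisms are given in the paper itself.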
Text: accepted-version (Accepted Manuscript)
More information
Accepted/In Press date: 19 June 2017
e-pub ahead of print date: 7 July 2017
Published date: January 2018
Keywords: norms, multi-agent planning, Dec-POMDP
Identifiers
Local EPrints ID: 412260
URI: http://eprints.soton.ac.uk/id/eprint/412260
ISSN: 1387-2532
PURE UUID: f05bb8d5-0ab3-43c2-b435-187e5b05200f
Catalogue record
Date deposited: 14 Jul 2017 16:30
Last modified: 16 Mar 2024 04:24
Contributors
Author: Luca Gasparini
Author: Timothy Norman
Author: Martin J. Kollingbaum