Multi-agent patrolling under uncertainty and threats
Chen, Shaofei, Wu, Feng, Shen, Lincheng, Chen, Jing and Ramchurn, Sarvapali (2014) Multi-agent patrolling under uncertainty and threats. International Joint Workshop on Optimisation in Multi-Agent Systems and Distributed Constraint Reasoning, Paris, France, 05 - 06 May 2014.
Record type: Conference or Workshop Item (Paper)
Abstract
We investigate a multi-agent patrolling problem in large stochastic environments where information is distributed alongside threats. The information and the threat at each location are each modelled as a multi-state Markov chain whose state is not observed until the location is visited by an agent. While agents gather information at a location, they may suffer attacks from the threat at that location. The goal for the agents is to gather as much information as possible while mitigating the damage incurred. We formulate this problem as a Partially Observable Markov Decision Process (POMDP) and propose a computationally efficient algorithm to solve it. We empirically evaluate our algorithm in a simulated environment and show that it outperforms a greedy algorithm by up to 43% for 10 agents in a large graph.
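The location model sketched in the abstract can be illustrated in code. The following is a minimal sketch, not the authors' implementation: class names, the two-state chains, and the transition probabilities are illustrative assumptions. It shows the key property that each location carries an information chain and a threat chain whose hidden states are revealed only when an agent visits.

```python
import random

class MarkovChain:
    """A simple multi-state Markov chain. Its current state is hidden
    from the agents until the location is visited (partial observability)."""
    def __init__(self, transition, state=0):
        self.transition = transition  # transition[i][j] = P(next=j | current=i)
        self.state = state

    def step(self):
        weights = self.transition[self.state]
        self.state = random.choices(range(len(weights)), weights=weights)[0]

class Location:
    """Each location pairs an information chain with a threat chain,
    as described in the abstract; the specifics here are illustrative."""
    def __init__(self, info_chain, threat_chain):
        self.info = info_chain
        self.threat = threat_chain

    def step(self):
        # Both chains evolve regardless of whether any agent is present.
        self.info.step()
        self.threat.step()

    def visit(self):
        # Visiting reveals both hidden states: the agent collects the
        # current information state but is exposed to the current threat.
        return self.info.state, self.threat.state

# Tiny example: a two-state information chain and a two-state threat chain.
info = MarkovChain([[0.9, 0.1], [0.3, 0.7]])
threat = MarkovChain([[0.95, 0.05], [0.5, 0.5]])
loc = Location(info, threat)
for _ in range(5):
    loc.step()
gathered, exposure = loc.visit()
```

A full POMDP formulation would additionally maintain a belief over the hidden states of unvisited locations and plan agent routes against that belief; this sketch only captures the per-location dynamics.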
Text: optmasdcr2014_submission_16.pdf - Other
More information
Published date: 5 May 2014
Venue - Dates:
International Joint Workshop on Optimisation in Multi-Agent Systems and Distributed Constraint Reasoning, Paris, France, 2014-05-05 - 2014-05-06
Keywords:
multi-agent patrolling, planning under uncertainty, partially observable Markov decision process
Organisations:
Agents, Interactions & Complexity
Identifiers
Local EPrints ID: 372733
URI: http://eprints.soton.ac.uk/id/eprint/372733
PURE UUID: e7e145db-0d9e-47c1-a76a-0e010dacc863
Catalogue record
Date deposited: 18 Dec 2014 13:11
Last modified: 15 Mar 2024 03:22
Contributors
Author: Shaofei Chen
Author: Feng Wu
Author: Lincheng Shen
Author: Jing Chen
Author: Sarvapali Ramchurn