University of Southampton Institutional Repository

Multi-agent patrolling under uncertainty and threats


Chen, Shaofei, Wu, Feng, Shen, Lincheng, Chen, Jing and Ramchurn, Sarvapali (2014) Multi-agent patrolling under uncertainty and threats. International Joint Workshop on Optimisation in Multi-Agent Systems and Distributed Constraint Reasoning, Paris, France. 05 - 06 May 2014.

Record type: Conference or Workshop Item (Paper)

Abstract

We investigate a multi-agent patrolling problem in large stochastic
environments where information is distributed alongside threats. The information
and the threat at each location are each modelled as a multi-state Markov
chain, whose states are not observed until the location is visited by an agent.
While agents obtain information at a location, they may suffer attacks from the
threat at that location. The goal for the agents is to gather as much information
as possible while mitigating the damage incurred. We formulate this problem as
a Partially Observable Markov Decision Process (POMDP) and propose a computationally
efficient algorithm to solve it. We empirically evaluate our algorithm
in a simulated environment and show that it outperforms a greedy algorithm by up
to 43% for 10 agents in a large graph.
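The core modelling idea in the abstract — a hidden multi-state Markov chain per location whose state is only revealed on a visit — can be sketched as follows. This is an illustrative sketch, not the paper's algorithm: the state values, transition matrix, and `Location` class are assumptions made up for the example. Between visits an agent can only propagate a belief distribution over the chain's states; a visit collapses that belief to the observed state.

```python
import numpy as np

# Illustrative assumptions (not from the paper): three information states
# and a row-stochastic transition matrix governing each location's chain.
STATES = [0.0, 0.5, 1.0]          # information value of each hidden state
T = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.7, 0.2],
              [0.1, 0.2, 0.7]])

class Location:
    """One location: a hidden Markov chain plus the agents' belief over it."""
    def __init__(self, init_state=0):
        self.state = init_state                  # hidden true state
        self.belief = np.zeros(len(STATES))      # belief over STATES
        self.belief[init_state] = 1.0

    def step(self, rng):
        """Advance the hidden chain one step; propagate the belief by T."""
        self.state = rng.choice(len(STATES), p=T[self.state])
        self.belief = self.belief @ T            # prediction step (no observation)

    def visit(self):
        """An agent visits: the state is observed, so the belief collapses."""
        gathered = STATES[self.state]
        self.belief = np.zeros(len(STATES))
        self.belief[self.state] = 1.0
        return gathered

rng = np.random.default_rng(0)
loc = Location()
for _ in range(5):                # five time steps with no visit
    loc.step(rng)
print("belief before visit:", np.round(loc.belief, 3))
print("information gathered on visit:", loc.visit())
```

A full POMDP formulation as in the paper would combine such beliefs over all locations (information and threat chains alike) into the planning state; the sketch only shows the predict-then-observe belief mechanics for a single location.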

Text: optmasdcr2014_submission_16.pdf (1MB)

More information

Published date: 5 May 2014
Venue - Dates: International Joint Workshop on Optimisation in Multi-Agent Systems and Distributed Constraint Reasoning, Paris, France, 2014-05-05 - 2014-05-06
Keywords: multi-agent patrolling, planning under uncertainty, partially observable Markov decision process
Organisations: Agents, Interactions & Complexity

Identifiers

Local EPrints ID: 372733
URI: http://eprints.soton.ac.uk/id/eprint/372733
PURE UUID: e7e145db-0d9e-47c1-a76a-0e010dacc863
ORCID for Sarvapali Ramchurn: orcid.org/0000-0001-9686-4302

Catalogue record

Date deposited: 18 Dec 2014 13:11
Last modified: 15 Mar 2024 03:22


Contributors

Author: Chen Shaofei
Author: Wu Feng
Author: Shen Lincheng
Author: Chen Jing
Author: Sarvapali Ramchurn


