University of Southampton Institutional Repository

A note on the reward function for PHD filters with sensor control


Ristic, B., Vo, B.-N. and Clark, D. (2011) A note on the reward function for PHD filters with sensor control. IEEE Transactions on Aerospace and Electronic Systems, 47 (2), 1521-1529. (doi:10.1109/TAES.2011.5751278).

Record type: Article

Abstract

The context is sensor control for multi-object Bayes filtering in the framework of partially observed Markov decision processes (POMDPs). The current information state is represented by the multi-object probability density function (pdf), while the reward function associated with each sensor control (action) is the information gain measured by the alpha, or Rényi, divergence. Assuming that both the predicted and updated states can be represented by independent and identically distributed (IID) cluster random finite sets (RFSs) or, as a special case, Poisson RFSs, this work derives analytic expressions for the corresponding Rényi-divergence-based information gains. An implementation of the Rényi divergence via the sequential Monte Carlo method is presented. The performance of the proposed reward function is demonstrated by a numerical example in which a moving range-only sensor is controlled to estimate the number and states of several moving objects using the PHD filter.
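For the Poisson-RFS special case, the Rényi divergence between the predicted and updated multi-object densities admits a closed form in terms of the two intensity functions, D_a = (1/(a-1)) [ ∫ v1^a v0^(1-a) dx − a ∫ v1 dx − (1−a) ∫ v0 dx ]. A minimal sketch (not the authors' code) of its sequential Monte Carlo approximation, assuming both intensities are represented on a shared particle set so each integral reduces to a weight sum:

```python
import numpy as np

def renyi_divergence(w_pred, w_upd, alpha=0.5):
    """Rényi divergence between two Poisson RFSs whose intensity
    functions are carried by the SAME particle set, with predicted
    weights w_pred and measurement-updated weights w_upd.

    Closed form for Poisson intensities v0 (predicted), v1 (updated):
        D_a = [ int v1^a v0^(1-a) dx - a*N1 - (1-a)*N0 ] / (a - 1),
    where N0, N1 are the expected object counts (integrals of the
    intensities). With a shared particle set each integral is
    approximated by the corresponding sum of weights.
    """
    w_pred = np.asarray(w_pred, dtype=float)
    w_upd = np.asarray(w_upd, dtype=float)
    # Particle approximation of int v1^alpha * v0^(1-alpha) dx.
    cross = np.sum(w_upd**alpha * w_pred**(1.0 - alpha))
    return (cross - alpha * w_upd.sum()
            - (1.0 - alpha) * w_pred.sum()) / (alpha - 1.0)
```

When the update leaves the weights unchanged (an uninformative action) the divergence is zero; an action whose measurement reshapes the weights yields a strictly positive value, which is what the reward function ranks candidate sensor controls by.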

This record has no associated files available for download.

More information

Published date: 15 April 2011

Identifiers

Local EPrints ID: 473605
URI: http://eprints.soton.ac.uk/id/eprint/473605
PURE UUID: b938ea62-d788-4fca-a841-b7dee79a41d8

Catalogue record

Date deposited: 24 Jan 2023 17:52
Last modified: 16 Mar 2024 23:15


Contributors

Author: B. Ristic
Author: B.-N. Vo
Author: D. Clark


