University of Southampton Institutional Repository

Knapsack based optimal policies for budget-limited multi-armed bandits

Tran-Thanh, Long, Chapman, Archie, Rogers, Alex and Jennings, Nicholas R. (2012) Knapsack based optimal policies for budget-limited multi-armed bandits. Twenty-Sixth AAAI Conference on Artificial Intelligence (AAAI-12), Toronto, Canada. 22 Jul 2012. pp. 1134-1140.

Record type: Conference or Workshop Item (Paper)

Abstract

In budget-limited multi-armed bandit (MAB) problems, the learner’s actions are costly and constrained by a fixed budget. Consequently, an optimal exploitation policy may not be to pull the optimal arm repeatedly, as is the case in other variants of MAB, but rather to pull the sequence of different arms that maximises the agent’s total reward within the budget. This difference from existing MABs means that new approaches to maximising the total reward are required. Given this, we develop two pulling policies, namely: (i) KUBE; and (ii) fractional KUBE. Whereas the former provides up to 40% better performance in our experimental settings, the latter is computationally less expensive. We also prove logarithmic upper bounds for the regret of both policies, and show that these bounds are asymptotically optimal (i.e. they only differ from the best possible regret by a constant factor).
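
Illustrative sketch

To make the selection rule behind fractional KUBE concrete, the following is a minimal Python sketch of a fractional-KUBE-style policy, not the authors' implementation. It assumes known deterministic pulling costs, Bernoulli rewards, and a standard UCB-style confidence radius of sqrt(2 ln t / n_i); the names fractional_kube and pull, and the toy arm parameters below, are illustrative. The key observation is that the fractional relaxation of the underlying unbounded knapsack is solved by spending the whole remaining budget on the arm with the best upper-confidence-bound-to-cost density, so at each step the policy simply pulls that arm.

import math
import random

def fractional_kube(costs, budget, pull):
    """Pull arms until the budget runs out, always choosing the affordable
    arm with the highest ratio of upper confidence bound to pulling cost.
    A sketch under the assumptions stated above, not the paper's code."""
    k = len(costs)
    counts = [0] * k      # n_i: number of times arm i has been pulled
    means = [0.0] * k     # empirical mean reward of arm i
    total = 0.0
    t = 0

    # Initialisation: pull each affordable arm once.
    for i in range(k):
        if costs[i] > budget:
            continue
        r = pull(i)
        budget -= costs[i]
        counts[i] = 1
        means[i] = r
        total += r
        t += 1

    while True:
        feasible = [i for i in range(k) if counts[i] > 0 and costs[i] <= budget]
        if not feasible:
            break  # no arm is affordable any more
        t += 1
        # The fractional knapsack optimum puts all remaining budget on the
        # max-density arm, with true values replaced by UCB estimates.
        best = max(feasible, key=lambda i:
                   (means[i] + math.sqrt(2 * math.log(t) / counts[i])) / costs[i])
        r = pull(best)
        budget -= costs[best]
        counts[best] += 1
        means[best] += (r - means[best]) / counts[best]
        total += r
    return total

if __name__ == "__main__":
    rng = random.Random(0)
    true_means = [0.2, 0.5, 0.7]   # hypothetical Bernoulli arms
    costs = [1.0, 2.0, 4.0]        # known pulling costs
    reward = fractional_kube(costs, budget=200.0,
                             pull=lambda i: float(rng.random() < true_means[i]))
    print(f"total reward collected: {reward:.1f}")

Running the main block simulates a small three-armed instance; once the estimates are reliable, the spend concentrates on the arm with the best reward-per-cost ratio, the behaviour that the paper's regret bounds (logarithmic in the budget) formalise.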

Text
LTT_AAAI2012_Bandit_finalversion.pdf - Author's Original
Download (109kB)

More information

Submitted date: 24 January 2012
Published date: 17 April 2012
Venue - Dates: Twenty-Sixth AAAI Conference on Artificial Intelligence (AAAI-12), Toronto, Canada, 2012-07-22 - 2012-07-22
Organisations: Agents, Interactions & Complexity

Identifiers

Local EPrints ID: 337280
URI: http://eprints.soton.ac.uk/id/eprint/337280
PURE UUID: 35556909-7034-4fc3-ae91-136c7e1e3bcf
ORCID for Long Tran-Thanh: orcid.org/0000-0003-1617-8316

Catalogue record

Date deposited: 22 Apr 2012 07:00
Last modified: 14 Mar 2024 10:51

Contributors

Author: Long Tran-Thanh
Author: Archie Chapman
Author: Alex Rogers
Author: Nicholas R. Jennings

