University of Southampton Institutional Repository

Epsilon-First Policies for Budget-Limited Multi-Armed Bandits


Tran-Thanh, Long, Chapman, Archie, Munoz De Cote Flores Luna, Jose Enrique, Rogers, Alex and Jennings, Nicholas R. (2010) Epsilon-First Policies for Budget-Limited Multi-Armed Bandits. Twenty-Fourth AAAI Conference on Artificial Intelligence, Atlanta, Georgia, USA. 11 - 15 Jul 2010. pp. 1211-1216.

Record type: Conference or Workshop Item (Paper)

Abstract

We introduce the budget-limited multi-armed bandit (MAB), which captures situations where a learner’s actions are costly and constrained by a fixed budget that is incommensurable with the rewards earned from the bandit machine, and then describe a first algorithm for solving it. Since the learner has a budget, the problem’s duration is finite. Consequently, an optimal exploitation policy is not to pull the optimal arm repeatedly, but to pull the combination of arms that maximises the agent’s total reward within the budget. As such, the rewards for all arms must be estimated, because any of them may appear in the optimal combination. This difference from existing MABs means that new approaches to maximising the total reward are required. To this end, we propose an epsilon-first algorithm, in which the first epsilon of the budget is used solely to learn the arms’ rewards (exploration), while the remaining 1 − epsilon is used to maximise the received reward based on those estimates (exploitation). We derive bounds on the algorithm’s loss for generic and uniform exploration methods, and compare its performance with traditional MAB algorithms under various distributions of rewards and costs, showing that it outperforms the others by up to 50%.
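
To make the two-phase structure concrete, below is a minimal Python sketch of such an epsilon-first policy. It assumes each arm is a (cost, reward-sampler) pair with positive cost, and it uses a reward-density greedy as a simple stand-in for the exploitation phase's budget-constrained combination problem (an unbounded knapsack); all names and the two-armed example are illustrative, not taken from the paper.

    import random

    def epsilon_first(arms, budget, epsilon=0.1):
        """Spend the first `epsilon` share of `budget` on uniform exploration,
        then exploit the empirical estimates with the remaining budget.

        `arms` is a list of (cost, pull) pairs, where pull() samples a reward;
        costs are assumed positive. Names are illustrative, not the paper's
        notation.
        """
        n = len(arms)
        totals = [0.0] * n   # summed rewards observed per arm
        counts = [0] * n     # number of pulls per arm
        spent = 0.0

        # Exploration phase: pull arms round-robin (uniformly) until the
        # exploration budget epsilon * budget is exhausted.
        i = 0
        while spent + arms[i % n][0] <= epsilon * budget:
            cost, pull = arms[i % n]
            totals[i % n] += pull()
            counts[i % n] += 1
            spent += cost
            i += 1

        means = [totals[j] / counts[j] if counts[j] else 0.0 for j in range(n)]

        # Exploitation phase: the paper's optimal policy pulls the combination
        # of arms maximising total reward within the budget; a density-ordered
        # greedy (estimated reward per unit cost) is used here as a stand-in.
        reward = sum(totals)
        remaining = budget - spent
        for j in sorted(range(n), key=lambda k: means[k] / arms[k][0], reverse=True):
            cost, pull = arms[j]
            while remaining >= cost:
                reward += pull()
                remaining -= cost
        return reward

    # Hypothetical two-armed example: (cost, reward sampler) pairs.
    arms = [(1.0, lambda: random.random()),         # cheap arm, mean reward 0.5
            (3.0, lambda: 2.0 * random.random())]   # costly arm, mean reward 1.0
    print(epsilon_first(arms, budget=100.0, epsilon=0.1))

Note that, unlike in standard MABs, the greedy exploitation step may mix several arms: a cheaper arm with lower mean reward can still enter the optimal combination once the remaining budget no longer covers the densest arm's cost.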

Text
LTT_AAAI2010_Bandit.pdf - Accepted Manuscript
Download (108kB)
Text
AAAI2010_Tran-Thanh.pdf - Version of Record
Download (371kB)

More information

Published date: 6 April 2010
Additional Information: Event Dates: 11 - 15 July, 2010
Venue - Dates: Twenty-Fourth AAAI Conference on Artificial Intelligence, Atlanta, Georgia, USA, 2010-07-11 - 2010-07-15
Organisations: Agents, Interactions & Complexity

Identifiers

Local EPrints ID: 270806
URI: http://eprints.soton.ac.uk/id/eprint/270806
PURE UUID: 8c22dc28-c4a4-403f-9f44-f71e86429e1e
ORCID for Long Tran-Thanh: orcid.org/0000-0003-1617-8316

Catalogue record

Date deposited: 06 Apr 2010 16:45
Last modified: 14 Mar 2024 09:16

Contributors

Author: Long Tran-Thanh
Author: Archie Chapman
Author: Jose Enrique Munoz De Cote Flores Luna
Author: Alex Rogers
Author: Nicholas R. Jennings
