University of Southampton Institutional Repository

Solving Markov decision processes via state space decomposition and time aggregation

Alexandre, Rodrigo e Alvim, Fragoso, Marcelo, Ferreira Filho, Virgílio José Martins and Arruda, Edilson F. (2025) Solving Markov decision processes via state space decomposition and time aggregation. European Journal of Operational Research. (doi:10.1016/j.ejor.2025.01.037).

Record type: Article

Abstract

Although there are techniques to address large-scale Markov decision processes (MDPs), the so-called curse of dimensionality still lacks a computationally satisfactory treatment in many respects. In this paper, we address this issue by introducing a novel multi-subset partitioning scheme that allows a distributed evaluation of the MDP, aiming to accelerate convergence and enable distributed policy improvement across the state space, whereby the value function evaluation and the policy improvement step can be performed independently, one subset at a time. The scheme's innovation hinges on a design that induces communication properties allowing us to evaluate time-aggregated trajectories via absorption analysis, thereby limiting the computational effort. The paper introduces, and proves the convergence of, a class of distributed time aggregation algorithms that combine the partitioning scheme with two-phase time aggregation to distribute the computations and accelerate convergence. In addition, we make use of Foster's sufficient conditions for stochastic stability to develop a new theoretical result that underpins a partition design guaranteeing that large regions of the state space are rarely visited and have only a marginal effect on the system's performance. This enables the design of approximate algorithms that find near-optimal solutions to large-scale systems by focusing on the most visited regions of the state space. We validate the approach in a series of experiments featuring production-inventory and queueing applications. The results highlight the potential of the proposed algorithms to rapidly approach the optimal solution under different problem settings.
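To make the idea of subset-wise evaluation concrete, the sketch below runs block (Gauss-Seidel) value iteration on a toy discounted MDP, updating the value function one subset of the partition at a time. This is only a generic illustration of partition-wise updating, not the authors' time-aggregation algorithm; the toy transition data, partition, and all variable names are invented for the example.

```python
import numpy as np

# Toy MDP: 6 states, 2 actions. P[a][s] is the transition distribution
# out of state s under action a; R[a][s] is the one-step reward.
# All data here is randomly generated for illustration only.
rng = np.random.default_rng(0)
n_states, n_actions, gamma = 6, 2, 0.9
P = [rng.dirichlet(np.ones(n_states), size=n_states) for _ in range(n_actions)]
R = [rng.uniform(0.0, 1.0, size=n_states) for _ in range(n_actions)]

# A two-subset partition of the state space (hypothetical choice).
partition = [[0, 1, 2], [3, 4, 5]]

V = np.zeros(n_states)
for sweep in range(500):
    delta = 0.0
    for subset in partition:              # evaluate one subset at a time
        for s in subset:
            q = [R[a][s] + gamma * P[a][s] @ V for a in range(n_actions)]
            new_v = max(q)
            delta = max(delta, abs(new_v - V[s]))
            V[s] = new_v                  # Gauss-Seidel: reuse fresh values
    if delta < 1e-8:                      # converged across all subsets
        break

# Greedy policy improvement, also performable subset by subset.
policy = [int(np.argmax([R[a][s] + gamma * P[a][s] @ V
                         for a in range(n_actions)])) for s in range(n_states)]
```

Because updates within a sweep immediately reuse the freshest values, block updates of this kind typically converge at least as fast as a synchronous (Jacobi) sweep, which loosely motivates why distributing the evaluation over subsets need not slow convergence.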

Text
EJOR2025_AlvimEtAl - Accepted Manuscript
Available under License Creative Commons Attribution.
Download (569kB)

More information

Accepted/In Press date: 28 January 2025
e-pub ahead of print date: 5 February 2025
Additional Information: Publisher Copyright: © 2025 The Authors
Keywords: Dynamic programming, Foster's stochastic stability conditions, Markov decision processes, Markov processes, Time aggregation

Identifiers

Local EPrints ID: 498716
URI: http://eprints.soton.ac.uk/id/eprint/498716
ISSN: 0377-2217
PURE UUID: 8c5f9e2f-be14-4e83-aa16-b75a3847a7f5
ORCID for Edilson F. Arruda: orcid.org/0000-0002-9835-352X

Catalogue record

Date deposited: 25 Feb 2025 18:11
Last modified: 26 Feb 2025 03:03


Contributors

Author: Rodrigo e Alvim Alexandre
Author: Marcelo Fragoso
Author: Virgílio José Martins Ferreira Filho
Author: Edilson F. Arruda

