University of Southampton Institutional Repository

Offline reinforcement learning for safer blood glucose control in people with type 1 diabetes



Emerson, Harry, Guy, Matthew and McConville, Ryan (2023) Offline reinforcement learning for safer blood glucose control in people with type 1 diabetes. Journal of Biomedical Informatics, 142, 104376. (doi:10.1016/j.jbi.2023.104376).

Record type: Article

Abstract

The widespread adoption of effective hybrid closed loop systems would represent an important milestone of care for people living with type 1 diabetes (T1D). These devices typically utilise simple control algorithms to select the optimal insulin dose for maintaining blood glucose levels within a healthy range. Online reinforcement learning (RL) has been utilised as a method for further enhancing glucose control in these devices. Previous approaches have been shown to reduce patient risk and improve time spent in the target range when compared to classical control algorithms, but are prone to instability in the learning process, often resulting in the selection of unsafe actions. This work presents an evaluation of offline RL for developing effective dosing policies without the need for potentially dangerous patient interaction during training. This paper examines the utility of BCQ, CQL and TD3-BC in managing the blood glucose of the 30 virtual patients available within the FDA-approved UVA/Padova glucose dynamics simulator. When trained on less than a tenth of the total training samples required by online RL to achieve stable performance, this work shows that offline RL can significantly increase time in the healthy blood glucose range from 61.6±0.3% to 65.3±0.5% when compared to the strongest state-of-the-art baseline (p<0.001). This is achieved without any associated increase in low blood glucose events. Offline RL is also shown to be able to correct for common and challenging control scenarios such as incorrect bolus dosing, irregular meal timings and compression errors. The code for this work is available at: https://github.com/hemerson1/offline-glucose.
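The three offline RL agents evaluated (BCQ, CQL and TD3-BC) all learn a dosing policy purely from logged transitions, without interacting with the patient during training. As a rough illustration of the idea, the sketch below implements the core TD3-BC update in PyTorch: a one-step TD critic loss plus a behaviour-cloning penalty that keeps the learned insulin doses close to those in the logged data. The state layout, network sizes, single (rather than twin) critic and synthetic batch are illustrative assumptions only, not the authors' implementation; see the linked repository for that.

# Minimal sketch of the TD3-BC update (one of the offline RL agents evaluated).
# Simplifications vs. full TD3-BC: a single critic, no target-policy smoothing,
# no delayed actor updates, no soft target-network updates. State/action layout
# and the random batch are illustrative assumptions, not the authors' setup.
import torch
import torch.nn as nn

STATE_DIM = 12   # assumed: recent CGM readings, insulin-on-board, carbohydrate estimate
ACTION_DIM = 1   # insulin dose for the next control interval
ALPHA = 2.5      # TD3-BC trade-off between Q maximisation and behaviour cloning
GAMMA = 0.99     # discount factor

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                         nn.Linear(256, 256), nn.ReLU(),
                         nn.Linear(256, out_dim))

actor = mlp(STATE_DIM, ACTION_DIM)            # patient state -> insulin dose
critic = mlp(STATE_DIM + ACTION_DIM, 1)       # Q(s, a)
critic_target = mlp(STATE_DIM + ACTION_DIM, 1)
critic_target.load_state_dict(critic.state_dict())
actor_opt = torch.optim.Adam(actor.parameters(), lr=3e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=3e-4)

def update(s, a, r, s2, done):
    # Critic: regress Q(s, a) onto a one-step TD target built from the target critic.
    with torch.no_grad():
        q_next = critic_target(torch.cat([s2, actor(s2)], dim=1))
        target = r + GAMMA * (1.0 - done) * q_next
    critic_loss = nn.functional.mse_loss(critic(torch.cat([s, a], dim=1)), target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Actor: maximise Q while staying close to the logged doses (the BC term),
    # with the adaptive weight lambda = alpha / mean|Q| from the TD3-BC paper.
    pi = actor(s)
    q_pi = critic(torch.cat([s, pi], dim=1))
    lam = ALPHA / q_pi.abs().mean().detach()
    actor_loss = -lam * q_pi.mean() + nn.functional.mse_loss(pi, a)
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

# Synthetic stand-in for a batch of logged glucose-control transitions; in the
# paper these would come from the UVA/Padova simulator driven by a baseline controller.
B = 64
update(torch.randn(B, STATE_DIM), torch.randn(B, ACTION_DIM),
       torch.randn(B, 1), torch.randn(B, STATE_DIM), torch.zeros(B, 1))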

Text
1-s2.0-S1532046423000977-main - Version of Record
Available under License Creative Commons Attribution.
Download (754kB)

More information

Accepted/In Press date: 28 April 2023
e-pub ahead of print date: 4 May 2023
Published date: June 2023
Additional Information: Funding Information: This work was supported by the EPSRC Digital Health and Care Centre for Doctoral Training (CDT) at the University of Bristol (UKRI grant no. EP/S023704/1). Publisher Copyright: © 2023 The Author(s).
Keywords: Artificial pancreas, Glucose control, Reinforcement learning, Type 1 diabetes

Identifiers

Local EPrints ID: 486223
URI: http://eprints.soton.ac.uk/id/eprint/486223
ISSN: 1532-0464
PURE UUID: a9b2e187-2f82-4749-87d8-0226de607148
ORCID for Matthew Guy: orcid.org/0000-0002-6818-2010

Catalogue record

Date deposited: 15 Jan 2024 17:33
Last modified: 21 Sep 2024 02:15


Contributors

Author: Harry Emerson
Author: Matthew Guy (ORCID: orcid.org/0000-0002-6818-2010)
Author: Ryan McConville


