Adaptive and hierarchical run-time manager for energy-aware thermal management of embedded systems
Das, Anup, Al-Hashimi, Bashir and Merrett, Geoff (2016) Adaptive and hierarchical run-time manager for energy-aware thermal management of embedded systems. ACM Transactions on Embedded Computing Systems, 15 (2), [24]. (doi:10.1145/2834120)
Abstract
Modern embedded systems execute applications that interact with the operating system and hardware differently depending on the type of workload. These cross-layer interactions result in wide variations in the chip-wide thermal profile. In this paper, a reinforcement learning-based run-time manager is proposed that guarantees application-specific performance requirements and controls POSIX thread allocation and voltage/frequency scaling for energy-efficient thermal management. This controls three thermal aspects: peak temperature, average temperature and thermal cycling. Contrary to existing learning-based run-time approaches that optimize energy and temperature individually, the proposed run-time manager is the first approach to combine the two objectives, simultaneously addressing all three thermal aspects. However, determining thread allocation and core frequencies to optimize energy and temperature is an NP-hard problem. This leads to an exponential growth in the learning table (a significant memory overhead) and a corresponding increase in the exploration time needed to learn the most appropriate thread allocation and core frequency for a particular application workload. To confine the learning space and minimize the learning cost, the proposed run-time manager is implemented as a two-stage hierarchy: a heuristic-based thread allocation at a longer time interval to improve thermal cycling, followed by a learning-based hardware frequency selection at a much finer interval to improve average temperature, peak temperature and energy consumption. This enables finer control of temperature in an energy-efficient manner, while simultaneously addressing scalability, a crucial aspect for multi-/many-core embedded systems. The proposed hierarchical run-time manager is implemented for Linux running on NVIDIA's Tegra SoC, featuring four ARM Cortex-A15 cores. Experiments conducted with a range of embedded and CPU-intensive applications demonstrate that the proposed run-time manager not only reduces energy consumption by an average of 15% with respect to Linux, but also improves all three thermal aspects: average temperature by 14°C, peak temperature by 16°C and thermal cycling by 54%.
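To illustrate the second stage of the hierarchy described in the abstract (learning-based frequency selection at a fine time interval), the following is a minimal, hypothetical C sketch of an epsilon-greedy Q-learning loop on an embedded Linux target. The sysfs paths, the frequency list, the temperature bins and the reward weights are illustrative assumptions, not details taken from the paper.

```c
/*
 * Minimal sketch (not the authors' implementation) of a Q-learning agent
 * that picks a CPU frequency at a fine-grained interval, trading off an
 * energy proxy against a temperature penalty. Paths and constants are
 * assumptions for illustration only.
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N_STATES 4              /* coarse temperature bins (assumed) */
#define N_FREQS  4              /* assumed available frequency levels */

static const long freqs_khz[N_FREQS] = {510000, 1020000, 1530000, 2040000};
static double q[N_STATES][N_FREQS];          /* learning table */

/* Read the SoC temperature in degrees C; the path is a typical Linux default. */
static double read_temp(void)
{
    FILE *f = fopen("/sys/class/thermal/thermal_zone0/temp", "r");
    long milli = 45000;                       /* fallback if sysfs is absent */
    if (f) { if (fscanf(f, "%ld", &milli) != 1) milli = 45000; fclose(f); }
    return milli / 1000.0;
}

/* Request a frequency via cpufreq's userspace governor; skipped if unavailable. */
static void set_freq(long khz)
{
    FILE *f = fopen("/sys/devices/system/cpu/cpu0/cpufreq/scaling_setspeed", "w");
    if (f) { fprintf(f, "%ld", khz); fclose(f); }
}

static int temp_to_state(double t)
{
    if (t < 45.0) return 0;
    if (t < 55.0) return 1;
    if (t < 65.0) return 2;
    return 3;
}

int main(void)
{
    const double alpha = 0.1, gamma = 0.9, epsilon = 0.1;
    srand((unsigned)time(NULL));
    int state = temp_to_state(read_temp());

    for (int step = 0; step < 1000; step++) {
        /* epsilon-greedy selection over frequency levels */
        int action = rand() % N_FREQS;
        if ((double)rand() / RAND_MAX > epsilon) {
            for (int a = 1; a < N_FREQS; a++)
                if (q[state][a] > q[state][action]) action = a;
        }
        set_freq(freqs_khz[action]);
        struct timespec ts = {0, 50 * 1000 * 1000};  /* fine-grained 50 ms interval */
        nanosleep(&ts, NULL);

        /* Reward penalises a frequency-based energy proxy and high temperature.
         * The weights (0.5, 0.02) are illustrative assumptions. */
        double temp = read_temp();
        double reward = -0.5 * ((double)action / (N_FREQS - 1)) - 0.02 * temp;

        int next = temp_to_state(temp);
        double best_next = q[next][0];
        for (int a = 1; a < N_FREQS; a++)
            if (q[next][a] > best_next) best_next = q[next][a];

        /* standard Q-learning update */
        q[state][action] += alpha * (reward + gamma * best_next - q[state][action]);
        state = next;
    }
    return 0;
}
```

Keeping the table to a handful of temperature states and frequency levels reflects the scalability argument made in the abstract: the hierarchical split avoids a learning table that grows exponentially with the number of threads and cores.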
Text: tecs15.pdf (Accepted Manuscript)
More information
e-pub ahead of print date: January 2016
Published date: January 2016
Keywords:
embedded systems, linux operating system
Organisations:
Electronics & Computer Science
Identifiers
Local EPrints ID: 382853
URI: http://eprints.soton.ac.uk/id/eprint/382853
PURE UUID: d5b4c87b-9978-4918-bad0-1ee81425dfd3
Catalogue record
Date deposited: 16 Oct 2015 15:15
Last modified: 15 Mar 2024 03:23
Contributors
Author:
Anup Das
Author:
Bashir Al-Hashimi
Author:
Geoff Merrett