University of Southampton Institutional Repository

Exploring system performance of continual learning for mobile and embedded sensing applications

Kwon, Young D., Chauhan, Jagmohan, Kumar, Abhishek, Hui, Pan and Mascolo, Cecilia (2021) Exploring system performance of continual learning for mobile and embedded sensing applications. In 2021 IEEE/ACM Symposium on Edge Computing (SEC). IEEE. pp. 319-332. (doi:10.1145/3453142.3491285).

Record type: Conference or Workshop Item (Paper)

Abstract

Continual learning approaches help deep neural network models adapt and learn incrementally by mitigating catastrophic forgetting. However, whether these existing approaches, traditionally applied to image-based tasks, work with the same efficacy on the sequential time-series data generated by mobile and embedded sensing systems remains an unanswered question. To address this gap, we conduct the first comprehensive empirical study that quantifies the performance of three predominant continual learning schemes (i.e., regularization, replay, and replay with exemplars) on six datasets from three mobile and embedded sensing applications, in a range of scenarios with different learning complexities. More specifically, we implement an end-to-end continual learning framework on edge devices. We then investigate the generalizability of different continual learning methods and their trade-offs between performance, storage, computational cost, and memory footprint. Our findings suggest that replay with exemplars-based schemes such as iCaRL offer the best performance trade-offs, even in complex scenarios, at the expense of some storage space (a few MBs) for training exemplars (1% to 5%). We also demonstrate for the first time that it is feasible and practical to run continual learning on-device with a limited memory budget. In particular, the latency on two types of mobile and embedded devices suggests that both incremental learning time (a few seconds to 4 minutes) and training time (1 to 75 minutes) across datasets are acceptable, as training could happen on the device while it is charging, thereby ensuring complete data privacy. Finally, we present some guidelines for practitioners who want to apply a continual learning paradigm to mobile sensing tasks.
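
For illustration only, the snippet below sketches the general idea behind the "replay with exemplars" family mentioned in the abstract (of which iCaRL is one member): a small fraction of past data (e.g., 1% to 5%) is retained and mixed into each incremental update. This is not the authors' implementation; the random exemplar selection, model-agnostic training loop, hyperparameters, and PyTorch framing are assumptions made for the sketch (iCaRL itself uses herding-based exemplar selection and a distillation loss).

    # Minimal, illustrative sketch of exemplar-based replay (rehearsal).
    # Assumptions: random exemplar selection, SGD, cross-entropy loss.
    import random
    import torch
    from torch import nn
    from torch.utils.data import TensorDataset, ConcatDataset, DataLoader

    def select_exemplars(dataset, budget_fraction=0.05):
        # Keep a small fraction (e.g., 1%-5%) of a task's data as replay exemplars.
        n_keep = max(1, int(len(dataset) * budget_fraction))
        indices = random.sample(range(len(dataset)), n_keep)
        xs = torch.stack([dataset[i][0] for i in indices])
        ys = torch.stack([dataset[i][1] for i in indices])
        return TensorDataset(xs, ys)

    def incremental_update(model, new_task_data, exemplar_memory, epochs=5, lr=1e-3):
        # Train on the new task mixed with stored exemplars to limit forgetting.
        train_data = (ConcatDataset([new_task_data, exemplar_memory])
                      if exemplar_memory is not None else new_task_data)
        loader = DataLoader(train_data, batch_size=32, shuffle=True)
        optimiser = torch.optim.SGD(model.parameters(), lr=lr)
        criterion = nn.CrossEntropyLoss()  # expects integer class labels
        model.train()
        for _ in range(epochs):
            for x, y in loader:
                optimiser.zero_grad()
                loss = criterion(model(x), y)
                loss.backward()
                optimiser.step()
        return model

In such a scheme, when a new task arrives one would call select_exemplars on that task's data and append the result to the growing exemplar memory before the next incremental_update; the memory budget stays bounded because only a few percent of each task's data is ever stored.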

This record has no associated files available for download.

More information

Published date: 16 February 2021
Venue - Dates: 6th ACM/IEEE Symposium on Edge Computing, SEC 2021, San Jose, United States, 2021-12-14 - 2021-12-17
Keywords: Activity Recognition, Continual Learning, Emotion Recognition, Empirical Evaluation, Gesture Recognition, Incremental Learning, Lifelong Learning, Performance

Identifiers

Local EPrints ID: 491131
URI: http://eprints.soton.ac.uk/id/eprint/491131
PURE UUID: 749d65c4-5db8-4761-abb9-e81888499cd1

Catalogue record

Date deposited: 13 Jun 2024 16:38
Last modified: 14 Jun 2024 17:17

Contributors

Author: Young D. Kwon
Author: Jagmohan Chauhan
Author: Abhishek Kumar
Author: Pan Hui
Author: Cecilia Mascolo

