University of Southampton Institutional Repository

FastICARL: fast incremental classifier and representation learning with efficient budget allocation in audio sensing applications


Kwon, Young D.
3e8c3dcd-214c-4771-90f4-b36ede48d763
Chauhan, Jagmohan
831a12dc-6df9-40ea-8bb3-2c5da8882804
Mascolo, Cecilia
e4a7bcf7-72c8-43b7-b6b3-4f8980da245d

Kwon, Young D., Chauhan, Jagmohan and Mascolo, Cecilia (2021) FastICARL: fast incremental classifier and representation learning with efficient budget allocation in audio sensing applications. In 22nd Annual Conference of the International Speech Communication Association, INTERSPEECH 2021. vol. 6, International Speech Communication Association. pp. 4585-4589. (doi:10.21437/Interspeech.2021-1091).

Record type: Conference or Workshop Item (Paper)

Abstract

Various incremental learning (IL) approaches have been proposed to help deep learning models learn new tasks/classes continuously without forgetting what was learned previously (i.e., to avoid catastrophic forgetting). With the growing number of deployed audio sensing applications that need to dynamically incorporate new tasks and changing input distributions from users, the ability to perform IL on-device becomes essential for both efficiency and user privacy. However, prior works suffer from high computational costs and storage demands, which hinder the deployment of IL on-device. In this work, to overcome these limitations, we develop an end-to-end, on-device IL framework, FastICARL, that incorporates exemplar-based IL and quantization in the context of audio-based applications. We first employ k-nearest-neighbor to reduce the latency of IL. Then, we jointly utilize a quantization technique to decrease the storage requirements of IL. We implement FastICARL on two types of mobile devices and demonstrate that FastICARL remarkably decreases the IL time by up to 78-92% and the storage requirements by 2-4 times without sacrificing its performance. FastICARL enables complete on-device IL, ensuring user privacy as the user data does not need to leave the device.
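The abstract highlights two ingredients: exemplar selection via k-nearest-neighbor search and quantization of the stored exemplars to shrink the memory budget. The following is a minimal, illustrative Python sketch of these two ideas only; it assumes exemplars are chosen as the k feature vectors nearest to the class mean and are stored as 8-bit integers, which is a simplification and not necessarily the exact FastICARL procedure described in the paper.

# Illustrative sketch only (assumed behaviour, not the authors' exact implementation):
# pick exemplars for one class as the k feature vectors nearest to the class mean,
# then store them quantized to int8 to reduce the exemplar-memory footprint.
import numpy as np

def select_exemplars(features, k):
    """Indices of the k feature vectors closest to the class mean (k-NN of the mean)."""
    class_mean = features.mean(axis=0)
    dists = np.linalg.norm(features - class_mean, axis=1)
    return np.argsort(dists)[:k]

def quantize_int8(x):
    """Linear quantization to int8; returns the quantized array and the scale for dequantization."""
    scale = float(np.abs(x).max()) / 127.0
    scale = scale if scale > 0 else 1.0
    q = np.round(x / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float features from the int8 exemplars."""
    return q.astype(np.float32) * scale

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(200, 64)).astype(np.float32)  # hypothetical embeddings of one class
    idx = select_exemplars(feats, k=20)
    q, scale = quantize_int8(feats[idx])
    # int8 storage is 4x smaller than float32, consistent in spirit with the 2-4x savings reported
    print(q.shape, q.dtype, q.nbytes, feats[idx].nbytes)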

This record has no associated files available for download.

More information

Published date: 2021
Venue - Dates: 22nd Annual Conference of the International Speech Communication Association, INTERSPEECH 2021, Brno, Czech Republic, 2021-08-30 - 2021-09-03
Keywords: Continual learning, Emotion recognition, Incremental learning, Quantization, Sound classification

Identifiers

Local EPrints ID: 491037
URI: http://eprints.soton.ac.uk/id/eprint/491037
ISSN: 2308-457X
PURE UUID: eb262bfe-7352-4ce9-9de3-3f2d1cb85755

Catalogue record

Date deposited: 11 Jun 2024 16:43
Last modified: 11 Jun 2024 16:43

Contributors

Author: Young D. Kwon
Author: Jagmohan Chauhan
Author: Cecilia Mascolo
