Exploring on-device learning using few shots for audio classification
Chauhan, Jagmohan, Kwon, Young D. and Mascolo, Cecilia (2022) Exploring on-device learning using few shots for audio classification. In 30th European Signal Processing Conference, EUSIPCO. vol. 2022-August, IEEE. pp. 424-428. (doi:10.23919/EUSIPCO55093.2022.9909551).
Record type: Conference or Workshop Item (Paper)
Abstract
Few-shot learning (FSL) improves the generalization of neural network classifiers to unseen classes and tasks using only a small number of annotated examples. Recently, there have been attempts to apply few-shot learning in the audio domain for various applications, but the focus has been mainly on accuracy. Here, we take a holistic view and, in addition to improving accuracy with very deep models, investigate system aspects such as the latency, storage, and memory requirements of few-shot learning methods for audio classification tasks. To this end, we not only compare the performance of different few-shot learning methods but also, for the first time, design an end-to-end framework for smartphones and wearables that can run such methods entirely on-device. Our results indicate the need to collect larger datasets with more classes, as we show that much higher gains can be obtained with very deep models on big datasets. Surprisingly, metric-based methods such as Prototypical Networks can be realized practically on-device, and quantization further reduces resource requirements by 50% while having no impact on accuracy for the audio classification tasks.
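To make the abstract's reference to metric-based methods concrete, the sketch below illustrates the core computation behind Prototypical Networks: each class prototype is the mean embedding of its few labelled support clips, and a new clip is assigned to the class of the nearest prototype. This is a minimal, illustrative NumPy sketch, not the paper's implementation; the function name, embedding dimensions, and toy data are assumptions, and the paper's audio feature extractor, backbone models, and on-device framework are not reproduced here.

import numpy as np

def prototypical_classify(support_emb, support_labels, query_emb, n_classes):
    """Assign each query clip to the class with the nearest prototype.

    support_emb:    (n_support, dim) embeddings of the few labelled support clips
    support_labels: (n_support,) integer labels in [0, n_classes)
    query_emb:      (n_query, dim) embeddings of clips to classify
    """
    # A class prototype is simply the mean embedding of its support examples.
    prototypes = np.stack([
        support_emb[support_labels == c].mean(axis=0)
        for c in range(n_classes)
    ])
    # Squared Euclidean distance from every query to every prototype.
    dists = ((query_emb[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    # Nearest prototype wins.
    return dists.argmin(axis=1)

# Toy usage: a 2-way 3-shot episode with 8-dimensional embeddings.
rng = np.random.default_rng(0)
support = rng.normal(size=(6, 8))
labels = np.array([0, 0, 0, 1, 1, 1])
queries = rng.normal(size=(4, 8))
print(prototypical_classify(support, labels, queries, n_classes=2))

Because adapting to new classes only requires averaging a handful of embeddings rather than running backpropagation, such metric-based methods are comparatively cheap to execute, which is consistent with the on-device feasibility reported in the abstract.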
This record has no associated files available for download.
More information
Published date: 18 October 2022
Venue - Dates:
30th European Signal Processing Conference, EUSIPCO 2022, Belgrade, Serbia, 2022-08-29 - 2022-09-02
Keywords:
Acoustic Event Classification, Few Shot Learning, Keyword Spotting, On-Device Learning, Performance
Identifiers
Local EPrints ID: 491128
URI: http://eprints.soton.ac.uk/id/eprint/491128
ISSN: 2219-5491
PURE UUID: 2914db82-04e6-4edb-97b3-22a513f7c550
Catalogue record
Date deposited: 13 Jun 2024 16:37
Last modified: 13 Jun 2024 16:37
Contributors
Author: Jagmohan Chauhan
Author: Young D. Kwon
Author: Cecilia Mascolo