FS-SS: few-shot learning for fast and accurate spike sorting of high-channel count probes
Fang, Tao
Zamani, Majid
Abstract
There is a need for fast adaptation in spike sorting algorithms to implement brain-machine interfaces (BMIs) in different applications. Learning and adapting the functionality of the sorting process in real time can significantly improve performance. However, deep neural networks (DNNs) depend on large amounts of training data, and their performance degrades when data is limited. Inspired by meta-learning, this paper proposes a few-shot spike sorting framework (FS-SS) with a variable network model size that requires minimal training and supervision. The framework is not only compatible with few-shot adaptation, but also uses an attention mechanism and dilated convolutional neural networks. This allows the network parameters to be scaled to learn the important features of spike signals and to quickly generalize to new spike waveforms in recording channels after a few observations. FS-SS was evaluated on freely accessible datasets and compared with other state-of-the-art algorithms. The average classification accuracy of the proposed method is 99.28%, showing strong robustness to background noise and to similarity between spike waveforms. When the number of training samples is reduced by 90%, the parameter scale is reduced by 68.2%, while the accuracy decreases by only 0.55%. The paper also visualizes the model's attention distribution under spike sorting tasks of different difficulty levels. The attention distribution results show that the proposed model has clear interpretability and high robustness.
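To illustrate the kind of architecture the abstract describes, the sketch below combines dilated 1D convolutions and self-attention in a spike-waveform encoder, and classifies query spikes from a handful of labelled examples in a prototypical-network-style episode. This is a minimal, hypothetical sketch and not the authors' implementation: layer sizes, the number of dilation stages, and the prototype-based few-shot classifier are all assumptions.

# Hypothetical sketch (PyTorch): dilated-CNN + attention encoder for few-shot spike sorting.
import torch
import torch.nn as nn

class SpikeEncoder(nn.Module):
    def __init__(self, emb_dim=64, channels=32):
        super().__init__()
        # Stacked dilated convolutions widen the receptive field over the
        # spike waveform without greatly increasing the parameter count.
        self.convs = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size=3, padding=1, dilation=1), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=3, padding=4, dilation=4), nn.ReLU(),
        )
        # Self-attention over time steps highlights the waveform samples that
        # matter most for separating units.
        self.attn = nn.MultiheadAttention(embed_dim=channels, num_heads=4, batch_first=True)
        self.proj = nn.Linear(channels, emb_dim)

    def forward(self, x):                  # x: (batch, 1, n_samples)
        h = self.convs(x).transpose(1, 2)  # (batch, n_samples, channels)
        h, _ = self.attn(h, h, h)
        return self.proj(h.mean(dim=1))    # (batch, emb_dim)

def few_shot_logits(encoder, support_x, support_y, query_x, n_classes):
    """Classify query spikes using a few labelled support spikes per unit."""
    z_s, z_q = encoder(support_x), encoder(query_x)
    # One prototype per putative unit: the mean embedding of its support spikes.
    prototypes = torch.stack([z_s[support_y == c].mean(0) for c in range(n_classes)])
    # Nearest-prototype classification via negative squared Euclidean distance.
    return -torch.cdist(z_q, prototypes) ** 2

In such a scheme, adapting to a new recording channel only requires encoding a few labelled spikes to form new prototypes, rather than retraining the network; this is one common way meta-learning-style few-shot adaptation is realized, though the paper's exact mechanism may differ.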
Text - Author's Original
More information
Submitted date: 24 March 2025
Identifiers
Local EPrints ID: 500400
URI: http://eprints.soton.ac.uk/id/eprint/500400
PURE UUID: 23fb4e42-26f8-4af0-a5ab-e4a3a116fdf8
Catalogue record
Date deposited: 29 Apr 2025 16:35
Last modified: 22 Aug 2025 02:41
Contributors
Author: Tao Fang
Author: Majid Zamani