FedTM: Memory and Communication Efficient Federated Learning with Tsetlin Machine
How Shi Qi, Shannon, Chauhan, Jagmohan, Merrett, Geoff V. and Hare, Jonathan (2023) FedTM: Memory and Communication Efficient Federated Learning with Tsetlin Machine. International Symposium on the Tsetlin Machine (ISTM), Newcastle University, Newcastle, United Kingdom, 29-30 Aug 2023. 8 pp.
Record type: Conference or Workshop Item (Paper)
Abstract
Federated Learning (FL) is an exciting development in machine learning, promising collaborative learning without compromising privacy. However, the resource-intensive nature of Deep Neural Networks (DNNs) has made it difficult to deploy FL on edge devices. To address this challenge, we present FedTM, the first FL framework to utilize the Tsetlin Machine, a low-complexity machine learning alternative. We propose a two-step aggregation scheme for combining local parameters at the server that addresses challenges such as data heterogeneity, varying client participation ratios, and bit-based aggregation. Compared to conventional Federated Averaging (FedAvg) with Convolutional Neural Networks (CNNs), FedTM provides, on average, a 30.5× reduction in communication costs and a 36.6× reduction in storage memory footprint. Our results demonstrate that FedTM outperforms BiFL-BiML (the state of the art) in every FL setting while providing a 1.37–7.6× reduction in communication costs and a 2.93–7.2× reduction in run-time memory on the evaluated datasets, making it a promising solution for edge devices.
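The record does not spell out the aggregation algorithm, so the following is a minimal, hypothetical sketch of what a two-step, bit-based server aggregation over binary Tsetlin Machine parameters could look like: client bit vectors are first averaged with weights proportional to local dataset size (one plausible way to handle data heterogeneity and varying participation), then re-binarised by majority threshold. The function name, the weighting rule, and the 0.5 threshold are illustrative assumptions, not FedTM's published procedure.

```python
import numpy as np

def aggregate_tm_bits(client_bits, client_sizes):
    """Hypothetical two-step aggregation for bit-valued TM parameters.

    client_bits:  one 0/1 NumPy array per participating client, e.g. the
                  clause include/exclude masks of a local Tsetlin Machine.
    client_sizes: local training-set sizes, used here to weight client
                  contributions under data heterogeneity (an assumption).
    """
    bits = np.stack(client_bits).astype(np.float64)   # shape: (n_clients, n_params)
    weights = np.asarray(client_sizes, dtype=np.float64)
    weights /= weights.sum()                          # normalise over this round's participants

    # Step 1: weighted average of the client bit vectors (FedAvg-style, on bits),
    # giving a per-parameter score in [0, 1].
    soft = weights @ bits

    # Step 2: re-binarise by majority threshold so the global model remains a
    # valid bit-based parameter set that is cheap to store and communicate.
    return (soft >= 0.5).astype(np.uint8)

# Example round with three clients holding unequal amounts of data:
global_bits = aggregate_tm_bits(
    [np.array([1, 0, 1, 1]), np.array([0, 0, 1, 1]), np.array([1, 1, 1, 0])],
    client_sizes=[120, 300, 80],
)
print(global_bits)  # array([0, 0, 1, 1], dtype=uint8)
```

Keeping the global model binary after step 2 is what preserves the bit-level communication savings the abstract reports: clients exchange packed bit vectors rather than floating-point weight tensors.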
Text: FedTM - Accepted Manuscript
More information
Accepted/In Press date: 9 June 2023
Published date: August 2023
Venue - Dates: International Symposium on the Tsetlin Machine (ISTM), Newcastle University, Newcastle, United Kingdom, 2023-08-29 - 2023-08-30
Identifiers
Local EPrints ID: 481860
URI: http://eprints.soton.ac.uk/id/eprint/481860
PURE UUID: 4cb7b5f7-ed1d-4be0-a291-82dc5e18cda8
Catalogue record
Date deposited: 11 Sep 2023 17:13
Last modified: 18 Mar 2024 03:03
Contributors
Author: Shannon How Shi Qi
Author: Jagmohan Chauhan
Author: Geoff V. Merrett
Author: Jonathan Hare