Privacy-preserving federated learning framework
University of Southampton
June 2025
Huang, Wenxuan (2025) Privacy-preserving federated learning framework. University of Southampton, Doctoral Thesis, 177pp.
Record type: Thesis (Doctoral)
Abstract
Federated Learning (FL) enables multiple parties to collaboratively train a shared model without exposing raw data. However, FL often suffers from degraded performance due to three core challenges: non-IID data distributions, residual privacy risks, and limited data volume per node. This work explores the privacy–performance trade-offs arising from these challenges and proposes a unified framework combining partial data sharing, anonymisation, and data augmentation.
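The parameter-sharing idea behind FL can be illustrated with a minimal FedAvg-style aggregation step. This is a generic sketch, not the framework proposed in the thesis; `fedavg` and its arguments are hypothetical names.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of client model parameters (FedAvg-style).

    client_weights: list of 1-D parameter vectors, one per client.
    client_sizes:   number of local samples per client (aggregation weights).
    """
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)  # shape: (n_clients, n_params)
    return (sizes[:, None] * stacked).sum(axis=0) / sizes.sum()

# Two clients with different local data volumes: only model parameters
# travel to the server, never the raw training data.
w_global = fedavg([np.array([1.0, 2.0]), np.array([3.0, 4.0])], [10, 30])
# w_global == [2.5, 3.5]  (weights 0.25 and 0.75)
```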
We first investigate how non-IID distributions impair standard FL convergence, and propose a Cluster-of-Trust (CoT) mechanism that groups mutually trusted clients for hierarchical aggregation. Experiments on the IoT-23 intrusion detection dataset show that CoT significantly improves model accuracy and convergence speed, closing most of the gap to centralised training.
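The abstract does not specify how CoT forms or weights its clusters; the hierarchical-aggregation idea can nonetheless be sketched generically as a two-level (cluster-then-global) average, with trust-cluster membership assumed to be given. All names here are illustrative, not the thesis's implementation.

```python
import numpy as np

def hierarchical_aggregate(updates, sizes, clusters):
    """Two-level aggregation: average inside each trust cluster first,
    then average the cluster models, weighted by cluster data volume.

    updates:  list of 1-D parameter vectors, one per client.
    sizes:    local sample counts per client.
    clusters: list of lists of client indices (one list per trust cluster).
    """
    cluster_models, cluster_sizes = [], []
    for members in clusters:
        s = np.array([sizes[i] for i in members], dtype=float)
        w = np.stack([updates[i] for i in members])
        cluster_models.append((s[:, None] * w).sum(axis=0) / s.sum())
        cluster_sizes.append(s.sum())
    cs = np.asarray(cluster_sizes)[:, None]
    return (cs * np.stack(cluster_models)).sum(axis=0) / cs.sum()
```

In a real hierarchical scheme the two levels can use different schedules or robustness rules; with plain size-weighting, as here, the result coincides with flat averaging, which makes the structure easy to verify.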
Next, we compare combinations of privacy-preserving techniques—including differential privacy, data anonymisation, and CoT-based grouping—highlighting how these impact model utility. Our results demonstrate that strong privacy guarantees can be achieved with only modest accuracy loss and reduced communication overhead.
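One of the compared techniques, differential privacy, is commonly applied to client updates via the standard clip-and-noise recipe. The sketch below shows that generic recipe only; the noise scale is illustrative and a real deployment would calibrate it to a target (epsilon, delta) budget, which the thesis's actual configuration may do differently.

```python
import numpy as np

def dp_sanitise(update, clip_norm=1.0, noise_std=0.5, rng=None):
    """Clip a client update to bound its L2 sensitivity, then add
    Gaussian noise (the recipe behind DP-SGD-style guarantees).
    noise_std is illustrative, not a calibrated privacy budget.
    """
    rng = np.random.default_rng(rng)
    norm = np.linalg.norm(update)
    # Scale down only if the update exceeds the clipping norm.
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    return clipped + rng.normal(0.0, noise_std, size=update.shape)
```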
To address the limited data volume problem, we introduce data augmentation strategies and test their effectiveness in a privacy-sensitive Human Activity Recognition (HAR) task. We find that combining augmentation with trust-aware aggregation improves generalisation under strict privacy constraints.
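For accelerometer-style HAR windows, two widely used time-series augmentations are additive jitter and per-channel magnitude scaling. The sketch below shows those common transforms under assumed parameter names; the thesis's specific augmentation strategies are not reproduced here.

```python
import numpy as np

def augment_window(window, rng=None, sigma_jitter=0.05, sigma_scale=0.1):
    """Jitter + magnitude-scaling augmentation for a sensor window.

    window: array of shape (timesteps, channels), e.g. tri-axial
            accelerometer readings.
    """
    rng = np.random.default_rng(rng)
    # Small additive noise per sample, plus one random scale per channel.
    jitter = rng.normal(0.0, sigma_jitter, size=window.shape)
    scale = rng.normal(1.0, sigma_scale, size=(1, window.shape[1]))
    return (window + jitter) * scale
```

Augmented copies are generated locally on each client, so they enlarge the effective training set without any raw data leaving the device.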
Across two domains—cybersecurity and HAR—we systematically evaluate this framework and demonstrate its effectiveness and generalisability. Our results show that strong privacy can be reconciled with competitive performance, offering practical insights for deploying FL in real-world, heterogeneous environments.
Text: Privacy_Preserving_Federated_Learning_Framework__2_ - Version of Record
Text: Final-thesis-submission-Examination-Mr-Wenxuan-Huang - Restricted to Repository staff only
More information
Published date: June 2025
Identifiers
Local EPrints ID: 510212
URI: http://eprints.soton.ac.uk/id/eprint/510212
PURE UUID: 0e769624-c63f-48b5-85ba-b1586072d384
Catalogue record
Date deposited: 23 Mar 2026 17:35
Last modified: 24 Mar 2026 03:05
Contributors
Author: Wenxuan Huang
Thesis advisor: Thanassis Tiropanis
Thesis advisor: George Konstantinidis