University of Southampton Institutional Repository

Privacy-preserving federated learning framework
University of Southampton

Huang, Wenxuan (2025) Privacy-preserving federated learning framework. University of Southampton, Doctoral Thesis, 177pp.

Record type: Thesis (Doctoral)

Abstract

Federated Learning (FL) enables multiple parties to collaboratively train a shared model without exposing raw data. However, FL often suffers from degraded performance due to three core challenges: non-IID data distributions, residual privacy risks, and limited data volume per node. This work explores the privacy–performance trade-offs arising from these challenges and proposes a unified framework combining partial data sharing, anonymisation, and data augmentation.
We first investigate how non-IID distributions impair standard FL convergence, and propose a Cluster-of-Trust (CoT) mechanism that groups mutually trusted clients for hierarchical aggregation. Experiments on the IoT-23 intrusion detection dataset show that CoT significantly improves model accuracy and convergence speed, closing most of the gap to centralised training.
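The CoT mechanism described above aggregates in two stages: within each trust cluster, then across clusters. The sketch below illustrates such two-level weighted averaging; using plain FedAvg at both levels, and all function names, are assumptions for illustration, not the thesis's actual aggregation rule.

```python
import numpy as np

def fedavg(weights, sizes):
    """Weighted average of client model weights (FedAvg)."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(weights, sizes))

def hierarchical_aggregate(clusters):
    """Two-level aggregation: average within each trust cluster,
    then average cluster models weighted by cluster data volume."""
    cluster_models, cluster_sizes = [], []
    for clients in clusters:  # clients: list of (weights, n_samples)
        ws, ns = zip(*clients)
        cluster_models.append(fedavg(ws, ns))
        cluster_sizes.append(sum(ns))
    return fedavg(cluster_models, cluster_sizes)
```

With cluster weights proportional to total cluster data, this two-level average reproduces the flat FedAvg result while letting each cluster aggregate locally first.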
Next, we compare combinations of privacy-preserving techniques—including differential privacy, data anonymisation, and CoT-based grouping—highlighting how these impact model utility. Our results demonstrate that strong privacy guarantees can be achieved with only modest accuracy loss and reduced communication overhead.
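One common way differential privacy is applied to FL updates is the Gaussian mechanism: clip each client update to a bounded L2 norm, then add calibrated noise. The function below is a generic sketch of that mechanism only; the function name, default parameters, and use of this particular mechanism are assumptions, not the configuration evaluated in the thesis.

```python
import numpy as np

def dp_sanitise(update, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip a client update to L2 norm <= clip_norm, then add Gaussian
    noise scaled to the clipping bound (the Gaussian mechanism)."""
    rng = rng or np.random.default_rng(0)
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise
```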
To address the limited data volume problem, we introduce data augmentation strategies and test their effectiveness in a privacy-sensitive Human Activity Recognition (HAR) task. We find that combining augmentation with trust-aware aggregation improves generalisation under strict privacy constraints.
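Typical augmentation strategies for wearable-sensor HAR data include jittering and magnitude scaling of signal windows. The sketch below shows these two standard transforms on a (timesteps, channels) array; the abstract does not detail which augmentations the thesis uses, so these specific choices are assumptions.

```python
import numpy as np

def jitter(x, sigma=0.05, rng=None):
    """Additive Gaussian noise on a sensor window (timesteps, channels)."""
    rng = rng or np.random.default_rng(0)
    return x + rng.normal(0.0, sigma, size=x.shape)

def scale(x, sigma=0.1, rng=None):
    """Random per-channel magnitude scaling of a sensor window."""
    rng = rng or np.random.default_rng(1)
    factors = rng.normal(1.0, sigma, size=(1, x.shape[-1]))
    return x * factors
```

Both transforms preserve the window shape, so augmented samples can be mixed directly into each client's local training set.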
Across two domains—cybersecurity and HAR—we systematically evaluate this framework and demonstrate its effectiveness and generalisability. Our results show that strong privacy can be reconciled with competitive performance, offering practical insights for deploying FL in real-world, heterogeneous environments.

Text
Privacy_Preserving_Federated_Learning_Framework__2_ - Version of Record
Available under License University of Southampton Thesis Licence.
Download (3MB)
Text
Final-thesis-submission-Examination-Mr-Wenxuan-Huang
Restricted to Repository staff only

More information

Published date: June 2025

Identifiers

Local EPrints ID: 510212
URI: http://eprints.soton.ac.uk/id/eprint/510212
PURE UUID: 0e769624-c63f-48b5-85ba-b1586072d384
ORCID for Wenxuan Huang: orcid.org/0000-0002-2613-2672
ORCID for Thanassis Tiropanis: orcid.org/0000-0002-6195-2852

Catalogue record

Date deposited: 23 Mar 2026 17:35
Last modified: 24 Mar 2026 03:05


Contributors

Author: Wenxuan Huang
Thesis advisor: Thanassis Tiropanis
Thesis advisor: George Konstantinidis

Download statistics

Downloads from ePrints over the past year. Other digital versions may also be available to download, e.g. from the publisher's website.



Contact ePrints Soton: eprints@soton.ac.uk

ePrints Soton supports OAI 2.0 with a base URL of http://eprints.soton.ac.uk/cgi/oai2

This repository has been built using EPrints software, developed at the University of Southampton, but available to everyone to use.
