Enhancing Federated Learning robustness in adversarial environment through clustering Non-IID features

Li, Yanli, Yuan, Dong, Sani, Abubakar Sadiq and Bao, Wei (2023) Enhancing Federated Learning robustness in adversarial environment through clustering Non-IID features. Computers and Security, 132:103319. pp. 1-13. ISSN 0167-4048 (doi:https://doi.org/10.1016/j.cose.2023.103319)

PDF (Publisher VoR)
43039_SANI_ Enhancing_federated_learning_robustness_in_adversarial_environment_through_clustering_non_IID_features.pdf - Published Version
Available under a Creative Commons Attribution Non-Commercial No Derivatives License.


Abstract

Federated Learning (FL) enables many clients to train a joint model without sharing their raw data. Although many Byzantine-robust FL methods have been proposed, FL remains vulnerable to security attacks, such as poisoning attacks and evasion attacks, because of its distributed adversarial environment. Additionally, real-world training data used in FL are usually Non-Independent and Identically Distributed (Non-IID), which further weakens the robustness of existing FL methods (such as Krum, Median, and Trimmed-Mean), making it possible for the global model to be broken in extreme Non-IID scenarios. In this work, we mitigate these weaknesses of existing FL methods in Non-IID and adversarial scenarios by proposing a new FL framework called Mini-Federated Learning (Mini-FL). Mini-FL follows the general FL approach but takes the Non-IID sources of FL into account and aggregates the gradients by groups. Specifically, Mini-FL first performs unsupervised learning on the received gradients to define a grouping policy. The server then divides the received gradients into groups according to this policy and performs Byzantine-robust aggregation within each group. Finally, the server computes the weighted mean of the gradients from each group to update the global model. Owing to its strong generality, Mini-FL can utilize most existing Byzantine-robust methods. We demonstrate that Mini-FL effectively enhances FL robustness and achieves greater global accuracy than existing FL methods both under security attacks and in Non-IID settings.
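
The aggregation pipeline outlined in the abstract (unsupervised grouping of the received gradients, per-group Byzantine-robust aggregation, then a weighted mean across groups) can be sketched in Python as below. This is an illustrative sketch only, not the paper's implementation: it assumes K-means as the unsupervised grouping step and the coordinate-wise median as the per-group Byzantine-robust aggregator, and the names group_robust_aggregate and n_groups are hypothetical.

    # Illustrative sketch of group-wise Byzantine-robust aggregation
    # (assumed components: K-means grouping, coordinate-wise median per group).
    import numpy as np
    from sklearn.cluster import KMeans

    def group_robust_aggregate(client_updates, n_groups=3, seed=0):
        """Aggregate flattened client updates of shape (n_clients, n_params)."""
        updates = np.asarray(client_updates)

        # Step 1: unsupervised grouping of the received updates (here: K-means).
        labels = KMeans(n_clusters=n_groups, n_init=10,
                        random_state=seed).fit_predict(updates)

        group_results, group_sizes = [], []
        for g in range(n_groups):
            members = updates[labels == g]
            if len(members) == 0:
                continue
            # Step 2: Byzantine-robust aggregation within the group
            # (coordinate-wise median used here as a stand-in aggregator).
            group_results.append(np.median(members, axis=0))
            group_sizes.append(len(members))

        # Step 3: weighted mean of the per-group results, weighted by group size.
        weights = np.array(group_sizes) / sum(group_sizes)
        return np.average(np.stack(group_results), axis=0, weights=weights)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Toy updates drawn from three different (Non-IID-like) distributions.
        updates = np.vstack([rng.normal(m, 0.1, size=(10, 50))
                             for m in (-1.0, 0.0, 1.0)])
        print(group_robust_aggregate(updates, n_groups=3).shape)

In this sketch the median could be swapped for any other robust aggregator (e.g. Krum or Trimmed-Mean), which mirrors the generality claim in the abstract.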

Item Type: Article
Uncontrolled Keywords: Federated learning (FL); non-independent and identically distributed (Non-IID); Byzantine-robust aggregation; untargeted model attack
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Faculty / School / Research Centre / Research Group: Faculty of Engineering & Science
Faculty of Engineering & Science > Internet of Things and Security Research Centre (ISEC)
Faculty of Engineering & Science > School of Computing & Mathematical Sciences (CMS)
Last Modified: 22 Jun 2023 11:33
URI: http://gala.gre.ac.uk/id/eprint/43039
