
Enhancing Federated Learning robustness through non-IID features
Li, Yanli, Sani, Abubakar Sadiq, Yuan, Dong and Bao, Wei (2022) Enhancing Federated Learning robustness through non-IID features. In: Computer Vision – ACCV 2022 Workshops. Lecture Notes in Computer Science, pp. 41-55. ISBN 9783031270666. doi: https://doi.org/10.1007/978-3-031-27066-6_4

PDF (Published workshop paper): 38390_SANI_Enhancing_Federated_Learning_robustness_through_clustering_non-IID_features.pdf - Published Version (826kB)

Abstract

Federated Learning (FL) enables many clients to train a joint model without sharing their raw data. Although many Byzantine-robust FL methods have been proposed, FL remains vulnerable to security attacks (such as poisoning attacks and evasion attacks) because of its distributed nature. Additionally, real-world training data used in FL are usually Non-Independent and Identically Distributed (Non-IID), which further weakens the robustness of existing FL methods (such as Krum, Median, Trimmed-Mean, etc.), making it possible for the global model in FL to be broken in extreme Non-IID scenarios. In this work, we mitigate the vulnerability of existing FL methods in Non-IID scenarios by proposing a new FL framework called Mini-Federated Learning (Mini-FL). Mini-FL follows the general FL approach but takes the Non-IID sources of FL into account and aggregates the gradients by groups. Specifically, Mini-FL first performs unsupervised learning on the received gradients to define a grouping policy. The server then divides the received gradients into groups according to this policy and performs Byzantine-robust aggregation within each group. Finally, the server computes the weighted mean of the gradients from each group to update the global model. Owing to its strong generality, Mini-FL can utilize most existing Byzantine-robust methods. We demonstrate that Mini-FL effectively enhances FL robustness and achieves greater global accuracy than existing FL methods under security attacks and in Non-IID settings.
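The aggregation pipeline described in the abstract can be sketched in a few lines. The following is a minimal illustrative implementation, not the authors' code: it assumes k-means as the unsupervised grouping step and the coordinate-wise median as the per-group Byzantine-robust aggregator, both of which are stand-ins for whatever the paper actually uses. The function name `mini_fl_aggregate` and all parameters are hypothetical.

```python
import numpy as np

def mini_fl_aggregate(grads, n_groups=2, seed=0, n_iters=10):
    """Sketch of Mini-FL server-side aggregation (hypothetical implementation).

    1. Group the received gradients with a simple k-means (unsupervised).
    2. Aggregate each group with a Byzantine-robust rule (coordinate-wise median).
    3. Return the weighted mean of the group aggregates (weights = group sizes).
    """
    grads = np.asarray(grads, dtype=float)

    # Step 1: unsupervised grouping policy via plain k-means on gradient vectors.
    rng = np.random.default_rng(seed)
    centers = grads[rng.choice(len(grads), n_groups, replace=False)]
    for _ in range(n_iters):
        # Assign each gradient to its nearest center.
        dists = np.linalg.norm(grads[:, None, :] - centers[None, :, :], axis=2)
        labels = np.argmin(dists, axis=1)
        # Recompute centers from current assignments (skip empty groups).
        for k in range(n_groups):
            if np.any(labels == k):
                centers[k] = grads[labels == k].mean(axis=0)

    # Step 2: Byzantine-robust aggregation inside each group.
    group_aggs, weights = [], []
    for k in range(n_groups):
        members = grads[labels == k]
        if len(members) == 0:
            continue
        group_aggs.append(np.median(members, axis=0))  # robust per-group gradient
        weights.append(len(members))

    # Step 3: weighted mean of the group aggregates updates the global model.
    return np.average(np.stack(group_aggs), axis=0, weights=np.asarray(weights, float))
```

In this sketch a single malicious gradient can at most dominate its own small group; its influence on the final update is then bounded by that group's weight, which illustrates why grouping before robust aggregation can help in Non-IID settings.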

Item Type: Book Section
Additional Information: The ACCV 2022 workshop papers, provided here by the Computer Vision Foundation, are the author-created versions. The content of the papers is identical to the content of the officially published ACCV 2022 LNCS version of the papers as available on SpringerLink: https://link.springer.com/conference/accv. Intellectual property rights, copyright and all rights therein are retained by authors, by Springer as the publisher of the official ACCV 2022 proceedings or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright.
Uncontrolled Keywords: Federated Learning (FL); Byzantine-robust aggregation; untargeted model attack
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Q Science > QA Mathematics > QA76 Computer software
Faculty / School / Research Centre / Research Group: Faculty of Engineering & Science
Faculty of Engineering & Science > School of Computing & Mathematical Sciences (CMS)
Last Modified: 25 Sep 2023 13:12
URI: http://gala.gre.ac.uk/id/eprint/38390
