A random deep feature selection approach to mitigate transferable adversarial attacks

Nowroozi, Ehsan ORCID: https://orcid.org/0000-0002-5714-8378, Mohammadi, Mohammadreza, Rahdari, Ahmad, Taheri, Rahim and Conti, Mauro (2025) A random deep feature selection approach to mitigate transferable adversarial attacks. IEEE Transactions on Network and Service Management. ISSN 1932-4537 (Online) (doi:10.1109/TNSM.2025.3594253)

PDF (Author's Accepted Manuscript): 50898 NOWROOZI_A_Random_Deep_Feature_Selection_Approach_To_Mitigate_Transferable_Adversarial_Attacks_(AAM)_2025.pdf - Accepted Version (2MB)

Abstract

Machine learning and deep learning are transformative forces reshaping our networks, industries, services, and ways of life. However, the susceptibility of these intelligent systems to adversarial attacks remains a significant issue. On the one hand, recent studies have demonstrated the potential transferability of adversarial attacks across diverse models. On the other hand, existing defense mechanisms are vulnerable to advanced attacks or are often limited to certain attack types. This study proposes a random deep feature selection approach to mitigate such transferability and improve the robustness of models against adversarial manipulations. Our approach is designed to strengthen deep models against poisoning attacks (e.g., label flipping), exploratory attacks (e.g., DeepFool, BIM, FGSM, I-FGSM, L-BFGS, C&W, JSMA, and PGD) applied in both the training and testing stages, and transfer learning-based adversarial attacks. We consider scenarios involving perfect-knowledge and semi-knowledgeable attackers. The performance of our approach is evaluated through extensive experiments on the renowned UNSW-NB15 dataset, including both real-world and synthetic data, covering a wide range of modern attack behaviors and benign activities. The results indicate that our approach boosts the effectiveness of the target network to over 80% against label-flipping poisoning attacks and over 60% against all major types of exploratory attacks.
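As a rough illustration of the core idea described in the abstract (not the authors' actual implementation, whose details are in the paper), randomly selecting a subset of deep-feature dimensions at inference time makes the effective feature space unpredictable to an attacker who crafted perturbations against a fixed surrogate model. The function name, the `keep_ratio` parameter, and the feature shapes below are all hypothetical choices for the sketch:

```python
import numpy as np

def random_feature_select(features, keep_ratio=0.6, rng=None):
    """Randomly keep a subset of deep-feature dimensions, zeroing the rest.

    Because the retained subset changes from call to call (or per seed),
    an adversarial perturbation optimized against one fixed feature set
    is less likely to transfer to the randomized defender.
    """
    rng = np.random.default_rng(rng)
    n = features.shape[-1]
    keep = rng.choice(n, size=int(n * keep_ratio), replace=False)
    mask = np.zeros(n, dtype=features.dtype)
    mask[keep] = 1.0
    return features * mask

# Example: a batch of 4 deep-feature vectors, 128 dimensions each
feats = np.random.default_rng(0).normal(size=(4, 128)).astype(np.float32)
masked = random_feature_select(feats, keep_ratio=0.6, rng=42)
```

In a full pipeline, the masked features would then feed a downstream classifier head; the paper evaluates this kind of randomization against poisoning and exploratory attacks on UNSW-NB15.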

Item Type: Article
Uncontrolled Keywords: adversarial machine learning, cyber security, machine learning
Subjects: H Social Sciences > HD Industries. Land use. Labor > HD61 Risk Management
Q Science > Q Science (General)
Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Faculty / School / Research Centre / Research Group: Faculty of Engineering & Science
Faculty of Engineering & Science > School of Computing & Mathematical Sciences (CMS)
Last Modified: 05 Aug 2025 09:28
URI: https://gala.gre.ac.uk/id/eprint/50898
