
MeetSafe: enhancing robustness against white-box adversarial examples

Stenhuis, Ruben, Liu, Dazhuang, Qiao, Yanqi, Conti, Mauro, Panaousis, Manos (ORCID: https://orcid.org/0000-0001-7306-4062) and Liang, Kaitai (2025) MeetSafe: enhancing robustness against white-box adversarial examples. Frontiers in Computer Science, 7:1631561. ISSN 2624-9898 (Online) (doi: 10.3389/fcomp.2025.1631561)

PDF (Open Access Article): 51601 PANAOUSIS_MeetSafe_Enhancing_Robustness_Against_White-Box_Adversarial_Examples_(OA)_2025.pdf - Published Version (1MB)
Available under License Creative Commons Attribution.

Abstract

Convolutional neural networks (CNNs) are vulnerable to adversarial attacks in computer vision tasks. Current adversarial example detection methods are ineffective against white-box attacks and inefficient when deep CNNs generate high-dimensional hidden features. This study proposes MeetSafe, an effective and scalable adversarial example (AE) detection method against white-box attacks. MeetSafe identifies AEs using critical hidden features rather than the entire feature space. We observe a non-uniform distribution of Z-scores between clean samples and AEs among hidden features and propose two utility functions to select the features most relevant to AEs. We process the critical hidden features with three feature engineering methods: local outlier factor (LOF), feature squeezing, and whitening, which respectively estimate feature density relative to the k nearest neighbors, reduce redundancy, and normalize the features. To cope with the curse of dimensionality and to smooth statistical fluctuations in high-dimensional features, we propose local reachability density (LRD). LRD iteratively selects a bag of engineered features with random cardinality and quantifies their average density via the k nearest neighbors. Finally, MeetSafe constructs a Gaussian Mixture Model (GMM) from the processed features and flags an input as an AE if it appears as a local outlier, indicated by a low density under the GMM. Experimental results show that MeetSafe achieves 74%, 96%, and 79% detection accuracy against adaptive, classic, and white-box attacks, respectively, and is at least 2.3× faster than comparison methods.
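
The detection pipeline summarized above can be sketched roughly as follows. This is only an illustrative outline under assumed choices: the random stand-in features, the bagged k-nearest-neighbor density used in place of the paper's LRD formulation, the PCA-based whitening, the GMM component count, and the 5th-percentile threshold are all assumptions for demonstration, not the authors' implementation.

```python
# Illustrative sketch of a MeetSafe-style detection pipeline (assumptions noted above).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Stand-ins for hidden features extracted from a CNN layer (clean training data
# and a small batch of inputs to screen); real features would come from the model.
clean_feats = rng.normal(size=(500, 64))
test_feats = rng.normal(loc=3.0, size=(10, 64))

# Whitening step: decorrelate and normalize the selected hidden features.
whitener = PCA(n_components=32, whiten=True).fit(clean_feats)
clean_w, test_w = whitener.transform(clean_feats), whitener.transform(test_feats)

def bagged_knn_density(train, query, k=10, n_bags=5):
    """Rough analogue of the LRD idea: average k-NN density over random feature
    bags of random cardinality (higher score = denser neighborhood)."""
    scores = np.zeros(len(query))
    for _ in range(n_bags):
        dims = rng.choice(train.shape[1], size=rng.integers(4, train.shape[1] + 1), replace=False)
        nn = NearestNeighbors(n_neighbors=k).fit(train[:, dims])
        dists, _ = nn.kneighbors(query[:, dims])
        scores += 1.0 / (dists.mean(axis=1) + 1e-12)
    return scores / n_bags

density_scores = bagged_knn_density(clean_w, test_w)

# GMM over processed clean features; inputs with low GMM density are flagged as AEs.
gmm = GaussianMixture(n_components=3, random_state=0).fit(clean_w)
threshold = np.percentile(gmm.score_samples(clean_w), 5)  # assumed 5% cut-off
flagged = gmm.score_samples(test_w) < threshold
print(flagged, density_scores)
```

In the paper, the Z-score-based feature selection, LOF, and feature squeezing would precede these steps; they are omitted here to keep the sketch short.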

Item Type: Article
Uncontrolled Keywords: adversarial attack, convolutional neural network, Gaussian Mixture Model, adversarial example, local reachability density
Subjects: Q Science > Q Science (General)
Q Science > QA Mathematics
Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Q Science > QA Mathematics > QA76 Computer software
Faculty / School / Research Centre / Research Group: Faculty of Engineering & Science > School of Computing & Mathematical Sciences (CMS)
Faculty of Engineering & Science
Last Modified: 17 Nov 2025 15:02
URI: https://gala.gre.ac.uk/id/eprint/51601
