Adversarial Example Detection and Mitigation Using Machine Learning

Nowroozi, Ehsan (ORCID: https://orcid.org/0000-0002-5714-8378), Taheri, Rahim and Cordeiro, Lucas (2026) Adversarial Example Detection and Mitigation Using Machine Learning. Springer Nature, Cham, Switzerland. https://doi.org/10.1007/978-3-031-99447-0. ISBN 978-3-031-99446-3

PDF (Edited Volume)
525024 NOWROOZI_Adversarial_Example_Detection_And_Mitigation_Using_Machine_Learning_(EDITED VOLUME VoR)_2026.pdf - Published Version (31MB)
Restricted to Repository staff only

Abstract

This book offers a comprehensive exploration of emerging threats and defense strategies in adversarial machine learning and AI security. It covers a broad range of topics, from federated learning attacks, adversarial defenses, biometric vulnerabilities, and security weaknesses in generative AI to quantum threats and ethical considerations, and it brings together leading researchers to provide an in-depth, multifaceted perspective. As artificial intelligence systems become increasingly integrated into critical sectors such as healthcare, finance, transportation, and national security, understanding and mitigating adversarial risks has never been more crucial. Each chapter delivers not only a detailed analysis of current challenges but also insights into practical mitigation techniques, future trends, and real-world applications. This book is intended for researchers and graduate students working in machine learning, cybersecurity, and related disciplines. Security professionals will also find it a valuable reference for understanding the latest advancements, defending against sophisticated adversarial threats, and contributing to the development of more robust, trustworthy AI systems. By bridging theoretical foundations with practical applications, this book serves as both a scholarly reference and a catalyst for innovation in the rapidly evolving field of AI security.
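For readers unfamiliar with the adversarial examples the book addresses, the following is a minimal sketch of the classic Fast Gradient Sign Method (FGSM) attack against a toy logistic-regression model. The weights, inputs, and epsilon below are invented for illustration only and are not drawn from the book itself.

```python
import numpy as np

# Toy logistic-regression "model" with fixed, made-up weights.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    """Probability the model assigns to the positive class."""
    return sigmoid(np.dot(w, x) + b)

def fgsm(x, y, eps):
    """Fast Gradient Sign Method: nudge x in the direction that
    increases the loss.

    For binary cross-entropy with a linear model, the gradient of the
    loss with respect to the input is (p - y) * w, where p = predict(x).
    """
    p = predict(x)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

x = np.array([0.5, -0.5, 1.0])  # illustrative clean input
y = 1.0                         # its true label
x_adv = fgsm(x, y, eps=0.3)

# The perturbed input scores lower on the true class than the original,
# even though it differs from x by at most 0.3 per feature.
print(predict(x_adv) < predict(x))  # → True
```

Detection and mitigation methods of the kind surveyed in this volume aim to recognize or resist exactly this sort of small, loss-maximizing perturbation.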

Item Type: Book
Uncontrolled Keywords: adversarial machine learning, adversarial examples, backdoor attacks, data poisoning, model extraction, membership inference, data reconstruction, privacy attacks, federated learning security, explainable AI, probabilistic robustness, AI security, cybersecurity, denial-of-service attacks, sponge attacks, machine learning robustness, quantum adversarial AI, IoT security, cyber-physical systems, deep learning security
Subjects: Q Science > Q Science (General)
Q Science > QA Mathematics
Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Faculty / School / Research Centre / Research Group: Faculty of Engineering & Science
Faculty of Engineering & Science > School of Computing & Mathematical Sciences (CMS)
Last Modified: 23 Feb 2026 12:39
URI: https://gala.gre.ac.uk/id/eprint/52524
