
A taxonomy and survey of attacks against machine learning

Pitropakis, Nikolaos, Panaousis, Emmanouil (ORCID: 0000-0001-7306-4062), Giannetsos, Thanassis, Anastasiadis, Eleftherios and Loukas, George (2019) A taxonomy and survey of attacks against machine learning. Computer Science Review, 34:100199. ISSN 1574-0137. doi: https://doi.org/10.1016/j.cosrev.2019.100199

PDF (Author's Accepted Manuscript): 25226 LOUKAS_Taxonomy_And_Survey_Of_Attacks_Against_Machine_Learning_(AAM)_2019.pdf - Accepted Version, 934kB
Restricted to Repository staff only until 23 October 2020.
Available under License Creative Commons Attribution Non-commercial No Derivatives.

Abstract

The majority of machine learning methodologies operate under the assumption that their environment is benign. However, this assumption does not always hold, as it is often advantageous for adversaries to maliciously modify the training data (poisoning attacks) or the test data (evasion attacks). Such attacks can be catastrophic given the growth and penetration of machine learning applications in society. There is therefore a need to secure machine learning, enabling its safe adoption in adversarial settings such as spam filtering, malware detection, and biometric recognition. This paper presents a taxonomy and survey of attacks against systems that use machine learning. It organizes the body of knowledge in adversarial machine learning so as to identify the aspects to which researchers from different fields can contribute. The taxonomy identifies attacks that share key characteristics and can therefore potentially be addressed by the same defence approaches. The proposed taxonomy thus makes it easier to understand the existing attack landscape with a view to developing defence mechanisms, which are not investigated in this survey. The taxonomy is also leveraged to identify open problems that can lead to new research areas within the field of adversarial machine learning.
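The abstract distinguishes poisoning attacks, which tamper with the training data, from evasion attacks, which perturb inputs at test time. The minimal sketch below is not taken from the paper; the synthetic dataset, the 20% label-flip rate, and the perturbation budget eps are illustrative assumptions. It shows both attack classes against a scikit-learn logistic regression: label flipping degrades the model learned from poisoned data, while an FGSM-style perturbation of the test inputs fools the cleanly trained model.

"""Illustrative sketch of poisoning vs. evasion attacks (assumed setup, not from the survey)."""
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Baseline: model trained on clean data.
clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("clean accuracy:    ", clean.score(X_te, y_te))

# Poisoning attack (label flipping): corrupt 20% of the training labels.
y_poison = y_tr.copy()
flip = rng.choice(len(y_poison), size=len(y_poison) // 5, replace=False)
y_poison[flip] = 1 - y_poison[flip]
poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_poison)
print("poisoned accuracy: ", poisoned.score(X_te, y_te))

# Evasion attack (FGSM-style): perturb test inputs against the clean model.
# For a linear model the input gradient of the decision function is its
# weight vector, so each sample is pushed by eps * sign(w) in the direction
# that lowers the score of its true class.
eps = 0.5
w = clean.coef_[0]
direction = np.where(y_te[:, None] == 1, -np.sign(w), np.sign(w))
X_adv = X_te + eps * direction
print("evasion accuracy:  ", clean.score(X_adv, y_te))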

Item Type: Article
Uncontrolled Keywords: adversarial machine learning, cyber security, artificial intelligence
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Faculty / Department / Research Group: Faculty of Liberal Arts & Sciences
Faculty of Liberal Arts & Sciences > Department of Computing & Information Systems
Faculty of Liberal Arts & Sciences > Internet of Things and Security (ISEC)
Last Modified: 31 Oct 2019 10:23
URI: http://gala.gre.ac.uk/id/eprint/25226
