A review of trustworthy and eXplainable Artificial Intelligence (XAI)

Chamola, Vinay, Hassija, Vikas, Sulthana Abdul Kareem, Razia (ORCID: 0000-0001-5331-1310), Ghosh, Debshishu, Dhingra, Divyansh and Sikdar, Biplab (2023) A review of trustworthy and eXplainable Artificial Intelligence (XAI). IEEE Access, 11, 78994-79015. ISSN 2169-3536 (Online) (doi: https://doi.org/10.1109/ACCESS.2023.3294569)

PDF (Publisher VoR): 43129_SULTHANA_A_review_of_Trustworthy_and_Explainable_Artificial_Intelligence_XAI.pdf - Published Version (5MB)
Available under License Creative Commons Attribution.

Abstract

The advancement of Artificial Intelligence (AI) technology has accelerated the development of many systems built on top of it. This rapid growth has left such systems vulnerable to security attacks and prone to considerable bias in how they handle errors, putting humans at risk and leaving machines, robots, and data poorly protected. Trustworthy AI (TAI) aims to safeguard human values and the environment. In this paper, we present a comprehensive review of the state of the art on building Trustworthy and eXplainable AI, taking into account that AI systems are largely black boxes offering little insight into their underlying structure. The paper discusses the components of TAI and the corresponding biases and inclinations that make systems unreliable, and examines the need for TAI across many verticals, including banking, healthcare, autonomous systems, and IoT. We bring together approaches to building trust across the fragmented areas of data protection, pricing, expense, reliability, assurance, and decision-making that apply TAI in diverse industries and to differing degrees. The paper also emphasizes the importance of transparent and post hoc explanation models in constructing eXplainable AI and lists the potential drawbacks and pitfalls of building such systems. Finally, policies for developing TAI in the autonomous vehicle sector are examined in depth, and a range of ways of building reliable, interpretable, eXplainable, and Trustworthy AI systems are described to guarantee safe autonomous vehicle systems.
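The abstract highlights post hoc explanation models as one building block of eXplainable AI. As a minimal, self-contained sketch (not a method taken from the paper itself), the Python example below applies permutation feature importance, a common model-agnostic post-hoc technique, to a black-box classifier; the dataset, model, and parameter choices are illustrative assumptions.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative data and model choices (assumptions, not drawn from the paper).
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a classifier and then treat it purely as a black box.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Post-hoc, model-agnostic explanation: shuffle each feature on held-out data
# and measure the resulting drop in score; features whose shuffling hurts the
# most contribute most to the model's predictions.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the five most influential features with their mean importance and spread.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f} "
          f"+/- {result.importances_std[idx]:.3f}")

Because the explanation only queries the fitted model's predictions, the same procedure applies unchanged to any classifier, which is what makes post-hoc techniques attractive when the underlying model cannot be made transparent.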

Item Type: Article
Uncontrolled Keywords: Artificial Intelligence (AI); Trustworthy AI (TAI); eXplainable AI (XAI); autonomous vehicles; healthcare; IoT
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Faculty / School / Research Centre / Research Group: Faculty of Engineering & Science
Faculty of Engineering & Science > School of Computing & Mathematical Sciences (CMS)
Last Modified: 23 Nov 2023 13:51
URI: http://gala.gre.ac.uk/id/eprint/43129
