Recent advances in Trustworthy and eXplainable Artificial Intelligence: status, challenges, and perspectives

Chamola, Vinay, Hassija, Vikas, Sulthana Abdul Kareem, Razia ORCID: 0000-0001-5331-1310, Ghosh, Debshishu, Dhingra, Divyansh and Sikdar, Biplab (2023) Recent advances in Trustworthy and eXplainable Artificial Intelligence: status, challenges, and perspectives. IEEE Access. ISSN 2169-3536 (Online) (In Press)

PDF (Publisher VoR): 43129_SULTHANA_Recent_advances_in_Trustworthy_and_Explainable_AI.pdf - Published Version (17MB)
Restricted to Repository staff only
Available under License Creative Commons Attribution.

Abstract

The advancement of Artificial Intelligence (AI) technology has accelerated the development of several systems derived from it. This boom has made such systems vulnerable to security attacks and allows considerable bias in how errors are handled. This puts humans at risk and leaves machines, robots, and data defenseless. Thus, every technological advancement intended to benefit humans can also put them in peril and potentially endanger them. Trustworthy AI safeguards human values and the environment. In this paper, we present a comprehensive state-of-the-art review on how to build Trustworthy and eXplainable AI, taking into account that AI is a black box with little insight into its underlying structure. The paper also discusses various TAI components and their corresponding biases and inclinations that make a system unreliable. The study further discusses the necessity for TAI in many verticals, including banking, healthcare, autonomous systems, and IoT. We unite the ways of building trust across the fragmented areas of data protection, pricing, expense, reliability, assurance, and decision-making processes, utilizing TAI in several diverse industries and to differing degrees. The paper also emphasizes the importance of transparent and post hoc explanation models in the construction of eXplainable AI, and lists the potential drawbacks and pitfalls of building eXplainable AI. Finally, the policies for developing TAI in the autonomous vehicle construction sector are thoroughly examined, and eclectic ways of building reliable and interpretable eXplainable and Trustworthy AI systems are explained to guarantee safe autonomous vehicle systems.

Item Type: Article
Uncontrolled Keywords: Artificial Intelligence (AI); Trustworthy AI (TAI); eXplainable AI (XAI); autonomous vehicles; healthcare; IoT
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Faculty / School / Research Centre / Research Group: Faculty of Engineering & Science
Faculty of Engineering & Science > School of Computing & Mathematical Sciences (CMS)
Last Modified: 06 Sep 2023 09:33
URI: http://gala.gre.ac.uk/id/eprint/43129
