The role of explainable Artificial Intelligence in high-stakes decision-making systems: a systematic review.

Abstract

A high-stakes event is an extreme risk: an event with a low probability of occurring but severe consequences (e.g., life-threatening conditions or economic collapse). The accompanying lack of information places emergency medical services authorities under intense pressure and anxiety. Deciding on the best proactive plan and action in this environment is a complicated process, which calls for intelligent agents that automatically produce knowledge in the manner of human-like intelligence. Research in high-stakes decision-making systems has increasingly focused on eXplainable Artificial Intelligence (XAI), but recent developments in prediction systems give little prominence to explanations based on human-like intelligence. This work investigates XAI based on cause-and-effect interpretations for supporting high-stakes decisions. We review recent applications in the first aid and medical emergency fields from three perspectives: available data, desirable knowledge, and the use of intelligence. We identify the limitations of recent AI and discuss the potential of XAI for dealing with those limitations. We propose an architecture for high-stakes decision-making driven by XAI and highlight likely future trends and directions.

© The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature 2023.
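To make the idea of a cause-and-effect ("counterfactual") explanation concrete, the sketch below shows how such an explanation might be produced for a toy triage risk score. This is a minimal illustration only: the logistic model, the feature names (heart_rate, systolic_bp), the weights, and the decision threshold are all assumptions chosen for the example and are not taken from the reviewed paper or its proposed architecture.

```python
# Illustrative sketch: a toy counterfactual explanation for a hypothetical
# triage risk score. Everything here (model, features, weights, threshold)
# is an assumption for demonstration purposes.
import math

# Hypothetical logistic risk model over two vital-sign features.
WEIGHTS = {"heart_rate": 0.04, "systolic_bp": -0.03}
BIAS = -1.5

def risk(patient: dict) -> float:
    """Probability of a high-stakes outcome under the toy model."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in patient.items())
    return 1.0 / (1.0 + math.exp(-z))

def counterfactual(patient: dict, feature: str, step: float, threshold: float = 0.5):
    """Find the smallest change to one feature that flips the decision.

    Repeatedly nudges `feature` by `step` and returns the value at which the
    decision (risk >= threshold) changes, or None if no flip within 100 steps.
    """
    original_decision = risk(patient) >= threshold
    modified = dict(patient)
    for _ in range(100):
        modified[feature] += step
        if (risk(modified) >= threshold) != original_decision:
            return modified[feature]
    return None

patient = {"heart_rate": 120, "systolic_bp": 85}
print(f"predicted risk = {risk(patient):.2f}")
flip_hr = counterfactual(patient, "heart_rate", step=-1.0)
if flip_hr is not None:
    print(f"Decision would change if heart_rate were lowered to about {flip_hr:.0f}")
```

The resulting statement ("the decision would change if the heart rate were lower") is the kind of cause-and-effect interpretation the abstract refers to: it ties the prediction to a hypothetical intervention a decision-maker can reason about, rather than to an opaque score alone.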
