
What is explainable AI?

Explainable AI helps users understand how AI systems reach their decisions. This article explains what explainability is and why it is needed in healthcare.
© AIProHealth Project
AI technologies are a powerful tool for addressing the rising labor shortage in healthcare and for giving professionals the means to work in better conditions. They are already an integral part of numerous real-life technologies, such as facial recognition and social media monitoring. Nevertheless, understanding how AI systems make decisions can be challenging or even impossible.

Complex AI programs whose decision-making routes and internal logic are hard to comprehend are considered black boxes: they do not provide interpretable or accessible information to their users. The consequence of such black-box models is a lack of transparency and interpretability that spurs wariness in the general public and healthcare professionals alike. This opacity is a major drawback in healthcare, where reliable decisions are a requirement.

Explainability

Explainability has become a critical topic in healthcare, as mistakes and imprecision in the models can severely impact human life. In healthcare, AI systems address the broadest possible range of users and are involved in high-stakes decisions. Thus, all users should be able to access and benefit from AI services, regardless of their age, gender, abilities, or characteristics. Healthcare professionals and patients need to recognize both the accuracy and the shortcomings of healthcare-associated AI systems. However, as most practitioners and patients are not skilled in the field of AI, they need an explanation of the AI system and its output to make informed medical decisions.

The European Commission states that explicability is the fourth ethical principle, anchored in fundamental rights, in the context of AI systems development (after respect for human autonomy, prevention of harm, and fairness). Explainability is the cornerstone to forging and securing users’ trust in AI systems. It requires a human-friendly account of:

1) the technical methods implemented by an AI system
By understanding the system’s underlying processes, end-users can better apprehend its capacities and limitations, and thus appropriately trust, or distrust, its outcomes. Understanding AI processes helps reduce fear of AI and opens the door to innovative healthcare processes.

This explanation of the AI system’s processes can be provided with model cards or model facts labels. These tools allow for standardized reporting of machine learning models, including the type of model, the data it has been trained on, its performance, the population it can be used on, and the risks it carries.
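As an illustration, a minimal model card can be thought of as a small structured record covering the fields listed above. The schema and every value below are hypothetical, invented purely to show the idea; real model cards follow richer, standardized templates.

```python
# Hypothetical minimal model card; the fields mirror the article's list
# (model type, training data, performance, intended population, risks),
# but every value here is invented for illustration.
model_card = {
    "model_type": "logistic regression",
    "trained_on": "example_chest_xray_dataset_v1",  # hypothetical dataset name
    "performance": {"AUROC": 0.87, "sensitivity": 0.81},
    "intended_population": "adults aged 18+ presenting with cough",
    "risks": [
        "lower accuracy on paediatric images",
        "performance may degrade across hospitals (dataset shift)",
    ],
}

def render_card(card):
    """Render the card as a short plain-text 'model facts' label."""
    lines = [
        f"Model type: {card['model_type']}",
        f"Trained on: {card['trained_on']}",
        "Performance: " + ", ".join(f"{k}={v}" for k, v in card["performance"].items()),
        f"Intended population: {card['intended_population']}",
        "Risks: " + "; ".join(card["risks"]),
    ]
    return "\n".join(lines)

print(render_card(model_card))
```

The point of the label is exactly this kind of at-a-glance summary: a clinician can see what the model is, where it applies, and where it is likely to fail, without reading its code.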

2) its human-related decisions
Another level of explanation relates to the decisions the AI system makes. More and more tools and frameworks are being developed to help the systems’ users understand and interpret these outcomes. For example, several tools provide scores for the extent to which a particular factor affected the final result, reflecting the patterns the model found in the data. This is called feature importance. In the case of image processing, it can be displayed as a heatmap called a saliency map, which shows which pixels of the image contributed most to the final output of the model.

However, interpreting these types of explanations is still not always easy, because the model might focus on aspects that a human observer would not expect. That is, even though the explanation might show that a model prediction was triggered by a certain factor, it does not show why it was triggered. Human reasoning is still required to answer this why-question.


Prototype of explainable AI radiology assistant Chester, released by the Mila Medical research group. Chester helps to diagnose disease in chest X-ray images. As you can see, the model indicates which regions have influenced the prediction, without explaining why.
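A saliency map like Chester’s can be imitated in miniature with occlusion: mask each pixel of a tiny image and record how much a toy model’s score drops. The 3×3 “image” and the model weights below are invented purely for illustration and have nothing to do with Chester’s actual method.

```python
# Occlusion saliency in miniature: a 3x3 "image" and a toy model that
# mostly attends to the middle column. All numbers are invented.
image = [
    [0.0, 0.9, 0.1],
    [0.1, 0.8, 0.0],
    [0.0, 0.7, 0.2],
]
weights = [  # toy model: the score is a weighted sum of pixel values
    [0.1, 1.0, 0.1],
    [0.1, 1.0, 0.1],
    [0.1, 1.0, 0.1],
]

def score(img):
    return sum(w * p for w_row, p_row in zip(weights, img)
                     for w, p in zip(w_row, p_row))

base = score(image)
saliency = [[0.0] * 3 for _ in range(3)]
for r in range(3):
    for c in range(3):
        occluded = [row[:] for row in image]
        occluded[r][c] = 0.0                     # mask one pixel
        saliency[r][c] = base - score(occluded)  # score drop = saliency
```

The resulting map lights up the middle column, mirroring how a saliency heatmap highlights influential regions without explaining why they matter.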

Comeback of Explainable AI

Explainable AI models were first designed in the 1980s and are currently re-emerging as concerns rise over AI’s lack of transparency and undesired system biases. They provide solutions and structures to help developers and end-users understand, unravel, and explain the systems’ predictions and decisions. Their internal logic and operations are transparent and interpretable: the system is intelligible to non-experts.

Among other things, explainable AI discloses a system’s strengths and caveats, its workflow, and real-time scores, showing

  1. how much a factor influenced the outcome and
  2. the confidence levels of each possible output.

As such, explainable AI helps ensure that all inputs are meaningful variables and that the system follows a reliable decision path.
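The confidence levels in point 2 can be sketched by turning raw model scores into per-class probabilities with a softmax. The class names and scores below are made up for illustration; real classifiers expose this through their own APIs.

```python
import math

def softmax(scores):
    # Subtract the max for numerical stability, then normalize so the
    # per-class confidences sum to 1.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores from a chest X-ray classifier.
classes = ["no finding", "pneumonia", "effusion"]
raw_scores = [0.2, 2.1, 0.5]
confidences = softmax(raw_scores)
for label, p in zip(classes, confidences):
    print(f"{label}: {p:.2f}")
```

Showing all three confidences, rather than only the top prediction, is what lets a clinician see how decisive (or borderline) the model’s output actually is.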

Explainable AI makes it easier for humans to make sense of a given outcome and allows them to know when a result should be second-guessed. Consequently, explainable AI is highly beneficial for the health field, where stakeholders need to be confident enough to treat a patient based on an AI decision. Explainable AI plays a fundamental role in implementing AI in healthcare by encouraging its acceptance by both patients and healthcare professionals.

Do you think that in healthcare, all AI should be designed to be explainable? Can you also think of some risks or challenges in developing such explainable systems? Leave your ideas in the discussion section and explore this topic with fellow learners.

This article is from the free online course How Artificial Intelligence Can Support Healthcare.

Created by
FutureLearn - Learning For Life
