Acceptance of AI

Explainability can help healthcare professionals and patients to accept AI. Experts explain how.
Can explainability affect the acceptance of AI by healthcare professionals and patients?
ERIK RANSCHAERT: Yes, explainability is a very important issue. And this is certainly also– this is part of the trust that needs to be created among users. We are confronted with the fact that many clinicians do have questions about, what is this algorithm doing? And why is it giving us this result? So yes, explainability should be at the top of the priority list for everyone, let’s say, not only those vending or selling, but also those using AI applications.
MARKO TOPALOVIC: Well, I believe that it can, because the typical opinion is that AI is a black box. And that’s a problem in accepting the AI, or actually trusting the AI. So explainability is an extra layer of safety to actually try to solve that trust issue between developers of AI and the users of AI, because explainability can pinpoint why a decision has been made, and even sometimes intellectually challenge or satisfy the users.
MEREL HUISMAN: Yes, definitely, because humans, or at least human doctors, use logic to care for their patients. So if I, as a doctor, understand the way the algorithm makes its decisions, I am more prone to use it than if I didn’t understand it at all. I’m pretty sure, yes.
PETER VAN OOIJEN: I think explainability can indeed increase the acceptance. What we see as explainability is showing, for example, on the images, what the focus of the algorithm was– so what it based its decision on. And showing that can help to convince the user that it’s actually a correct interpretation.
RENATO CUOCOLO: So yes, I believe explainability would go a long way to help in the introduction of AI in clinical practice, both from the healthcare practitioner’s point of view, as well as the patient’s. The issue there is that the tools for explainability are currently very limited, especially in computer vision, so in the field of radiology, which is mostly dedicated to that. There are, for example, saliency maps and other tools that are currently used, but these actually give us very little insight into how the model actually works and how the predictions are made. So the answer is yes, but I’m not sure that the degree of explainability that would be required is currently feasible. So that’s the main issue.
Explainable AI gives access to why and how each input shapes a particular outcome produced by the AI, making the results easier to understand. It helps healthcare professionals and patients avoid typical pitfalls such as bias. We asked experts in the field how this explainability can affect acceptance.
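To make the idea of "why each input shapes an outcome" concrete, below is a minimal sketch of one common explainability technique, occlusion-based saliency, which the experts allude to when they mention saliency maps and highlighting what a model focused on. The toy logistic model, its weights, and the feature values are all hypothetical, chosen only for illustration; real clinical models and attribution tools are far more complex.

```python
import numpy as np

def predict(weights, x):
    """Toy 'black box': a logistic model over input features."""
    return 1.0 / (1.0 + np.exp(-np.dot(weights, x)))

def occlusion_saliency(weights, x):
    """Occlusion saliency: hide each feature in turn (set it to zero)
    and record how much the prediction changes. A larger change means
    that feature influenced the decision more."""
    base = predict(weights, x)
    saliency = np.zeros_like(x)
    for i in range(len(x)):
        occluded = x.copy()
        occluded[i] = 0.0  # "occlude" feature i
        saliency[i] = abs(base - predict(weights, occluded))
    return saliency

# Hypothetical example: three input features, the second carries
# most of the weight, so occluding it changes the prediction most.
weights = np.array([0.1, 2.0, -0.3])
x = np.array([1.0, 1.0, 1.0])
scores = occlusion_saliency(weights, x)
print(scores.argmax())  # → 1 (the heavily weighted feature)
```

In imaging, the same idea is applied by occluding patches of pixels rather than single features, producing the heat-map overlays that show clinicians which image regions drove the model’s output.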

In healthcare, black-box systems, which offer very little visibility into their reasoning, raise trust issues. In this video, experts explain how making these systems more transparent would affect their acceptance by healthcare professionals and patients.

Do you agree with the experts? Would a “black box” be an issue for you, and would explainability help you to accept the predictions made by an AI system? How much explainability would you require from AI systems? Share your opinion with fellow learners in the discussion section.

This article is from the free online

How Artificial Intelligence Can Support Healthcare

Created by
FutureLearn - Learning For Life
