
Acceptance of AI

Explainability can help healthcare professionals and patients to accept AI. Experts explain how.
Can explainability affect the acceptance of AI by healthcare professionals and patients?
0:10
ERIK RANSCHAERT: Yes, explainability is a very important issue, and it is certainly part of the trust that needs to be created among users. We are confronted with the fact that many clinicians do have questions: what is this algorithm doing, and why is it giving us this result? So yes, explainability should be at the top of the priority list for everyone, let's say, not only those vending or selling AI applications, but also those using them.
0:43
MARKO TOPALOVIC: Well, I believe that it can, because the typical opinion is that AI is a black box, and that's a problem in accepting, or actually trusting, the AI. So explainability is an extra layer of safety that tries to solve that trust issue between the developers of AI and the users of AI, because explainability can pinpoint why a decision has been made, and sometimes even intellectually challenge or satisfy the users.
1:15
MEREL HUISMAN: Yes, definitely, because humans, or at least human doctors, use logic to care for their patients. So if I, as a doctor, understand the way the algorithm makes its decisions, I am more prone to use it than if I didn't understand it at all. I'm pretty sure, yes.
1:37
PETER VAN OOIJEN: I think explainability can indeed increase the acceptance. What we see as explainability is showing, for example, on the images what the focus of the algorithm was, so what it based its decision on. Showing that can help to convince the user that it is actually a correct interpretation.
1:59
RENATO CUOCOLO: So yes, I believe explainability would go a long way to help the introduction of AI in clinical practice, from the point of view of both the healthcare practitioner and the patient. The issue is that the tools for explainability are currently very limited, especially in computer vision, and so in the field of radiology, which mostly depends on it. There are, for example, saliency maps and other tools that are currently used, but they actually give us very little insight into how the model works and how the predictions are made. So the answer is yes, but I'm not sure that the degree of explainability that would be required is currently feasible. That's the main issue.

Explainable AI gives access to why and how each input shapes a particular outcome produced by the AI, making the results easier to understand. It allows healthcare professionals and patients to avoid typical pitfalls such as bias. We asked experts in the field how this explainability can affect acceptance.
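
To make this concrete, here is a minimal sketch of a gradient-based saliency map, one of the explainability tools Renato Cuocolo mentions above. It is written in Python with PyTorch purely for illustration: the untrained resnet18 and the random input are hypothetical placeholders, not any particular clinical model.

    # Minimal gradient-based saliency sketch (placeholders only, not a clinical model).
    import torch
    from torchvision import models

    model = models.resnet18(weights=None)  # hypothetical stand-in for a diagnostic model
    model.eval()

    # Dummy "scan": a random image tensor that we allow gradients to flow into.
    image = torch.rand(1, 3, 224, 224, requires_grad=True)

    # Forward pass, then backpropagate the score of the predicted class.
    scores = model(image)
    top_class = scores.argmax(dim=1).item()
    scores[0, top_class].backward()

    # The gradient magnitude per pixel indicates how strongly each input pixel
    # shaped this particular prediction: the "why" behind the outcome.
    saliency = image.grad.abs().max(dim=1).values  # shape: (1, 224, 224)

Overlaid on the image as a heat map, such a saliency tensor shows roughly where the model "looked"; as Cuocolo notes, this often gives only a coarse picture of the model's reasoning.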

In healthcare, black-box systems, which offer very little visibility into their reasoning, raise trust issues. In this video, the experts explain how making these systems more transparent would affect their acceptance by healthcare professionals and patients.

Do you agree with the experts? Would a “black box” be an issue for you, and would explainability help you to accept the predictions made by an AI system? How much explainability would you require from AI systems? Share your opinion with fellow learners in the discussion section.

This article is from the free online course How Artificial Intelligence Can Support Healthcare, created by FutureLearn.
