
What are the big issues of AI in healthcare?

Technical and regulatory challenges of AI in healthcare are often related to the trustworthiness and explainability of the systems.

Although artificial intelligence in healthcare is relatively new, its potential is widely recognized. In fact, healthcare is currently one of the largest investors in AI. However, challenges in its development and adoption in clinical practice remain. Most of these challenges fall under one of three categories: the trustworthiness, the explainability, or the responsible use of the systems. This article introduces the key issues within these categories, which will be discussed more thoroughly in the upcoming activities.

Trustworthiness

One significant challenge in the development of artificial intelligence in general is its dependency on diverse, high-quality data and, in the vast majority of cases, on corresponding labels or annotations that provide the outcome measure the model is trained on. Not having the proper data, or having inaccurately labeled data, can introduce bias into the model (i.e., an unfair model, for example when women are underrepresented in the dataset), which can in turn result in wrong predictions. However, collecting good-quality medical data is expensive, and data privacy protection plays an important restrictive role, especially with this type of sensitive data. That is, although the General Data Protection Regulation (GDPR) protects patients’ personal information, it also makes it difficult for researchers and companies to access and share such data. If the lack or inaccessibility of good-quality data results in biased AI, the system cannot be considered trustworthy. Yet trustworthy systems are particularly important in healthcare, as the predictions of these systems can affect a patient’s diagnosis or therapy, and therefore their health.
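To make the underrepresentation problem concrete, here is a minimal sketch in Python (not from the article) of how a model trained on data in which one group is heavily underrepresented can perform much worse for that group. The dataset, the group sizes, and the assumption that the feature-outcome relationship differs between the groups are all hypothetical, chosen purely for illustration.

```python
# Minimal, hypothetical sketch: underrepresentation in training data leads to
# a model that works well for the majority group but poorly for the minority.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, flip):
    # One informative feature; in the minority group the relationship between
    # the feature and the label is assumed to be reversed ("flip").
    x = rng.normal(size=(n, 1))
    y = (x[:, 0] > 0).astype(int)
    if flip:
        y = 1 - y
    return x, y

# The majority group heavily outnumbers the minority group in the training set.
x_maj, y_maj = make_group(1900, flip=False)
x_min, y_min = make_group(100, flip=True)

X_train = np.vstack([x_maj, x_min])
y_train = np.concatenate([y_maj, y_min])

model = LogisticRegression().fit(X_train, y_train)

# Evaluate on balanced held-out sets, one per group.
x_maj_test, y_maj_test = make_group(500, flip=False)
x_min_test, y_min_test = make_group(500, flip=True)
print("majority accuracy:", accuracy_score(y_maj_test, model.predict(x_maj_test)))
print("minority accuracy:", accuracy_score(y_min_test, model.predict(x_min_test)))
# The minority group's accuracy is far lower: the model has simply learned the
# majority pattern, because the minority group was underrepresented.
```

In a clinical setting, the same mechanism can make a model systematically less accurate for patient groups that are rare in the training data.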

Another factor that can influence the trustworthiness of an AI system is its susceptibility to so-called adversarial attacks. Such attacks could be carried out by hackers or others who want to do harm, much like the now quite common hijacking of computer systems or denial-of-service (DoS) attacks. More specifically, adversarial attacks consist of small, imperceptible modifications to input instances that cause the machine learning model to make incorrect predictions. Examples include changing every pixel of an image in a way that is invisible to the human eye, introducing innocuous alterations to a text, or substituting words with synonyms. A recent study has shown that medical deep learning systems, in particular, can be compromised by these attacks. This, too, raises concerns about the deployment of these systems in clinical settings.
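As an illustration only, the following Python sketch applies the well-known fast gradient sign idea to a toy logistic classifier. The weights, the synthetic “image”, and the perturbation size are made up for this example and are not taken from the study mentioned above.

```python
# Hypothetical sketch of a fast-gradient-sign-style adversarial perturbation:
# nudge every input pixel by a tiny amount in the direction that increases
# the model's loss. The "model" is a toy logistic classifier with made-up weights.
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_pixels = 10_000
w = rng.normal(scale=0.02, size=n_pixels)   # toy model weights
b = 0.0
x = rng.uniform(0.0, 1.0, size=n_pixels)    # toy "medical image" in [0, 1]

p = sigmoid(w @ x + b)
label = 1.0 if p >= 0.5 else 0.0            # the model's current decision

# Gradient of the logistic loss with respect to the input for that label:
# d(loss)/dx = (p - label) * w
grad_x = (p - label) * w

# Fast-gradient-sign step: move every pixel by at most eps (here 1% of the
# pixel range) in the direction that increases the loss.
eps = 0.01
x_adv = np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

print("max per-pixel change :", np.abs(x_adv - x).max())   # at most 0.01
print("prediction before    :", p)
print("prediction after     :", sigmoid(w @ x_adv + b))
# No single pixel changes by more than 1%, yet the predicted probability shifts
# sharply; against real, high-capacity models such perturbations routinely flip
# the predicted class.
```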

An AI system also cannot be considered trustworthy if it can cause damage when exposed to threats. It should be technically robust, and it has to comply with security standards such as those covered in the Medical Device Regulation (MDR). Achieving this compliance is a very time-consuming process, which is why only a few systems have actually made it into adoption in healthcare.

The next activity, "What is trustworthy AI?", covers the requirements for trustworthy AI in more detail, as well as how trustworthiness can be achieved.

Explainability

Besides the trustworthiness of AI, another factor that can influence its adoption in healthcare is the lack of understanding of these systems. According to one survey, 87% of healthcare professionals do not know the distinction between machine learning and deep learning, let alone understand how such systems arrive at their predictions. Educating healthcare professionals about artificial intelligence is therefore of great importance in order to reduce this lack of understanding. However, for an AI system to fully support healthcare professionals in their work, it should also be able to convey its reasoning in a clear and meaningful way. The systems should be adapted more closely to the user and explain their decisions. Unfortunately, AI systems are often treated as black boxes, as they usually do not provide such reasoning. This lack of explainability can lead to a lack of trust from the user.
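As one deliberately simple illustration of what an explanation can look like, the sketch below uses scikit-learn's permutation importance to report which input features a "black box" model relies on most. The dataset and model are illustrative choices for this example, not tools discussed in this course.

```python
# Hypothetical sketch: one common way to make a black-box model more
# explainable is to report which input features drive its predictions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

# A "black box" ensemble model: accurate, but its raw internals are opaque.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how much
# the model's accuracy drops. Large drops mark features the model relies on.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Present the top features in plain language, a simple global explanation.
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[i]}: importance {result.importances_mean[i]:.3f}")
```

A global feature ranking like this is only one of many possible forms of explanation; explanations for individual predictions are covered in the activity mentioned below.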

The activity “What is explainable AI?” later this week covers the topic of explainability more thoroughly.

Responsibility

Besides being reviewed critically on legal and regulatory grounds, AI should also be reviewed with regard to ethical and social ones. This brings us to the topic of responsible AI. What happens if, despite all the precautions taken to make AI trustworthy and explainable, the system makes a mistake or its output is interpreted incorrectly? Who should be held responsible for the consequences in that case? This topic will be discussed in more detail in Week 3 of the course, “Ethical and social aspects of AI in healthcare”.

© AIProHealth Project
This article is from the free online course How Artificial Intelligence Can Support Healthcare.
