
The Challenges of Artificial Intelligence in Healthcare

Hello, my name is Alon Dagan, and today I'll be discussing the challenges of artificial intelligence in healthcare. As you've probably been hearing, artificial intelligence has made quite a bit of press recently as an opportunity to completely transform the healthcare system. This has been picked up by magazines and newspapers around the world, suggesting that your doctor is going to be transformed into a robotic healthcare provider. We've seen headlines about how AI is transforming healthcare, solving all of our problems, redesigning healthcare, and replacing doctors and other professionals. But the truth is, as always, a little more complex.
What I'll be touching on are some of the basic challenges we face with artificial intelligence in medicine, challenges that are a little different from those faced in other fields and that can help temper some of the excitement that has been brewing around it. Among the topics I'll cover are some of the technical challenges, and why medicine is not a video game.
I'll also cover bias, which I think is really important to keep at the forefront of our minds, including the illusion of impartiality, investigator degrees of freedom, and intrinsic bias, and then some of the opportunities to address these issues through transparency and explainability. One of the first real success stories in artificial intelligence and reinforcement learning has been game applications, where we've been able to train computers to go from nothing but the pixel information of a specific game to developing a solution. This started famously with the Atari experiments.
It then moved on to games like Doom and StarCraft, and even to the game of Go. All of these examples have been really exciting opportunities where you can take a computer system and allow it to develop a solution without ever providing it with the intrinsic rules of the game, and people have been very quick to think this could be applied to healthcare problems as well. But the reality is a little more complex. The healthcare system, and hospital care in particular, is different in a few very important ways. One is that this is a high-risk environment.
All of the previous game-playing solutions were the result of extreme amounts of trial and error: thousands and thousands of hours of simulated gaming were required to develop them. When we're talking about taking care of patients, you simply don't have the same opportunity for trial and error. We have to stay aware that the problems we're trying to solve are human ones, and there is no opportunity to be wrong in the way that training in those other environments requires. In addition, these are incredibly complex and dynamic interactions, and much is always unknown about every particular scenario.
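To make the scale of that trial and error concrete, here is a minimal sketch of the learning loop these game-playing systems rely on, written as tabular Q-learning on a toy corridor game. The environment, reward values, and hyperparameters are all invented for illustration; the point is that the agent is free to fail thousands of times in simulation, a luxury we never have with patients.

```python
import random

# Toy "corridor" game: the agent starts at cell 0 and is rewarded only
# for reaching the goal cell. A hypothetical stand-in for the pixel-based
# game agents discussed above, which use the same trial-and-error loop
# at vastly larger scale.
N_STATES, GOAL = 6, 5
ACTIONS = [-1, +1]                       # move left, move right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.95, 0.2   # learning rate, discount, exploration

for episode in range(2000):              # thousands of free "mistakes"
    s = 0
    while s != GOAL:
        # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == GOAL else -0.01    # small cost per step
        # Standard Q-learning update toward the Bellman target.
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next

# The learned policy: the best action from each cell (should be "go right").
print([max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES)])
```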
For example, if I'm taking care of a patient in the emergency department, I'm operating with only a very limited slice of the complete data. I have the basic history the patient is telling me and whatever physical exam findings I'm able to gather on my examination, and I need to start making decisions right away. This is a very different situation from one in which you have a static, complete set of data and can train a model using 100% of it.
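As a toy illustration of that gap, the sketch below (entirely synthetic data and an invented scenario) trains a simple logistic risk model on complete records and then queries it for a patient whose lab values haven't come back yet. Naive zero-imputation silently shifts the estimate, which is exactly the kind of failure a model trained only on complete, static data can hide.

```python
import numpy as np

# Hypothetical illustration: a risk model trained on *complete* records can
# behave unpredictably when, as in the emergency department, some inputs
# are simply not available yet. All data here is synthetic.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                 # 4 features: history, vitals, 2 labs
w_true = np.array([1.5, -2.0, 0.5, 1.0])
y = (X @ w_true + rng.normal(size=500) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fit logistic regression by plain gradient descent.
w = np.zeros(4)
for _ in range(2000):
    w -= 0.1 * X.T @ (sigmoid(X @ w) - y) / len(y)

patient = np.array([0.8, -1.2, 0.3, 0.9])     # a fully observed record
print("risk, all data:    ", sigmoid(patient @ w))

# At triage the two lab values are not back yet; zero-filling them
# silently changes the estimate without any warning.
partial = patient.copy()
partial[2:] = 0.0
print("risk, missing labs:", sigmoid(partial @ w))
```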
It is rare to impossible to have 100% of the data on any kind of human interaction, both because of the time constraints of any patient encounter and because not everything is known about the physiology and pathophysiology of the human body. This is a really important thing to keep in mind. In addition to these technical challenges, there is also the illusion of impartiality. Essentially, when we use an artificial intelligence approach, the assumption is that it is an impartial observer, because there isn't a human involved in making these decisions, at least not directly.
We think this means it will be completely free of bias, but this is certainly not the case. The pitfalls of big data are no different from the pitfalls of statistics, except that they are magnified. Given exactly the same data and exactly the same question, different analysts can come to very different answers, depending on the degrees of freedom inherent in designing these studies. In one example, a group of 29 research teams was provided with a large dataset of referee decisions in football and asked a simple question: "Is there a statistically significant bias among referees towards giving red cards to dark-skinned players?"
The results showed a really broad spread of answers from these very well-meaning and sophisticated research teams. The reason we end up with all of these different answers to the same question on the same dataset is that, as we go through the steps of the research process, we are making decisions: decisions about which factors to include, decisions about what our exclusion criteria are going to be. Each one of these decisions has the opportunity to change the outcome of our study.
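A minimal way to see this in code: the sketch below (an invented dataset, not the actual football data) asks the same question of the same data under three defensible-looking analytic choices and gets three different effect estimates, because a confounder is handled differently each time.

```python
import numpy as np

# Hypothetical re-creation of the "many teams, one dataset" problem.
# The true effect of `tone` on `cards` is zero by construction, but the
# league a player is in affects both variables.
rng = np.random.default_rng(1)
n = 2000
league = rng.integers(0, 2, n)                  # confounder
tone = rng.random(n) + 0.2 * league             # skin-tone rating
cards = 0.5 * league + rng.normal(0, 0.5, n)    # no direct tone effect

def tone_effect(X, y):
    """OLS coefficient on the `tone` column (column 1)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

ones = np.ones(n)
# Choice 1: regress on tone alone (a spurious positive effect appears).
print(tone_effect(np.column_stack([ones, tone]), cards))
# Choice 2: adjust for league (effect shrinks toward zero).
print(tone_effect(np.column_stack([ones, tone, league]), cards))
# Choice 3: exclude one league entirely (a different answer again).
m = league == 0
print(tone_effect(np.column_stack([ones[m], tone[m]]), cards[m]))
```

Each specification is individually defensible, which is precisely why well-meaning teams can diverge.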
So even without realizing it or being attuned to it, we can be building these biases into decision support tools, all while believing they are completely free of human interference. An example of this has already been seen in the real world, and I think it may have been touched on in one of your earlier lectures as well. There is a machine learning algorithm used in the United States to help determine whether certain inmates are likely to be re-arrested after being released on parole.
It was fed a large dataset gleaned from society and is being used to help determine the fate of real American citizens, and what was found on secondary analysis is that it was biased against black prisoners. Essentially, it was falsely determining that certain black inmates were at high risk of recidivism, of committing another crime, while falsely lowering the risk of white inmates. How could this be possible, if it was thought to be an impartial system? The truth is that it was built on a system that already had inherent bias.
Race wasn't specified in the development of this algorithm, but because it was trained on biased data, it inherently carried that bias. The real danger here is that people's perception is that there isn't any bias, when in reality the bias is simply hidden.
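A small synthetic audit makes that mechanism visible. In the sketch below (invented data, not the actual recidivism system), group membership is never given to the model, but a correlated proxy feature carries it in, and the false positive rates split by group anyway, much as the secondary analysis described above found.

```python
import numpy as np

# Hypothetical bias audit: the model never sees `group`, but a proxy
# feature correlated with it (think neighborhood arrest rates) smuggles
# the group signal in. All data is synthetic.
rng = np.random.default_rng(2)
n = 10_000
group = rng.integers(0, 2, n)                        # hidden from the model
risk = rng.normal(size=n)                            # true underlying risk
proxy = risk + 1.0 * group + rng.normal(0, 0.5, n)   # proxy absorbs group
reoffend = risk + rng.normal(0, 0.5, n) > 0          # ground-truth outcome

flagged = proxy > 0.5                                # "high risk" decision

for g in (0, 1):
    did_not_reoffend = (group == g) & ~reoffend
    fpr = flagged[did_not_reoffend].mean()           # flagged despite not reoffending
    print(f"group {g}: false positive rate = {fpr:.2f}")
```

Running this shows a much higher false positive rate for group 1, even though group membership was never an input: the bias is present, just hidden.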
A big contributing factor is that many of these artificial intelligence algorithms are a black box, and this is particularly important to address in healthcare applications. Interpretability can be a challenge for these algorithms because the solution is often developed without human input, and it can be very hard to understand how the computer arrived at it. This lack of interpretability is a huge barrier to adoption in the medical field, and understanding how a decision is made is often just as crucial as knowing what the decision is.
People affected by the decisions of an AI will certainly want to know why the system was designed the way it was, and there is now legislation in the European Union implementing a 'right to explanation', under which a user can demand an explanation for an algorithmic decision, something that is impossible with some of these algorithms. Several different approaches are being used to try to address these limitations in explainability.
I won't go into detail here, but needless to say this is a very exciting area of research. In one example, an image is classified by a black-box artificial intelligence system that correctly identifies it as a rooster; then, using other techniques, a heat map is produced that attempts to explain which portions of the picture triggered the black-box system to make that identification.
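One simple, model-agnostic flavor of these techniques is occlusion: cover part of the input and see how much the black box's score drops. The sketch below uses a toy image and a stand-in scoring function, both invented for illustration, to build a coarse heat map that way; methods such as LIME build on the same perturb-and-observe idea.

```python
import numpy as np

# Occlusion-based explanation of an opaque scorer. The "model" here is a
# stand-in that responds to a bright 8x8 region (our toy "rooster comb");
# any black-box scoring function could be dropped in instead.
rng = np.random.default_rng(3)
img = rng.random((32, 32)) * 0.2
img[4:12, 4:12] = 1.0                        # the region the model cares about

template = np.zeros((32, 32))
template[4:12, 4:12] = 1.0
def black_box_score(x):                      # opaque: returns a single number
    return float((x * template).sum())

base = black_box_score(img)
heat = np.zeros_like(img)
patch = 8
for i in range(0, 32, patch):                # occlude one patch at a time
    for j in range(0, 32, patch):
        occluded = img.copy()
        occluded[i:i + patch, j:j + patch] = 0.0
        # A large score drop means this patch mattered to the decision.
        heat[i:i + patch, j:j + patch] = base - black_box_score(occluded)

print(np.round(heat[::patch, ::patch], 1))   # coarse 4x4 importance map
```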
In similar examples, this approach has been applied not only to image recognition but also to text document classification and video classification, and this is a really important field, particularly in medicine, where the stakes are so high.
I'd like to end with this quotation. As people hear about this more and more in the lay press, they are concerned that artificial intelligence is going to take over medicine and replace doctors. But the really pressing ethical questions in machine learning are not about machines becoming self-aware, taking over the world, or taking our jobs; they are about how people can exploit other people, either through carelessness or through the intentional introduction of immoral behavior into automated systems.
And I think this is why we really need to be aware of these challenges in artificial intelligence, particularly in the field of healthcare. At the end of the day, healthcare is always going to be about caring for other people, and that is not something that artificial intelligence or a machine is ever going to do better than a human being. Thank you very much.

Dr. Alon Dagan is an Emergency Medicine Physician with a background in Biomedical Engineering. In addition to working clinically, he is an Instructor of Emergency Medicine at Harvard Medical School and an MIT Research Affiliate. The video above describes the major challenges faced in applying AI to healthcare.

Dr. Dagan describes research showing that algorithms are not completely free of bias. How do you think we could prevent this from happening?

This article is from the free online course Artificial Intelligence for Healthcare: Opportunities and Challenges.
