

Possibilities of Artificial Intelligence in Healthcare

by Leo Anthony Celi
Hello, my name is Leo Anthony Celi. I’m an intensive care physician at Beth Israel Deaconess Medical Center in Boston, but I’m also a research scientist at the Massachusetts Institute of Technology. Today I’m going to talk about artificial intelligence in healthcare, and I’m going to start off by defining artificial intelligence, or AI. AI was defined by the late Marvin Minsky as the science of making machines do things that would require intelligence if done by man. A computer performs a task by following the rules it is fed, but an AI performs a task by learning the rules from the data that is fed to the machine.
Machine learning is the main ingredient, the main building block, of AI. Rather than defining what machine learning is, I would like to give you the goals of machine learning. The first is classification: is this picture that of a dog or a cat? The second is prediction: is this patient going to live or die? And the third is optimization: what dose of a medication will achieve the desired therapeutic effect in the shortest possible time? So to apply machine learning in healthcare, patients become data points that exist in a multi-dimensional space, and each dimension represents a feature or a variable, such as blood pressure or whether they’re taking a medication or not.
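The idea of patients as points in feature space can be sketched in a few lines of code. Everything below is illustrative: the features (blood pressure, age, a medication flag) and the tiny training set are invented for the example, and the classifier is a simple k-nearest-neighbours vote, just one of many ways to approach the prediction goal described above.

```python
import math

# Each patient is a point in feature space. Features here are hypothetical:
# (systolic blood pressure, age, on_vasopressor as 0/1), with an outcome label.
train = [
    ((90, 72, 1), "died"),
    ((85, 80, 1), "died"),
    ((95, 68, 1), "died"),
    ((120, 45, 0), "survived"),
    ((118, 50, 0), "survived"),
    ((125, 40, 0), "survived"),
]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def predict(patient, k=3):
    """k-nearest-neighbours: vote among the k closest training patients."""
    nearest = sorted(train, key=lambda row: distance(row[0], patient))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

print(predict((92, 70, 1)))   # near the "died" cluster -> died
print(predict((122, 48, 0)))  # near the "survived" cluster -> survived
```

The point is not the algorithm but the framing: once a patient is encoded as a vector of features, "prediction" becomes a geometric question about where that point sits relative to past patients.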
Their diagnosis would be another feature. The task would be predicting an outcome, such as survival or death, response to treatment, or risk of harm, using a formula. Over the past two years we’ve heard about successes of artificial intelligence in healthcare. Two years ago we learned about a computer system beating skin doctors, or dermatologists, at diagnosing cancer when shown photos of lesions of the skin. We also heard about computer systems beating eye specialists, or ophthalmologists, at diagnosing diabetic retinopathy, which is an eye complication of diabetes. But I would argue that image classification is a low-hanging fruit when it comes to healthcare. I’m not discounting the value of image classification.
There are places around the world that have a low number of radiologists or ophthalmologists, where these algorithms could be really helpful. But I would say that the real value of machine learning and artificial intelligence in healthcare lies in the day-to-day complex decisions that clinicians are faced with.
It’s important to know that building artificial intelligence requires, number one, data that is objective, and number two, the existence of a ground truth. But medicine is a surprisingly subjective endeavor, with less than clear-cut definitions of concepts.
It’s also important to know that in medicine, ground truth is a moving target. A great example is to open a textbook from 1978 called Harrison’s, which is considered the Bible of internal medicine, and look at the chapter on heart attacks. This is what it will tell you about how to take care of patients with myocardial infarction, or heart attack: the patient should stay in bed for six weeks; they’re not even allowed to use a toilet for two weeks; avoid medications called beta-blockers; and do not take them to the cardiac catheterization lab, because the patient is too unstable. Fast forward to 2018, and we don’t follow any of those recommendations.
People would probably laugh at you if you tried to follow these guidelines. It’s possible that the studies these recommendations were based on were flawed, but it’s also possible that that was the ground truth back in 1978. Over time, the profile of patients who develop heart attacks would have changed, and with the advent of new tests and treatments, it’s very possible that those guidelines are now obsolete in 2018. The key message here is that we constantly need to look at the models. We constantly need to repeat the analyses to make sure that the findings are still accurate. There’s another good story that proves the importance of constantly evaluating algorithms.
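The "repeat the analyses" message can be sketched as a periodic re-validation check. The model, the feature (a lactate threshold), the cases, and the accuracy bar below are all invented for illustration; the point is only the pattern of re-scoring a deployed model against fresh outcome data.

```python
class ThresholdModel:
    """Toy stand-in for a deployed clinical model (hypothetical rule)."""
    def predict(self, features):
        return "high_risk" if features["lactate"] > 2.0 else "low_risk"

def revalidate(model, recent_cases, min_accuracy=0.80):
    """Return True if the model still meets an accuracy bar on fresh cases."""
    correct = sum(
        1 for features, outcome in recent_cases
        if model.predict(features) == outcome
    )
    return correct / len(recent_cases) >= min_accuracy

# Fresh cases with known outcomes; the last one reflects a shift in practice,
# so the old rule now gets it wrong.
cases = [
    ({"lactate": 3.1}, "high_risk"),
    ({"lactate": 1.2}, "low_risk"),
    ({"lactate": 4.0}, "high_risk"),
    ({"lactate": 0.9}, "low_risk"),
    ({"lactate": 2.5}, "low_risk"),
]
print(revalidate(ThresholdModel(), cases))  # 4/5 correct -> True at an 0.80 bar
```

As the mix of cases continues to drift, the same check would start returning False, which is the signal to retrain or retire the model rather than keep trusting its 1978-era ground truth.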
Professor Michael Jordan, a computer scientist at Berkeley, and his wife were expecting a baby in 2004. On an ultrasound of the baby, they found some white spots around the heart, and a geneticist in the room recommended that they perform more tests, because these white spots are associated with a one in 20 risk of Down syndrome, and the only way to confirm the diagnosis is an amniocentesis, taking a sample of the amniotic fluid. But that procedure is not without risk: it is associated with a one in 300 chance of death. Professor Michael Jordan is a brilliant guy.
He wanted to see where the original study was done and where the original data that led to this recommendation was analyzed. It turned out that the guideline was based on a study performed in the United Kingdom in 1994, and he astutely pointed out that the ultrasound machines used back then were very different from today’s in terms of resolution. So his intuition was that this was what we call a false positive, and for that reason they did not go for an amniocentesis. A few months later, they had a healthy baby girl.
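The false-positive intuition in this story can be made concrete with Bayes’ rule. The numbers below are purely illustrative assumptions, not the figures from the 1994 study: an assumed base rate, an assumed sensitivity, and two assumed false-positive rates standing in for the older and newer ultrasound machines.

```python
# Hypothetical screening numbers, chosen only to illustrate the mechanism.
prevalence = 1 / 700        # assumed base rate of the condition
sensitivity = 0.50          # assumed P(white spots seen | condition)
fpr_old = 0.05              # assumed P(white spots seen | no condition), old machine

# Bayes' rule: P(condition | positive finding)
def posterior(fpr):
    p_positive = sensitivity * prevalence + fpr * (1 - prevalence)
    return sensitivity * prevalence / p_positive

p_old = posterior(fpr_old)
print(f"P(condition | finding), old machine: {p_old:.4f}")

# A higher-resolution machine that picks up incidental spots more readily
# behaves like a test with a HIGHER false-positive rate, so the same
# finding carries even less evidence.
fpr_new = 0.15
p_new = posterior(fpr_new)
print(f"P(condition | finding), new machine: {p_new:.4f}")
```

Under these assumptions the posterior drops well below the quoted one in 20 as the false-positive rate rises, which is exactly the shape of Jordan’s argument: a guideline calibrated on old equipment can silently overstate risk on new equipment.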
The point of that story is that models and analyses of data need to be constantly redone to make sure that they are still providing the same accuracy. Another important concept in machine learning and artificial intelligence is machine bias. We think that computers are always objective, but it turns out that if you feed computers biased data, they will spit out biased algorithms. There was a landmark investigative report published in 2016 that looked at software being used by judicial courts across the United States. This algorithm is supposed to give recommendations to the courts about parole: what is the likelihood that this prisoner will recommit a crime?
It also advises them about a reasonable amount of bail for a person awaiting trial. Some courts in the United States run the algorithm and follow its recommendations, while other courts run the algorithm but ignore its suggestions. So there was an opportunity to find out how the algorithm was performing, and what the investigators learned is that the algorithm was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them at almost twice the rate of white defendants, while white defendants were mislabeled as low-risk more often than black defendants. How could this play out in healthcare?
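The disparity described here is usually quantified as a difference in false-positive rates between groups. The counts below are invented for illustration, not the report’s actual data; they only show how the metric is computed.

```python
# Toy confusion counts per group (invented numbers for illustration).
# "flagged" = labelled high-risk by the algorithm;
# these counts cover only people who did NOT go on to reoffend.
groups = {
    "group_a": {"flagged_no_reoffend": 45, "not_flagged_no_reoffend": 55},
    "group_b": {"flagged_no_reoffend": 23, "not_flagged_no_reoffend": 77},
}

def false_positive_rate(counts):
    """Share of non-reoffenders who were nonetheless flagged high-risk."""
    fp = counts["flagged_no_reoffend"]
    tn = counts["not_flagged_no_reoffend"]
    return fp / (fp + tn)

for name, counts in groups.items():
    print(name, round(false_positive_rate(counts), 2))
# group_a 0.45, group_b 0.23: group_a is wrongly flagged at about
# twice the rate of group_b, the kind of gap the 2016 report described.
```

An overall accuracy number can hide this entirely, which is why audits compare error rates per group rather than a single aggregate score.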
We could run algorithms to see whether a patient will respond to chemotherapy, and an algorithm might be biased against certain groups, suggesting we withhold chemotherapy from groups who might well benefit from that therapeutic intervention. In the intensive care unit, you could have algorithms that predict the risk of death; again, they could be biased against certain groups and could suggest premature termination of treatment. I would like to quote Nick Bostrom, who said that the pressing ethical questions in machine learning are not about machines becoming self-aware and taking over the world, but about how people can exploit other people, or through carelessness introduce immoral behavior into automated systems.
Lastly, I want to introduce the concept of intelligent infrastructure. The goal of AI is to produce machines that perform better than humans, and right now there is a race to develop self-driving cars that are as good as human drivers. Yet there seems to be little effort to design a transportation system, one that would likely resemble air traffic control, to manage the self-driving cars.
One thing to know is that even if a self-driving car is less competent than a good human driver, an intelligent network of connected self-driving cars would still be much safer than human drivers, who kill 1.25 million people a year through human error. So we could say that even if the self-driving car is not as competent as a human driver, at least it will not text while driving and will not fall asleep at the wheel from being tired. In the end, we don’t need AI that imitates human intelligence; what we actually need is intelligent infrastructure that corrects human mistakes.

Dr. Leo Anthony Celi has practiced medicine on three continents, giving him broad perspectives in healthcare delivery. He founded and co-directed Sana at the Computer Science and Artificial Intelligence Laboratory at the Massachusetts Institute of Technology. The possibilities of AI in healthcare have been described in this video.

Professor Celi highlights the fact that algorithms used in AI are not neutral and need constant re-evaluation. He talks about a study published in 2016 that found that algorithms used in the US criminal justice system were particularly likely to falsely flag black defendants as future criminals. In a healthcare setting, what biases might creep in? Are there particular groups that might be vulnerable to such biases? Discuss this idea in the comments below.

This article is from the free online course:

Artificial Intelligence for Healthcare: Opportunities and Challenges

Created by
FutureLearn - Learning For Life
