How do people in the field deal with bias?

Voxpop showing how people in the field dealt with cases of bias.
Have you ever experienced bias in AI? How did you deal with this?
STÉPHANIE LOPEZ: Well, right now, for lung cancer screening, I'm using the NLST data set, an American data set collected around 20 years ago during a clinical trial. My AI must achieve good results on this data set to be acceptable, but it may not be robust on another population, say, a European population. American people may not have the same smoking habits as Europeans, and the lung CT scans will probably show different characteristics. So the AI may not be able to generalise its rules for detecting lung nodules and correctly characterising malignancy.
So that's why we have to test on an external data set, separate from the training data set, to evaluate the robustness of the AI.
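The external-validation idea described above can be sketched in code. This is an illustrative sketch only, using synthetic stand-in data rather than NLST: a model is trained on an "internal" cohort, then evaluated both on a held-out split of that cohort and on an "external" cohort in which the relationship between features and outcome differs, exposing a drop in performance. All cohort names, features, and coefficients here are hypothetical.

```python
# Sketch of external validation with synthetic data: a model that looks
# good on a held-out split of its own (internal) cohort can degrade on an
# external cohort where the feature-outcome relationship differs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_cohort(n, weights):
    """Generate a synthetic cohort of n patients with two hypothetical
    imaging features; `weights` encodes how the features drive the label."""
    X = rng.normal(size=(n, 2))
    logits = X @ np.asarray(weights)
    y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)
    return X, y

# Internal ("training") cohort and an external cohort whose
# feature-outcome relationship is different (a stand-in for, e.g.,
# different smoking habits or scanner characteristics).
X_int, y_int = make_cohort(2000, weights=[1.5, -1.0])
X_ext, y_ext = make_cohort(1000, weights=[1.0, 0.5])

X_tr, X_te, y_tr, y_te = train_test_split(
    X_int, y_int, test_size=0.25, random_state=0
)
model = LogisticRegression().fit(X_tr, y_tr)

auc_internal = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
auc_external = roc_auc_score(y_ext, model.predict_proba(X_ext)[:, 1])
print(f"internal AUC: {auc_internal:.2f}, external AUC: {auc_external:.2f}")
```

The gap between the two AUC values is the point of the exercise: a held-out split of the training cohort cannot reveal this kind of failure, only a genuinely external data set can.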
PETER VAN OOIJEN: Yes, we regularly run into questions of bias when we use data sets to train a model. One example: we trained a model to detect a disease, but because of the data set and the way it was built, it actually detected the treatment of the disease in patients rather than the disease itself. So it was biased toward something that we did not want to find in the data. We solve this, for example, by extending the data set or by improving its quality.
ANNAMARIA ANNICCHIARICO: Yes, in particular in our last study on COVID, which looked at the relation between this disease and other characteristics of a person, including the diseases they had before COVID. When we arrived at the conclusion that there are very important genomic aspects to this disease, we discovered at the same time that we had not considered other factors just as important as the parameters we had already included in the study. So we went back and corrected it, and we still obtained interesting results, this time considering a larger number of factors.
SOTIRIOS BISDASO: I haven't personally experienced bias, but to be honest, I'm very careful to avoid taking decisions based on AI that is biased. For this reason, in my clinical practice, for example, I ask the AI developer a lot of questions about how the algorithm was created and validated: whether, for example, the right sample of the population was examined, and whether the data was post-processed in an appropriate way.
In the previous steps, you learned what bias is and how it can cause AI systems to produce prejudiced results, undermining their trustworthiness. To find out whether this actually occurs in real-life practice, we turned to people in the field and asked them a few questions about their experience.

We asked several experts whether they had encountered bias and how they dealt with it. Do you understand their reasoning? Can you think of alternatives to the way they dealt with the particular types of bias they encountered? Discuss your ideas with fellow learners in the discussion section.

This article is from the free online

How Artificial Intelligence Can Support Healthcare

Created by
FutureLearn - Learning For Life
