Artificial intelligence and bias

Bias is a potential limitation of AI: it can introduce discriminatory factors into decisions.

Now let’s look at another potential limitation of AI.

Bias refers to a preference or unfairness that influences decisions or actions in a way that is not objective or neutral. In other words, decisions are made unfairly because they rest on some discriminatory factor.

People often talk about computers being unable to discriminate, or being perfectly objective, but this isn’t always true, because systems that use machine learning are trained on existing data sets. For example, imagine a robot that learns by reading books. If it only reads books which say that ‘pandas are the best animal’, it will assume this is true because multiple sources have said it. When asked what the best animal is, it will state this as fact, and people who use the tool may in turn be biased by it.

This can cause problems when the algorithms (the steps the system takes to make something happen) start using this data to make decisions.

Example: Curriculum vitae (CV) sorter

A company is designing an artificial intelligence (AI) tool that filters job applicants based on their CVs. The tool is designed to save time, as the initial sifting phase can take hundreds of hours of work when there are many applicants. It automatically rejects anyone whose CV is categorised as ‘bad’.

The tool uses machine learning to decide whether a CV is ‘good’ or ‘bad’. The machine learning algorithm is trained by giving it all of the previous CVs submitted to the company, along with whether each candidate was successful. It then uses the information in the CVs to try to create ‘rules’ for what makes a CV ‘good’ or ‘bad’. Usually, this involves looking for patterns in the data; for example, a keyword or phrase that is common in ‘good’ or ‘bad’ CVs.
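To make this concrete, here is a minimal sketch of the kind of text classifier such a tool might use. It assumes the scikit-learn Python library, and the CVs and labels are invented toy data, not anything from this article; a real system would be trained on thousands of CVs.

```python
# A minimal sketch of a CV-sorting classifier, assuming scikit-learn.
# The CVs and labels below are invented toy data for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Historical CVs and how human reviewers judged them:
# 1 = candidate was successful ('good'), 0 = rejected ('bad').
cvs = [
    "Alan Smith. Led a software team and delivered three projects on time.",
    "Alan Jones. Managed key client accounts and improved retention.",
    "Priya Patel. Led a software team and delivered four projects on time.",
    "Sam Lee. Some retail experience, no team leadership.",
]
labels = [1, 1, 0, 0]  # judged by humans, so any past bias is baked in

# Count the words in each CV, then learn which words predict 'good'.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(cvs, labels)

# Sift a new applicant exactly as the tool would.
print(model.predict(["Alan Brown. Led a software team."]))  # likely [1]
```

Notice that the model never sees the human reasoning behind each decision; it only sees which words tend to appear in ‘good’ and ‘bad’ CVs.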

Because the CVs were judged by humans, there is likely to be some bias in the training data. The algorithm might also introduce bias of its own by accident: it might start to find patterns in names or addresses, creating unintentional bias. For example, if there have been lots of ‘good’ applicants named ‘Alan’, it might begin to assume that anyone named Alan is good!
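Building on the sketch above, we can peek at which words the model has learned to reward. Again, this is illustrative only, using the toy data and variable names assumed earlier, but the mechanism is the one described here: a name leaking into the learned ‘rules’.

```python
# Continuing the sketch above: inspect which words the model learned
# to reward. Because applicant names were left in the training text,
# a name can become a 'rule' - the accidental bias described here.
import numpy as np

vectoriser = model.named_steps["countvectorizer"]
classifier = model.named_steps["logisticregression"]

words = vectoriser.get_feature_names_out()
weights = classifier.coef_[0]  # positive weight pushes a CV towards 'good'

# Print the five words most strongly associated with a 'good' CV.
for i in np.argsort(weights)[::-1][:5]:
    print(f"{words[i]}: {weights[i]:+.2f}")

# With the toy data above, 'alan' is likely to appear near the top,
# even though a first name says nothing about candidate quality.
```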

A company that uses this CV sorter may eventually notice that CVs from certain groups of people are being rejected even when they are actually good, but this can take a very long time to spot, and by then the damage has already been done. Because of the inbuilt bias in its training data, the system may have rejected perfectly good candidates in favour of those who share characteristics with previously successful applicants.

Next step

So that’s the potential for bias. Now let’s move on to another ethical consideration with AI: its environmental impact.
