
How racism is embedded in AI and algorithms

In this article, learn about how racism is embedded in AI and algorithms, and look at examples of AI systems that produce racist outcomes.
[Illustration: a series of shapes connected by arrows, with a large rectangle containing an exclamation mark at the centre. © Creative Computing Institute]

Many examples of technologies that reinforce racism are underpinned by Artificial Intelligence (AI), with racial biases embedded in the algorithms that power them. Let's look at this in more detail.

Racial bias in algorithms

When we consider racial bias in algorithms, we really need to look at algorithms in artificial intelligence (AI) or machine learning. In machine learning, applications learn from data and improve their accuracy over time without being explicitly reprogrammed.

What is AI?

Artificial Intelligence (AI) is the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages. The field of AI promises to leverage the speed and power of technology to solve problems as well as, or better than, humans can.

What is machine learning?

Machine Learning is the use and development of computer systems that are able to learn and adapt without following explicit instructions, by using algorithms and statistical models to analyse and draw inferences from patterns in data. This process of learning and adaptation is often referred to as training.

Machine learning is a common way that Artificial Intelligence is applied, and it is heavily reliant on data. When biased data is used to train a computer system, the outcomes will also be biased, as the computer system has made inferences from patterns in the data.
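To see how this plays out, here is a minimal sketch of a model trained on historically biased decisions reproducing that bias. The scenario, groups and approval rates are invented purely for illustration:

```python
# A minimal sketch: a classifier trained on skewed historical data
# reproduces that skew. All data here is invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Imagine past (human) decisions that were biased: group 0 was approved
# 80% of the time, group 1 only 30%, even though "skill" is drawn from
# the same distribution for both groups.
n = 2000
group = rng.integers(0, 2, size=n)
skill = rng.normal(0, 1, size=n)
approved = (rng.random(n) < np.where(group == 0, 0.8, 0.3)).astype(int)

# Train on the biased historical outcomes.
X = np.column_stack([group, skill])
model = LogisticRegression().fit(X, approved)

# Two applicants with identical skill but different group membership:
# the model carries the historical bias forward into new decisions.
test = np.array([[0, 0.0], [1, 0.0]])
print(model.predict_proba(test)[:, 1])  # roughly 0.8 vs 0.3
```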

AI systems that produce racist outcomes

Here are some examples of AI systems that produce racist outcomes:

1. Object detection is used in driverless cars to detect human figures

A 2019 study by researchers from Georgia Tech found that object detection algorithms were more likely to fail to detect pedestrians with darker skin (1).
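The researchers measured this as a gap in detection rates across skin-tone groups. The sketch below shows what that kind of disaggregated evaluation looks like in outline; the records are invented placeholders, not the study's data:

```python
# Hypothetical sketch of a disaggregated evaluation: compute the
# detection rate separately per skin-tone group. The records below
# are invented examples, not the study's data.
from collections import defaultdict

# (skin_tone_group, was_the_pedestrian_detected)
detections = [
    ("lighter", True), ("lighter", True), ("lighter", True),
    ("lighter", False),
    ("darker", True), ("darker", False), ("darker", False),
    ("darker", False),
]

totals = defaultdict(int)
hits = defaultdict(int)
for group, detected in detections:
    totals[group] += 1
    hits[group] += detected

# A persistent gap between these rates is the "predictive inequity"
# the 2019 study reports.
for group in totals:
    print(f"{group}: detection rate {hits[group] / totals[group]:.0%}")
```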

2. Racial bias in an algorithm used to predict risk levels and needs in the US healthcare system

In this example, healthcare costs were used as a proxy for a patient's health needs. In the US, Black patients have historically faced barriers to accessing healthcare, so less money was spent on their care at the same level of illness. As a result, the algorithm consistently scored Black patients as healthier than equally sick white patients, minimising their healthcare needs (2).
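A small sketch shows why the proxy fails. The numbers below are invented, but they capture the mechanism: if spending is suppressed for one group at the same level of illness, then at any given cost-based risk score, patients in that group are sicker:

```python
# Invented illustration of the cost-as-proxy problem: equal need,
# unequal historical spending, so a cost-based risk score under-serves
# the group whose access to care was restricted.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
black = rng.integers(0, 2, size=n)        # toy group flag
illness = rng.gamma(2.0, 1.0, size=n)     # true need, same for everyone

# Assumption for illustration: at the same illness level, historical
# spending on Black patients was 40% lower due to access barriers.
cost = illness * np.where(black == 1, 0.6, 1.0)

# The algorithm ranks patients by (predicted) cost; here cost itself
# stands in for the risk score. The top 10% get extra care.
selected = cost >= np.quantile(cost, 0.9)

for flag, name in [(0, "white"), (1, "Black")]:
    mask = selected & (black == flag)
    print(f"{name}: mean illness among selected = {illness[mask].mean():.2f}")
# Black patients must be sicker to clear the same threshold, which is
# exactly how their needs end up minimised.
```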

3. Racist Image Cropping on Twitter

In 2020, Twitter users began noticing that large images portraying people of different races would be cropped to focus on lighter-skinned people, cutting out darker-skinned people (3).

Tony Arcieri experimented to see whether Twitter's algorithm would prefer Mitch McConnell or Barack Obama and found that it repeatedly cropped out the image of Obama. Jordan Simonovski found the same result even with cartoon characters.

The algorithm also produced the same outcome when applied to dogs of different colours. Later in the course, we will share more examples of applied AI with serious implications for people's lives.
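Twitter said its cropper relied on a saliency model that scores how visually 'interesting' each region of an image is, then crops around the highest-scoring region. The toy sketch below (not Twitter's actual model) shows why that design cuts people out: whoever the saliency model scores lower simply disappears from the crop:

```python
# Toy sketch of saliency-based cropping: slide a fixed window over a
# saliency map and keep the window with the highest total score.
# The map below is invented; real saliency comes from a trained model.
import numpy as np

def best_crop(saliency, crop_h, crop_w):
    """Return the (row, col) of the crop window with maximal saliency."""
    h, w = saliency.shape
    best_score, best_pos = -np.inf, (0, 0)
    for r in range(h - crop_h + 1):
        for c in range(w - crop_w + 1):
            score = saliency[r:r + crop_h, c:c + crop_w].sum()
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos

# Two faces in one wide image: if the model assigns higher saliency to
# the lighter-skinned face, the crop centres on it and the other
# person is cut out entirely.
sal = np.zeros((10, 20))
sal[4:6, 2:4] = 0.9    # face A: scored as more "salient"
sal[4:6, 16:18] = 0.5  # face B: scored lower
print(best_crop(sal, 6, 8))  # prints (0, 0): the window covering face A
```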

4. Beauty AI

Beauty AI was the first beauty contest to be judged by AI (5, 6). In 2016 there were 60,000 applications from over 100 countries, and entrants weren't allowed to wear makeup or glasses, or to have beards.

Robot judges used parameters such as wrinkles, face symmetry, skin colour, gender, age group and ethnicity to determine the winners. Although 11% of the entrants were Black, none of the 45 winners were. This is thought to be down to the algorithms used to perform the analysis, for a number of reasons:

  • Inconsistent lighting when analysing darker skin, meaning some images were excluded from the analysis
  • Human bias when creating the algorithms (consider how the code decides whether wrinkles increase or decrease a beauty score)

If this can occur in a beauty contest, imagine the impact when facial data, and the same human biases, are used for other, higher-stakes kinds of analysis.
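One simple audit that can flag this kind of outcome is to compare each group's selection rate, a standard disparate-impact check. The sketch below uses the article's rough figures plus an assumed split of the 60,000 entrants, purely for illustration:

```python
# Minimal disparate-impact check: compare selection (winning) rates
# across groups. Entrant counts are assumed from the article's rough
# figures (11% of 60,000) purely for illustration.
def selection_rate(winners, entrants):
    return winners / entrants if entrants else 0.0

entrants = {"Black": 6_600, "other": 53_400}
winners = {"Black": 0, "other": 45}

rates = {g: selection_rate(winners[g], entrants[g]) for g in entrants}
for group, rate in rates.items():
    print(f"{group}: selection rate {rate:.5%}")

# The "four-fifths" rule of thumb flags a problem when one group's
# rate is below 80% of another's; a rate of zero fails trivially.
ratio = rates["Black"] / rates["other"]
print(f"disparate impact ratio: {ratio:.2f}")
```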

5. Racist search engine autosuggestions

Search engines have also produced racist results: for example, the autosuggestions feature listing some less than desirable results when entering 'Why are Nigerians so…'.

This is explored further in Algorithms of Oppression: How Search Engines Reinforce Racism (7, 8). How could this happen?

  • Search algorithms reinforce racist views that already exist in society
  • The people making algorithms don't reflect the diversity of society

Algorithms have since been updated to actively remove negative results, or to enable users to limit the type of data that is returned.
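One common form that mitigation takes is filtering suggestions against a denylist of harmful patterns before they are shown. The sketch below is a hypothetical illustration of the idea, not how any real search engine actually works:

```python
# Hypothetical sketch of autosuggest filtering: drop any suggestion
# matching a denylist of harmful stereotype patterns. Real systems
# are far more sophisticated than this illustration.
import re

DENYLIST = [
    re.compile(r"why are \w+ so (violent|criminal|lazy)", re.IGNORECASE),
]

def filter_suggestions(suggestions):
    """Return only the suggestions that match no denylisted pattern."""
    return [s for s in suggestions
            if not any(pattern.search(s) for pattern in DENYLIST)]

raw = [
    "why are nigerians so successful",
    "why are nigerians so violent",  # harmful stereotype: filtered out
]
print(filter_suggestions(raw))  # only the first suggestion survives
```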

References:

  1. Benjamin Wilson, Judy Hoffman & Jamie Morgenstern, 2019. Predictive inequity in object detection, arXiv preprint (Cornell University).
  2. Ziad Obermeyer, Brian Powers, Christine Vogeli & Sendhil Mullainathan, 2019. Dissecting racial bias in an algorithm used to manage the health of populations, Science.
  3. Alex Hern, 2020. Twitter apologises for 'racist' image cropping algorithm, The Guardian.
  4. Ruha Benjamin, 2019. 'Default Discrimination: Is the Glitch Systemic?' in Race After Technology, pp. 77–96.
  5. Cision, 2016. Beauty.AI 1.0 announces the first humans judged by a robot jury; Beauty.AI 2.0 to be launched soon.
  6. Sage Lazzaro, 2016. Here are the winners of the first beauty contest judged by artificial intelligence, The Observer.
  7. Safiya Noble, 2018. Algorithms of Oppression: How Search Engines Reinforce Racism.
  8. Sean Illing, 2018. How search engines are making us more racist, Vox.

Further Resources:

  1. Ruha Benjamin, 2019. Race After Technology: Abolitionist Tools for the New Jim Code.