
Risks & bias

This section describes the risks and biases of AI in education, highlighting how algorithms may carry inherent bias.

AI bias occurs when AI systems “produce biased results that reflect and perpetuate human biases within a society, including historical and current social inequality” (IBM, 2023). We discussed this in depth in the earlier section, Introduction to AI (Week 1).

There is always risk when using AI algorithms in any context because AI systems “can perpetuate and magnify existing biases, prejudices, and discrimination” (Mhlanga, 2023). In essence, an algorithm may be biased against a particular cohort of people, with the risk that those people do not receive fair treatment.

Who does this impact?  

Marginalised groups are disproportionately impacted by AI bias. These groups include, but are not limited to, non-native English speakers, ethnic minorities, women, children, people with disabilities, and the socio-economically disadvantaged.

Biased assumptions might include, for example:

  • Gender: Men are leaders and women are teachers.
  • Ethnicity: White men are principals and men of colour are custodians.
  • Language: English is the universal language and all cultural references are Anglocentric.
  • Age: Janitors are older men.

Here is an example of possible AI bias in practice:

ChatGPT was prompted to create two images: one of a teacher, the other of a principal. Below are the images it produced:

ChatGPT image of a teacher

ChatGPT image of a principal

ChatGPT was then asked to produce an image of a custodian. Below is the image produced:

ChatGPT image of a custodian

TASK: Open ChatGPT in another browser. Prompt it to produce an image related to education. Use non-descriptive language such as ‘produce an image of a teacher’. Observe the image produced. Ask yourself, are there biases within the image? What are the biases? How might these be harmful?

Watch this TED Institute talk “Can we protect AI from our biases?” by Robin Hauser


Another potential AI risk, perceived by some, is that AI will eventually replace teachers altogether.

As AI becomes more widely used, some in education fear that it could replace teachers and take jobs. Although this concern is understandable, the consensus view is that “[AI] ought to be regarded as a device that supplements classroom instruction and student learning rather than one that supplants it” (Mhlanga, 2023). AI systems have ample information and can help teach certain subjects, but they lack real understanding, reflection and empathy, all important qualities in a teacher. Understood as a beneficial support within classroom settings rather than a replacement, AI can be used in a complementary way to give students the support they need to thrive.

Why won’t AI replace teachers altogether?

  • The human connection: Good teachers build effective relationships with students that help them thrive as learners and citizens; they are often much more than subject educators. Without this social aspect of learning, students lose a vital facet of their overall education.
  • Emotional intelligence: AI does not have the capacity to understand human emotions or show empathy. Indeed, AI does not “understand” in the human sense at all; it predicts probabilities, and students are more complex than algorithmic predictions (see the sketch below).
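
To make that last point concrete, here is a minimal, purely illustrative Python sketch of what “predicting probability” means. The prompt, candidate words and probabilities are invented for illustration; a real language model works at a vastly larger scale, but the principle is the same: it ranks likely continuations of a prompt rather than understanding it, so any skew in its training data flows straight into its output.

# Illustrative sketch only: a toy "model" that, given the prompt
# "The school hired a new ...", assigns made-up probabilities to
# possible next words.
next_word_probabilities = {
    "teacher": 0.46,
    "custodian": 0.30,
    "principal": 0.18,
    "astronaut": 0.06,
}

# The "answer" is simply the highest-probability continuation, so any bias
# in the data the probabilities were learned from is reproduced in the output.
most_likely = max(next_word_probabilities, key=next_word_probabilities.get)
print(f"Predicted next word: {most_likely}")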
This article is from the free online course AI Ethics, Inclusion & Society, created by FutureLearn - Learning For Life.
