
Algorithmic bias

Algorithmic bias is an increasingly important concern in the age of AI and machine learning. Prof. Ching-Fu Lin discusses it in this article.
In today’s digital age, as artificial intelligence (AI) and machine learning become integral to societal functions, algorithmic bias has emerged as a major concern.

At its core, algorithmic bias refers to systematic errors in an algorithm’s outcomes, originating from sources such as biased training data, flawed feature selection, or the design of the algorithm itself. For instance, if an AI system is trained on historical data that inadvertently favors a particular demographic because of past prejudices, the algorithm is likely to reproduce and amplify those biases, producing skewed and potentially harmful results.
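To make the mechanism concrete, here is a minimal, purely illustrative sketch in Python. The group names and hiring figures are invented assumptions, not data from the article; the point is simply that a “model” which learns each group’s historical hiring rate will reproduce whatever disparity its training data contains.

# Hypothetical historical hiring records (invented numbers), skewed toward group_A.
historical_hires = {
    "group_A": (100, 60),  # (applicants, hired)
    "group_B": (100, 30),
}

# "Training": the model learns each group's past hire rate.
learned_rates = {
    group: hired / applicants
    for group, (applicants, hired) in historical_hires.items()
}

# "Prediction": score equally qualified new applicants from each group.
for group, rate in learned_rates.items():
    print(f"{group}: predicted hire probability = {rate:.2f}")

# group_A is favored purely because past decisions favored it; the model has
# learned the historical prejudice as if it were a legitimate pattern.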


This concern escalates significantly when biased algorithms are deployed in the public domain. When public authorities rely on these AI tools in critical decision-making, the embedded biases can perpetuate existing inequalities, undermining the very essence of fairness and justice. Predictive policing, in which algorithms forecast crime patterns from historical data, brings this challenge into sharp relief: if that data is rooted in the biased policing practices of the past, the algorithms can further entrench those biases, creating a vicious cycle of prejudice and discrimination.
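The feedback loop can also be sketched in a few lines of Python. The district names, starting counts, and detection model below are invented assumptions used only to illustrate the dynamic: patrols allocated according to past records keep flowing to the district with more recorded crime, even when the underlying crime rates are identical.

# Two districts with identical true crime rates, but district_X starts with
# more recorded incidents because of (hypothetical) past over-policing.
recorded = {"district_X": 120, "district_Y": 100}
true_rate = {"district_X": 1.0, "district_Y": 1.0}

for year in range(1, 6):
    total = sum(recorded.values())
    for district in recorded:
        patrol_share = recorded[district] / total             # patrols follow past records
        detected = true_rate[district] * 100 * patrol_share   # more patrols, more records
        recorded[district] += detected
    print(year, {d: round(v) for d, v in recorded.items()})

# The gap in recorded crime between the districts keeps widening, even though
# the true rates never differ: the biased history feeds the next allocation.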


© Ching-Fu Lin and NTHU, proofread by ChatGPT
This article is from the free online course AI Ethics, Law, and Policy.
