Algorithmic bias article

In this activity, we’ll be talking about “algorithmic bias.”

Please download and read the article below; we will discuss this topic in the audio lecture in the following steps.

Abstract

As machine learning increasingly affects people and society, it is important that we strive for a comprehensive and unified understanding of potential sources of unwanted consequences. For instance, downstream harms to particular groups are often blamed on “biased data,” but this concept encompasses too many issues to be useful in developing solutions. In this paper, we provide a framework that partitions sources of downstream harm in machine learning into six distinct categories spanning the data generation and machine learning pipeline. We describe how these issues arise, how they are relevant to particular applications, and how they motivate different solutions. In doing so, we aim to facilitate the development of solutions that stem from an understanding of application-specific populations and data generation processes, rather than relying on general statements about what may or may not be “fair.”

Full article

Harini Suresh & John V. Guttag, A Framework for Understanding Unintended Consequences of Machine Learning (2020), v3, https://arxiv.org/pdf/1901.10002.pdf

Pages 2 to 6 are helpful for answering the quiz and test.

© Harini Suresh, John V. Guttag
This article is from the free online course AI for Legal Professionals (I): Law and Policy.
