

How does AI make decisions?

It involves more than data, maths and ‘black box’ models. In fact, all kinds of value judgments are made during development.

Here, we discuss how AI systems arrive at a decision. This involves more than data, maths and ‘black box’ models: all kinds of value judgments are made during the development and deployment of AI systems.

It is not always straightforward to define goals, mutually exclusive categories, and relevant attributes, yet algorithms require exactly that. Moreover, AI systems consume energy, have a physical dimension, and involve human labour and politics. We challenge you to think about what kinds of choices are made, and by whom.

Before an AI system makes any decision, many decisions have already been made, explicitly or implicitly: What is the system intended to do? Who gets to decide what error rates are acceptable? Whose values inform the system? Who does it affect? How can one check whether the system is accurate and fair? How can users dispute its decisions? How was the system designed to be used? And what if we change our minds about the specifications and values underlying the system?

Users of AI systems may be inclined to ‘trust’ them to make “rational” or “objective” decisions, and it can be very difficult to scrutinise a decision made by AI because of its complexity. Moreover, once you are used to the convenience and accuracy of the outcome – a decision or recommendation – you may become less alert to errors and less inclined to review the results critically.

However, we still expect that someone can account for AI decisions – that we can go somewhere to get an explanation about why the system arrived at a certain decision. Accountability is a major theme in AI ethics.

Read more

  • Roos de Jong contributed to a report by the Rathenau Instituut explaining how constitutional issues arise because AI systems sometimes operate in non-transparent ways and risk violating human rights. → Grip op algoritmische besluitvorming bij de overheid (“Getting a grip on algorithmic decision-making in government”; Dutch only).
  • Other Rathenau reports may also be of interest in the context of this course. In Urgent Upgrade, the Rathenau Instituut demonstrates that digitalisation (including AI) can lead to a wide range of social and ethical issues. Next to privacy and security, issues such as control of technology, justice, human dignity, and unequal power relationships are also pivotal.
  • Kate Crawford explains how AI systems both reflect and produce social relations and understandings of the world in her book The Atlas of AI. → Crawford, K. (2021). The Atlas of AI. Yale University Press.
  • Maranke Wieringa gives a systematic review of the work that has been done in the field of ‘algorithmic accountability’. → Wieringa, M. (2020). “What to account for when accounting for algorithms: A systematic literature review on algorithmic accountability.” In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 1–18.
  • Solon Barocas and Andrew D. Selbst address sources of unintentional discrimination. → Barocas, S. and A. Selbst (2016). “Big Data’s Disparate Impact.” California Law Review 104, 671.
This article is from the free online

Philosophy of Technology and Design: Shaping the Relations Between Humans and Technologies

Created by
FutureLearn - Learning For Life
