
Who is in charge of ‘AI’ developments?

Roos de Jong explains how AI systems arrive at decisions.
As we have discussed before, Artificial Intelligence is not a single technology. Various technologies can be combined to enable AI to observe, analyze and act, and to learn from doing so: sensors, big data, robotics, and digital platforms and devices connected to the Internet, for example. In the right environment an AI system can exhibit a certain degree of intelligent behavior and carry out actions. For very specific skills, AI systems can now even outclass people. Yet most often the aim of an AI system is to augment human decision making through human-AI collaboration, for instance to overcome the inefficiency, arbitrariness or bias of human decisions.
Although AI techniques are based on mathematics, this does not mean that they are socially and politically neutral. So let's take a closer look at the question: how do AI systems arrive at a decision?
Determining the system's goals and objectives is an important first step. For an AI chess player, the goal is quite straightforward: win the game. There are clear rules and strategies that can be written out. For the game Go it may be more complicated, but there is always a winner. For facial recognition software, success can also be easily checked. Even though the inner workings of such a system can be very complex and have many layers, it either is or is not able to identify your face. You can test the system with a great number and variety of faces, and then decide what error rates are acceptable and when the system is good enough.
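The idea of testing against an acceptable error rate can be sketched in a few lines of code. This is a minimal illustration, not a real evaluation pipeline: the test results and the acceptance threshold below are hypothetical numbers, and the point is precisely that the threshold is a human value judgment, not something the math decides.

```python
def error_rate(predictions, ground_truth):
    """Fraction of test cases where the system's output was wrong."""
    wrong = sum(p != t for p, t in zip(predictions, ground_truth))
    return wrong / len(ground_truth)

# Imagined results of testing a face-identification system on 10 faces
# (1 = "this is the claimed person", 0 = "this is not").
predictions  = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
ground_truth = [1, 1, 0, 1, 1, 1, 1, 0, 0, 1]

rate = error_rate(predictions, ground_truth)

# Someone has to decide this threshold -- that decision is a value
# judgment made by people, not by the system.
ACCEPTABLE = 0.25

print(f"error rate: {rate:.0%}, good enough: {rate <= ACCEPTABLE}")
```

Note that even this toy version hides choices: which faces are in the test set, and whether a 25% error rate is tolerable, depend entirely on who is asking and who is affected.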
Sometimes, though, it is very difficult to decide what the AI system is supposed to do and what it should take into account. When an AI system needs to function in more complex and dynamic environments, lots of scenarios need to be anticipated. Many variables are at play, and people can think differently about priorities and strategies. Just think of autonomous cars or driving assistants: what is desirable driving behavior, and who should the car protect? Only the driver, or fellow road users as well?
Then there is predicting social outcomes, for instance how a child's life will develop or whether people are likely to show fraudulent behavior in the future. This may be even more challenging, because it is very hard to determine which factors or attributes are actually relevant. Moreover, interpreting data can be very tricky. Which data are actually needed, and which are available? And how should you interpret correlations? The fact that two things correlate does not necessarily mean that the one causes the other.
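The correlation-versus-causation point can be made concrete with a classic toy example (the numbers below are invented for illustration): ice cream sales and drowning incidents correlate strongly, but only because both are driven by a third factor, warm weather.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical monthly data: both quantities rise with temperature.
temperature    = [15, 18, 21, 24, 27, 30]   # degrees Celsius
ice_cream_sold = [20, 35, 50, 70, 90, 110]  # units sold
drownings      = [1, 2, 3, 4, 5, 6]         # incidents

r = pearson(ice_cream_sold, drownings)
print(f"correlation: {r:.2f}")  # close to 1, yet ice cream causes no drownings
```

A model trained on such data could happily use ice cream sales to "predict" drownings, which is exactly why choosing and interpreting attributes is a human responsibility, not a technical detail.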
So developing an AI system requires numerous human decisions. Decisions about goals and objectives are not neutral and can even be contested, as are the ways in which an AI system is trained, tested and later evaluated, perhaps after some period of time.
However, it is not only the software of AI that is sometimes a 'black box'. AI systems also require natural resources and energy: they have an ecological impact. Moreover, the development of the technology requires a lot of human labor; all kinds of microjobs are involved in training and monitoring algorithms. And the labels and classifications used in AI systems can affect people differently. So it is crucial to think about what kinds of decisions have been made in this broader context, and also who made them, what is being optimized, and for whom.


In this video, we discuss how AI systems arrive at a decision. It involves more than data, math and 'black box' models. In fact, all kinds of value judgments are made during the development and deployment of AI systems. It is not always straightforward to define goals, mutually exclusive categories, and relevant attributes, yet algorithms do need them. Moreover, AI systems require energy, have a physical dimension, and involve human labor and politics. We challenge you to think about what kinds of choices are made and by whom.
Before an AI system will make any decision, many decisions have already been made. Explicitly or implicitly; What is the system intended to do? Who gets to decide what acceptable error rates are? Whose values are informing the system? Who does it affect? How can one check whether the system is accurate and fair? How can users dispute the decision? How was the system designed to be used? What if we change our minds about the specifications and values underlying the system?
Users of AI systems may have a tendency to trust them to make 'rational' or 'objective' decisions. And it can be very difficult to scrutinize a decision made by AI because of its complexity. Moreover, once you are used to the convenience and accuracy of the outcome (its decisions or recommendations), you might become less alert to errors and less keen on reviewing the results critically.
However, we still expect that someone can account for AI decisions – that we can go somewhere to get an explanation about why the system arrived at a certain decision. Accountability is a major theme in AI ethics. We will delve into this during the next steps of this course.

Read more

  • Roos de Jong contributed to a report by the Rathenau Instituut explaining how constitutional issues arise because AI systems sometimes operate in non-transparent ways and risk violating human rights. → Grip op algoritmische besluitvorming bij de overheid (Dutch only).
  • Another Rathenau report might be of interest in the context of this course as well. In Urgent Upgrade, the Rathenau Instituut demonstrates that digitalisation (including AI) can lead to a wide range of social and ethical issues. Next to privacy and security, issues such as control of technology, justice, human dignity, and unequal power relationships are also pivotal.
  • Kate Crawford explains how AI systems both reflect and produce social relations and understandings of the world in her book The Atlas of AI. → Crawford, K. (2021). The Atlas of AI. Yale University Press.
  • Maranke Wieringa gives a systematic review of the work that has been done in the field of ‘algorithmic accountability’ → Wieringa, M. (2020). “What to account for when accounting for algorithms. A systematic literature review on algorithmic accountability.” In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency: 1-18.
  • Solon Barocas and Andrew D. Selbst address sources of unintentional discrimination. → Barocas, S. and A. Selbst (2016). “Big Data’s Disparate Impact.” California Law Review 104, 671.
This article is from the free online

Philosophy of Technology and Design: Shaping the Relations Between Humans and Technologies

Created by
FutureLearn - Learning For Life
