
Examples of ethical questions

The central ethical challenges connected to two examples of AI: self-driving cars and AI-based recruitment
[Image: keyboard with a ‘find job’ key. © University of Twente]

This article investigates the central ethical challenges connected to two examples of AI: self-driving cars and AI-based recruitment. In both examples, the ethical issues are connected to the ways in which AI systems either make decisions on their own or help to shape human decision-making processes. AI thereby raises issues of potential bias, transparency, and accountability.
After investigating the various ways in which artificial intelligence has an impact on human beings and society, we now move to the ethical analysis of these impacts. In discussions on how to shape our digital society in a socially and ethically responsible way, many ethical issues play a role, ranging from privacy, autonomy, and security to control, human dignity, equity, and power. ‘AI Ethics’ has in fact become a large field in recent years: many companies and organisations have developed an ethical code or framework for AI. Let us take a look at two examples that show the need for ‘ethical AI’.

Self-driving cars and the ‘trolley problem’

Our first example is the ethics of automated decision-making in self-driving cars. When developing the artificial intelligence that enables autonomous vehicles to participate in traffic, it is inevitable that decisions are made about the behaviour of the vehicle in critical situations. A standard example is the imaginary situation in which a car takes a curve at high speed in a very narrow street and suddenly someone crosses the road. For self-driving cars, ‘accidents’ do not exist: the algorithms on which they function will respond to any situation and cannot act ‘in a panic’. They could be programmed to withhold a decision and act randomly, but that, too, is an intentional act of the people designing the car. What should the car decide to do: avoid the pedestrian and hit a wall, risking the lives of the passengers in the car, or save the lives of its passengers, risking the life of the person crossing the road?
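To make this concrete: even a vehicle’s ‘random’ fallback has to be written down in advance. The minimal sketch below (with hypothetical scenario names and actions, not any manufacturer’s actual code) shows how every response to a critical situation, including leaving it to chance, is a design decision made long before an accident occurs.

```python
import random

def emergency_policy(scenario: str) -> str:
    """Hypothetical sketch of a vehicle's pre-programmed response to a
    critical scenario; the scenario names and actions are invented."""
    if scenario == "pedestrian_crosses_narrow_street":
        options = [
            "swerve_into_wall",    # risks the passengers
            "brake_and_continue",  # risks the pedestrian
        ]
        # Even delegating the choice to chance is itself a deliberate
        # design decision, made by the engineers in advance.
        return random.choice(options)
    return "full_brake"  # default behaviour for all other scenarios

print(emergency_policy("pedestrian_crosses_narrow_street"))
```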
This dilemma resembles a classical thought experiment in ethics: the so-called ‘trolley dilemma’. Imagine five people who are tied up on the railway track while a trolley is approaching, and you are near a lever that can switch the trolley to a set of tracks on which only one person is tied up: is it morally justifiable to pull the lever? There is not one single answer to dilemmas like this: it depends on fundamental ethical beliefs and frameworks. One line of argumentation is that it is better to sacrifice the life of only one person than to allow five people to be killed. Another is that the value of human lives does not allow for calculation: every life is valuable in itself and may never be sacrificed. A third line focuses more on the responsibilities of the individual who has to make a decision: the decision not to choose is itself a decision and can be criticised. The trolley dilemma shows how necessary, and also how difficult, it is to build ethical decision-making into AI systems.
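The disagreement between these frameworks can be made visible in a few lines of code. The toy illustration below (an assumed framing for this article, not a real ethics engine) shows how a consequentialist rule and a duty-based rule, given the same facts, recommend opposite actions.

```python
def utilitarian_choice(on_main_track: int, on_side_track: int) -> str:
    # Consequentialist reasoning: minimise the total number of deaths.
    return "pull_lever" if on_side_track < on_main_track else "do_nothing"

def deontological_choice(on_main_track: int, on_side_track: int) -> str:
    # Duty-based reasoning: actively redirecting harm onto someone is
    # impermissible regardless of the arithmetic, so never pull the lever.
    return "do_nothing"

print(utilitarian_choice(5, 1))    # -> pull_lever: one death instead of five
print(deontological_choice(5, 1))  # -> do_nothing: a life may never be sacrificed
```

The third line of argumentation resists encoding altogether: it criticises ‘do_nothing’ as a choice in its own right, which is precisely what makes automating such decisions so difficult.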

AI-based recruitment: good intentions and persistent bias

Another example is the development of AI-based recruiting tools. As the report “Valued at work – Limits to digital monitoring at the workplace using data, algorithms and AI” of the Rathenau Instituut explains, such tools can, in principle, pass fairer judgment than human beings: algorithms always ‘measure’ the same aspects and process all input in the same way. AI systems can therefore not only ensure a fair procedure, but also fair outcomes. Yet it appears quite hard to achieve this fairness in practice, since AI systems have to be trained on datasets; the diversity of these datasets, and the method used to train the system to recognise suitable candidates, therefore play a key role in its output. When a system uses as a benchmark the top thirty employees in an organisation where women have long been underrepresented, it will keep repeating that discriminatory legacy. A concrete example is Amazon, which had to suspend a system for automatically selecting applicants because it put women’s applications at the bottom of the pile, a result of its training on historical data: more men than women work in the IT industry. Bias is a severe problem in AI systems. What happens when such biases start to play a role in other domains as well, such as banking (getting a loan) or policing (profiling potential criminals)?
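A minimal sketch of this mechanism (invented numbers and a deliberately simplified scoring rule, not Amazon’s actual system) shows how a benchmark drawn from a historically skewed workforce makes a screening model reward resemblance to the past:

```python
# Hypothetical benchmark: the 'top thirty' employees of an organisation
# where women have long been underrepresented (invented numbers).
top_30_employees = [{"gender": "m"}] * 27 + [{"gender": "f"}] * 3

# 'Learn' what a top employee looks like from the historical benchmark.
share_male = sum(e["gender"] == "m" for e in top_30_employees) / 30

def benchmark_score(applicant: dict) -> float:
    # Applicants who resemble the historical top thirty score higher, so
    # the rule rewards being male simply because the past workforce was.
    return share_male if applicant["gender"] == "m" else 1 - share_male

print(benchmark_score({"gender": "m"}))  # 0.9 -> ranked near the top
print(benchmark_score({"gender": "f"}))  # 0.1 -> bottom of the pile
```

The toy rule makes the bias explicit for readability; in real systems the same effect usually arises indirectly, through features that correlate with gender, which makes it much harder to detect.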


© University of Twente
This article is from the free online course Philosophy of Technology and Design: Shaping the Relations Between Humans and Technologies.
