
Can there be ethics in things?

Ethical implications of technological mediation explained by Verbeek
Hello everybody. Welcome to the third week of our online course. After having explored the idea of technological mediation over the past week, this week we will focus on its ethical implications. Can there be ethics in technologies? Can there be ethics in things? It might sound totally odd. I mean, ethics is something that humans do, not something that things do, right? But still, ethics is typically about the questions of ‘how to act?’ or ‘how to live our lives?’. And the idea of mediation shows that technologies play a role in how we act, in how we live our lives.
So that means that technologies help us to answer the ethical questions that we ask ourselves, and that suggests that technologies are at least somehow ethically significant. Let’s go back to the example of the coin lock used in supermarket carts to make sure that people return their carts to the place where they got them. And there’s a clear norm in such a lock: the norm to return the cart to the place where you got it, so that you don’t leave it in the parking lot. So there can be norms in technologies. The same thing holds true for values. People have argued that there can be gender stereotypes, gender bias, in shavers.
Some of the lady shavers that you can buy cannot be opened. There are no screws. They are sealed, as it were. So if they stop functioning, you have to throw them away and buy a new one. So implicitly, such a shaver embodies a different set of values for women than for men. It stereotypes women as technologically incompetent. We can also go one layer deeper. We can examine again the example of ultrasound that we saw last week. Making an ultrasound scan of a fetus changes the relation between mother and child. In the old days, a doctor had to ask the mother what she felt and had to feel the fetus through the belly of the mother.
But now the mother has become the environment of the fetus, as it were, and the fetus appears on the screen almost as a full-blown baby, even when it’s only 12 weeks old. The fetus also appears in medical terms as a potential patient, about whose life the parents might even need to make a decision. So ultrasound changes what a fetus is for us, what it means to expect a child. And it also raises all kinds of new responsibilities for expecting parents, especially in countries where abortion is legal. Technologies mediate moral relations, you could say. So how to understand this ethics of technology then? I mean, normally we see ethics, as I said, as something human.
So don’t we make a little mess of ethical theory if we now start claiming that there can be ethics in things? I mean, doing ethics, for instance, requires intentions. We cannot be held accountable for an unintended action. And it also requires freedom. If you don’t do something out of your own free will, you cannot be held accountable either. Well, things do not have intentions. They don’t have freedom. So how could they possibly qualify as moral agents? We will not blame cars for car accidents, for instance. And the other side of the story is that behaviour influenced by a technology cannot always be seen as a moral action.
Is slowing down near a school because there is a speed bump on the road an ethical decision, or is it just steered behaviour? And so the question arises whether it makes sense at all to consider technologies as moral agents. From the perspective of mediation, this question is actually simply the wrong question, because it takes as its starting point, again, the split between humans and technologies, between subjects and objects. And it is this split that we want to overcome with the very idea of mediation. From the perspective of mediation you would say: no, things are not moral agents in themselves. They do not do ethics. But in fact, neither do humans on their own. Humans always make ethical choices in interaction with technologies.
We always do it together. We are in it together. Technologies are moral mediators. Ethical actions and decisions are never taken in a vacuum; they are always taken within a context in which technologies inevitably play a role. Technologies mediate ethics. They mediate morality, you could say. They inform our ethical choices. They inform our ethical behaviour. And therefore, we need to deal with these mediations in a responsible way when we design or implement technologies in our society.
Seeing the ethical dimension of technologies doesn’t take ethics away from humans and move it towards technology. It actually enlarges our scope of ethical responsibility, because we now become responsible not only for our own ethics, but also for the ethics of the things that we design.

In this video we explain that technological mediations have an impact on how we act. This makes technological artefacts at least ethically significant. But what does it mean to mediate morality? Who or what has moral agency?

The approach of technological mediation has many implications for the ethics of technology. If ethics is about the question of ‘how to act’ or ‘how to live’, the phenomenon of technological mediation shows that technologies are ‘morally charged’: they help human beings to be moral agents. The technological mediation of the human-world relation has two dimensions: one concerning human perceptions and interpretations, the other human actions and practices. On the one hand, technologies help to shape human experiences of the world; on the other, they help to shape how humans act and live their lives. Both dimensions are morally significant, because they help to shape moral actions either ‘directly’, by influencing people’s behaviour, or ‘indirectly’, by shaping the perceptions and interpretations on the basis of which people make decisions. Coin locks in shopping carts stimulate people to return their carts to a central place, while sonograms make expecting parents responsible for decisions about the life of a child with congenital abnormalities.

To what extent can technologies be moral agents?

The proposal to speak about technologies in ethical terms has sparked a serious discussion in the philosophy and ethics of technology about the question of moral agency. This discussion revolves around the question of to what extent technologies can be moral agents. Critics of a moral approach to things typically fear that ascribing moral agency to nonhuman entities would hollow out human responsibility, because it could lead to absurd practices like blaming cars for traffic accidents. Moral agency, these critics argue, can only be a human affair (Peterson and Spahn 2011; Selinger et al. 2012).

The approach of technological mediation does not make technologies moral agents in themselves, though. Only in the context of the relations human beings have with them can they help to organise people’s moral actions and perceptions. Moral agency is distributed over humans and things, as it were: if one of the two were missing, this type of agency could not exist. Each in their own way – distinct, but not separated – humans and things contribute to moral actions and decisions (cf. Verbeek 2011). Reducing ethics to an exclusively human affair leaves us with a drastically impoverished world. Because such an approach starts from a radical separation between subjects and objects, it forces us to choose between either reserving moral agency for the human domain or claiming that nonhuman entities can be moral agents as well. In the real world in which we all live, though, such purified subjects and objects do not exist. Actual moral actions and decisions take place in complex and intricate connections between humans and things, which have moral agency as a result rather than as a starting point. Such a hybrid approach to the relations between humans and things does not reduce human morality, but adds to it; it shows dimensions that normally remain underexposed. Making visible the moral significance of things does not undermine human responsibility by blaming cars for accidents, but rather expands the ways in which we can design, implement, and use technologies in responsible ways.

What do you think about technology and moral agency? Who are the decision-makers? What type of behaviour should we steer? And in which direction? Think about these questions for a moment, then proceed to the next step for a discussion!


Peterson, Martin, and Andreas Spahn. “Can Technological Artefacts Be Moral Agents?” Science and Engineering Ethics 17.3 (2011): 411–424.

Selinger, Evan, et al. “Erratum to: Book Symposium on Peter-Paul Verbeek’s Moralizing Technology: Understanding and Designing the Morality of Things. Chicago: University of Chicago Press, 2011.” Philosophy & Technology 25.4 (2012): 605–631.

Verbeek, Peter-Paul. Moralizing Technology: Understanding and Designing the Morality of Things. University of Chicago Press, 2011.

This article is from the free online course Philosophy of Technology and Design: Shaping the Relations Between Humans and Technologies, created by FutureLearn.
