Moral Mediation: How can we moralise technology?

Verbeek explains how we can moralise technology: by anticipating, assessing, and designing mediations.
So now that we've seen that technologies are moral mediators, what can we do with that knowledge as a designer? In fact, there are three things that you could do, three things that are increasingly invasive, you could say. First of all, you could simply anticipate the mediations that are involved when you design a new technology, just to make sure that nothing happens that you wouldn't want to happen. Second, you could also systematically assess these technologies ethically, by making an inventory of all the potential effects and then evaluating them with an ethical theory, to really make sure that you can ethically defend the potential implications the technology could have.
And third, you could also try to explicitly design mediations into a technology. That's the most invasive option, and it also raises a lot of ethical questions itself. Let's start with the least invasive option: simply anticipating mediation, simply checking what mediating effects technologies could have on people's behavior and people's experiences, which is not always an easy thing to do. But making a mediation analysis, using your imagination to anticipate how technologies could organize actions and practices, perceptions and interpretations, is always possible as a quick scan, as it were, when you are designing a technology.
If you really want to be able to take more ethical responsibility, you can scale up one level and say: OK, I also need an ethical assessment of what we're doing. That's a more profound step. Then you use an ethical theory, a normative framework, to go through all the potential implications of the technology for the practice in which it will be used and for society, and you try to determine which mediations are desirable and which are not, in order to make a well-informed decision when you are designing a technology.
Third, the most invasive option: we could also explicitly design mediations into a technology, designing technologies with the explicit goal of organizing the relations between their users and their environment, of influencing people's behavior, perceptions and experiences. That is in fact a complicated thing to do, and maybe also an ethically contested thing to do. Complicated, because technologies do not always do what designers thought they would do. It has been shown many times already, for instance, that the more safety measures we install in our cars, the more risk people tend to take when they are driving on the road, because they feel safer. So making cars safer does not then lead to a reduction of casualties in traffic.
So that means that anticipating the behavior of people, anticipating the mediation of technologies, is a complicated thing to do, and it requires a lot of sophistication. That's one complication to deal with when you try to design for mediation. A more complicated thing, of course, is the ethical aspect of it: can all mediation just be defended? I think Hans Achterhuis, a well-known Dutch thinker, is one of the key representatives of the idea that we should try to moralize technologies. He has also had to defend this idea fiercely, because people thought that it was actually quite a bad idea to give engineers, as it were, the power to make ethical, normative decisions about our society.
Hans said: we can indeed be afraid that we lose our freedom if we have technologies to which we delegate specific moral responsibilities. But then again, we are not that good ourselves at taking all of our moral responsibilities. To go back to that example of the car in the fog that you just saw: one of the examples that Hans Achterhuis gave was a big car crash on such a day, where people failed to slow down in the fog, where many cars crashed into each other and many people died.
And so Hans said: maybe what we could also do is have some kind of fog detector in cars that automatically limits the speed of the car to a speed at which it is still safe to drive. Without such a device we have saved the freedom of the driver, but we have lost many lives. Why can't we accept that sometimes we need to delegate some responsibilities in order to achieve the goals that we want to achieve ourselves? So it is a contested thing to do, but there are also good arguments to say that maybe, in some cases, it would be good if our behavior is influenced by technologies anyway.
It would actually be immoral not to take responsibility for it, not to think through the potential mediating effects and try to design mediations that can count on broad democratic support, for instance, into technologies. We will explore this further. Next week, in the week on artificial intelligence, we will introduce you to the approach of 'guidance ethics', which gives you a systematic way of working to deal with mediation in ethical practice.
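To make this delegation of responsibility a little more concrete, the sketch below shows, in a few lines of Python, what the control logic of a fog-triggered speed limiter like the one Achterhuis describes might look like. It is purely illustrative: the visibility threshold, the capped speed, and the function names are assumptions made here for the example, not part of the course material or of any real vehicle system.

```python
# Illustrative sketch (hypothetical names and values): a fog detector that
# limits the speed a car will allow, delegating the "slow down in fog"
# responsibility from the driver to the vehicle.

FOG_VISIBILITY_THRESHOLD_M = 200   # assumed: below this visibility, fog mode applies
SAFE_FOG_SPEED_KMH = 50            # assumed: maximum speed the limiter allows in fog


def allowed_speed(requested_speed_kmh: float, visibility_m: float) -> float:
    """Return the speed the car will actually permit.

    In clear conditions the driver's request passes through unchanged;
    in fog the limiter caps it, whatever the driver asks for.
    """
    if visibility_m < FOG_VISIBILITY_THRESHOLD_M:
        return min(requested_speed_kmh, SAFE_FOG_SPEED_KMH)
    return requested_speed_kmh


# Example: a driver asking for 120 km/h in 80 m visibility is held to 50 km/h,
# while the same request in clear weather (500 m visibility) is granted.
print(allowed_speed(120, 80))    # -> 50
print(allowed_speed(120, 500))   # -> 120
```

The interesting point is not the code itself but the design decision it embodies: the threshold and the capped speed are normative choices, and whoever sets them is, in effect, making the moral decision that the driver no longer makes.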

In this video we explain how one could start moralising technology. I aimed to emphasise that we are fundamentally mediated beings; technologies always mediate us. The theory of mediation – and the examples from the previous step – shows that any design, whether you want it or not, has an impact on human behaviour. There is no way to avoid having an impact.

In the next three steps we will have a closer look at what designers can do to go about this.

  • First, performing a mediation analysis can help them to anticipate (see STEP 3.5 – How can we anticipate the moral dimensions of technology-in-design?) the moral dimensions of the technology-in-design, for instance in order to avoid undesirable mediating effects.

  • Second, mediation analysis can be the basis for assessing (see STEP 3.6 – How can we assess the quality of mediations in design?) the quality of expected mediations. Making such assessments, to be sure, does not imply a shift back from 'accompanying' to 'assessing' technology; it should rather be seen as a fully-fledged part of 'technology accompaniment'.

  • Third, mediations can be explicitly designed (see STEP 3.7 – How can you design moral technology?) into a technology. In this case, we can speak of an explicit ‘moralisation’ of technology.

This article is from the free online course Philosophy of Technology and Design: Shaping the Relations Between Humans and Technologies.
