The idea of moralizing technology is not a self-evident one. How desirable is it, in fact, to influence users of technologies, sometimes even behind their backs? Isn’t this a form of coercion that is at odds with human freedom, and doesn’t it somehow replace democracy with technocracy? These worries are quite understandable and need to be taken very seriously, but on the other hand the phenomenon of technological mediation implies that technologies always have an influence on human beings. Speed cameras are an example here. There are websites that show how people destroy such cameras, because they are angered by the way the cameras limit their freedom.
But these people do not seem to realize that their very desire to drive so fast is itself fully mediated by their cars, which can easily drive much faster than is allowed, and by roads whose highway design allows very high speeds. So defending autonomy does not seem to be the best angle from which to deal responsibly with technological mediation and the moralization of technology. How, then, can we resolve this tension between moralization on the one hand and freedom on the other? One solution that has been developed to deal with this tension is called libertarian paternalism.
Thaler and Sunstein propose to design technologies that nudge people towards desirable behavior, but always in such a way that the influence is visible and that there is an opt-out. When you make double-sided copying the default setting of a copier, for instance, there always needs to be room to choose single-sided copies. This possibility of an opt-out, though, is not always there. Take the example of ultrasound again. The mere presence of that technology has suddenly given expecting parents the responsibility to deal with the possibility of getting information about the health condition of the fetus. Not making a scan is now an active decision, too. There simply is no opt-out.
So technological mediation is not something external to humans that can simply be ignored. Technologies are deeply interwoven with our perceptions and practices and, therefore, also with normative frameworks and normative decisions. But this absence of an opt-out does not imply that we need to give up on autonomy. Not at all. In fact, it gives us more forms of responsibility rather than fewer, because we can now also take responsibility for the design of technologies and the influence they have on us. It is helpful here to distinguish autonomy from freedom. We are not autonomous in the sense of being able to keep all influences out, because we are always mediated.
But we can develop a free relation to mediations by learning to read them and to reflect critically on them. Designing mediations, then, is to take responsibility for the inevitable mediating roles of technologies, just as good use and implementation of technology is a way to deal with mediation responsibly. Taking this responsibility is a complicated thing to do. First of all, we always need to ask ourselves if and how the influence to be designed into a technology can be ethically justified. Where do the goals of the designers come from? A democratic basis is needed here in order to avoid technocracy.
A second consideration is the impact that mediations have on people’s ability to make moral decisions in the first place. Doesn’t the moralization of technology make people morally lazy? How can we preserve the possibility of taking responsibility at all? This is the central requirement when moralizing technology: it needs to happen in such a way that people keep developing a critical attitude towards technologies and their mediating roles.
The ethics of designing morality: can we just design morality?
In this video we introduce you to the idea that we can start “moralising things”. Here you can read more about the different approaches to doing so.
In their book Nudge (Thaler and Sunstein 2008), Richard Thaler and Cass Sunstein make a case for designing our material surroundings in such a way that it influences us in a positive sense without taking control away from us. A nudge is a tiny push, a small stimulus that guides people’s behaviour in a certain direction. Our material world is full of such nudges, Thaler and Sunstein claim, varying from photocopying machines with a default setting of single-sided copies to urinals with a built-in image of a fly to seduce men to aim for it. Thaler and Sunstein propose that we design these nudges in an optimal manner, so that we can guide our own behaviour in directions that are widely considered beneficial.
The central idea in their approach is that human decisions are to a considerable extent organised and pre-structured by our material surroundings. When we make choices, two systems are at work in our brains, which Thaler and Sunstein call an ‘automatic system’ and a ‘reflective system’. Most of our decisions are made automatically, without explicit reflection. But for some decisions, we really have to stop and think: they require reflection and critical distance.
To a significant degree, our automatic system is organised by our material surroundings. To use one of Thaler and Sunstein’s examples: when fried snacks are within reaching distance in a company’s canteen and the salads are hidden behind refrigerator doors, it is very likely that many people will choose the less healthy food. The layout of canteens gives nudges in a certain direction. If we want to take responsibility for such situations, we must learn to think critically about nudges. If we can design them better, we in fact design our automatic system in a more desirable way. Thaler and Sunstein, therefore, call such design activities ‘choice architecture’: the design of choice situations. We need to rewrite the default settings of our material world.
But these activities of choice architecture should never close down the reflective system. For Thaler and Sunstein, it is extremely important that nudges always remain open to reflection and discussion, and can move from the automatic to the reflective system. This is why they call their approach ‘libertarian paternalism’. It is paternalistic because it explicitly exposes people to nudges in a direction that is considered desirable. But it is libertarian as well, because these nudges can always be ignored or undone, in all freedom. Just as everyone is currently free to use both sides of the paper when copying, even though the standard setting is one side, no one should be forced to eat a salad and pass up the croquettes in a ‘re-nudged’ cafeteria.
Thaler and Sunstein’s way out of the dilemma between influencing behaviour and respecting autonomy, therefore, is the “opt-out”: by drawing on our reflective system, we should always be able to move away from the nudges. Every act of paternalism is compensated by the explicit possibility of taking a libertarian stance towards it.
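The logic of a nudge with an opt-out can be made concrete in a small sketch. The model below is purely illustrative (the `CopyJob` class and its fields are not from the sources discussed here): the nudge lives in the default value, while the user always retains an explicit way to override it.

```python
from dataclasses import dataclass


@dataclass
class CopyJob:
    """A copy job whose default setting nudges towards saving paper."""
    pages: int
    double_sided: bool = True  # the nudge: the desirable default

    def sheets_needed(self) -> int:
        # The opt-out: users can always pass double_sided=False.
        if self.double_sided:
            return -(-self.pages // 2)  # ceiling division: 2 pages per sheet
        return self.pages


# Nudged default: 5 pages fit on 3 sheets.
nudged = CopyJob(pages=5)
# Opting out remains a one-keyword choice: the 'libertarian' side.
opted_out = CopyJob(pages=5, double_sided=False)
print(nudged.sheets_needed(), opted_out.sheets_needed())
```

The paternalism sits only in the default; nothing in the code blocks the other option, which is what keeps the arrangement ‘libertarian’ in Thaler and Sunstein’s sense.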
Value Sensitive Design
Another approach to ‘moralising technology’ is the method of Value-Sensitive Design (VSD). This design method, which was developed by Batya Friedman and others (Friedman et al. 2002), aims to account explicitly for human values throughout the design process. In the VSD approach, the primary focus of design activities is not technological functionality, but the moral values that need to be supported by the technology-in-design. The method has been used, for instance, to design a web browser that requires informed consent before saving a ‘cookie’ (a small file which contains personal information about the person surfing the internet). Starting from the values of privacy and autonomy, this design process resulted in a web browser that is not only functional for surfing the internet, but that also respects some important values that are often threatened by other web browsers.
Value-Sensitive Design uses an iterative methodology that integrates conceptual, empirical, and technical investigations. At the conceptual level, the values that are to be implemented are carefully analysed in all their facets. For the web browser mentioned, the researchers analysed what the elements of “informed” and “consent” entail, such as the adequate “disclosure” of the information needed, “comprehension” of this information, and “voluntariness” and “competence” in people’s “agreement” with what they consent to, implying the “clear opportunity to accept or decline”, the actual freedom to do so, and the “capabilities needed to give informed consent” (Friedman et al. 2002, 4).
The empirical level, subsequently, concerns “the human context in which the technical artifact is situated” (ibid., 3). Investigations here concern, for instance, how stakeholders apprehend different values, how they prioritise competing values, and how much impact these values have on their own behaviour. In the case of the privacy-sensitive web browser, this step was conducted only after work done at the third, technological, level. At the technological level, investigations concern the specific “value suitabilities that follow from properties of the technology” (ibid., 3). The central idea here, just as in mediation theory, is that technologies support certain activities and values, while discouraging others. In the VSD method, technical investigations can concern both the ‘value impacts’ of existing technologies, and the explicit design of technologies that support specific values.
For the privacy-sensitive web browser, technical investigations concerned the development of the technological properties of web browsers in relation to their impact on privacy. What are the default settings for cookie use? How much information is disclosed to users about the benefits and disadvantages of cookies, and about the specific information that cookies make available about their surfing behaviour? On the basis of this analysis, a redesign of the (Mozilla) web browser was made. This browser introduced a peripheral awareness of cookies; information about individual cookies and cookies in general; and user management of cookies. Empirical investigations were subsequently directed at the ways users appreciated this redesign, in order to adapt it in the most desirable direction.
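The conceptual elements of informed consent identified above (disclosure, comprehension, voluntary agreement, and a clear opportunity to decline) can be sketched as a consent gate placed in front of cookie storage. This is a minimal illustration only, not the actual Mozilla redesign; all names (`ConsentRequest`, `store_cookie`) are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class ConsentRequest:
    """A request to store a cookie, paired with its disclosure text."""
    cookie_name: str
    disclosure: str  # what the cookie stores and why (the 'disclosure' element)

    def is_informed(self, user_confirmed_reading: bool) -> bool:
        # 'Informed' requires both adequate disclosure and comprehension.
        return bool(self.disclosure) and user_confirmed_reading


def store_cookie(jar: dict, request: ConsentRequest,
                 user_confirmed_reading: bool, user_agrees: bool) -> bool:
    """Store a cookie only with informed, voluntary consent.

    Declining must be a real option: if consent is absent, no cookie
    is stored and nothing else changes (the 'clear opportunity to
    accept or decline').
    """
    if request.is_informed(user_confirmed_reading) and user_agrees:
        jar[request.cookie_name] = "session-data"
        return True
    return False
```

Note that the value work is in the control flow, not in any single feature: storage is conditional on disclosure, comprehension, and agreement together, mirroring how VSD translates a conceptual analysis of a value into technical properties.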
Excerpt from P.P. Verbeek (2013). ‘Technology Design as Experimental Ethics’. In: S. van den Burg and Tsj. Swierstra, Ethics on the Laboratory Floor. Basingstoke: Palgrave Macmillan, pp. 83-100. ISBN 9781137002921