
Using mediation theory to understand changing practices

Applying a mediation perspective to AI

In this article, we apply a mediation perspective to Artificial Intelligence. Using medical diagnostics and digital monitoring at the workplace as examples, we explain how AI systems help to shape the relations between humans and the world – in this case, between medical doctors and patients, and between employers and employees. Analysing these mediating roles, both at the individual and the social level, can be a fruitful starting point for raising ethical questions.
To analyse and anticipate the implications of AI systems for human beings and society, the approach of technological mediation, which was explained in week 2 of this course, can be helpful. AI systems mediate the relationship between human beings and their environment: when they are used, they help to shape people’s perceptions and interpretations, as well as their actions and the social practices they engage in.
In order to make an inventory of these mediations, it is important to first analyse the relations between humans and their environment, as they take shape via the technology. In the case of an expert system advising a medical doctor in cancer diagnostics, for instance, the system mediates the relations between doctors on the one hand and patients, medical images, and colleagues on the other. The mediations take place within these relations. Diagnostic AI systems offer analyses of medical images to doctors, on the basis of acquired ‘expertise’ in pattern recognition. First of all, this gives doctors a new relation to the images: the AI systems often see things better than the doctors themselves, which can either open up a new learning process for doctors, or confront them with the challenge of taking responsibility for a diagnosis that comes partly from an expert system rather than from human expertise. This also affects the relation between doctors and patients. It might raise issues of trust, for instance: how can patients trust a doctor’s expertise in working with AI systems, and the expertise of the systems themselves? The system can also affect the relation of care: potentially, it gives doctors more room to focus on the patient as a whole, rather than only on the potential cancer diagnosis. At a societal level, these mediations might result in new demands on the training of medical doctors, new frameworks for evaluating the quality of their work, and new ways of dealing with issues of liability.
Another example is digital monitoring at the workplace; see the report “Valued at work: limits to digital monitoring at the workplace”. Organisations are using ever more data, and new types of data, collected beyond working hours and beyond the workplace. Monitoring is no longer limited to what a supervisor can see or read in typical HR reports, such as employment records or the amount of leave taken. AI makes it possible to track and interpret more intimate data, like someone’s facial expressions or word choice. These data are then analysed using algorithms and AI to generate new insights, for instance to assess performance and measure productivity, but also to evaluate suitability, potential for future success, wellbeing, and commitment. This is all information that workers have not shared voluntarily, and possibly do not even know about themselves. Workers are also given feedback in new and subtle ways, via gaming techniques, nudges, or notifications. Moreover, monitoring is no longer limited to the interaction between the organisation and the worker; monitoring technology may also be in the hands of a third party.
From a mediation point of view, these systems affect the relations between employees and their employers, but also between employees and their own performance, and the relations among employees. Between employers and employees, these systems affect evaluation mechanisms, the privacy of employees, and the power that employers have over employees. At the same time, digital monitoring mediates how employees experience the organisation they work for, the commitment they feel, and the trust that is put in them. Moreover, such systems offer employees new ways to monitor, interpret, and modify their own performance. At a societal level, this might give rise to new norms regarding productivity, values regarding meaningful work, and workplace autonomy and privacy – just to mention a few potential mediations.

Read more

  • Link to a talk about Artificial Intelligence in the military, connecting mediation theory and the ethics of AI.
  • In the Valued at Work report, the Rathenau Instituut warns against excessive monitoring in the workplace using data, algorithms, and AI. The institute calls on employers, employees, and the government to set limits on digital tools.
© University of Twente