
How to use the mediation theory with AI

Using real-world examples, we explain how mediation theory can illuminate the ways AI systems help to shape relations between humans and the world.

In this article, we apply a mediation perspective to Artificial Intelligence. Using medical diagnostics and digital monitoring at the workplace as examples, we explain how AI systems help to shape the relations between humans and the world, in this case: medical doctors and patients, and employers and employees. Analysing these mediating roles, both at the individual and the social level, can be a fruitful starting point for raising ethical questions.

To analyse and anticipate the implications of AI systems for human beings and society, the approach of technological mediation can be helpful. AI systems mediate the relationship between human beings and their environment: when they are used, they help to shape people’s perceptions and interpretations, as well as their actions and the social practices they engage in.

Where to start

In order to make an inventory of these mediations, it is important to first analyse the relations between humans and their environment, as they take shape via the technology. In the case of an expert system advising a medical doctor in cancer diagnostics, for instance, the system mediates the relations between doctors on the one hand and patients, medical images, and colleagues on the other. The mediations take place within these relations.

Diagnostic AI systems offer analyses of medical images to doctors, on the basis of acquired ‘expertise’ in pattern recognition. This gives doctors, first of all, a new relation to the images: the AI systems often see things better than the doctors themselves, which can either open up a new learning process for doctors, or pose the challenge of taking responsibility for a diagnosis that came partly from an expert system rather than from human expertise. This also affects the relation between doctors and patients. It might raise issues of trust, for instance: how can patients trust the doctor’s expertise in working with AI systems, and the expertise of the systems themselves? The system can also affect the relation of care: potentially, it gives doctors more room to focus on the patient as a whole, and not only on the potential cancer diagnosis. At a societal level, these mediations might result in new demands on the training of medical doctors, new frameworks to evaluate the quality of their work, and new ways of dealing with issues of liability.

AI for digital monitoring

Another example is digital monitoring at the workplace; see the report “Valued at work: limits to digital monitoring at the workplace”. Organisations are using ever more data, and new types of data, beyond working hours and beyond the workplace. Monitoring is no longer limited to what a supervisor can see or read in typical HR reports, such as employment records or the amount of leave taken. AI makes it possible to track and interpret more intimate data, like someone’s facial expressions or word choice. These data are then analysed using algorithms and AI to generate new insights: to assess performance and measure productivity, for instance, but also to evaluate suitability, potential for future success, wellbeing, and commitment. This is all information that workers have not shared voluntarily, and possibly do not even know about themselves. And they are given feedback in new and subtle ways, via gaming techniques, nudges, or notifications. Moreover, monitoring is no longer limited to the interaction between the organisation and the worker; monitoring technology may also be in the hands of a third party.

From a mediation point of view, these systems affect the relations between employees and their employers, but also between employees and their own performance, and the relations among employees. Between employers and employees, these systems affect evaluation mechanisms, the privacy of employees, and the power that employers have over employees. At the same time, digital monitoring mediates how employees experience the organisation they work for, the commitment they feel, and the trust that is put in them. Moreover, such systems offer employees new ways to monitor, interpret, and modify their own performance. At a societal level, this might give rise to new norms regarding productivity, values regarding meaningful work, and workplace autonomy and privacy – just to mention a few potential mediations.

Read more

  • A talk about Artificial Intelligence in the military, connecting mediation theory and the ethics of AI.
  • In the “Valued at work” report, the Rathenau Instituut warns against excessive monitoring in the workplace using data, algorithms and AI. The institute asks employers, employees and the government to set limits on digital tools.
© University of Twente
This article is from the free online course Philosophy of Technology and Design: Shaping the Relations Between Humans and Technologies, created by FutureLearn.
