
The Challenges

An article presenting some of the challenges of human-machine collaborations.
© Luleå University of Technology

Data science plays a role in many parts of our lives and societies, and it will likely continue to do so. It is used in transportation, healthcare, banking, security, law, and many other domains. Data-enabled algorithms allow the automation of cognitive, discretionary, and decision-making tasks that only humans used to perform. Indeed, it seems that decision-making for complex tasks is no longer exclusive to us humans.

Decisions are often taken by algorithmic machines, by humans, or by both. Some foresee a replacement scenario in which human jobs are taken over by intelligent machines, while others emphasize interrelation and augmentation, where humans and algorithms interact to perform tasks together. The automation of tasks formerly done by human workers may in some cases replace the human altogether. More often, there is partial automation of specific tasks, resulting in a division of labor between humans and technology. In these cases we can also see novel tasks emerge that ensure a continued need for the human worker. Such tasks bring several challenges for which we need to find new solutions.

Human-machine collaboration

The collaboration between humans and machines is less straightforward than it might first appear. Such collaboration needs a frame of reference. Decision-makers will face obstacles when trying to infer information from the output of data science algorithms. For example, both the decision-maker and the algorithm might be conceived as experts of sorts. Yet they have been trained differently, and they reason in very distinct ways. For the decision-maker, this poses a problem once we consider cases of peer disagreement. Here, ‘peer disagreement’ describes a case in which two (equally) competent peers, with respect to a certain domain-related activity, disagree about a certain proposition.

Now, when trying to make a well-informed decision, how much weight should the decision-maker assign to the algorithm? Bluntly put, should the decision-maker be required to call a more senior decision-maker for an additional opinion? Or would the senior decision-maker be justifiably annoyed, given that the algorithm had already provided a clear diagnosis?
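One simple, admittedly idealized way to think about the weighting question is to blend the two estimates in proportion to each party's historical accuracy. The sketch below is purely illustrative: the function name and all accuracy figures are assumptions, not something prescribed by the article.

```python
# Hypothetical sketch: blending a human and an algorithmic probability
# estimate, weighted by each party's historical accuracy.
# All names and figures here are assumptions for illustration only.

def combine_estimates(human_p, algo_p, human_accuracy, algo_accuracy):
    """Weight two probability estimates by track-record accuracy."""
    total = human_accuracy + algo_accuracy
    w_human = human_accuracy / total
    w_algo = algo_accuracy / total
    return w_human * human_p + w_algo * algo_p

# Suppose the human estimates a 30% chance, the algorithm 80%,
# and the algorithm has been right more often in the past:
blended = combine_estimates(0.30, 0.80, human_accuracy=0.70, algo_accuracy=0.90)
```

Note that this treats both accuracies as directly comparable, which in practice they rarely are; it is a toy model of the weighting question, not an answer to it.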

Addressing human-machine collaboration is therefore key.


It is evident that involving analytics via data science algorithms has the potential to improve the accuracy of decision-making, but it comes at the cost of opacity when trying to assess the reliability of a given diagnosis.

Defensive decision-making

We argue that the involvement of AI and machine learning algorithms challenges the epistemic authority of decision-makers. This promotes patterns of defensive decision-making, which can harm various stakeholders: the organization, decision-makers themselves, and society at large. In defensive decision-making, decision-makers anchor their decisions to the choices made by the algorithms in order to avoid the embarrassment of being wrong.


Defensive decision-making can be explained by the accountability challenge. Decision-makers are held accountable for their decisions: they must, when needed, be able to justify them. Furthermore, if a decision-maker causes harm to someone through a serious error of choice, the decision-maker may be blamed for acting irresponsibly. To mitigate those risks, one should decide according to the best evidence available.

Now, let us return to the case of peer disagreement between the decision-maker and the machine-learning algorithm. The decision-maker knows that their own choice and the algorithm’s diverge. Yet the decision-maker is unable to extract an explanation from the algorithm of why it decided as it did. At best, the decision-maker might have some higher-order evidence about the algorithm’s general degree of accuracy.

If we assume that the relevant general degree of accuracy is reasonably high, it is easy to see why it can be tempting for the decision-maker to defer to the algorithm. For one, deferring to the algorithm provides the decision-maker with a normative justification for their own decision. Then again, if the decision-maker sticks to their initial proposition and their diagnosis turns out to be wrong, they might be seen as acting irresponsibly, since they ignored the evidence provided by the algorithm. A further side effect is that the decision-maker might become biased toward interpreting the evidence in a way that confirms the algorithm’s choices. Thus, the interplay of machine learning algorithms and decision-makers risks eroding the decision-maker’s skills over time, as well as threatening the value of accountability.
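The pull toward deferral can be made concrete with a back-of-the-envelope calculation. Assuming, purely for illustration, that the human's and the algorithm's errors are independent, we can ask: when the two disagree, how likely is it that the algorithm is the one who is right? The function and figures below are hypothetical.

```python
# Hypothetical sketch: probability that the algorithm is right when it
# disagrees with the human, given each party's standalone accuracy and
# (a strong simplifying assumption) independent errors.

def p_algo_right_given_disagreement(algo_acc, human_acc):
    # A disagreement occurs when exactly one party is wrong.
    algo_right = algo_acc * (1 - human_acc)   # algorithm right, human wrong
    human_right = human_acc * (1 - algo_acc)  # human right, algorithm wrong
    return algo_right / (algo_right + human_right)

# With a 90%-accurate algorithm and an 80%-accurate human, in a
# disagreement the algorithm is the more likely party to be correct:
p = p_algo_right_given_disagreement(0.90, 0.80)
```

Even a modest accuracy gap tilts the odds toward the algorithm in any single disagreement, which is precisely why deferring feels safe; the article's point is that systematically acting on that tilt erodes the human's skills and muddies accountability.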

Algorithmic aversion

Not all decision-makers will be able to incorporate the output of algorithms into their decisions appropriately. Some will therefore under- or over-utilize algorithmically generated insights in data-driven decision situations, or “augmented decisions”. When decision-makers cannot discriminate between situations where the output of analytics should and should not inform a decision, they can become averse to the use of algorithms. Such aversion arises for different reasons, including cognitive incompatibility, loss of decision autonomy, and a lack of incentives.

This article is from the free online course Data Science for Climate Change, offered on FutureLearn.
