
Responsible AI and responsible use of AI

A short introduction to responsible AI in a clinical setting: what is responsible AI, how is it built, and how can it be used responsibly?

With the arrival of AI in healthcare, its influence on responsibility and accountability should be considered. As established before, AI will not only perform on par with healthcare professionals but will, in some tasks, make better decisions. Still, it is not flawless. What happens when an AI system makes an error? Who should be held responsible?

AI performance depends on the medical data set used for training. For instance, a program trained with high-quality data (e.g. high-definition mammograms) in a high-quality setting (e.g. conditions close to those of training) may fail when provided with lower-quality data (e.g. low-resolution mammograms) or when it is placed in a suboptimal clinical environment (e.g. old medical equipment). Moreover, training environments can be biased or unrepresentative of the actual context in which the AI is implemented. As a result, even well-trained AI is not error-proof: it can misdiagnose patients, leading to erroneous decisions and unintended harm.
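This mismatch between training and deployment conditions is known as dataset shift (or distribution shift). The sketch below, which is not part of the course material, illustrates it with scikit-learn's digits dataset standing in for clinical images: a classifier trained on full-resolution images loses accuracy when the same images are degraded to simulate a lower-quality acquisition. The dataset, model, and degradation factor are illustrative assumptions.

    # Minimal sketch of dataset shift: a model trained on high-quality
    # images degrades on lower-resolution inputs. The digits dataset is
    # an illustrative stand-in for clinical images such as mammograms.
    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    digits = load_digits()  # 8x8 grayscale images, flattened to 64 features
    X_train, X_test, y_train, y_test = train_test_split(
        digits.data, digits.target, test_size=0.3, random_state=0)

    model = LogisticRegression(max_iter=5000).fit(X_train, y_train)

    def degrade(X, factor=2):
        """Simulate lower acquisition quality by block-averaging pixels."""
        imgs = X.reshape(-1, 8, 8)
        blocks = imgs.reshape(-1, 8 // factor, factor, 8 // factor, factor)
        coarse = blocks.mean(axis=(2, 4))
        # Upsample back to 8x8 so the feature count matches the model.
        restored = np.repeat(np.repeat(coarse, factor, axis=1), factor, axis=2)
        return restored.reshape(-1, 64)

    print("in-distribution accuracy:", model.score(X_test, y_test))
    print("degraded-input accuracy: ", model.score(degrade(X_test), y_test))

The exact numbers will vary, but accuracy on the degraded inputs is typically noticeably lower, even though the model itself is unchanged.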

Who is responsible?

One tough question is who should be held responsible in such situations: the clinician who uses the AI as a decision-support tool, or the company that developed the software? Although there is no simple answer, the question sheds light on the matter of liability and therefore on the concepts of responsible AI and responsible use of AI in healthcare.

Responsible AI

Responsible AI in healthcare is a framework rather than an end-product. It conceptualizes how health sector stakeholders can mitigate risks and challenges associated with AI by meeting several criteria. The three key pillars in the design of responsible AI are Accountability, Responsibility, and Transparency (A.R.T.):

  1. Accountability. Trust in AI devices leads professionals and patients to build convictions, and based on those convictions, healthcare providers and patients choose one course of action over another. For this reason, accountable AI is required to explain, contextualize, and rank its decisions or predictions according to moral and ethical considerations.
  2. Responsibility. This principle pertains to the entire chain of actors, from developers to manufacturers, suppliers, users, and the AI program itself. Each actor must be liable for their decisions and must openly acknowledge the system's limits by disclosing errors and unexpected results. More specifically, healthcare professionals are responsible for their decision-making when it comes to patient care. As such, they are responsible for their interpretation of AI predictions and, by extension, for the resulting course of action. For this reason, it is of utmost importance that they receive adequate education and training about AI technology, such as that provided by AIProhealth.
  3. Transparency. Regulators and users demand transparency from responsible AI. The provenance and dynamics of input data must be transparent to ensure that the data are collected, generated, and managed equitably, reducing bias. AI decision-making mechanisms also need to be transparent and accessible for inspection, testing, and course correction throughout their lifecycle (see the sketch after this list).
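One concrete way to support the transparency pillar is to record every prediction together with its provenance, so decisions can be inspected and corrected later. The following is a minimal sketch assuming a JSON Lines audit log; the record fields, model name, and file path are hypothetical examples, not prescribed by the A.R.T. framework.

    # Minimal transparency sketch: each prediction is stored with its
    # provenance and model version so it can be audited later. All
    # field names and values here are hypothetical examples.
    import json
    from dataclasses import dataclass, asdict
    from datetime import datetime, timezone

    @dataclass
    class PredictionRecord:
        model_version: str   # which model produced the output
        data_source: str     # provenance of the input data
        input_id: str        # reference to the exam, not the raw data
        prediction: str
        confidence: float
        timestamp: str

    def log_prediction(record, path="audit_log.jsonl"):
        """Append one auditable record per prediction (JSON Lines)."""
        with open(path, "a") as f:
            f.write(json.dumps(asdict(record)) + "\n")

    log_prediction(PredictionRecord(
        model_version="mammo-classifier-1.4.2",
        data_source="site_A/PACS",
        input_id="exam-0042",
        prediction="benign",
        confidence=0.91,
        timestamp=datetime.now(timezone.utc).isoformat(),
    ))

Storing a reference to the exam rather than the raw image keeps the log auditable without duplicating sensitive patient data.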

Upholding these principles requires the commitment of all stakeholders and puts human values and social good at the center of AI systems.

Using AI responsibly

Implementing responsible AI in healthcare also entails the responsible use of AI by stakeholders. In March 2021, the American Medical Association reported seven key takeaways with respect to the responsible use of AI in healthcare:

  1. Promote population-representative data with accessibility, standardization, and quality. Rather than following a one-size-fits-all policy, AI needs to be trained to consider the individual’s needs. AI must be designed to perform accurately for all populations and comply with standard requirements to ensure quality. AI data must always be accessible and transferable.
  2. Prioritize ethical, equitable and inclusive medical AI while addressing explicit and implicit bias. AI systems must be evaluated on their ability to address existing discrimination and AI-induced bias that could aggravate existing inequity (a minimal example of such an evaluation follows this list). After evaluation, experts may decide to validate or invalidate the system, or to deploy it only in limited contexts.
  3. Contextualize the dialogue of transparency and trust, which means accepting differential needs. AI developers, suppliers, users, and regulators must consider that AI applications and their transparency need fine-tuning depending on end-users’ needs and on societal, environmental, and technical circumstances. Although it is necessary to fully disclose the data used for AI training, algorithmic transparency on its own can be of little practical use. Clear guidelines for detailing data, performance, and algorithmic transparency must be established.
  4. Focus on augmented intelligence rather than autonomous AI agents in the near term. Autonomous AI faces significant technical and regulatory challenges that will not be resolved in the near future, and focusing on independent AI agents would hinder the deployment of AI in healthcare. For the time being, it is better to support the workflow of healthcare professionals with augmented intelligence.
  5. Establish and deploy appropriate training and educational programs. All end-users (healthcare professionals but also patients) need guidance on handling clinical AI. Ultimately, the use of AI for patient care will depend on patients’ trust and consent, and to provide informed consent, a patient needs a certain understanding of medical AI.
  6. Implement a clear and informed information technology (IT) governance strategy. Before implementing AI, healthcare systems must be ready and equipped with a competent and robust IT governance strategy. Inadequate preparation can result in failed AI integration, breaches of data protection, and other harms.
  7. Balance innovation with safety through regulation and legislation to promote trust. Throughout its life cycle, clinical AI must be assessed for efficacy and safety in relation to its clinical outcomes. Such appraisal must be tightly controlled and enforced to maintain trust.
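As a concrete illustration of takeaway 2, the sketch below audits a model’s accuracy per patient subgroup on synthetic data. The subgroup labels, the simulated error rates, and the 5% acceptable-gap threshold are all illustrative assumptions; the AMA report does not prescribe a specific method or threshold.

    # Illustrative subgroup audit: compare model accuracy across patient
    # groups to surface potential bias. All data here are synthetic and
    # the acceptable-gap threshold is a hypothetical choice.
    import numpy as np

    def subgroup_accuracy(y_true, y_pred, groups):
        """Return accuracy per subgroup label."""
        return {g: float((y_true[groups == g] == y_pred[groups == g]).mean())
                for g in np.unique(groups)}

    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 2, size=1000)
    groups = rng.choice(["group_a", "group_b"], size=1000, p=[0.8, 0.2])
    # Simulate a model that is less reliable on the under-represented group.
    flip = rng.random(1000) < np.where(groups == "group_b", 0.30, 0.05)
    y_pred = np.where(flip, 1 - y_true, y_true)

    acc = subgroup_accuracy(y_true, y_pred, groups)
    gap = max(acc.values()) - min(acc.values())
    print(acc)
    if gap > 0.05:  # hypothetical acceptable-gap threshold
        print(f"accuracy gap {gap:.2f} exceeds threshold; review before deployment")

Running an audit like this before and after deployment makes the explicit and implicit bias the takeaway warns about measurable rather than anecdotal.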

With these recommendations for responsible AI, numerous practical and ethical challenges surface. To what extent should AI support healthcare professionals and patients? And how should healthcare professionals and patients interact with AI? These topics will be discussed in the following activities.

© AIProHealth Project
This article is from the free online course “How Artificial Intelligence Can Support Healthcare”, created by FutureLearn.
