
European approach to trustworthy AI

An explanation of the European Ethics Guidelines for Trustworthy AI and the seven key requirements they set out.
[Image: AI surrounded by EU stars. © European Commission]

In 2019, the Ethics Guidelines for Trustworthy Artificial Intelligence were presented by the High-Level Expert Group on AI. The guidelines identify three components of trustworthy AI and define seven key requirements that AI systems should meet in order to be considered trustworthy.

According to the guidelines, AI should be:

  1. Lawful. This means that all applicable laws and regulations should be respected.
  2. Ethical. In addition to legal requirements, ethical principles and values should also be respected.
  3. Robust. The system should be both technically robust (e.g., resilient to attacks) and socially robust (e.g., minimizing negative impact on society).

Creating such a trustworthy AI system is based on seven requirements:

  1. Human agency and oversight.

    a. Human agency and autonomy. The system should not confuse end users: it must be clear to them whether or not they are interacting with an AI system. The system should also not foster over-reliance or unduly influence human decision-making processes.

    b. Human oversight. The end user must be kept in the loop (complete control over the system's actions) or on the loop (the user has oversight, but the AI system can act without human approval). The end user must also receive training on how to exercise this oversight: how should they respond to undesirable adverse effects, and is there a stop button? A minimal sketch of such a control follows below.
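To make the oversight sub-point concrete, here is a minimal sketch of a human-on-the-loop control in Python, including a "stop button". All names (ai_suggest_dose, run_with_oversight) and the dosing logic are hypothetical illustrations, not part of the guidelines.

```python
# Minimal sketch of a human-on-the-loop control: the AI system proposes
# actions, but a human can inspect each one and halt the system entirely.

def ai_suggest_dose(patient: dict) -> float:
    """Stand-in for an AI model; returns a suggested medication dose (mg)."""
    return 0.5 * patient["weight_kg"] / 10  # placeholder logic, not real dosing

def run_with_oversight(patients: list[dict]) -> None:
    stopped = False
    for patient in patients:
        if stopped:
            break
        dose = ai_suggest_dose(patient)
        print(f"AI suggests {dose:.1f} mg for patient {patient['id']}")
        answer = input("Accept [a], override [o], or STOP the system [s]? ")
        if answer == "s":      # the 'stop button': human halts all automation
            stopped = True
        elif answer == "o":    # human stays in control of the final decision
            dose = float(input("Enter corrected dose (mg): "))
        print(f"Final dose recorded: {dose:.1f} mg")

if __name__ == "__main__":
    run_with_oversight([{"id": 1, "weight_kg": 70}, {"id": 2, "weight_kg": 80}])
```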

  2. Technical robustness and safety.

    a. Resilience to attack and security. Threats to the system should not lead to adversarial, critical, or damaging effects. The system should be certified for cybersecurity or comply with security standards, and measures should be in place to ensure its integrity, robustness, and security.

    b. General safety. Consider the risks and threats posed by the system. Is there fault tolerance (e.g., a duplicated or parallel system that can take over its tasks)?

    c. Accuracy. Low accuracy should not result in critical, adversarial, or damaging consequences. The underlying data has to be up to date, of high quality, complete, and representative, and end users should be made aware of the system's accuracy.

    d. Reliability, fallback plans, and reproducibility. Low reliability or reproducibility should not cause critical, adversarial, or damaging consequences; a fallback should be available when the system fails (see the sketch below).
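Sub-points (b) and (d) above can be illustrated with a small fallback sketch: if a primary model crashes or reports low confidence, a deterministic backup takes over. The function names, the triage labels, and the 0.8 threshold are assumptions for illustration only.

```python
# Minimal sketch of a fallback plan: route around a failing or unsure model.

CONFIDENCE_THRESHOLD = 0.8  # assumed cut-off for trusting the primary model

def primary_model(symptoms: list[str]) -> tuple[str, float]:
    """Stand-in for a trained classifier returning (label, confidence)."""
    if "chest pain" in symptoms:
        return "urgent", 0.95
    return "routine", 0.55  # low confidence on unfamiliar input

def rule_based_triage(symptoms: list[str]) -> str:
    """Deterministic backup that can always produce a safe answer."""
    red_flags = {"chest pain", "shortness of breath"}
    return "urgent" if red_flags & set(symptoms) else "refer to clinician"

def classify(symptoms: list[str]) -> str:
    try:
        label, confidence = primary_model(symptoms)
    except Exception:
        return rule_based_triage(symptoms)  # fault tolerance: model crashed
    if confidence < CONFIDENCE_THRESHOLD:
        return rule_based_triage(symptoms)  # fallback: model is unsure
    return label

print(classify(["chest pain"]))  # urgent (primary model, high confidence)
print(classify(["headache"]))    # refer to clinician (fallback path)
```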

  3. Privacy and data governance.

    a. Privacy. The impact of the system on privacy has to be considered. Mechanisms to flag issues related to privacy should be established.

    b. Data protection. Data governance mechanisms (such as compliance with the GDPR) must be ensured.
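One concrete privacy measure in the spirit of this requirement is pseudonymization before data leaves a clinical system. Below is a minimal Python sketch using keyed hashing (HMAC) and generalization; it illustrates a single GDPR-relevant technique under assumed field names, not full GDPR compliance.

```python
# Minimal sketch of data minimization and pseudonymization: a keyed hash
# replaces the direct identifier and the age is generalized into a band.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-securely-stored-key"  # illustrative placeholder

def pseudonymize(record: dict) -> dict:
    """Replace direct identifiers with a keyed hash; generalize quasi-identifiers."""
    token = hmac.new(SECRET_KEY, record["patient_id"].encode(), hashlib.sha256)
    return {
        "patient_token": token.hexdigest()[:16],       # stable pseudonym
        "age_band": f"{(record['age'] // 10) * 10}s",  # e.g., 47 -> "40s"
        "diagnosis": record["diagnosis"],
    }

print(pseudonymize({"patient_id": "NHS-12345", "age": 47, "diagnosis": "T2D"}))
```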

  4. Transparency.

    a. Traceability. Measures should be put in place to ensure the traceability of the system (e.g., assess the quality of the input data, trace which data were used for specific decisions, assess the quality of the output, and use adequate logging practices); a minimal logging sketch follows this requirement.

    b. Explainability. Decisions should be explained to end users, and users should be surveyed regularly to check that they understand those decisions.

    c. Communication. Users have to be aware that they are interacting with an AI system, and they should be informed of its capabilities, limitations, and risks.
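The traceability sub-point (a) can be made concrete with a small decision-logging sketch: each prediction is recorded with a hash of its input, a model version, and a timestamp, so individual decisions can later be traced and audited. The log format and the model_version identifier are assumptions, not a prescribed standard.

```python
# Minimal sketch of decision logging for traceability.
import hashlib
import json
from datetime import datetime, timezone

MODEL_VERSION = "triage-model-1.3.0"  # hypothetical identifier

def log_decision(features: dict, prediction: str,
                 logfile: str = "decisions.log") -> None:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": MODEL_VERSION,
        # Hash the input instead of storing raw patient data in the log.
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "prediction": prediction,
    }
    with open(logfile, "a") as f:
        f.write(json.dumps(entry) + "\n")

log_decision({"age": 47, "symptom": "chest pain"}, "urgent")
```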

  5. Diversity, non-discrimination, and fairness.

    a. Avoidance of unfair bias. Bias in the input data and in the algorithm design should be avoided, and diversity and representativeness have to be considered in the data. Mechanisms that flag issues related to bias, discrimination, or poor performance should be put in place (a minimal check is sketched after this requirement).

    b. Accessibility and universal design. The system has to accommodate the variety of preferences and abilities in society, including people with special needs and disabilities; Universal Design principles can be used to achieve this.

    c. Stakeholder participation. The widest possible range of stakeholders should be involved in the design and development of the system.
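The bias-flagging mechanisms of sub-point (a) can be sketched as a simple check that compares the rate of positive outcomes across groups and flags large gaps (a disparate-impact style ratio). The data and the 0.8 threshold are illustrative assumptions.

```python
# Minimal sketch of a bias check across demographic groups.
from collections import defaultdict

def selection_rates(records: list[tuple[str, int]]) -> dict[str, float]:
    """records: (group, outcome) pairs, where outcome 1 = positive decision."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def flag_bias(records: list[tuple[str, int]], threshold: float = 0.8) -> list[str]:
    """Flag groups whose positive-outcome rate falls far below the best group."""
    rates = selection_rates(records)
    best = max(rates.values())
    return [g for g, r in rates.items() if best > 0 and r / best < threshold]

data = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
print(selection_rates(data))  # {'A': 0.67, 'B': 0.33}
print(flag_bias(data))        # ['B'] -> investigate group B's low rate
```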

  6. Environmental and societal well-being.

    a. Environmental well-being. Environmental impact should be evaluated and reduced.

    b. Impact on work and skills. Workers should be informed and consulted before the system is introduced, and the impact of the system on human work should be understood. De-skilling of the workforce should be counteracted, and training opportunities should be provided for re-skilling and up-skilling.

    c. Impact on society at large or democracy. If there is a negative impact on society at large or democracy, the harm should be minimized.

  7. Accountability.

    a. Auditability. The auditability of the system (the traceability of the development process, the sourcing of training data, the logging of processes, etc.) should be facilitated, and the system has to be auditable by independent third parties.

    b. Risk management. External guidance is needed to foresee ethical concerns and to define accountability measures. Risk training could be organized, and third parties should be able to report potential vulnerabilities, risks, or biases.

The fifth requirement shows that bias should be avoided during development in order to obtain a diverse and fair AI system. The next steps will explain bias in more detail. The explainability principle of requirement four, as well as compliance, will be discussed later this week.

© AIProHealth Project
This article is from the free online course How Artificial Intelligence Can Support Healthcare.
