
European Union – Artificial Intelligence Act (proposal)

The EU Artificial Intelligence Act may become the first comprehensive regulation of artificial intelligence. Ching-Fu Lin discusses it in this article.

The proposed EU Artificial Intelligence Act seeks to regulate AI systems in the European Union to ensure their safety, respect for fundamental rights, and alignment with Union values. The following are the key points of the Act.

You can find the full text of the proposed EU Artificial Intelligence Act (2021) in:

“Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts”

Objectives of the Act

  • Ensure AI systems are safe and respect existing laws.
  • Provide legal certainty to stimulate AI investment and innovation.
  • Enhance governance and enforcement of AI-related laws.
  • Develop a unified market for lawful and trustworthy AI applications to avoid market fragmentation.

Risk-Based Regulatory Approach

AI systems are categorized based on their potential risks:
  1. Prohibited Systems: AI systems that can harm people or violate principles of human dignity, transparency, and non-discrimination. E.g., systems using subliminal techniques, exploiting vulnerabilities of specific groups, conducting social scoring, and real-time remote biometric identification for law enforcement.
  2. High-Risk Systems: Systems that could cause societal or individual harm if they malfunction. E.g., biometric identification, critical infrastructure management, and AI in education, employment, public services, law enforcement, justice, and democratic processes.
  3. Systems with Transparency Obligations: AI systems must clearly inform humans that they are interacting with a machine, disclose the use of emotion recognition, and label ‘deep fake’ content.
  4. Systems with Minimal/No Risk: Providers are encouraged to follow voluntary codes of conduct to ensure ethical practices.

Key Requirements for High-Risk AI Systems

  • Establish a risk management system to identify and mitigate risks.
  • Use high-quality, relevant, and reliable data with proper data governance.
  • Provide technical documentation describing the AI system.
  • Maintain detailed records of the system’s development, testing, and operation.
  • Ensure transparency and provide clear information to users.
  • Have human oversight to monitor and intervene in the AI’s decision-making.
  • Ensure accuracy, robustness, and cybersecurity of the AI system.

The Legislative Process and Updates (last updated: Sept. 22, 2023)

In June 2023, the European Parliament adopted its negotiating position. EU legislators are now in discussions to finalise the new law. Significant changes to the Commission’s initial proposal are being considered, such as refining the definition of AI systems, expanding the list of prohibited AI systems, and setting requirements for general-purpose AI and generative AI models like ChatGPT.

As of Sept. 22, 2023, the process is at the ‘trilogue’ stage. Trilogues are informal three-way negotiations on legislative proposals involving representatives of the Parliament, the Council, and the Commission. They aim to reach a provisional agreement on a text that both the Council and the Parliament find acceptable.

For more information on the legislative process of the Artificial Intelligence Act, you can refer to this document prepared by the European Parliamentary Research Service.

© Ching-Fu Lin and NTHU, proofread by ChatGPT
This article is from the free online course “AI Ethics, Law, and Policy”, created by FutureLearn.
