European Union Prepares First AI Law


In April 2021, the European Commission published draft legislation regulating artificial intelligence (AI).

According to the European Parliament,

As part of its digital strategy, the EU wants to regulate artificial intelligence (AI) to ensure better conditions for the development and use of this innovative technology. AI can create many benefits, such as better healthcare, safer and cleaner transport, more efficient manufacturing, and cheaper and more sustainable energy.

The Parliament adds,

Parliament’s priority is ensuring that AI systems used in the EU are safe, transparent, traceable, non-discriminatory, and environmentally friendly. AI systems should be overseen by people rather than by automation to prevent harmful outcomes.

The draft defines AI as

Software developed with machine-learning, logic- and knowledge-based, or statistical approaches that “can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.”

A “Provider” is an entity, such as a company, that offers an AI system.

A “User” is a person or entity that uses such a system.

Providers and Users outside the EU are subject to EU law if the AI system is used in the EU.

Notably, the draft excludes AI systems developed or used exclusively for military purposes, including weapons.

The EU regulatory framework for AI analyzes and classifies AI systems according to the risks they pose to users. Riskier systems will be subject to more regulation.

High-risk systems are those that “negatively affect safety or fundamental rights.” These fall into two categories:

  1. AI systems used in products falling under the EU’s product safety legislation. This includes toys, aviation, cars, medical devices, and lifts (elevators).
  2. AI systems falling into eight specific areas that will have to be registered in an EU database:
    1. Biometric identification and categorization of natural persons
    2. Management and operation of critical infrastructure
    3. Education and vocational training
    4. Employment, worker management, and access to self-employment
    5. Access to and enjoyment of essential private services and public services and benefits
    6. Law enforcement
    7. Migration, asylum, and border control management
    8. Assistance in legal interpretation and application of the law.

The draft legislation prohibits AI systems that are considered a threat to people. These include:

  • Deploying subliminal techniques, or exploiting vulnerabilities due to a person’s age or physical or mental disability, to distort the person’s behavior in a way that causes or is likely to cause physical or psychological harm (for example, voice-activated toys that encourage dangerous behavior in children);
  • “Social scoring” by public authorities based on social behavior, socioeconomic status, or characteristics leading to detrimental or unfavorable treatment of particular groups of people;
  • Real-time and remote biometric identification in public spaces for law enforcement purposes, unless necessary for a targeted crime search or prevention of substantial threats.

People must also be notified when they are interacting with an AI system if that is not otherwise apparent. However, this does not apply when the system is used to detect, prevent, investigate, or prosecute crimes.

The use of AI to create “deepfakes” must be disclosed. As Wikipedia explains, deepfakes

Are synthetic media that have been digitally manipulated to convincingly replace one person's likeness with another. Deepfakes are the manipulation of facial appearance through deep generative methods…

Deepfakes have garnered widespread attention for their potential use in creating child sexual abuse material, celebrity pornographic videos, revenge porn, fake news, hoaxes, bullying, and financial fraud.

The EU draft considers AI that can generate deepfakes only a “limited risk.”
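Taken together, the draft describes a risk-proportionate structure: uses deemed an unacceptable risk are banned outright, high-risk uses face registration and other obligations, limited-risk uses (such as deepfake generation) carry transparency duties, and everything else is treated as minimal risk. The sketch below is purely illustrative and not the Act’s wording; the tier labels, example use cases, and obligations shown are simplifications chosen for the example.

```python
# Illustrative sketch of the four-tier, risk-proportionate structure described
# above. The tier names, obligations, and example use cases are simplifications,
# not text from the draft legislation.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "registration, conformity, and oversight obligations"
    LIMITED = "transparency obligations (e.g., disclosure)"
    MINIMAL = "no additional obligations"

# Hypothetical mapping from example use cases to tiers, for illustration only.
EXAMPLE_CLASSIFICATION = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "AI in a medical device": RiskTier.HIGH,
    "deepfake generation": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_CLASSIFICATION.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```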

Amendments were adopted in June 2023, and some address generative AI (GAI) tools such as ChatGPT.

As Amendment 99 explains,

Foundation models are a recent development in which AI models are developed from algorithms designed to optimize for generality and versatility of output. Those models are often trained on a broad range of data sources and large amounts of data to accomplish a wide range of downstream tasks, including some for which they were not specifically developed and trained. The foundation model can be unimodal or multimodal, trained through various methods such as supervised learning or reinforced learning. AI systems with specific intended purpose or general-purpose AI systems can be an implementation of a foundation model, which means that each foundation model can be reused in countless downstream AI or general-purpose AI systems. These models hold growing importance to many downstream applications and systems.
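The reuse pattern the amendment describes, one pretrained foundation model serving many downstream systems, can be made concrete with a short sketch. The example below is illustrative only; it assumes the Hugging Face transformers library and the public distilbert-base-uncased checkpoint, neither of which is mentioned in the legislation.

```python
# Minimal sketch of foundation-model reuse: one pretrained model, two different
# downstream uses. Assumes the Hugging Face `transformers` library and the
# "distilbert-base-uncased" checkpoint purely for illustration.
from transformers import pipeline

BASE_MODEL = "distilbert-base-uncased"  # hypothetical choice of foundation model

# Downstream use 1: masked-word prediction, close to the model's pretraining task.
fill_mask = pipeline("fill-mask", model=BASE_MODEL)
print(fill_mask("The EU wants to regulate [MASK] intelligence.")[0]["token_str"])

# Downstream use 2: generic feature extraction, a task the model was not
# specifically trained for but can still serve as a component of.
embedder = pipeline("feature-extraction", model=BASE_MODEL)
vector = embedder("draft legislation on AI")[0][0]
print(len(vector))  # dimensionality of the reused model's representation
```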

That “broad range of data” can include art, text, music, and other works protected by copyright whose rights holders have not authorized their use for GAI training.

Amendment 102 notes,

As foundation models are a new and fast-evolving development in the field of artificial intelligence, it is appropriate for the Commission and the AI Office to monitor and periodically assess the legislative and governance framework of such models and, in particular of generative AI systems based on such models, which raise significant questions related to the generation of content in breach of Union law, copyright rules, and potential misuse. It should be clarified that this Regulation should be without prejudice to Union law on copyright and related rights, including Directives 2001/29/EC, 2004/48/EC, and (EU) 2019/790 of the European Parliament and of the Council.

Under Amendment 399, Providers of foundation models must “document and make publicly available a sufficiently detailed summary of the use of training data protected under copyright law.” However, it’s not clear how technically feasible this is.
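To illustrate what such a summary might involve in practice, here is a minimal, hypothetical sketch of a provider aggregating copyright-related metadata from a training-data manifest. The manifest format, field names, and level of detail are assumptions; the Act does not specify any of them.

```python
# Purely illustrative sketch of assembling a summary of copyrighted training
# data. The manifest schema ("source", "license", "copyrighted") is an
# assumption, not anything defined by the draft legislation.
import json
from collections import Counter
from typing import Iterable

def summarize_training_data(records: Iterable[dict]) -> dict:
    """Aggregate per-source and per-license counts of items flagged as copyrighted."""
    by_source = Counter()
    by_license = Counter()
    for rec in records:
        if rec.get("copyrighted", False):
            by_source[rec.get("source", "unknown")] += 1
            by_license[rec.get("license", "unknown")] += 1
    return {
        "copyrighted_items_by_source": dict(by_source),
        "copyrighted_items_by_license": dict(by_license),
        "total_copyrighted_items": sum(by_source.values()),
    }

# Hypothetical manifest entries describing two training documents.
manifest = [
    {"source": "news-crawl", "license": "all-rights-reserved", "copyrighted": True},
    {"source": "public-domain-books", "license": "public-domain", "copyrighted": False},
]
print(json.dumps(summarize_training_data(manifest), indent=2))
```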

The EU AI Act is not expected to resolve the following questions:

  • Whether the use of copyrighted material to train AI tools is copyright infringement
  • Whether the owners of copyrighted materials must be compensated if their copyrighted work is used to train AI
  • Whether AI-generated content can be protected by copyright law
  • Whether work generated via AI can infringe the copyrights of human-created works

The EU Commission, the EU Council, and the EU Parliament are negotiating a final text of the AI Act and hope to reach agreement by the end of 2023.
