Architect of EU Copyright Law Says AI “Loophole” is “Irresponsible”
June 6th, 2025
The Guardian has reported that Axel Voss, a German center-right member of the European Parliament who played a key role in writing the EU’s 2019 copyright directive, says that EU copyright law was not designed to deal with generative AI models: systems such as ChatGPT that can generate text, images, or music from a simple text prompt.
Voss calls the legal gap “irresponsible.”
As The Guardian reported,
The intervention came as 15 cultural organisations wrote to the European Commission this week, warning that draft rules to implement the AI Act were “taking several steps backwards” on copyright, while one writer spoke of a “devastating” loophole.
Voss was quoted as saying, “What I do not understand is that we are supporting big tech instead of protecting European creative ideas and content.”
The Guardian noted that
The EU’s AI Act, which came into force last year, was already in the works when ChatGPT, an AI chatbot that can generate essays, jokes, and job applications, burst into public consciousness in late 2022, becoming the fastest-growing consumer application in history.
ChatGPT was developed by OpenAI, which also created Dall-E to generate images.
As The Guardian and this blog have previously reported,
The rapid rise of generative AI systems, which are based on vast troves of books, newspaper articles, images, and songs, has caused alarm among authors, newspapers, and musicians, triggering a slew of lawsuits about alleged breaches of copyright.
The EU Commission announced recently that it was withdrawing proposed rules on artificial intelligence liability, one day after US vice-president JD Vance criticized what he called “excessive regulation” of the AI sector.
The draft EU directive on AI liability was withdrawn because there was “no foreseeable agreement” between EU lawmakers in the Council of Ministers and the European Parliament.
However, some EU lawmakers rejected the Commission’s decision to scrap the proposed AI liability rules.
Axel Voss said that the Commission’s move was a “strategic mistake.”
IAPP reported that Voss asked,
Why the sudden U-turn? The answer likely lies in pressure from industry lobbyists who view any liability rules as an existential threat to their business models. Big Tech firms are terrified of a legal landscape where they could be held accountable for the harms their AI systems cause. Instead of standing up to them, the Commission has caved, throwing European businesses and consumers under the bus in the process.
According to the European Commission,
The Commission proposed a legal framework for artificial intelligence which aims to address the risks generated by specific uses of AI through a set of rules focusing on the respect of fundamental rights and safety.
At the same time, the Commission intends to make sure that persons harmed by artificial intelligence systems enjoy the same level of protection as persons harmed by other technologies.
The liability directive was proposed alongside the AI Act as part of the EU’s broader package of laws intended to regulate AI.
We wrote about this back in October of 2023, noting that
The EU regulatory framework for AI analyzes and classifies AI systems according to the risks they pose to users. Riskier systems will be subject to more regulation.
High-risk systems are those that “negatively affect safety or fundamental rights.”
“High risk” systems include those involving toys, aviation, cars, medical devices, and lifts (elevators).
AI applications that are considered a threat to people are prohibited. These include AI tools for:
- Distorting a person’s behavior that causes or is likely to cause physical or psychological harm by deploying subliminal techniques or by exploiting vulnerabilities due to the person’s age or physical or mental disability, such as voice-activated toys that encourage dangerous behavior in children;
- “Social scoring” by public authorities based on social behavior, socioeconomic status, or characteristics leading to detrimental or unfavorable treatment of particular groups of people;
- Real-time and remote biometric identification in public spaces for law enforcement purposes, unless necessary for a targeted crime search or prevention of substantial threats.
The EU AI Act defines AI as a “machine-based system designed to operate with varying levels of autonomy.”
As The Guardian notes,
The bill matters outside the EU because Brussels is an influential tech regulator, as shown by GDPR’s impact on the management of people’s data. The AI Act could do the same.
The UK is also grappling with how to regulate AI. As Screen Daily reports,
The government is proposing a new exemption in copyright law that would allow tech companies to train their AI models on creative works, including films, TV shows, and audio recordings, without permission, unless creators actively opt out, akin to the European Union’s approach.
Many UK artists, including Elton John, Paul McCartney, and Annie Lennox, have lobbied against proposed plans to make it easier for AI companies to use copyright-protected works.
More than 1,000 artists released a “silent album” in February as a protest.
As Y! Entertainment reported back in 2023,
According to the Writers Guild of Great Britain, 65 percent of respondents to a recent survey sent to its members said they thought the increased use of artificial intelligence would reduce their income from writing. Meanwhile, 61 percent said they were worried that AI could replace their jobs.
The position of the Writers Guild of Great Britain is that
Many AI developers are not transparent about what data has been used to train their tools, meaning writers cannot tell if their work has been used.
… AI developers should only use writers’ work if they’ve been given express permission to do so. 80% of respondents to our survey ‘somewhat agreed’ or ‘strongly agreed’ that, “AI developers and systems should seek permission from writers before using their material.”
As Time Magazine reported, a recent poll shows that the British public wants tougher AI rules than the UK government is proposing:
The new poll shows that 87% of Brits would back a law requiring AI developers to prove their systems are safe before release, with 60% in favor of outlawing the development of “smarter-than-human” AI models. Just 9%, meanwhile, said they trust tech CEOs to act in the public interest when discussing AI regulation.