Media vs. AI: Copyright Battles Escalate as More Papers Sue

AI Copyright

There have been several recent developments in intellectual property disputes over the use of generative artificial intelligence (GAI or GenAI).

In April, eight major US newspapers filed suit in federal court in the Southern District of New York, accusing OpenAI and Microsoft of copyright infringement for using their copyright-protected articles to train the GAI tool ChatGPT.

The newspapers involved in the suit are the New York Daily News, Chicago Tribune, Orlando Sentinel, South Florida Sun Sentinel, San Jose Mercury News, Denver Post, Orange County Register, and St. Paul Pioneer Press.

The New York Times previously filed suit in the same district against the same defendants in late 2023.

Microsoft and OpenAI moved to dismiss parts of the Times lawsuit, contending that the Times hadn’t shown actual harm and that the large language models (LLMs) that power GAI chatbots hadn’t replaced the market for news articles.

Other publications, such as the Financial Times, Axel Springer (which owns Business Insider and Politico), The Associated Press, and Le Monde, have made deals to license their content to GAI tech companies.

According to the new 294-page complaint, which includes many exhibits showing samples of the alleged infringement,

This lawsuit arises from Defendants purloining millions of the Publishers’ copyrighted articles without permission and without payment to fuel the commercialization of their generative artificial intelligence (“GenAI”) products, including ChatGPT and Copilot. Although OpenAI purported at one time to be a non-profit organization, its recent $90 billion valuation underscores how that is no longer the case. ChatGPT, along with Microsoft Copilot (formerly known as Bing Chat) has also added hundreds of billions of dollars to Microsoft’s market value. Defendants have created those GenAI products in violation of the law by using important journalism created by the Publishers’ newspapers without any compensation.

As the New York Times explained,

The complaint said the chatbots regularly surfaced the entire text of articles behind subscription paywalls for users and often did not prominently link back to the source. This, it said, reduced the need for readers to pay subscriptions to support local newspapers and deprived the publishers of revenue both from subscriptions and from licensing their content elsewhere.

Also, the Times noted,

A recent investigation by The New York Times found that numerous tech companies, in their push to keep pace, had ignored policies and debated skirting copyright law in an effort to obtain as much data as possible to train chatbots.

The new lawsuit also claims that GAI chatbots falsely stated that the plaintiffs’ publications engaged in inaccurate or misleading reporting, “tarnishing the newspapers’ reputations and spreading dangerous information.”

For example, said the Times,

One example included ChatGPT’s response to a query about which infant lounger The Chicago Tribune recommended. ChatGPT, according to the complaint, responded that The Tribune recommended the Boppy Newborn Lounger, a product that was recalled after it was linked to infant deaths and that the newspaper had never recommended.

According to the complaint, a GAI chatbot also claimed falsely that the Denver Post had published research indicating that smoking could potentially cure asthma.

In other GenAI legal news, a California district court judge said that he was inclined to let a copyright infringement case against Stability AI, Midjourney, and other GAI companies go forward.

According to Reuters,

U.S. District Judge William Orrick said on Tuesday that the ten artists behind the lawsuit had plausibly argued that Stability, Midjourney, DeviantArt and Runway AI copied and stored their work on company servers and could be liable for using it without permission.

Three of the artists in the case had sued the GAI companies last January over the use of their copyrighted art to train AI models.

In the original complaint, the plaintiffs asserted the following claims against all three sets of defendants:

(1) Direct Copyright Infringement, 17 U.S.C. § 106;

(2) Vicarious Copyright Infringement, 17 U.S.C. § 106;

(3) Violation of the Digital Millennium Copyright Act, 17 U.S.C. §§ 1201-1205 (“DMCA”);

(4) Violation of the Right to Publicity, Cal. Civil Code § 3344;

(5) Violation of the Common Law Right of Publicity;

(6) Unfair Competition, Cal. Bus. & Prof. Code § 17200; and

(7) Declaratory Relief.

The plaintiffs also asserted a breach of contract claim against DeviantArt only.

Judge Orrick dismissed many of the claims but allowed the plaintiffs to refile them, which they did in an amended complaint in November.

The plaintiffs contend that Stability’s Stable Diffusion model, which all the defendants use, unlawfully contains “compressed copies” of their copyrighted artwork.

The defendants have argued that their use of copyrighted material for AI training is allowed under the copyright doctrine of “fair use.” Judge Orrick’s ruling didn’t address that issue, which is central to the legality of AI LLMs.

Judge Orrick said that he was inclined to deny Midjourney’s motion to dismiss the plaintiffs’ false endorsement and trade dress claims. He was also inclined to grant DeviantArt’s motion to dismiss the plaintiffs’ claims for express breach of contract and for breach of the implied covenant of good faith and fair dealing.

He also said he was inclined to grant the defendants’ motion to dismiss all of the plaintiffs’ claims under the DMCA.

Section 1202(b) of the DMCA provides that:

No person shall, without the authority of the copyright owner or the law—

    1. intentionally remove or alter any copyright management information;
    2. distribute or import for distribution any copyright management information knowing that the copyright management information has been removed or altered without authority of the copyright owner . . . or
    3. distribute, import for distribution . . . works [or] copies of works . . . knowing that copyright management information has been removed or altered without authority of the copyright owner . . . knowing or having reasonable grounds to know that it will induce, enable, facilitate or conceal an infringement of any right under this title.

17 U.S.C. § 1202(b).

Stability moved to dismiss the DMCA claim, asserting that the plaintiffs had failed to allege that Stability removed any “copyright management information” (CMI) from any particular work of theirs.

Categories: Copyright