Congress and Copyright Office Focus on New AI Digital Image Laws
October 3rd, 2024
The US Congress and the US Copyright Office have been considering how the law should be adapted to address artificial intelligence (AI).
The US Copyright Office has issued a report (the “Report”) recommending a new federal law that would create a property right in a person’s digital replica, including their face and voice.
Some existing state and federal laws address these issues, but there are gaps, as the Report notes:
State laws are both inconsistent and insufficient in various respects. As described above, some states currently do not provide rights of publicity and privacy, while others only protect certain categories of individuals. Multiple states require a showing that the individual’s identity has commercial value. Not all states’ laws protect an individual’s voice; those that do may limit protection to distinct and well-known voices, to voices with commercial value, or to use of actual voices without consent (rather than a digital replica).
Also, some existing laws don’t address harms that can be inflicted by non-commercial uses, including deepfake pornography.
For example, as the New York Times recently reported, budding politician Sabrina Javellana, who won a seat on the city commission in Hallandale Beach, Florida, in 2018 at age 21, had her life derailed when Internet trolls used her face to create fake pornography, and there was nothing she could do about it.
The Copyright Office has identified the following critical elements of the proposed new law:
(1) the definition of “digital replica”;
(2) the persons protected;
(3) the term of protection;
(4) prohibited acts;
(5) secondary liability;
(6) licenses and assignments;
(7) accommodation of First Amendment concerns;
(8) remedies; and
(9) interaction with state laws.
The Report defines a “digital replica” as “a video, image, or audio recording that has been digitally created or manipulated to realistically but falsely depict an individual.”
Some state rights of publicity only protect those who can demonstrate that they are famous or that their identities have commercial value.
However,
The Office believes that the goal of enacting a federal digital replica law is to ensure that everyone has adequate protection and recommends that the law cover all individuals.
As for the term of protection,
Talent agency WME argued in favor of postmortem rights, stating that “[u]nauthorized deepfakes threaten to usurp estates’ valid interests in preserving and strengthening artists’ legacies through the legitimate use of AI” and may detract from the authenticity, credibility, and commercial value of an artist’s body of work.
According to the Copyright Office,
A federal digital replica right should prioritize the protection of the livelihoods of working artists, the dignity of living persons, and the security of the public from fraud and misinformation regarding current events. For these purposes, a postmortem term is not necessary.
At the same time, we recognize that there is a reasonable argument for allowing heirs to control the use of and benefit from a deceased individual’s persona that had commercial value at the time of death. If postmortem rights are provided in a new federal law, we would recommend an initial term shorter than twenty years, perhaps with the option of extending it if the persona continues to be commercially exploited.
Infringing acts would include “activities that involve dissemination to the public—in copyright terms, the acts of distribution, publication, public performance, display, or making available” because “this is the type of conduct likely to cause harm to the individual whose image or voice is being replicated.”
According to the Copyright Office,
The new right should not sweep too broadly. … [S]tate rights of publicity have been interpreted to cover a broad range of imitations or evocations, including catchphrases or caricatures. But the conduct that now demands federal attention—such as voice cloning in music and the creation of a video or image that appears to depict a real person—involves replicas that do not merely evoke an individual but are difficult to distinguish from reality. We recommend that federal law target replicas that convincingly appear to be the actual individual being replicated.
On the same day the Report was released, Senators Chris Coons (D-DE), Marsha Blackburn (R-TN), Amy Klobuchar (D-MN), and Thom Tillis (R-NC) officially introduced the “Nurture Originals, Foster Art, and Keep Entertainment Safe Act of 2024” (NO FAKES Act).
A discussion draft of the bill was released in 2023 to “protect the voice and visual likenesses of individuals from unfair use through generative artificial intelligence (GAI).”
GAI includes tools like ChatGPT, which can generate text, images, music, and other content that, if created by humans, would be considered “works of authorship” protected by copyright law.
(As we’ve discussed in previous blogs, works “created” by GAI can’t be protected under US copyright or patent law.)
The NO FAKES Act would create a “descendible and licensable property right that continues for 70 years after the individual’s death, even if it is not exploited during their lifetime.”
According to the summary of the discussion draft of the Act in the Report,
Licensing of the right is valid only if the individual is represented by counsel; the agreement is in writing; or the license is governed by a collective bargaining agreement. The draft bill imposes liability for producing and disseminating a digital replica without consent. It conditions liability on “knowledge that the digital replica was not authorized by the applicable individual or rights holder.” The draft includes a list of categorical exclusions from liability, including the use of digital replicas in news, public affairs, or sports broadcasts; in documentary, historical, or biographical works; for comment, criticism, scholarship, satire, or parody; and where the use is de minimis or incidental. Potential remedies include statutory or actual damages, whichever is greater; punitive damages; and attorney’s fees. The bill categorizes the law as an intellectual property law for the purposes of Section 230 of the Communications Decency Act.
Later parts of the Report are expected to address topics such as “the copyrightability of works created using generative AI, training of AI models on copyrighted works, licensing considerations, and allocation of any potential liability.”