Generative AI and copyright law: What’s the future for IP?

In a guidance document recently released by the U.S. Copyright Office, the agency attempts to clarify its stance on AI-generated works and their eligibility for copyright protection.

The guidance emphasizes the importance of human authorship and outlines how the office evaluates works containing AI-generated content to determine whether the AI contributions are the result of “mechanical reproduction” or an author’s “own original mental conception.”

The Copyright Office will not register works whose traditional elements of authorship are produced solely by a machine, such as when an AI technology receives a prompt from a human and generates complex written, visual or musical works in response. According to the Office, in these cases, the AI technology, rather than the human user, determines the expressive elements of the work, making the generated material ineligible for copyright protection.

However, a work containing AI-generated material may still be eligible for copyright protection if it also contains sufficient human authorship. Examples include a human selecting or arranging AI-generated content in a creative way or an artist modifying AI-generated material to the extent that the modifications meet the standard for copyright protection. In these cases, copyright protection only applies to the human-authored aspects of the work.

The guidance also outlines the responsibilities of copyright applicants to disclose the use of AI-generated content in their works, providing instructions on submitting applications for works containing AI-generated material and advising on correcting a previously submitted or pending application. The Copyright Office emphasizes the need for accurate information regarding AI-generated content in submitted works and the potential consequences of failing to provide such information.

In light of the Office’s guidance, AI companies are updating their policies. OpenAI’s Terms of Use grant users “all right, title and interest in and to Output,” which it defines as content “generated and returned by the Services based on the [user] Input.” However, OpenAI restricts its users from “represent[ing] that output from the Services was human-generated when it is not,” suggesting that ChatGPT’s users must comply with the Copyright Office’s requirement of honest disclosure of AI use.

AI-generated content presents a second concern: What if AI bots infringe on third parties’ intellectual property rights? With this novel technology comes the risk that AI systems may inadvertently infringe on the exclusive intellectual property rights of others, exposing both the rights holders and the tools’ users to harm.

Recently, two lawsuits have been filed against AI image-generator companies, highlighting the ongoing debate surrounding AI-generated works and the rights of the artists whose works are used to train the AI bots.

One of the cases involves stock photo provider Getty Images suing London-based artificial intelligence company Stability AI Inc.—currently valued at $1 billion. Getty Images accuses Stability AI of misusing over 12 million of its photos without permission to train its Stable Diffusion AI image-generation system.

Getty Images argues that its photos are particularly valuable for AI training due to their high quality, variety of subject matter and detailed metadata. Stable Diffusion was trained on 5 billion image-text pairs from datasets prepared by non-party LAION, a German entity working in conjunction with and sponsored by Stability AI. According to the lawsuit, Stability AI provided LAION with both funding and significant computing resources to produce its datasets as part of Stability AI’s infringing scheme. LAION allegedly created the datasets of image-text pairs used by Stability AI by scraping links to billions of pieces of content from various websites, including Getty Images’ websites.

Getty further claims that it has licensed millions of suitable digital assets to other leading technology innovators for AI-related purposes. Getty now seeks redress under the Copyright Act of 1976, the Lanham Act and Delaware trademark and unfair competition laws.

In the other case, a group of artists filed a class action lawsuit against Stability AI, Midjourney and DeviantArt, alleging direct and vicarious copyright infringement, DMCA violations, right of publicity violations and other legal breaches. The lawsuit similarly alleges that the defendants’ AI image products, which are based on Stable Diffusion, were trained on billions of copyrighted images from the LAION-5B dataset without consent or compensation to the artists. The plaintiffs argue that these AI-generated images compete with the original images in the marketplace and threaten the viability of “artist” as a career path.

The complaint contends that the AI image generators create “derivative works,” which are based entirely on the training images and infringe on the artists’ rights. The lawsuit alleges that Stability AI, Midjourney and DeviantArt are misappropriating the works of artists without consent, credit or compensation. Moreover, it argues that Stability AI’s commercial, for-profit use weighs against a finding of fair use, particularly under the fourth statutory factor: the effect of the use on the market for the original works. The plaintiffs assert that these companies should pay licensing fees like other commercial entities that use artwork to create products. Consequently, the artists seek redress for this infringement and an injunction to prevent future harms.

Both lawsuits contend that AI image generators unfairly exploit artists by using their work without permission, credit or compensation. Creators of AI art tools generally argue that training their software on copyrighted data is covered by the fair use doctrine in the United States. However, these cases have yet to be decided, and no court has determined whether fair use protects the training of AI software on copyrighted works. Stability AI has responded by stating that it takes these matters seriously and that anyone who believes its technology does not constitute fair use misunderstands both the law and the technology.

While the future of AI and its legal ramifications remains uncertain, competing interests are emerging among authors, AI companies and the general public. It is crucial to continue monitoring both technological and policy developments so we can refine our legal frameworks to accommodate the rapidly evolving landscape of artificial intelligence and to balance the interests of those it threatens against those of the many it benefits.