This week, OpenAI granted users of its image-generating AI system, DALL-E 2, the right to use their generations for commercial projects, like illustrations for children’s books and art for newsletters. The move makes sense, given OpenAI’s own commercial aims — the policy change coincided with the launch of the company’s paid plans for DALL-E 2. But it raises questions about the legal implications of AI like DALL-E 2, trained on public images around the web, and their potential to infringe on existing copyrights.
DALL-E 2 was trained on approximately 650 million image-text pairs scraped from the internet, learning from that dataset the relationships between images and the words used to describe them. But while OpenAI filtered the images for specific content (e.g. pornography and duplicates) and implemented additional filters at the API level, for example for prominent public figures, the company admits that the system can sometimes create works that include trademarked logos or characters:
“OpenAI will evaluate different approaches to handle potential copyright and trademark issues, which may include allowing such generations as part of ‘fair use’ or similar concepts, filtering specific types of content, and working directly with copyright [and] trademark owners on these issues,” the company wrote in an analysis published prior to DALL-E 2’s beta release on Wednesday.
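The dataset-level filtering described above, dropping duplicates and disallowed content before training, can be illustrated with a toy sketch. The exact-hash deduplication and caption keyword check here are hypothetical simplifications: OpenAI's actual pipeline relies on trained classifiers and near-duplicate detection, not substring matching.

```python
import hashlib

# Hypothetical stand-ins for the (ML-based) filters a real pipeline uses.
BLOCKED_CAPTION_TERMS = {"explicit", "gore"}  # placeholder keyword list

def filter_pairs(pairs):
    """Drop duplicate images and pairs whose captions contain blocked terms.

    `pairs` is an iterable of (image_bytes, caption) tuples. Production
    systems use perceptual hashing and trained classifiers rather than
    exact hashes and keywords; this only sketches the mechanism.
    """
    seen = set()
    kept = []
    for image_bytes, caption in pairs:
        digest = hashlib.sha256(image_bytes).hexdigest()
        if digest in seen:
            continue  # exact duplicate image
        if any(term in caption.lower() for term in BLOCKED_CAPTION_TERMS):
            continue  # caption flags disallowed content
        seen.add(digest)
        kept.append((image_bytes, caption))
    return kept
```

The same two-stage idea (dedupe, then content-filter) scales to hundreds of millions of pairs when the checks run as a streaming pass over the dataset.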
It’s not just a DALL-E 2 problem. As the AI community creates open source implementations of DALL-E 2 and its predecessor, DALL-E, both free and paid services are launching atop models trained on less-carefully filtered datasets. One, Pixelz.ai, which rolled out an image-generating app this week powered by a custom DALL-E model, makes it trivially easy to create photos showing various Pokémon and Disney characters from movies like Guardians of the Galaxy and Frozen.
When contacted for comment, the Pixelz.ai team told TechCrunch that they’ve filtered the model’s training data for profanity, hate speech and “illegal activities” and block users from requesting those types of images at generation time. The company also said that it plans to add a reporting feature that will allow people to submit images that violate the terms of service to a team of human moderators. But where it concerns intellectual property (IP), Pixelz.ai leaves it to users to exercise “responsibility” in using or distributing the images they generate — grey area or no.
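Generation-time blocking of the kind Pixelz.ai describes can be sketched very simply. The term list and substring matching below are hypothetical stand-ins (production services typically rely on trained classifiers), but they illustrate the mechanism: screen the prompt before any image is generated.

```python
# Hypothetical sketch of prompt screening before image generation.
# Real services use trained classifiers; this keyword list is a stand-in.
BLOCKED_PROMPT_TERMS = {"profanity", "hate speech", "illegal activity"}

def is_prompt_allowed(prompt: str) -> bool:
    """Return False if the prompt contains any blocked term (case-insensitive)."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED_PROMPT_TERMS)

def generate_image(prompt: str) -> str:
    """Stub generation endpoint: refuse blocked prompts, else 'generate'."""
    if not is_prompt_allowed(prompt):
        return "rejected"
    return f"image for: {prompt}"
```

Note that, as the Pixelz.ai team concedes, an open text input means keyword screens like this are easy to route around, which is why moderation reporting and human review remain part of the picture.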
“We discourage copyright infringement both in the dataset and our platform’s terms of service,” the team told TechCrunch. “That being said, we provide an open text input and people will always find creative ways to abuse a platform.”
Bradley J. Hulbert, a founding partner at law firm MBHB and an expert in IP law, believes that image-generating systems are problematic from a copyright perspective in several respects. He noted that artwork that’s “demonstrably derived” from a “protected work” — i.e. a copyrighted character — has generally been found by the courts to be infringing, even if additional elements were added. (Think an image of a Disney princess walking through a gritty New York neighborhood.) In order to be shielded from copyright claims, the work must be “transformative” — in other words, changed to such a degree that the IP isn’t recognizable.
“If a Disney princess is recognizable in an image generated by DALL-E 2, we can safely assume that The Walt Disney Co. will likely assert that the DALL-E 2 image is a derivative work and an infringement of its copyrights on the Disney princess likeness,” Hulbert told TechCrunch via email. “A substantial transformation is also a factor considered when determining whether a copy constitutes ‘fair use.’ But, again, to the extent a Disney princess is recognizable in a later work, assume that Disney will assert [the] later work is a copyright infringement.”
Of course, the battle between IP holders and alleged infringers is hardly new, and the internet has merely acted as an accelerant. In 2020, Warner Bros. Entertainment, which owns the film rights to the Harry Potter universe, had certain fan art removed from social media platforms including Instagram and Etsy. A year earlier, Disney and Lucasfilm petitioned Giphy to take down GIFs of “Baby Yoda.”
But image-generating AI threatens to vastly scale the problem by lowering the barrier to entry. The plights of large corporations aren’t likely to garner sympathy (nor should they), and their efforts to enforce IP often backfire in the court of public opinion. On the other hand, AI-generated artwork that infringes on, say, an independent artist’s characters could threaten a livelihood.
The other thorny legal issue around systems like DALL-E 2 pertains to the content of their training datasets. Did companies like OpenAI violate IP law by using copyrighted images and artwork to develop their systems? It’s a question that’s already been raised in the context of Copilot, the commercial code-generating tool developed jointly by OpenAI and GitHub. But unlike Copilot, which was trained on code that GitHub might have the right to use for the purpose under its terms of service (according to one legal analysis), systems like DALL-E 2 source images from countless public websites.
As Dave Gershgorn points out in a recent feature for The Verge, there isn’t a direct legal precedent in the U.S. that upholds publicly available training data as fair use.
One potentially relevant case involves a Lithuanian company called Planner 5D. In 2020, the firm sued Meta (then Facebook) for reportedly stealing thousands of files from Planner 5D’s software, which were made available through a partnership with Princeton to contestants of Meta’s 2019 Scene Understanding and Modeling challenge for computer vision researchers. Planner 5D claimed Princeton, Meta and Oculus, Meta’s VR-focused hardware and software division, could have benefited commercially from the training data that was taken from it.
The case isn’t scheduled to go to trial until March 2023. But last April, the U.S. district judge overseeing the case denied motions by then-Facebook and Princeton to dismiss Planner 5D’s allegations.
Unsurprisingly, rightsholders aren’t swayed by the fair use argument. A spokesperson for Getty Images told IEEE Spectrum in a recent article that there are “big questions” to be answered about “the rights to the imagery and the people, places, and objects within the imagery that [models like DALL-E 2] were trained on.” Association of Illustrators CEO Rachel Hill, who was also quoted in the piece, brought up the issue of compensation for images in training data.
Hulbert believes it’s unlikely a judge will see the copies of copyrighted works in training datasets as fair use — at least in the case of commercial systems like DALL-E 2. He doesn’t think it’s out of the question that IP holders could come after companies like OpenAI at some point and demand that they license the images used to train their systems.
“The copies … constitute infringement of the copyrights of the original authors. And infringers are liable to the copyright owners for damages,” he added. “[If] DALL-E (or DALL-E 2) and its partners make a copy of a protected work, and the copy was neither approved by the copyright owner nor fair use, the copying constitutes copyright infringement.”
Interestingly, the U.K. is exploring legislation that would remove the current requirement that text and data mining, the technique used to train systems like DALL-E 2, be limited to non-commercial purposes. While copyright holders could still demand payment under the proposed regime by putting their works behind a paywall, the change would make the U.K.’s policy one of the most liberal in the world.
The U.S. seems unlikely to follow suit, given the lobbying power of American IP holders; the issue will more likely play out in a future lawsuit. But time will tell.