OpenAI claims New York Times copyright lawsuit is without merit

In late December, The New York Times sued OpenAI and its close collaborator and investor, Microsoft, for allegedly violating copyright law by training generative AI models on the Times’ content. Today, OpenAI gave a public response, claiming — unsurprisingly — that the Times’ lawsuit is meritless.

In a letter published this afternoon on OpenAI’s official blog, the company reiterates its view that training AI models using publicly available data from the web — including articles like the Times’ — is fair use. In other words, in creating generative AI systems like GPT-4 and DALL-E 3, which “learn” from billions of examples of artwork, ebooks, essays and more to generate human-like text and images, OpenAI believes that it isn’t required to license or otherwise pay for the examples — even if it makes money from those models.

“We view this principle as fair to creators, necessary for innovators and critical for U.S. competitiveness,” OpenAI writes.

In its letter, OpenAI also addresses regurgitation, the phenomenon where generative AI models spit out training data verbatim (or near-verbatim) when prompted in a certain way — for example, generating a photo that’s identical to one taken by a famous photographer. OpenAI makes the case that regurgitation is less likely to occur with training data from a single source (e.g., The New York Times) and places the onus on users to “act responsibly” and avoid intentionally prompting its models to regurgitate.

“Interestingly, the regurgitations The New York Times [cites in its lawsuit] appear to be from years-old articles that have proliferated on multiple third-party websites,” OpenAI writes. “It seems they intentionally manipulated prompts, often including lengthy excerpts of articles, in order to get our model to regurgitate. Even when using such prompts, our models don’t typically behave the way The New York Times insinuates, which suggests they either instructed the model to regurgitate or cherry-picked their examples from many attempts.”

OpenAI’s response comes as the copyright debate around generative AI reaches a fever pitch.

In a piece published this week in IEEE Spectrum, noted AI critic Gary Marcus and Reid Southen, a visual effects artist, show how AI systems, including DALL-E 3, regurgitate data even when not specifically prompted to do so — making OpenAI’s claims to the contrary less credible. Marcus and Southen, in fact, reference The New York Times lawsuit in their piece, noting that the Times was able to elicit “plagiaristic” responses from OpenAI’s models simply by prompting them with the first few words of a Times story.

The Times is only the latest copyright holder to sue OpenAI over what it believes is a clear violation of IP laws.

Actress Sarah Silverman joined a pair of lawsuits in July that accuse Meta and OpenAI of having “ingested” her memoir to train their AI models. In a separate suit, thousands of novelists, including Jonathan Franzen and John Grisham, claim OpenAI sourced their work as training data without their permission or knowledge. And several programmers have an ongoing case against Microsoft, OpenAI and GitHub over Copilot, an AI-powered code-generating tool, which the plaintiffs say was developed using their IP-protected code.

Some news outlets, rather than fight generative AI vendors in court, have chosen to ink licensing agreements with them. The Associated Press struck a deal in July with OpenAI, and Axel Springer, the German publisher that owns Politico and Business Insider, did likewise in December. OpenAI also has deals in place with the American Journalism Project and NYU.

But the payouts tend to be quite small. According to The Information, OpenAI — whose annualized revenue reportedly hovers around $1.6 billion — offers publishers between $1 million and $5 million a year to license copyrighted news articles for training its AI models.

Until recently, The New York Times, too, had been in conversations with OpenAI to establish a “high-value” partnership involving “real-time display” of its brand in ChatGPT, OpenAI’s AI-powered chatbot. But discussions broke down in mid-December, according to OpenAI.

For what it’s worth, the public might be on publishers’ side. According to a recent poll from the independent think tank The AI Policy Institute, when informed about the details of The New York Times lawsuit against OpenAI, 59% of respondents agreed that AI companies shouldn’t be allowed to use publisher content to train models, while 70% said that the companies should compensate outlets if they want to use copyrighted materials in model training.