Fan fiction writers are trolling AIs with Omegaverse stories

Fan fiction writers know that their work is being used to train generative AIs, and they’re not happy about it. Now, “Omegaverse” writers are participating in a week-long writing marathon called Knot in my Name to encourage the fan community to publish as much of their fan fiction as possible. It’s a long-shot attempt to mess with AI generators, but why not try?

As generative AI becomes more mainstream, numerous communities of writers and artists, from striking TV writers to record labels, have spoken out against the technology’s appropriation of original creative works. Fanfic writers had their own moment of reckoning when Sudowrite, an AI-powered fiction writing tool, was found to have been trained on Omegaverse fan fiction.

“Can we get [Amazon] fixating on knotting toys? Can we make slickmats a [Google] keyword?” asked fanfic writer MotherKat, who organized the event. “No idea.”

If phrases like “knotting toys” and “slickmats” sound incomprehensible, that’s because they aren’t real things outside of Omegaverse fic. And that’s also why it’s so obvious to fic writers when AIs have been trained on their work.

The Omegaverse is a subculture within a subculture. Writer Rose Eveleth describes it best as “an act of collective sexual worldbuilding.” The Omegaverse, which spans multiple fandoms, imagines a sexual dynamic in which society is divided into Alphas, Betas and Omegas (another way to refer to the Omegaverse is Alpha/Beta/Omega, or A/B/O). Alphas are more dominant, Omegas are more submissive, and Betas are neutral; it’s a variation on supposed wolf pack dynamics. The Omegaverse is most visible on platforms like Tumblr, where users reacted to Governor Ron DeSantis’ presidential bid by making memes about how he wishes he was an Alpha, but he is, in fact, an Omega.

There’s an entire lexicon of Omegaverse-specific language, which would never appear organically outside of fandom spaces. Some generators, like OpenAI’s ChatGPT, are trained on variations of datasets like Common Crawl, which scrape the web to build massive archives of the internet. With more than 3 billion webpages in the Common Crawl dataset alone, it’s inevitable that creative works get swept up in the archives, unbeknownst to writers and artists. So when platforms like ChatGPT-powered Sudowrite start waxing poetic about the power dynamics in Alpha-Omega relationships, it’s not hard to guess what the AI was trained on.
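
For the curious, Common Crawl exposes a public CDX index, so it’s possible to check whether pages from a given site were captured in a particular crawl. Here is a minimal Python sketch; the crawl ID and the archiveofourown.org URL pattern are illustrative assumptions, not a claim about any specific model’s training data.

```python
import requests

# Minimal sketch: query Common Crawl's public CDX index to see whether pages
# from a site appear in a given crawl. The crawl ID below is an assumption;
# current crawl IDs are listed at https://index.commoncrawl.org/
CRAWL_ID = "CC-MAIN-2023-23"
index_url = f"https://index.commoncrawl.org/{CRAWL_ID}-index"

resp = requests.get(
    index_url,
    params={
        "url": "archiveofourown.org/works/*",  # illustrative URL pattern
        "output": "json",
        "limit": "5",
    },
    timeout=30,
)

# The index returns one JSON record per captured page (URL, timestamp,
# WARC location), or an error message if nothing matched.
for line in resp.text.strip().splitlines():
    print(line)
```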

While fan fiction stories are derivative works themselves, these hobbyist writers aren’t trying to profit off of their legal creative outlet. This makes it all the more insulting to fan writers and IP holders alike, who watch as their work becomes fodder for synthetic texts. According to MotherKat, morale has been low among fanfic writers, especially as readers boast about getting ChatGPT to write hyper-specific fanfic for them.

“I have a few fandoms I’m involved in, and everyone was really low. No one was writing, a lot of people were taking their work down,” MotherKat told TechCrunch. “Most of us aren’t aspiring writers. This is our hobby, the space we go to escape the misery of our jobs being automated away.”

When she’s not hanging out in online fandom spaces, MotherKat is a professional voice actor; she says she’s feeling the threat of AI on all fronts, both at work and in her free time. That’s why she wanted to create “a movement designed to make scraping our content for sale as unpalatable as possible.”

“We have found that a lot of people had no idea that they were training the machine by putting in incomplete stories, so in a way we have helped with that,” MotherKat said.

Two days into the Knot in my Name campaign, fandom writers have published 64 stories across 51 different fandoms. This amounts to roughly 450,000 words of fan fiction; that’s about the size of Stephen King’s “It,” or “Moby Dick,” “To Kill a Mockingbird” and “Jane Eyre” combined.

“We can’t fix what they have already done,” MotherKat told TechCrunch. “But if we can have this irreverent little moment, maybe we can make scraping in the future less palatable to the venture capital guys if they know we, well, we got slick all over it.”
