FlowGPT is the Wild West of GenAI apps

A few months ago, OpenAI launched the GPT Store, a marketplace where people can create and list AI-powered chatbots customized to perform a number of tasks (e.g., coding, answering trivia questions). The GPT Store is powerful, to be sure. But using it requires committing to OpenAI’s models and no others, which some chatbot creators — and users — are opposed to doing.

So startups are creating alternatives.

One, FlowGPT, aims to be a sort of “app store” for generative AI models like Google’s Gemini, Anthropic’s Claude, Meta’s Llama 2 and OpenAI’s DALL-E 3, as well as front-end experiences for those models (think text fields and prompt suggestions). Through FlowGPT, users can build their own GenAI-powered apps and make them publicly available, earning tips for their contributions.

Jay Dang, a UC Berkeley computer science dropout, and Lifan Wang, a former engineering manager at Amazon, co-founded FlowGPT last year out of a shared desire to create a platform where people could quickly spin up — and share — GenAI apps.

“There’s still a learning curve for users to use AI,” Dang told TechCrunch in an email interview. “FlowGPT is making the bar lower in each iteration, making it more accessible.”

Dang describes FlowGPT as an “ecosystem” for GenAI-powered apps — a collection of infrastructure and creator tools tied to a marketplace and community of GenAI app users. Users get a feed of apps and app collections recommended to them based on trending categories (e.g., “Creative,” “Programming,” “Game,” “Academic”), while creators get options for customizing the behavior — and appearance — of GenAI apps.

Users interact with GenAI apps on FlowGPT through a chat window that’s not dissimilar to ChatGPT, with options to type in prompts, thumbs-up (or thumbs-down) apps, share links to conversations or tip individual app creators. Each app has a creator-provided description along with the date it was created, how many times it’s been used and the model the creator recommends to power it.

I say “model the creator recommends” because FlowGPT apps are, at their core, really just prompts — prompts that prime models to respond in certain ways. For example, the “Scared Girl from Horror Movie” app instructs ChatGPT to narrate — as the title hints — a horror story involving one scared girl. “TitleTuner” prompts ChatGPT to optimize headlines so they rank better on search engines. And SchoolGPT leverages ChatGPT for step-by-step solutions to math, physics and chemistry problems.
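
FlowGPT hasn’t said publicly how these apps are implemented, but conceptually, a “prompt as app” is just a creator-written system prompt wrapped around a call to whichever model the user picks. Here’s a minimal, hypothetical sketch of the pattern (the app name, the prompt text and the choice of OpenAI’s chat completions API are illustrative assumptions on my part, not FlowGPT’s actual code):

```python
# Illustrative sketch only: FlowGPT's internals aren't public. This shows the
# general "prompt as app" pattern, i.e. a creator-authored system prompt
# steering a general-purpose chat model. The app name and prompt are made up.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The "app" is essentially this creator-written instruction block.
TITLE_TUNER_PROMPT = (
    "You are TitleTuner. Rewrite any headline the user sends so it is "
    "concise, keyword-rich and likely to rank well in search engines. "
    "Return three variants."
)

def run_app(user_message: str, model: str = "gpt-3.5-turbo") -> str:
    """Send the user's input to the recommended model, primed by the app's prompt."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": TITLE_TUNER_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(run_app("10 ways AI will change journalism"))
```

Swap the model string for another provider’s model and there’s no guarantee the same prompt behaves the same way, which is exactly the breakage described below.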

You’ll notice the heavy reliance on ChatGPT. Use FlowGPT long enough, and you’ll also notice that many of the prompts break when the model’s switched from the default.

Sometimes it’s a matter of the selected model not having the right capabilities. Other times, the prompt runs up against a model’s filters and safeguards.

On the subject of safeguards . . .

Some of FlowGPT’s most popular apps are essentially jailbreaks designed to circumvent models’ safety measures. There are multiple versions of DAN on the marketplace — “DAN” being a popular prompting method used to get models to respond to prompts unbounded by their usual rules. Elsewhere, there are apps like WormGPT, which purports to be able to code malware (and links to paid, dark web versions of the chatbot that do more), and dating simulators that run afoul of OpenAI’s rules against fostering romantic companionship.

Many of these apps could cause real harm, like therapy apps and apps that advertise themselves as authoritative health resources. GenAI models like ChatGPT are notoriously bad health advice givers; one study showed that an earlier version of ChatGPT rarely provided referrals to specific resources for help relating to suicide, addiction and sexual assault.

Any app on FlowGPT that offends — say, for giving instructions on how to generate deepfake nudes with an AI image generator (and there are several that do this) — can be reported to the platform’s community manager for review. And FlowGPT does offer a toggle for “sensitive content.”

But it’s clear from looking at the homepage that FlowGPT has a moderation problem. It’s the Wild West of GenAI apps — and the toggle’s ineffective to the point where I barely notice a difference in app selection with it switched on.

Dang swears up and down that FlowGPT is in fact an ethical and rule-abiding platform, with risk mitigation policies in place aimed at “ensur[ing] public safety.”

“We’re proactively engaging with leading experts in the field of AI ethics,” he said. “Our collaboration is focused on developing comprehensive strategies to minimize risks associated with AI deployment.”

Considering that this writer got a FlowGPT app to give instructions on selling drugs and robbing a bank, I’d say that the company has some work to do.

Investors seemingly feel otherwise.

This week, Goodwater announced that it led a $10 million “pre-Series A” round in FlowGPT with participation from existing backer DCM. Goodwater partner Coddy Johnson, speaking to TechCrunch via email, said that he sees FlowGPT “helping to lead the way” in GenAI by offering “the widest choice” and “the most flexibility and freedom” to both users and creators.

“We believe the biggest future for AI is in open ecosystems,” Johnson added. “FlowGPT [is allowing] creators to choose their models and collaborate with their communities.”

I’m not so sure that all the maintainers of the models FlowGPT’s tapping — particularly those who’ve pledged to make AI safety a top priority — will share in that enthusiasm.

Nevertheless, and in the absence of repercussions from said vendors (at least as of publication time), FlowGPT — which isn’t generating revenue yet — is laying the groundwork for expansion. The company is beta testing apps for Android and iOS that’ll bring a revamped FlowGPT experience to mobile, working on a revenue-sharing model for app creators and recruiting to grow its Berkeley-based 10-person team, Dang said.

“With millions of monthly users and a fast growth rate, we’ve already proven we are on the right track, and we believe it’s the time to accelerate the progress,” he continued. “We are setting a new standard for immersion in AI-driven environments, offering a world where creativity knows no bounds . . . [O]ur mission remains to cultivate a more open and creator-focused platform.”

We’ll see how far that gets it.