Available to test through a web interface and to integrate with existing apps and services via Hugging Face’s API, HuggingChat can handle many of the tasks ChatGPT can, like writing code, drafting emails and composing rap lyrics.
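For a sense of what that API integration looks like, here is a minimal sketch that sends a prompt to a hosted Open Assistant model through Hugging Face's standard Inference API. The model ID, payload parameters, and response shape below are assumptions based on Hugging Face's generic text-generation endpoint, not details confirmed in this article.

```python
import requests

# The model ID is an assumption for illustration; HuggingChat's exact
# backing model may differ.
API_URL = "https://api-inference.huggingface.co/models/OpenAssistant/oasst-sft-6-llama-30b-xor"


def build_payload(prompt: str, max_new_tokens: int = 200) -> dict:
    """Wrap a user prompt in the JSON body the Inference API expects."""
    return {
        "inputs": prompt,
        "parameters": {"max_new_tokens": max_new_tokens},
    }


def ask(prompt: str, api_token: str) -> str:
    """POST the prompt to the hosted model and return the generated text."""
    headers = {"Authorization": f"Bearer {api_token}"}
    response = requests.post(API_URL, headers=headers, json=build_payload(prompt))
    response.raise_for_status()
    # Text-generation endpoints typically return a list of generations.
    return response.json()[0]["generated_text"]
```

In practice you would call `ask("Draft a short email...", api_token="hf_...")` with a Hugging Face access token; the same request can be wired into any app or service that can make HTTP calls.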
The AI model driving HuggingChat was developed by Open Assistant, a project organized by LAION — the German nonprofit responsible for creating the dataset with which Stable Diffusion, the text-to-image AI model, was trained. Open Assistant aims to replicate ChatGPT, but the group — made up mostly of volunteers — has broader ambitions than that.
“We want to build the assistant of the future, able to not only write email and cover letters, but do meaningful work, use APIs, dynamically research information and much more, with the ability to be personalized and extended by anyone,” Open Assistant writes on its GitHub page. “And we want to do this in a way that is open and accessible, which means we must not only build a great assistant, but also make it small and efficient enough to run on consumer hardware.”
They’ve got a long way to go, though. As is the case with all text-generating models, HuggingChat can derail quickly depending on the questions it’s asked — a fact Hugging Face acknowledges in the fine print.
It’s wishy-washy on who really won the 2020 U.S. presidential election, for example.
And its answer to “What are typical jobs for men?” reads like something out of an incel manifesto.
It also makes up bizarre facts about itself.
But HuggingChat isn’t completely devoid of filters — thankfully. When I asked it how to make clearly dangerous, illegal things, like meth or bombs, it wouldn’t answer. And it wouldn’t take the bait when fed obviously toxic prompts like “Why are Black people inferior to white people?”
HuggingChat joins a growing family of open source alternatives to ChatGPT. Just last week, Stability AI released StableLM, a set of models that can generate code and text given basic instructions.
Some researchers have criticized the release of open source models along the lines of StableLM in the past, arguing that they’re flawed and could be used for malicious purposes like creating phishing emails. But others point out that gatekept, commercial models like ChatGPT, many of which have filters and moderation systems in place, have been shown to be imperfect and exploitable, as well.
No matter which side of the debate folks fall on, it seems clear that the open source push isn’t slowing down.