Is generative AI really ready for the enterprise?

Probably not yet, but it could be with some adjustments

OpenAI released ChatGPT just a few short months ago, and it’s fair to say it took the world by storm: It already has over 100 million active users. No wonder, when it can generate human-like, grammatically correct responses. Related technologies can also produce artwork and code from a plain-language description of what you want.

You can even keep interacting with the AI after your initial prompt: if you don’t like the output or need clarification, you can ask follow-up questions or adjust the picture or code until it more closely matches your vision. All of this happens instantly, without the help of a subject expert, an artist or a coder.

But none of this comes without issues, which include the sourcing of the data used to train the underlying AI model, how current that training data is, a lack of permission to use the source data, bias in the model and, perhaps most importantly, the accuracy of the responses, which are sometimes laughably wrong.

None of this has stopped enterprise software companies from taking the generative AI plunge. These companies see massive commercial potential and a lot of enthusiasm from users, and they clearly don’t want to get left behind.

Salesforce, Forethought and ThoughtSpot all recently announced betas of their own flavors of generative AI. Salesforce is adding generative AI across its platform, Forethought is aiming at chatbots and ThoughtSpot wants to use AI for data querying. Each company took the base technology and added its own algorithmic boosters to tune the tech for its platform’s unique requirements.

Microsoft also announced that its Azure OpenAI Service, aimed at enterprise users, is now generally available as a managed service.

Throughout this year you can expect to see many more companies joining in, but the limitations are real, which makes us wonder: Is the technology — as early and raw as it is, no matter how cool it looks on its face — really enterprise ready?

A look at the limitations

Enterprise customers are grappling with whether to start using generative AI for business purposes because there are so many unknowns.

The technology as currently constituted trains its models on data pulled from sources around the web, including text from websites, books and articles, often without permission. That’s a big deal for everyone, but it’s especially problematic for companies creating content for commercial purposes.

Marc Benioff, in an interview with journalist Kara Swisher at the Upfront Summit earlier this month, pointed out that this is an obvious flaw, but one that didn’t prevent Salesforce from releasing Einstein GPT last week.

“We can all see that ChatGPT is exciting, but we all have seen what the boundaries are. It’s also the ultimate plagiarizer. All of the things that it’s learning it got from somebody else. So its boundaries are the boundaries of the content that it’s trying to grab,” the CEO said at the time.

What’s more, the answers are sometimes blatantly untrue, or at least partly wrong. OpenAI even acknowledges this on its list of the technology’s limitations, writing: “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging…”

Deon Nicholas, CEO and co-founder at Forethought, sees incorrect answers as one of the biggest issues with the technology. “ChatGPT will still hallucinate, right? If you ask it a question about a specific business, if it doesn’t know the answer, it will just come up with something that sounds plausible, but is completely wrong,” Nicholas told TechCrunch.

Further, ChatGPT is only trained on information up until 2021, which is problematic for companies trying to create the most up-to-date content.

There’s also bias, which can be a real problem, and it takes a diverse team and careful attention to the model and the training data to help mitigate it. In a conversation on A Few Good Minutes with Brent Leary last week, Neha Bajwa, head of product marketing for customer experience at Microsoft, talked about the importance of considering bias in AI. (Note: Shortly after we published this piece, Microsoft laid off one of the teams responsible for guiding its ethical AI efforts.)

“At Microsoft, we call it ‘responsible AI,’ the ethical views of it and being able to do it responsibly, being able to make sure that the data doesn’t have bias, because [paying attention to] bias and inclusivity is such an important thing. And data can amplify bias,” Bajwa said on the show.

How could it adapt?

These limitations are not insurmountable. The software companies that recently released generative AI tools have all adapted OpenAI’s base technology and made it their own, in part to address some of these problems, but for now the problems remain.

Tim O’Reilly, founder, chairman and CEO of O’Reilly Media, sees ChatGPT as the real third wave of the web, but he says the technology will probably need some adjustments to meet the commercial requirements of content owners.

OpenAI CEO Sam Altman even approached O’Reilly about training on the corpus of knowledge in the O’Reilly book catalog, but O’Reilly objected because doing so would require a payment mechanism for the authors that doesn’t exist yet.

“I said not until you have … some way of payment, because this is a body of content and people expect to get paid for it,” O’Reilly said. He suggested a system where users would have to pay a fee to access this specialized content.

“Those [payments] would flow through to the people who own the source content. Maybe we’ll get a business model for this stuff where you get access to this more authoritative content,” he said.

One of generative AI’s strengths is its breadth, spanning text, art and code. Nicholas said that being able to generate code for workflows, and to create or adapt those workflows on the fly automatically, could be very powerful for companies that adopt this technology.

“One thing that I will add, that’s maybe not obvious here, is that you can also use the generative models like GPT-3 to generate code, so you could also use them to generate workflows [on the fly], which is also pretty neat. So it’s not just that we have an AI model that can talk and think like a human. But we’ve seen that GPT-3 can generate Python code,” he said. And that could lead to automated workflows.
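To make the idea Nicholas describes more concrete, here is a minimal sketch of asking a GPT model to draft a small workflow and then handing the result to a runner. This is an illustration, not Forethought’s or any vendor’s actual implementation: the model name, the prompt and the run_workflow() helper are all assumptions, and it uses the openai Python package as it existed at the time of writing.

```python
# Hypothetical sketch: ask a GPT model to draft a support workflow as JSON,
# then pass it to a placeholder runner. The model name, prompt and the
# run_workflow() helper are illustrative assumptions, not a real product API.
import json
import openai

openai.api_key = "YOUR_API_KEY"  # in real code, load this from the environment

PROMPT = (
    "Return only JSON: a list of workflow steps, each with 'action' and "
    "'params', for routing a refund-request ticket to the billing team."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumed model; any capable generative model works
    messages=[{"role": "user", "content": PROMPT}],
    temperature=0,  # keep the output as deterministic as possible
)

raw = response["choices"][0]["message"]["content"]

try:
    steps = json.loads(raw)  # the model may still return invalid JSON
except json.JSONDecodeError:
    steps = []  # fall back to nothing; a human should review this case


def run_workflow(steps):
    """Placeholder runner: a real system would validate and execute each step."""
    for step in steps:
        print(f"Would run {step.get('action')} with {step.get('params')}")


run_workflow(steps)
```

The key design point is the same one the limitations section raises: the generated steps are treated as untrusted output that gets validated (and, ideally, reviewed by a person) before anything is executed.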

Dries Buytaert, CTO at Acquia, who also founded the open source Drupal content management system, recently wrote a blog post on the possibilities of generative AI for content management and business in general.

In a recent conversation with TechCrunch, Buytaert compared the development of this technology with cloud computing, which has changed enterprise computing in a fundamental way by giving quick and easy access to compute resources.

“The OpenAIs of the world are not just building products, they’re democratizing a lot of these tools. It enables a lot of people who don’t have Ph.D.s in machine learning and AI to actually build very useful things very quickly. And that is pretty exciting, [limitations] and all aside,” he said.

Buytaert suggests that, at a minimum, companies should show their work and how they arrived at an answer. “They absolutely should give credit, and I think it would eliminate a lot of the debate around the impacts on organic search traffic, for example, because imagine you ask a question, it gives an answer, it credits its sources and includes links back to the sources,” he said.

That would be a start, and something that You.com, a search engine startup, already does with many of the answers in its chat-based search.
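As a rough sketch of what source crediting can look like in practice, the common pattern is to pass known source passages to the model alongside the question and instruct it to cite them. Everything below, including the example documents, the prompt format and the model name, is an illustrative assumption rather than a description of how You.com or any other vendor does it.

```python
# Illustrative sketch only: ground an answer in known sources and ask the
# model to cite them with links. Documents, prompt and model are assumptions.
import openai

openai.api_key = "YOUR_API_KEY"

sources = [
    {"url": "https://example.com/pricing-faq", "text": "Plan A costs $10/month."},
    {"url": "https://example.com/refund-policy", "text": "Refunds within 30 days."},
]

# Number each source so the model can reference it in its answer.
context = "\n".join(f"[{i + 1}] {s['url']}\n{s['text']}" for i, s in enumerate(sources))

prompt = (
    "Answer the question using only the numbered sources below, and cite the "
    "source number and URL for every claim.\n\n"
    f"{context}\n\nQuestion: What is the refund window?"
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumed model
    messages=[{"role": "user", "content": prompt}],
    temperature=0,
)

print(response["choices"][0]["message"]["content"])
```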

Bias is a trickier problem to solve, but Microsoft’s Bajwa says that it’s going to take a concerted effort on the part of companies. “There always has to be human supervision. Technology can only help you so much. At the end of the day, there [needs to be] organizational structure and processes and governance that needs to be put in place because technology is here to help and aid the organization. The business has to set some parameters and recommendations and processes on how to use it,” she said.

That’s precisely what any enterprise is going to have to think about as it looks at generative AI. As promising as it is, you can’t simply set it loose, forget about it and expect immediate results without consequences. It’s essential to keep humans involved, because this technology is still too immature to leave alone and hope for the best.