This could be AI’s light bulb moment

ChatGPT has certainly put AI in the public’s imagination

In the 19th century, electricity was the primary technology driving innovation, but it wasn’t until it was put to practical use in inventions like the light bulb and the telephone that it captured the public’s imagination. The release of ChatGPT last year triggered what some industry experts believe is a similar light bulb moment for AI, one in which the utility of the technology became readily apparent to people outside of tech.

Stephen Wolfram, the computer scientist, physicist and founder and CEO at Wolfram Alpha, is one of those who see the current situation as much like the 19th century, when electricity was put to work in practical ways that had a palpable impact on ordinary folks.

“We think about computation as this enabling technology,” he said last month in an interview at the Imagination in Action conference at MIT in Cambridge, Massachusetts. “In the long-distant past, electricity was an enabling technology that lots of people were interested in … And the analogy I thought of with ChatGPT is something like the first telephone that actually worked. People had known in principle that there should be ways to kind of humanize electricity to be useful for human communication. And people had tried for a long time.”

That moment came in 1877 when Alexander Graham Bell started Bell Telephone as a commercial entity to sell phones and transformed human communication.

Peter Levine, general partner at Andreessen Horowitz, similarly sees the current state of AI as an “aha moment,” and as with electricity, one that took some time to develop into practical applications. “People have been working on this for a long time. It’s been refined for a long time, which makes it really interesting, probably in the same way that electricity was. And it’s the first time the light bulb goes on in the street. People are like, ‘wow, I get it,’” Levine said.

Wolfram pointed out that the power of this approach lies in the linguistic user interface, which gives us the ability to interact directly with the AI. “We’ve managed to take what’s out there on the web, in books and so on, and get something which takes all of that text and is able to produce reasonable human text [as a response],” he said.

That may have a profound impact on enterprise software moving forward, something every startup developing a product today needs to be thinking about. AI is going to be table stakes for every company from now on, according to Levine.

“AI will be a property of every application going forward, and this is the moment where that starts to happen, where just as a database or an operating system is part of every application today, I believe that AI will be a property of every application,” he said. That could change the way we think about how applications function and how we interact with them.

Not so fast

Vishal Sikka, founder and CEO of Vianai Systems, an MLOps startup, and formerly CEO of Infosys, thinks it’s a bit more complicated than that, especially for enterprise companies. He believes we might not be at that aha moment yet in business, and it could take longer than we think for this technology to go mainstream inside companies, especially in mission-critical applications.

He is particularly cautious because of the hallucination problem we have seen with generative AI. “The first part is the safety issue because in the current state of the art, the scientists who have built this transformer technology don’t know how to make it produce good answers and not produce bad ones. They don’t know if it is even possible that it can be done,” said Sikka, who’s been studying AI since the ’90s and was a signatory of the open letter asking AI labs to take a six-month pause.

Companies will probably focus on using AI for productivity improvements rather than risking their most critical systems on generative AI as currently constituted, he said, adding that it’s essential to solve the hallucination issue; it’s not something you can simply gloss over because the technology is neat. If you are making a lending decision, a medical diagnosis, a calculation of a company’s financials or a stress-test decision for a bridge, you need it to be absolutely reliable. “These things will not happen unless there is a concrete answer to the stability, safety and trustworthiness question,” he said.

He added, “I’m not saying it cannot be solved; it is possible that it gets solved in the next few years. We will see,” though he reiterated that solving this trust question will be essential for business users.

But there might be ways to overcome the hallucination problem with smaller, more-focused models, according to Tim O’Reilly, founder and CEO of O’Reilly Media.

“We have a model that’s trained on all the content on the O’Reilly platform. And you ask your question, and it’ll give you an answer,” he said. What’s more, it will give a precise footnote to the source material on the platform. This could help people get more accurate answers, while giving them pointers to the source content to help determine the veracity of the answers.
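The approach O’Reilly describes, answering only from a known corpus and citing the source, can be sketched in a few lines. This is purely illustrative and not O’Reilly’s actual system; the corpus, titles and scoring function below are invented, and real systems use trained retrieval models rather than word overlap.

```python
# Hypothetical sketch of grounded answering over a fixed corpus:
# find the most relevant passage, then return it with a footnote
# naming its source so readers can verify the answer themselves.
# All titles and passages here are made up for illustration.

CORPUS = [
    {"title": "Intro to Databases",
     "text": "An index speeds up lookups by avoiding full table scans."},
    {"title": "Operating Systems Basics",
     "text": "A scheduler decides which process runs on the CPU next."},
]

def _overlap(query: str, text: str) -> int:
    """Naive relevance score: count of shared lowercase words."""
    return len(set(query.lower().split()) & set(text.lower().split()))

def answer_with_footnote(query: str) -> str:
    """Return the best-matching passage plus a footnote citing its source."""
    best = max(CORPUS, key=lambda doc: _overlap(query, doc["text"]))
    return f'{best["text"]} [source: {best["title"]}]'

print(answer_with_footnote("why does an index speed up lookups"))
```

The footnote is the point: because the answer is tied to a specific source document, a wrong answer is checkable rather than a confident-sounding hallucination.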

Regardless, generative AI is clearly helping folks outside of the tech bubble interact directly with AI in a way that wasn’t really feasible before. It’s just important to understand its limitations as we incorporate these capabilities into our software, even as we marvel at this aha moment.