10 investors talk about the future of AI and what lies beyond the ChatGPT hype

When I mentioned “the rise of AI” in a recent email to investors, one of them sent me an interesting reply: “The ‘rise of AI’ is a bit of a misnomer.”

What that investor, Rudina Seseri, a managing partner at Glasswing Ventures, meant is that sophisticated technologies like AI and deep learning have been around for a long time, and all this hype around AI ignores the simple fact that they have been in development for decades. “We saw the earliest enterprise adoption in 2010,” she pointed out.

Still, we can’t deny that AI is enjoying unprecedented levels of attention, and companies across sectors around the world are busy pondering the impact it could have on their industry and beyond.

Dr. Andre Retterath, a partner at Earlybird Venture Capital, feels several factors are working in tandem to generate this momentum. “We are witnessing the perfect AI storm, where three major ingredients that evolved throughout the past 70 years have finally come together: Advanced algorithms, large-scale datasets, and access to powerful compute,” he said.

Still, we couldn’t help but be skeptical at the number of teams that pitched a version of “ChatGPT for X” at Y Combinator’s winter Demo Day earlier this year. How likely is it that they will still be around in a few years?

Karin Klein, a founding partner at Bloomberg Beta, thinks it’s better to run the race and risk failing than sit it out, since this is not a trend companies can afford to ignore. “While we’ve seen a bunch of ‘copilots for [insert industry]’ that may not be here in a few years, the bigger risk is to ignore the opportunity. If your company isn’t experimenting with using AI, now is the time or your business will fall behind.”

And what’s true for the average company is even more true for startups: Failing to give at least some thought to AI would be a mistake. But a startup also needs to be ahead of the game more than the average company does, and in some areas of AI, “now” may already be “too late.”

To better understand where startups still stand a chance, and where oligopoly dynamics and first-mover advantages are shaping up, we polled a select group of investors about the future of AI, which areas they see the most potential in, how multilingual LLMs and audio generation could develop, and the value of proprietary data.

This is the first of a three-part survey that aims to dive deep into AI and how the industry is shaping up. In the next two parts to be published soon, you will hear from other investors on the various parts of the AI puzzle, where startups have the highest chance of winning, and where open source might overtake closed source.

We spoke with:

Manish Singhal, founding partner, pi Ventures
Rudina Seseri, founder and managing partner, Glasswing Ventures
Chris Gardner, Lily Lyman, Richard Dulude and Brian Devaney, Underscore VC
Karin Klein, founding partner, Bloomberg Beta
Xavier Lazarus, partner, Elaia
Dr. Andre Retterath, partner, Earlybird Venture Capital
Matt Cohen, managing partner, Ripple Ventures

Manish Singhal, founding partner, pi Ventures

Will today’s leading gen AI models and the companies behind them retain their leadership in the coming years?

This is a dynamically changing landscape when it comes to applications of LLMs. Many companies will form in the application domain, and only a few will succeed in scaling. In terms of foundation models, we do expect OpenAI to get competition from other players in the future. However, they have a strong head start and it will not be easy to dislodge them.

Which AI-related companies do you feel aren’t innovative enough to still be around in 5 years?

I think in the applied AI space, there should be significant consolidation. AI is becoming more and more horizontal, so it will be challenging for applied AI companies, which are built on off-the-shelf models, to retain their moats.

However, there is quite a bit of fundamental innovation happening on the applied front as well as on the infrastructure side (tools and platforms). Those companies are likely to do better than the rest.

Is open source the most obvious go-to-market route for AI startups?

It depends on what you are solving for. For infrastructure-layer companies, it is a valid path, but it may not be that effective across the board. Whether open source is a good route depends on the problem being solved.

Do you wish there were more LLMs trained in other languages than English? Besides linguistic differentiation, what other types of differentiation do you expect to see?

We are seeing LLMs in other languages as well, but of course, English is the most widely used. Based on the local use cases, LLMs in different languages definitely make sense.

Besides linguistic differentiation, we expect to see LLM variants that specialize in certain domains (e.g., medicine, law and finance) to provide more accurate and relevant information within those areas. There is already some work happening in this area, such as BioGPT and BloombergGPT.

LLMs suffer from hallucination and relevance issues when you want to use them in real production-grade applications. I think there will be considerable work done on that front to make them more usable out of the box.

What are the chances of the current LLM method of building neural networks being disrupted in the upcoming quarters or months?

It can surely happen, although it may take longer than a few months. Once quantum computing goes mainstream, the AI landscape will change significantly again.

Given the hype around ChatGPT, are other media types like generative audio and image generation comparatively underrated?

Multimodal generative AI is picking up pace. Most serious applications will need multiple modalities, especially images and text. Audio is a special case: There is significant work happening in auto-generation of music and speech cloning, which has wide commercial potential.

Besides these, auto-generation of code is becoming more and more popular, and generating videos is an interesting dimension — we will soon see movies completely generated by AI!

Are startups with proprietary data more valuable in your eyes these days than they were before the rise of AI?

Contrary to what the world may think, proprietary data gives a good head start, but eventually, it is very difficult to keep your data proprietary.

Hence, the tech moat comes from a combination of intelligently designed algorithms that are productized and fine-tuned for an application along with the data.

When could AGI become a reality, if ever?

We are getting close to human levels with certain applications, but we are still far from a true AGI. I also believe that it is an asymptotic curve after a while, so it may take a very long time to get there across the board.

For true AGI, several fields, like neuroscience and behavioral science, may also have to converge.

Is it important to you that the companies you invest in get involved in lobbying and/or discussion groups around the future of AI?

Not really. Our companies are more targeted toward solving specific problems, and for most applications, lobbying does not help. It’s useful to participate in discussion groups, as one can keep a tab on how things are developing.

Rudina Seseri, founder and managing partner, Glasswing Ventures

Will today’s leading gen AI models and the companies behind them retain their leadership in the coming years?

The foundation layer model providers such as Alphabet, Microsoft/OpenAI and Meta will likely maintain their market leadership and function as an oligopoly over the long term. However, there are opportunities for competition from models that offer significant differentiation, such as Cohere and other well-funded players at the foundational level that place a strong emphasis on trust and privacy.

We have not invested and likely will not invest in the foundation layer of generative AI. This layer will probably end in one of two states: In one scenario, the foundation layer will have oligopoly dynamics akin to what we saw with the cloud market, where a select few players will capture most of the value.

The other possibility is that foundation models are largely supplied by the open source ecosystem. We see the application layer holding the biggest opportunity for founders and venture investors. Companies that deliver tangible, measurable value to their customers can displace large incumbents in existing categories and dominate new ones.

Our investment strategy is explicitly focused on companies offering value-added technology that augments foundation models.

Just as value creation in the cloud did not end with the cloud computing infrastructure providers, significant value creation has yet to arrive across the gen AI stack. The gen AI race is far from over.

Which AI-related companies do you feel aren’t innovative enough to still be around in 5 years?

A few market segments in AI might not be sustainable as long-term businesses. One such example is the “GPT wrapper” category — solutions or products built around OpenAI’s GPT technology. These solutions lack differentiation and can be easily disrupted by features launched by existing dominant players in their market. As such, they will struggle to maintain a competitive edge in the long run.

Similarly, companies that do not provide significant business value or do not solve a problem in a high-value, expensive space will not be sustainable businesses. Consider this: A solution streamlining a straightforward task for an intern will not scale into a significant business, unlike a platform that resolves complex challenges for a chief architect, offering distinct and high-value benefits.

Finally, companies with products that do not seamlessly integrate within current enterprise workflows and architectures, or require extensive upfront investments, will face challenges in implementation and adoption. This will be a significant obstacle for successfully generating meaningful ROI, as the bar is far higher when behavior changes and costly architecture changes are required.

Is open source the most obvious go-to-market route for AI startups?

Open source models foster collaboration and improvement from the community, while closed source models, being proprietary, often offer unique functionalities. In the long term, companies will likely employ a hybrid approach: open source models for transparency and innovation, and closed source models for delivering exceptional value based on their business needs.

In these early stages, startups can benefit from lower costs, higher productivity and frequent product updates by utilizing open source for initial platform/solution development. However, this choice might expose them to vulnerabilities in security and defensibility.

Do you wish there were more LLMs trained in other languages than English? Besides linguistic differentiation, what other types of differentiation do you expect to see?

I’m multilingual, so this interests me personally, but these models will be more valuable for serving large enterprises, which by nature are usually highly international.

We have recently seen the launch of a significant breakthrough in this space with Meta’s Seamless, “the first all-in-one multilingual multimodal AI translation and transcription model.” Meta’s model can do a range of AI translation and transcription tasks, including speech-to-text, speech-to-speech, text-to-speech, and text-to-text translations, for nearly 100 languages.

Beyond languages, I am very excited about multimodal models that can analyze and generate text and utilize multiple data forms.

Given the hype around ChatGPT, are other media types like generative audio and image generation comparatively underrated?

ChatGPT has garnered the lion’s share of hype. However, there is much more in development now that combines text with images, text with video, images with 3D models, and iterations in between.

As it relates to audio specifically, there are challenges that do not exist with text. Think about the size of a training set for text (the entire internet and beyond) versus the training data for audio: It is much more limited, and audio is quite a bit tougher to understand and train on. There is also complexity, such as tone, that does not exist in text. For example, how would a model use tone to determine whether we are fighting or having a good time?

Additionally, model validation and performance measurement are more complicated with audio. If you cannot assess the type of conversation quantitatively or qualitatively, how do you build a model on top and extract data? We will see significant breakthroughs around audio models, but it makes technical sense that it lags behind text. It is a matter of technological progress.

Are startups with proprietary data more valuable in your eyes these days than they were before the rise of AI?

The “rise of AI” is a bit of a misnomer. Even the term “deep learning,” representing the more sophisticated machine learning that utilized neural networks, dates back to 2006, and we saw the earliest enterprise adoption in 2010. Indeed, it is the level of public attention that is relatively new.

Every company that effectively utilizes AI will become more valuable, because more effective prescriptive analytics and previously impossible automation will create economic value from the top line, such as sales and marketing, to the bottom line, such as supply chain and logistics.

That said, proprietary data that improves model performance in a specific use case will make those companies more valuable. This includes everything from manufacturing, where proprietary data around production processes, quality assurance and supply chains can be used to optimize manufacturing efficiency and perform predictive maintenance, to retail, where proprietary data around customer purchasing histories, preferences and other behavioral data can be used to more accurately forecast sales and better optimize pricing strategies and marketing campaigns.

When could AGI become a reality, if ever?

AGI is a theoretical possibility, but the attention the concept receives due to the seemingly human interactions LLMs produce rests on a false pretense. There are many reasons LLMs are not the direct path to AGI. First, LLMs “parrot”; they do not reason. Gen AI’s “emergent” behaviors seem uncannily human-like to some, but that distracts from the fact that while gen AI models excel at some things, they have a significant error rate with even simple tasks outside the training data.

Second, LLMs cannot synthesize points of view that are not already represented in the training data. This is why differing prompts on the same topic generate vastly different responses. Third, LLMs do not derive the rules of the world. If an LLM is trained on a math textbook, it does not derive the rules of physics, such as a pipe of a specific circumference will not fit into a hole of a smaller circumference.

Is it important to you that the companies you invest in get involved in lobbying and/or discussion groups around the future of AI?

We invest in companies and founders with a solid technical foundation and the understanding that they know how to build a business around those talents. Our founders are keeping up with news, issues and proposed standards for the AI industry as they are leveraging state-of-the-art technologies, algorithms and models.

Through their interactions with industry peers and discussions about AI’s future, they continue to ensure that AI moves forward thoughtfully, ethically and responsibly, in keeping with our DEI and ESG investment policies.

Investors from Underscore VC

Will today’s leading gen AI models and the companies behind them retain their leadership in the coming years?

Chris Gardner, general partner: The market is moving way too fast to anoint winners or losers at this point, and since many platforms are being shared as open source, the notion of “leadership” is murky. That said, don’t sleep on Apple.

Which AI-related companies do you feel aren’t innovative enough to still be around in 5 years?

CG: Companies building a thin veneer of UI on top of open source models will struggle to differentiate in the long term. Real value will lie in leveraging models on top of proprietary, industry-specific data.

Lily Lyman, general partner: The Xth generative photo or video platform will struggle to differentiate in the long term unless it can find a specific functional or vertical use case that enables that work to get done 10x better or faster, or unlocks a set of data assets that were previously inaccessible.

Is open source the most obvious go-to-market route for AI startups?

Richard Dulude, general partner: Even with $1 billion to outmarket open source, you cannot compete with its distribution model, especially in AI. Moreover, you cannot compete with the innovation that a globally distributed community is capable of in the long run.

Do you wish there were more LLMs trained in other languages than English? 

CG: To build the best representation of global human knowledge, models should be trained on the entirety of the world’s digitized content, not just English. Over time, the availability of open source tooling will ensure that models are trained on all of the world’s languages. For example, even before the recent release of LLMs, Google’s Translate team began working on AI-enabled translation for Sanskrit!

Besides linguistic differentiation, what other types of differentiation do you expect to see?

LL: Beyond the type and volume of data in an LLM, we think there will be interesting opportunities to differentiate on the data schema or the architecture of how the data is brought together.

If done right, the architecture will allow companies to leverage the latest models in a modular way while maintaining a seamless, consistent user experience.

What are the chances of the current LLM method of building neural networks being disrupted in the upcoming quarters or months?

RD: LLMs are just one type of neural network with a specific set of strengths and weaknesses. While it’s powerful, the current method has a lot of shortcomings — cost and lack of interpretability are two of the biggest — so there will undoubtedly be disruption.

The likelihood of more innovation in foundational models is effectively 100%. I would hope more of that is open source.

Given the hype around ChatGPT, are other media types like generative audio and image generation comparatively underrated?

CG: We are excited about the opportunity around generative voice and what it can mean for individual businesses and entire industries. Give me 3 seconds and I’ve cloned your voice or written a pop hit. It’s about to get really interesting in generative audio. And the early innovation is already triggering huge, and long overdue, changes to copyright law.

Brian Devaney, principal: Multimodal AI that combines text, speech/audio and images/video is of particular interest. Additionally, we’re interested in how text inputs can output code, images and video. We believe these modes of interaction have the opportunity to change how we interact and collaborate with software on a daily basis.

Are startups with proprietary data more valuable in your eyes these days than they were before the rise of AI?

LL: Proprietary and verticalized data is becoming a hugely valuable asset that many vertical SaaS and industry cloud companies can capitalize on over time. Yet we are still a ways away from truly unlocking that value, as data rights, interpretability and the approach to small language models get sorted out.

Once data “property rights” become clearer, there will be a huge value shift toward the owners of those datasets.

When could AGI become a reality, if ever?

LL: It’s likely we’ll see versions of AGI in the coming decades. But the exact timeline will depend on the pace of some of the research being led right here in Boston at MIT. Hopefully, this timeline gives us time to put guardrails in place around the ethical and safety challenges that will inevitably come with AGI.

Is it important to you that the companies you invest in get involved in lobbying and/or discussion groups around the future of AI?

LL: As an industry, we have an opportunity and responsibility to support the evolution of AI in a way that encourages the safe use of this powerful technology, but without inhibiting innovation. We’re excited when we see our portfolio company founders leaning into these discussions.

Karin Klein, founding partner, Bloomberg Beta

Will today’s leading gen AI models and the companies behind them retain their leadership in the coming years?

Today’s leading AI companies have a great start, in large part because of the extraordinary talent they’ve attracted over the past two years and the sizable capital they’ve raised. While OpenAI’s access to data and computing power provided a meaningful early edge in building models, we’ve already seen computing costs drop and new entrants crop up. There’s a beauty to technology in that there’s always room for new entrants — for example, up until recently, OpenAI’s GPT-4 seemed untouchable, but there are now alternatives to consider.

Which AI-related companies do you feel aren’t innovative enough to still be around in 5 years?

While we’ve seen a bunch of “copilots for [insert industry]” that may not be here in a few years, the bigger risk is to ignore the opportunity. If your company isn’t experimenting with AI, now is the time or your business will fall behind.

Sometimes, leaders put forth concerns such as data accuracy and data privacy as reasons to delay experimentation. We encourage them to learn from an implementation around a lower-risk dataset and use case.

For example, start with an internally used chatbot instead of a customer service bot. Bring a small team of employees on board to participate and give feedback on the results.

There are tremendous potential opportunities, including insights, efficiency, personalization, time savings, scalability and more, for practically every aspect of a business, from customer service and engineering to hiring.

Is open source the most obvious go-to-market route for AI startups?

Yes. If a startup is focused on developers and has a credible approach to building a community, open source provides a powerful way to advance a product and attract new users. We also like the transparency and the way it fosters collaboration.

That said, if the product will be used in the enterprise, at some point, the company will still need to develop an enterprise sales motion given some of the challenges (data privacy, IP ownership, hallucinations, security, copyright, etc.).

Do you wish there were more LLMs trained in other languages than English? Besides linguistic differentiation, what other types of differentiation do you expect to see?

Training LLMs in other languages could have many benefits such as greater accessibility, cultural awareness, accuracy of translation and reduced bias. It could lead to powerful advancements in communication as well as better cultural understanding.

It’s also exciting to imagine the potential for LLMs to be trained with new forms of data like media, images and sound. Since most text has been scanned, these models are hungry for more data.

What are the chances of the current LLM method of building neural networks being disrupted in the upcoming quarters or months?

While development is moving fast, it’s difficult to specify timing. Big Tech companies and well-funded startups are focusing on building in this space, so there could be new technologies and approaches introduced at any time. Or we may not hear of the advances if they are kept for internal purposes.

Given the hype around ChatGPT, are other media types like generative audio and image generation comparatively underrated?

Text obviously gets a lot of mindshare, since it’s the most immediately usable to the widest group of users, including consumers and business executives. However, many savvy engineers and product leads are working with a range of data inputs and technologies. They see that generative audio and image generation also have promise.

There’s the potential to free up a creative’s time by reducing work, such as editing video or creating video game textures. It can also extend current capabilities, such as creating new art or making educational products more engaging and effective. There are many exciting potential applications. Our minds are the limiting factor.

Are startups with proprietary data more valuable in your eyes these days than they were before the rise of AI?

To some extent, proprietary data has always yielded the potential for advantage. That said, AI is a data-driven technology, and the more data a startup has, the better it can train its models to offer better service, create new products, etc.

For example, a startup that collects data on customer behavior could use this data to develop better focused marketing campaigns or product recommendations. Over the years, we each have probably added a surprising number of items to our Amazon cart from their recommendations based on our purchase history.

When could AGI become a reality, if ever?

How do we know this wasn’t written by AGI?

Is it important to you that the companies you invest in get involved in lobbying and/or discussion groups around the future of AI? 

While we’re not prescriptive, it’s important that startup founders have an understanding of how the government works and the potential implications for their business. Founders themselves have the opportunity to model how to build responsibly with AI.

In Washington, DC, we recently organized a briefing on current AI technologies for the Congressional Caucus. We brought together AI founders we’ve backed and government officials so they could learn from each other and collaborate on the emerging legislative conversation on AI policy.

Xavier Lazarus, partner, Elaia

Will today’s leading gen AI models and the companies behind them retain their leadership in the coming years?

Mastering the foundational models will not be sufficient to become a long-term leader, but some leaders will emerge, especially those that are the most effective and innovative.

Which AI-related companies do you feel aren’t innovative enough to still be around in 5 years?

OpenAI should last, but maybe not as an independent company. Every Big Tech company will have its own technology, and some open source-related leaders could emerge and remain independent. Mistral AI could be one.

Is open source the most obvious go-to-market route for AI startups?

It shouldn’t be, since OpenAI, the clear leader, is not open source. However, open source looks like the most obvious route for the moment to avoid competing with OpenAI.

Do you wish there were more LLMs trained in other languages than English? Besides linguistic differentiation, what other types of differentiation do you expect to see?

Non-English-speaking currents of thought are weakened when they express themselves in a language that is not their own. To train only in English is to [disregard] the elements of other schools of thought. Being able to learn in all languages means taking the best from all cultures.

What are the chances of the current LLM method of building neural networks being disrupted in the upcoming quarters or months?

The current innovation roadmap is to reduce the computing cost and amount of content needed to train an LLM on a specific topic. This can be done without disrupting the core tech, but at some point, you reach a new barrier with new ideas — transformers were a new idea back then. Maybe the new idea already exists in some lab and is not yet either needed by the market or known by the masses.

Given the hype around ChatGPT, are other media types like generative audio and image generation comparatively underrated?

The real big thing is code generation. The cost of a developer is way higher than that of a copywriter, and the resources are scarcer, so this is where innovation should focus in the short term, even if it isn’t as sexy as ChatGPT or DALL-E.

Are startups with proprietary data more valuable in your eyes these days than they were before the rise of AI?

Proprietary data has always been an amazing source of wealth — just look at how Google or Meta did before gen AI. Gen AI will just be another tool to take this wealth to the next level. Yes, owning unique data has more and more value.

When could AGI become a reality, if ever?

Machines are better than many humans at many tasks, but not better than the geniuses of humanity past, present and future.

Is it important to you that the companies you invest in get involved in lobbying and/or discussion groups around the future of AI?

It is important that the voice of startups is heard because they don’t want to be between a rock and a hard place — between the U.S. Big Tech and European regulation.

Dr. Andre Retterath, partner, Earlybird Venture Capital

Will today’s leading gen AI models and the companies behind them retain their leadership in the coming years?

Different LLMs are converging and will be increasingly commoditized. Therefore, deep product integrations and distribution via direct or indirect channels will become key. Today’s leading gen AI companies need to translate their technical — often very research-centered — head start into actual products and tangible business value to secure their position and foster long-lasting relationships with their customers.

I assume that a few incredibly large companies will evolve, some of which are already at the leading front today.

Is open source the most obvious go-to-market route for AI startups?

Yes! While the ultimate decision of “closed versus open” depends on the specific context, I’m generally a strong advocate for open source go-to-market motions, as they come with a variety of benefits.

Firstly, companies can actively involve a broader community of researchers and developers to shape their technology and stay ahead of their competitors. Secondly, it allows everyone interested in deploying the respective solution within their own company or products to verify and scrutinize the underlying technology. This is particularly important for trustworthiness and explainability. Few customers want a black box running the most crucial parts of their business.

Lastly, open source lowers the entry barriers for exploration and thus increases overall adoption, which in turn can drive great commercial success.

Do you wish there were more LLMs trained in other languages than English? Besides linguistic differentiation, what other types of differentiation do you expect to see? 

While current differentiators may center around modalities such as text, image or code, the emphasis will soon shift to product attributes like explainability, traceability, compliance and guardrails to ensure cultural alignment.

At their core, LLMs encapsulate human knowledge and the values they learned from the training data across different modalities. A credible and trusted brand as well as alignment between LLM providers and their customers will become some of the most important differentiators in the future.

What are the chances of the current LLM method of building neural networks being disrupted in the upcoming quarters or months?

We are witnessing the perfect AI storm, where three major ingredients that evolved throughout the past 70 years have finally come together: Advanced algorithms, large-scale datasets and access to powerful compute. I surely expect that these deeply interwoven dimensions will advance and eventually lead to artificial general intelligence. Within this bigger context, neural networks are powerful instruments, yet it is unclear whether we will explore more effective tools in the near future.

Given the hype around ChatGPT, are other media types like generative audio and image generation comparatively underrated?

I’m convinced that hype and attention do not always correlate with value accrual. ChatGPT is a powerful interface that dramatically changed the way humans interact with LLMs — not with code and complex prompts, but with natural language. The justified hype around it likely evolved due to its broad applicability and widespread consumer attention.

That’s different for most generative audio or image-generation LLMs, which tend to be specific to certain use cases and are often focused on prosumers or businesses. As a result, there are likely many lesser-known LLM companies that receive far less attention relative to the value they create than ChatGPT does.

Are startups with proprietary data more valuable in your eyes these days than they were before the rise of AI?

100%! Many firms have been collecting, storing and processing data since the big data hype, often without clear applications or awareness of the underlying value. With LLMs, we can suddenly unlock this value and allow these companies to gain a unique competitive advantage via verticalized LLMs.

Judging by the protective reactions of online services like Twitter, Reddit and Stack Overflow, which all shut down their APIs or restricted access to their data for LLM training, there seems to be growing awareness of the intrinsic value of data. The rise of LLMs was essentially just a trigger, and access to proprietary data has become a powerful moat.

When could AGI become a reality, if ever?

Considering the accelerated speed of innovation and extrapolating the achievements of the past decades, I’m convinced that we will achieve AGI sooner [rather] than later — probably within the next 10 to 20 years.

Is it important to you that the companies you invest in get involved in lobbying and/or discussion groups around the future of AI?

Given the outsized impact of AI on all dimensions of our lives, thoughtful regulation and responsible development (and use!) are of utmost importance. Shaping the future of AI will be one of the most critical projects of our civilization, and I expect everyone capable and educated enough to contribute their perspectives and get involved in the discussion.

Matt Cohen, managing partner, Ripple Ventures

Will today’s leading gen AI models and the companies behind them retain their leadership in the coming years?

We are likely to see shifts in leadership as innovation continues. Established companies will have resources, but startups may bring disruptive technologies with unique use cases that large incumbents aren’t interested in pursuing.

Which AI-related companies do you feel aren’t innovative enough to still be around in 5 years?

I can’t pinpoint specific companies, but those not investing in R&D may fall behind. Companies like Zoom and Dialpad announcing GPT-like solutions trained on customer data and conversations may see those efforts completely backfire as people become more protective of their data.

Is open source the most obvious go-to-market route for AI startups?

Open source fosters collaboration and innovation, but it may not be the only route, as it depends on the business model and target market. A lot of open source models are great to get started with, but they may also become commoditized, forcing companies to build more differentiation on top of them.

Do you wish there were more LLMs trained in languages other than English? Besides linguistic differentiation, what other types of differentiation do you expect to see?

Yes, more LLMs in other languages would foster global inclusivity and help people around the world access specialized industries that see few newcomers entering their workforce. For example, overseas employers could access the U.S. insurance or healthcare industry by leveraging LLMs that help them converse with customers in foreign countries.

What are the chances of the current LLM method of building neural networks being disrupted in the upcoming quarters or months?

It’s possible, as the field is rapidly evolving. New methodologies are constantly being explored, and we may even see enterprises become tech leaders because of their proprietary data. Just look at McKinsey and how its internal AI tool is being used. Maybe it will sell that tool to other enterprises and become a SaaS company.

Given the hype around ChatGPT, are other media types like generative audio and image generation comparatively underrated?

Yes, the focus on text-based models like ChatGPT may overshadow other media types. Generative audio and image models have significant potential, especially for high-volume content creation (social media, YouTube ads, etc.).

Are startups with proprietary data more valuable in your eyes these days than they were before the rise of AI?

Proprietary data can be a strong asset, especially with the rise of AI, and it might be the easier moat to value initially. However, it depends on how the data aligns with the company’s goals and industry needs, and on whether the use cases are big enough that customers will sign multiyear contracts for access to those software tools.

When could AGI become a reality, if ever?

AGI is a complex and long-term goal, and predicting a specific timeline is challenging. Continuous research and breakthroughs will be needed, so I give it another 10 years.