If you have been on Twitter in the last few days, you likely noticed a deluge of screenshots from a service called ChatGPT. Built by OpenAI, ChatGPT is a conversational tool that responds in writing to whatever prompts you feed it.
(You can make a free OpenAI account and give the service a shake yourself. Just don’t identify as a journalist during the onboarding process — you’ll get jammed up. Self-describe in a different manner and you can get right in.)
TechCrunch has been busy covering OpenAI lately, with our own Darrell Etherington writing this morning about ChatGPT and how it is “quickly becoming apparent that how a user interfaces with generative models and systems is at least as important as the underlying training and inference technology.” We also have eyes on new generations of the well-known, and well-liked, GPT-3 AI writing tool.
The excitement among founders and venture investors for generative AI services like those OpenAI is building is well documented. Hell, it’s been a minute since Copy.AI showed that AI writing tools could power startups to eight-figure ARR. Others are chasing similar magic, and the technology continues to improve. Hence all the screenshots.
Naturally, I had to see if I was in near-term employment trouble. So I ran a little test with the ChatGPT model this morning. The results are below.
Let’s have some fun
First, a reminder that using computers to generate text is not new, and this is not the first time that I have stared down the barrel of a new tech tool that could, in theory, be coming for my job. Back in 2014, responding to news that the AP was planning on using some automated tech to report on corporate earnings, I wrote the following:
If I could offload the most quotidian [parts of covering earnings reports], say, crafting a standard paragraph that compares results with forecasts, and save myself a few seconds, I’d be all for it. No one reads TechCrunch, or me I suppose, because our prose when comparing fiscal third quarter diluted earnings per share on a non-GAAP basis to market expectations is especially riveting.
But I think that people do seek out reporting and analysis that helps make those numbers mean something. And show me the robot that can do that. Hell, I can’t most of the time, and I’m human. And I really try. So bring on the goddamn robots, I say.
My view at the time was that certain technologies might be useful in helping humans work — a bit like how we expect robotics to be human-assisting, rather than human-replacing, for all but the most rote efforts — and that given the complexities of covering breaking news, I would remain employed.
Eight years and change later, I am, in fact, still at work. I still cover earnings, and I still do it by hand. So, there’s that.
Back to ChatGPT. Given how good and fun GPT-3 has proven, and how much excitement there is for the upcoming GPT-4, I wanted to go back to our 2014 conversation and see what ChatGPT could come up with regarding software valuations. After all, TechCrunch has written about the changing value of software companies frequently — let’s be clear, too much — in the last few years. What could the AI come up with? Am I closer to being out of a gig?
First, a note that the system’s knowledge ends in 2021. If you ask ChatGPT about something more recent, you may get the following:
I’m sorry, but I’m a large language model trained by OpenAI, and my training data only goes up until 2021. I don’t have any information on software valuations in 2022, as I am not able to browse the internet or access any information outside of my training data. Is there anything else I might be able to help you with?
Which is very polite, and provides an interesting cap on what we can expect the model to be able to comment on.
So! When we asked ChatGPT to write a TechCrunch blog post about software valuations, we knew that the stuff we’ve written lately wouldn’t make it in. Here’s what the system came up with regardless:
Write a TechCrunch blog post about software valuations:
Software valuations have been a hot topic in the tech industry as of late, with many companies trying to understand how to accurately value their software assets. The traditional approach to valuing software has been to look at the cost of development and compare it to the potential revenue that the software could generate. However, this approach has been criticized for being too simplistic and not taking into account the full range of factors that can impact the value of software.
One of the key challenges in valuing software is that it is a highly dynamic asset. Unlike physical assets, which have a relatively fixed value, software can change in value quickly based on a range of factors, including changes in the market, new innovations, and shifts in customer demand. As a result, valuing software requires a more nuanced approach that takes into account these dynamic factors.
One approach that has gained popularity in recent years is the use of discounted cash flow analysis. This method looks at the potential future cash flows that the software could generate, and then discounts them based on the risks and uncertainties associated with the software. By taking into account these risks and uncertainties, discounted cash flow analysis can provide a more accurate picture of the true value of software.
Another approach that has gained traction is the use of market-based valuation methods. These methods look at the valuation of similar software companies or assets in the market, and use that information to help determine the value of the software in question. While this approach can be useful, it is important to remember that every software asset is unique, and market-based valuations may not always provide an accurate picture of the true value of a given piece of software.
Ultimately, the key to accurately valuing software is to use a combination of different methods and approaches, and to take into account the full range of factors that can impact the value of software. By doing so, companies can better understand the true value of their software assets and make more informed decisions about how to invest in and manage those assets.
There’s a lot to like here, including an argument that simplistic valuation methods will have weaknesses, the fact that discounted cash flow models have become more popular lately and that market comps are a good thing to keep in mind when valuing a software business. The text is also accidentally funny. The way that ChatGPT describes discounted cash flow analysis might make you think it’s a new method of reaching a valuation mark for a particular asset of a company. It’s not.
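For readers who want the method ChatGPT name-drops made concrete, here’s a minimal sketch of a discounted cash flow calculation. The cash flow projection and discount rate below are invented for illustration — real DCF work lives and dies on those inputs:

```python
def discounted_cash_flow(cash_flows, discount_rate):
    """Sum projected annual cash flows, discounting each year back to present value."""
    return sum(
        cf / (1 + discount_rate) ** year
        for year, cf in enumerate(cash_flows, start=1)
    )

# Hypothetical projection: $10M of free cash flow growing 20% a year
# for five years, discounted at 12% to reflect risk.
projected = [10 * 1.2 ** t for t in range(5)]  # in $M
value = discounted_cash_flow(projected, 0.12)
print(f"Present value: ${value:.1f}M")
```

The point of the exercise: the same five years of cash flows are worth less the riskier (higher discount rate) you judge the business to be, which is why two investors can look at identical projections and reach very different valuations.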
It appears that I am not in any near-term danger of losing my gig over what the OpenAI crew is building today, as the above is too general to be that useful and too lacking in specificity to be newsy. That’s good to hear. (I am not the only person asking this question, it turns out.)
Solipsism aside, it’s good to have some regular tests in hand when new tech products come out. I tend to test new search engines with queries about Metallica, a topic that I know more about than I typically want to admit out loud. In the same way, I know quite a lot about online media and writing, so by forcing the ChatGPT model to spit out stuff in my wheelhouse, I can better vet it than I could with more random use cases. I presume that you have your own mental models for testing tech — give ChatGPT a try; it’s incredibly good fun.
I think it’s fair and accurate to say that generative AI is making material strides in quality and usability. And I think it is reasonable to expect those gains to continue, and for more use cases to open up for the tech.
At the same time, precision is a huge hill to climb, and while I think that the GPTs of the world are going to help a lot of folks write, I doubt that we are in much danger of losing the need for human scribblers. At least not for a while.