Shares of edtech company Chegg fell off a cliff this week after the company reported Q1 results that bested analyst expectations.
But Q1’s results aren’t what made the company lose nearly half of its value. In its earnings call, the company’s executives noted that ChatGPT was slowing its ability to add new subscribers, not only potentially slowing its growth but also making it harder to forecast its future financial results.
Chegg’s dramatic post-earnings valuation flop will not be the last time that we see new AI tooling run headlong into existing enterprises. But it is one of the most dramatic cases to date and raises more questions than simply what is ahead for Chegg itself — and edtech more broadly. AI is the elephant in every sector’s room: How are startups reacting, especially when a public company readily admits that a leading AI product is slowing its growth?
Let us explain. If it has been some time since you attended college, you might think of Chegg as a place to rent textbooks for a fraction of their list price. That is a correct, if incomplete, understanding of the company’s product mix. Chegg also has a subscription service called Chegg Study that provides students with homework help. As Chegg Study holds a wealth of answers to various academic questions, it has also become a reportedly popular tool for cheating.
Naturally, we are not saying that Chegg is the only party potentially at fault. Students make their own choices, and Chegg does some work to help schools prevent cheating. But the company also surely knows that its customers — students — are often using its service to get around traditional academic ethics, and it is still collecting checks from them. That said, we’re not trying to throw shade beyond merely stating facts.
Where does ChatGPT come into the Chegg world? Here’s how the company described the situation early on during its earnings call:
In the first part of the year, we saw no noticeable impact from ChatGPT on our new account growth, and we were meeting expectations on new sign ups. However, since March, we saw a significant spike in student interest in ChatGPT. We now believe it’s having an impact on our new customer growth.
Chegg was clear that ChatGPT didn’t affect its user retention metrics, only that it was slowing its new user growth. Throw in the fact that Chegg said it wouldn’t know the full impact of ChatGPT on its growth until the new school year — this fall, in other words — and investors decided to drop its shares like hot rocks.
While Chegg is a notable example of the commercial impact of student usage of ChatGPT and other similar services, it’s hardly the only one to be found. Indeed, if you read through recent commentary and analysis of how similar LLMs will affect education, some folks are even excited about the prospect of the tools.
Schools are not sitting still, however. Several large U.S. school districts, for example, have blocked access to the service on their networks. That’s a clear signal of concern, we’d reckon. This led us to a question:
How good is ChatGPT at helping you cheat?
Briefly, how good are modern LLMs at helping students cheat? Pretty alright, it turns out. As with Chegg, OpenAI and others are not responsible for how their technologies are used, any more than we can blame YouTube for hosting bad covers of great songs. Such is the will of the end user.
But for fun, we pulled some questions listed on the Chegg website and ran them through ChatGPT and Wolfram Alpha, which has long been a friend of the struggling mathematics student. In short, ChatGPT’s answers to the Chegg-listed questions were impressive in their detail across topics, even if we weren’t able to fully vet all of the math it was doing (it has been a minute since we took advanced mathematics!). Wolfram Alpha, in contrast, choked instantly on the mix of words and numbers that we threw at it, essentially confirming that when it comes to using digital services to solve homework questions, at least in certain contexts, yes, LLMs are better than what came before them.
To better vet how good ChatGPT was at the tasks we were throwing at it, we dumbed our queries down a bit, pulling from a Berkeley calculus workbook, and watched ChatGPT not only solve what we asked — to a degree of accuracy that passed our admittedly limited retained math knowledge — but also explain the process by which it went about the work. It was a mini-lecture of sorts, and one that was, frankly, pretty dang cool.
If you were an enterprising student looking for a tool to check your work, well, here you go. ChatGPT and Chegg’s offerings may simply be incredibly powerful tools that render portions of traditional academic grading moot — if students can check their homework before turning it in, why not always get a perfect score? — but they would not replace the power of in-person testing with pencil and paper, the method by which Alex took his calculus courses in college and the stress of which he recalls to this day.
Not the end of the world, then, unless students are turning in work from home computers where they can’t be monitored. Then again, if students can’t be trusted at all, there are greater issues at play than merely the interplay between new technologies and existing educational paradigms, and they don’t stem from teachers and professors.
The edtech reaction
Beyond Chegg, how seriously should the broader edtech market be taking the rise of ChatGPT? Class CEO and founder Michael Chasen, an edtech veteran, sees “direct, straight competition” between AI and platforms like Chegg: Students used to turn to Chegg for help writing papers, conducting research or accessing tutor support, he explained.
The entrepreneur said that a lot of people are talking about the tech without trying to answer a key question: “How do you deploy AI in a meaningful way that makes a difference with teachers and students?”
Class has spent months working on an AI teaching assistant, which students can ask questions in a variety of scenarios, including a summary of what they missed if they dialed into class 10 minutes late, further explanation of a phrase that a lecturer said but didn’t clarify, and basic definitions of terms. “It wasn’t technically difficult to do this; the hard thing was to figure out how to utilize AI in our product in a way that made it easy” enough that students and teachers don’t even need to understand how AI works, Chasen said.
The product has not yet been rolled out to classrooms. Class is still working on ways to make sure the AI doesn’t say anything negative or offensive. Right now, if a student asks an inappropriate question, the response is only visible to them, he said. But as we all know, screenshots of AI gone wrong have a tendency to go viral, so mass integration in classrooms will require some sort of promise that the AI TA won’t go rogue.
“AI isn’t a small feature, and I think any edtech company that is thinking that way is missing the boat,” he said.
Philip Cutler, CEO and founder of Paper, an educational software provider that powers tutoring services in schools, sees generative AI as an asset; he described it as a “new form of calculator.” The company recently launched a product that helps students practice reading, using artificial intelligence to better understand what students read and how well they read relative to grade level, and to help provide more reading material.
“We are using these tools but unaware of what the bias may be,” Cutler said. “Curriculum and content are very personal for communities, and if we don’t know what the bias is in the AI model, it may cause conflict in what is being taught.”
TechCrunch has its ears perked for more edtech results and whether ChatGPT or other modern AI tools make an appearance. Duolingo reports earnings next week, for example.
It’s still early days; ChatGPT came out less than a year ago. This is, then, year one in a sense. By the time we reach the 10th year of the massively available LLM era, who knows what the challenges will be?