MIT professor wants to overhaul ‘The Hype Machine’ that powers social media

'We really have a full-blown social media crisis on our hands'

More than 3.6 billion people use social media, and its runaway success has left the industry at a crossroads. There are now heated debates in Washington and Brussels over the future of antitrust regulation for this market, whether platform operators should filter certain content (and if so, which types), and how to open the market to new innovators.

To find my way through this thicket of interesting questions, I spoke with Sinan Aral, a professor of management at the MIT Sloan School of Management who also is director of MIT’s Initiative on the Digital Economy. He has spent years analyzing the social media market, directly participating in its development as chief scientist of SocialAmp and Humin and as a founding partner of Manifest Capital.

This fall, he published his latest book, “The Hype Machine,” which explores what’s next for social media giants. In our discussion, we talked about the landscape of the market today, what responsibilities companies and users have to each other, and what comes next as the industry evolves.

This interview has been edited and condensed for clarity.

TechCrunch: Why don’t we start with how the book came together and how you got interested in this topic of digital media and how it affects our decision-making?

Sinan Aral: I started researching social media four years before Mark Zuckerberg founded Facebook. I have worked with all of the major social media platforms for the last 20 years: Facebook, Twitter, Snapchat, WeChat, Yahoo and the rest. I’ve published a number of very large-scale studies, and I’m also an entrepreneur. So, I’ve got a vantage point as a practitioner, but also as a long-time academic leader in this area.
The reason why I wrote “The Hype Machine” is because essentially, we’ve seen this coming to a head for many years now. We really have a full-blown social media crisis on our hands, as is obvious if you turn on the TV on any given day.

My book takes off from where “The Social Dilemma” documentary and Shoshana Zuboff’s “The Age of Surveillance Capitalism” leave off, which is to ask, what can we concretely do to solve the social media crisis that we find ourselves in? The book argues that in order to do that, we have to stop armchair theorizing about how social media works, and we have to stop debating whether or not social media is good or evil. The answer is yes.

The book goes through the fundamentals of how social media works. So, there’s a chapter on neuroscience and social media, and economics and social media, and that eventually informs the solutions in the book, which cover everything from antitrust and competition to federal privacy legislation. How do we secure our elections and our democracy? What do we do about Section 230 of the Communications Decency Act? How do we balance free speech and hate speech? How do we deal with misinformation and fake news?

I think for a lot of us in tech, we’re a bit stuck. On one hand, these technologies have produced jarring amounts of wealth in the tech industry, but they have also caused a large number of harms. What do we do next?

Let me start by saying that the general framework of the solution is about what I call the four levers: money, code, norms and laws.

Money is the business models, which create the incentives for how the advertisers on the platforms and the users behave. Code is how we design the platforms and the algorithms underlying the platforms, which I go into in great detail. Norms are how we adopt, appropriate and use the technology. And obviously, laws are regulation.

In terms of solutions, I think the entry ticket for solving the social media crisis is creating competition in the social media economy. Platforms that lack competition don’t have any incentive to change away from the attention economy and their engagement-driven business models, nor do they have any real incentive to clean up their negative externalities in our information ecosystem, whether it’s hate speech or misinformation or manipulation.

Now, when I say competition, the first thing on everyone’s mind is always, “Oh, you mean break up Facebook.” But the point I make in the book — and I take a very clear stance on this — is that breaking up Facebook in this economy doesn’t solve the problem. This economy runs on network effects. The value of these platforms is a function of the number of users on the platform. Economies that run on network effects tend toward concentration and monopoly.

So, if you break up Facebook, it’s just going to tip the next Facebook-like company into market dominance. What we really need is structural reform of the social media economy, and that involves social network portability, data portability and interoperability legislation.

Let me push back on this a bit though. Terms like “data portability” always sound nice as a solution, but have we ever effectively used this tool to open a market?

This isn’t the first time that we’ve done this. During the AOL-Time Warner merger, we forced AOL’s AIM product to become interoperable with Yahoo Messenger and MSN Messenger. It went from a 65% market share to a 59% market share one year later, down to around 50%, and then it ceded the entire market to new entrants three years later.

Another good analogy is number portability in the cell phone market. It used to be that you couldn’t take your cell phone number with you when you switched from one cell phone provider to another, and then we legislated that they had to let you take your number with you. That was akin to a social network at the time, because all of your friends knew to call you at that number.

Research has shown that number portability created about $880 million of consumer surplus every quarter for years and years after it was instituted in Europe, and it created a lot of competition. We should have something very similar in social networks, around social network portability and data portability, so that we could create competition.

Now, if you break up Facebook after these kinds of structural reforms to the market, that’s a different question, but breaking up Facebook without structural reforms to the market economy is like putting a Band-Aid on a tumor. It’s not going to solve the underlying lack of competition that the social media economy has.

“The Hype Machine” details how we might do that and suggests that there could be a stack of commodity messaging formats that would be required to be interoperable. Then, you could have unique messaging formats for every platform on top of that. But things like texts, short-form videos, stories that either persist or disappear, that kind of stuff should have a level of interoperability that’s legislated. The entry ticket to solving the social media crisis is creating competition.

Okay, so competition is the solution. But let me ask you, when you look at misinformation, how does competition “solve” misinformation? Couldn’t there be a race to the bottom with more competition?

You have to have competition to create the incentive to change. But take misinformation, for instance. There’s a number of ways that we can deal with misinformation.

For instance, we know from large-scale experimental studies that being reflective reduces the likelihood of believing and sharing misinformation, and that periodically asking a user, “Hey, is this news story or this headline reliable, true, false, et cetera?” inspires them to be reflective for a period of time after that nudge.

Now, if you think about it, you go to the grocery store to buy food to consume. It’s extensively labeled by law. We don’t have anywhere near the same amount of labels for the information that we’re consuming, but we could with a system that combines crowdsourcing, which makes people reflective and less likely to believe and share misinformation, with machine learning and human-in-the-loop moderators to create a system of labeling.

Take that kind of system, combined with the demonetization of falsity (making it so that you can’t earn ad dollars against known and harmful false content), and then emphasizing media literacy and critical thinking plus slowing all information down — all that combines to solve this crisis.

I feel like we are so focused on speeding information up in tech — what does slowing information down even look like?

I call this “The Hype Machine” because it’s designed to hype us up due to the business model of engagement as a precursor to selling attention for persuasion.

So what can you do? WhatsApp reduced the number of reshares to five, and then to one, to prevent the spread of coronavirus misinformation, for instance. We saw Twitter reduce retweeting, requiring you to Quote Tweet between October 20 and the election. Those types of policies probably should remain, even after the election.

As we know, falsity outpaces the truth. Debunking is never quick enough to catch up with the falsity. So, slowing information down, nudging people to be reflective, creating a systematic, scalable system of crowdsourced, machine learning and human-in-the-loop labeling, demonetizing fake news, reducing the relevance of fake news in search results, combined with media literacy. This is just an example of solutions for fake news.
Some critics of social media have called for algorithms to be removed entirely from curating content. Is that a useful approach?

I don’t think you can eliminate the algorithms altogether. The problem with no algorithms is that there’s too much information. If we all just had a reverse-chronological feed of all the information in the world, none of these systems would be efficient at giving us the information we need and want.

I think it would be good to allow an open space in which people could submit algorithms, which could then be adopted by the platforms for users to choose from, and there needs to be transparency of those algorithms.

You mentioned media literacy a bit ago. I’ve been thinking about this aspect more: Who is ultimately responsible for the truth?

Yeah, I think it’s a good question. I gave a TED Talk a couple of years ago about how we protect truth in the age of misinformation. One thing that I noted there, and I also note in the book, is that obviously we can think about technical solutions to misinformation, and we can think about regulatory solutions. But underlying any of these regulatory or technical solutions is a much deeper, more important question that is foundational to any solution: Who gets to decide what’s true and what’s false in society? That is a question that requires ethicists, philosophers and so on to be part of the conversation as well.

Is it Facebook that gets to decide? Is it the government that gets to decide what’s true and what’s false, or is it some international consortium of fact-checkers? Who’s checking the fact-checkers? How do we know that they’re impartial and unbiased?

This is a very deep question that doesn’t have an easy answer. But I will say that, for instance, in our 10-year study of Twitter on fake news that we published in Science, we used, I think it was six or eight international fact-checking organizations, and they agreed 95%-98% of the time on things that they labeled as true or false.

I think that having objective sources of fact-checking is important. We know that fact-checking does reduce the belief in and sharing of fake information.

In terms of media literacy, and I cover the evidence in my book, there is evidence that media literacy does make positive strides toward making people able to think critically about what’s true and what’s false. There have been experimental studies of games that are designed to teach kids how to root out fake news and false information, and those seem relatively successful.

Google has some interesting educational and media literacy initiatives as well that seem to be showing signs of being effective. It’s early days, but my current thinking is that initial signs are promising.

Finally here, what do the next few years look like for social media?

We are at a crossroads between the promise and the peril of social media.

I don’t think we should forget the promise that it has. I detail a lot of the tremendous positive benefits that social media has brought us and could bring us as well in the future. But I think going forward, we are at an inflection point.

My feeling is that, in the next 18 to 24 months, we have a window of opportunity to really get serious about being rigorous, about how we address this crisis. That means having rigorous scientific understandings of how social media works under the hood of any solution.

I think we need a national commission on democracy and technology that brings experts, activists and industry representatives together to devise solutions to the problems that we see. I think that we can learn from the mistakes and successes of GDPR in Europe and do it even better in the United States.

We need to start moving on this today, or yesterday actually, given what we’ve seen in the 2016 election and the 2020 election, along all four dimensions that I described: changes to the business models, changes to the regulation, attempts to institutionalize norm development toward healthier information ecosystems and, finally, changes to the code, the design of the platforms and the algorithms themselves.

Update Dec 21, 2020: Aral is the sole director of MIT’s Initiative on the Digital Economy, not the co-lead.