The possibility of regulation looms over generative AI

Generative AI came out of nowhere this year, and it has captured the imagination and the attention of the tech industry. Companies appear to be fully embracing it, perhaps sensing that this could be a truly transformative technology. Yet even as companies fall all over themselves to get in on the ground floor of this potential opportunity, a cloud hangs over the enthusiasm.

That is the great unknown of regulation, which could have a tremendous impact on every company selling and implementing generative AI. President Biden released an executive order laying out a broad set of guidelines; the U.K. hosted an AI Safety Summit; and the EU is working on its own set of potentially stringent requirements.

There’s been a range of reactions to the rise of generative AI. Some, like the 1,100 technology industry luminaries who signed an open letter last March, called for a six-month moratorium on AI development. That didn’t happen, of course. If anything, development has accelerated, even as some scream hysterically that AI is an existential threat.

At the other end of the spectrum, you have folks who think any type of regulation would stifle innovation without generating any actual protection. The primary argument is that you can’t protect people from negative outcomes until you know what those outcomes are. Of course, others would counter that if you wait for those bad results to materialize, it could be too late to do anything about them.

And some people see the existential threat argument as a smoke screen covering up the real problems posed by the current generation of AI. What’s worse, regulations that are too stringent favor the richest and most established companies, pushing aside startups that might not be able to afford to comply.

There’s something to be said for that, too, especially when the incumbents are sitting at the table helping to draft those same regulations. It raises some interesting questions about how much to regulate and where the right answers lie.

To regulate or let it be

It seems that most folks would see some AI regulation as a given, perhaps a necessity, especially those who view it in purely dystopian science-fiction terms. But that’s not always the case. In Marc Andreessen’s rambling pro-tech manifesto, published in October, he envisions a world of unfettered and unregulated technology in which regulatory bodies are the enemy of progress.

“We believe intelligence is the ultimate engine of progress,” he wrote. “Intelligence makes everything better. Smart people and smart societies outperform less smart ones on virtually every metric we can measure. Intelligence is the birthright of humanity; we should expand it as fully and broadly as we possibly can.”

In his view, regulating AI could, in some cases, be akin to murder: “We believe any deceleration of AI will cost lives. Deaths that were preventable by the AI that was prevented from existing is a form of murder.”

He is not alone in some of his views.

Speaking at Web Summit last month, MIT professor Andrew McAfee (who made it clear he was not representing his institution with his views) divided the world into two distinct groups: “Team Permissionless Innovation” and “Team Upstream Governance.” You can see where this is going. McAfee, while not going so far as to say there shouldn’t be any regulation, made it clear that, in his view, those sitting on Team Upstream Governance would end up stifling innovation, especially for startups.

“If you have more upstream governance, one of the things you should expect is less innovation. The upstream governance side looks at us and says if we continue to have lots of permissionless innovation, we will have more harm and at some level these two philosophies are incompatible. And I think we face a choice about which team we’re on,” he said, putting it in rather sharply delineated terms.

McAfee’s position isn’t quite as stark as Andreessen’s appears to be. He at least sees a place for regulation where real harm could result, but his view is to wait for something to happen and then regulate it: reactive regulation, if you will.

As an example, he uses the case of upskirting on Boston subways in 2009, when creepy guys started using their cell phone cameras to take photos up women’s dresses. The public reacted with justifiable outrage, and the Massachusetts legislature quickly passed a law outlawing the distasteful practice (perhaps the fastest action ever taken by that particular political body, not known for moving quickly). What he points out is that legislators didn’t regulate the base technology, cell phones or cameras; they regulated the harmful use.

In an analogous case, we are seeing “nudify” apps, which let users create deepfake nudes of people, usually women, without their consent. So far there hasn’t been a widespread call to ban them, and without existing regulation or laws specifically prohibiting the practice, it continues.

“I was absolutely not saying that AI should not be regulated,” McAfee told TechCrunch+ in an interview after his Web Summit presentation. “Technologies need regulation. There’s a question about when you decide to regulate and intervene, and my camp, the permissionless innovation camp, says intervene after the harms are clear, especially if there’s not a reason to believe that you’re not jeopardizing key things like health, safety, or the environment up front.”

Perhaps some regulation is in order

Not everyone agrees with this worldview. Albert Wenger, managing partner at Union Square Ventures, took exception to McAfee’s framing when he followed him onstage at Web Summit.

“This is not a soccer match, folks, like AI is not a soccer match,” he said. “You don’t have to pick your favorite team. Both teams can be wrong. And there can be middle paths that actually are hard to find, but are much more rewarding. And it’s exactly what we need to be pushing for here.”

But he doesn’t see tight regulation as the answer, either. “The answer isn’t extremely to this side or extremely to that side,” Wenger said. “There’s definitely a failure mode where you give the government a huge amount of power and the government regulates who can do what with AI. Very bad. We don’t want this outcome. There is also failure mode where we publish ever more powerful open source models, and people really do very bad things with it.”

In other words, it’s complicated.

Speaking on a Web Summit panel on generative AI, Nylas CTO Christine Spang suggested that it’s too soon to regulate because we don’t yet know exactly what to regulate. “It’s too early to make the rules because we don’t really know what the end game is gonna be. And, you know, the goal of regulation is to prevent really bad things from happening and they haven’t really happened yet. So why are we trying to make rules [now]?” she said.

“So I hope that there’s going to be a good old sort of very intense debate around what sort of regulation is necessary, and then I hope that the U.S. [and other international regulatory bodies] will back off a bit because it’s too early.”

Contrast that view with that of Sarab Narang, GM of generative AI at cloud giant AWS, who sees regulating AI as a starting point.

“This industry needs to be regulated, right? And we’re doing a lot of work, like I’m spending a lot of my time as well on those topics,” Narang said. “I think it’s important for these regulations to be done in a way that it’s actually implementable. And so I think it’s important for industry to be involved in that process. And we’re heavily involved in that process. We’ve got public policy teams, we’ve got teams engaging [with governments] and pulling us in, when it comes to actually deciding what makes sense to do.”

Regulation can also have the unintended consequence of benefiting larger organizations, said Jon Turow, a partner at Madrona Ventures. “Now, in AI, I don’t know how much it can be controlled. But if we do enough regulation, then it really does change the operating landscape, and the effect that it will have is more concentrated power in a few big companies that are able to comply with all the rules,” he said. And that could have a detrimental impact on startups.

Whatever governments ultimately do to regulate AI, there is clearly a wide range of viewpoints. Companies implementing generative AI have to understand that different governments could end up drafting vastly different rules, making compliance extremely challenging (while likely creating opportunities for startups and incumbents alike around generative AI governance).

And as companies look at adding these capabilities to the enterprise software stack, they have to understand that beyond the potentially transformational value of this technology, things could get much more complicated from a regulatory and governance perspective.