
It’s critical to regulate AI within the multi-trillion-dollar API economy


Alex Akimov

Contributor

With two decades of tech leadership experience, Alex Akimov, former head of API at Adyen, now revolutionizes embedded finance at Monite by building best-in-class APIs for effortless client integrations.

Application programming interfaces (APIs) power the modern internet, including most websites, mobile apps, and IoT devices we use. And, thanks to the ubiquity of the internet in nearly all parts of the planet, it is APIs that give people the power to connect to almost any functionality they want. This phenomenon, often referred to as the “API economy,” is projected to have a total market value of $14.2 trillion by 2027.

Given the rising relevance of APIs in our daily lives, they have caught the attention of multiple authorities, who have introduced key regulations. The first level is defined by organizations like the IEEE and W3C, which set the standards for technical capabilities and limitations that define the technology of the whole internet.

Security and data privacy aspects are covered by internationally acknowledged requirements such as ISO 27001, GDPR, and others. Their main goal is to provide a framework for the areas underpinned by APIs.

But now, with AI in the mix, regulation has become much more complicated.

How AI integration changed the API landscape

Many AI companies use the benefits of API technologies to bring their products to every home and workplace. The most prominent example here is OpenAI’s early release of its API to the public. This combination would not have been possible just two decades ago, when neither APIs nor AI had reached the level of maturity we began observing in 2022.
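As a concrete illustration, most AI products are consumed over a plain HTTP API that accepts a JSON request. The sketch below builds such a request body in the widely used chat-completion style; the model name and exact field layout are illustrative assumptions and vary by provider.

```python
import json


def build_chat_request(model: str, prompt: str) -> str:
    """Build the JSON body for a chat-completion-style API call.

    The field layout mirrors the common chat format popularized by
    OpenAI's API; real providers differ in the details.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(payload)


body = build_chat_request("gpt-4", "Summarize the API economy in one line.")
print(body)
```

Sending a body like this to a provider’s endpoint, plus an API key header, is essentially all it takes to put an LLM behind any product — which is why the API channel spread these models so quickly.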

Code creation or co-creation with AI has quickly become the norm in software development, especially in the complicated process of API creation and deployment. Tools like GitHub Copilot and ChatGPT can write the code to integrate with any API, and soon they will shape the patterns most software engineers use to create APIs, sometimes without those engineers understanding the generated code deeply enough.

We also see companies like Superface and Blobr innovating in the field of API integration, making it possible to use AI to connect to any API you want in the way you would talk to a chatbot.
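Under the hood, such “talk to any API” products typically work by letting a model emit structured tool calls that a thin dispatcher routes to real endpoints. A minimal sketch of that dispatch layer, with a stubbed handler standing in for a live HTTP call (the tool name and call shape are hypothetical):

```python
from typing import Callable, Dict

# Registry of callable "tools" an LLM can invoke; in a real product each
# handler would wrap an HTTP call to a third-party API.
TOOLS: Dict[str, Callable[..., str]] = {}


def tool(name: str):
    """Decorator that registers a function under a tool name."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register


@tool("get_weather")
def get_weather(city: str) -> str:
    # Stub: a real handler would call a weather API here.
    return f"(stub) weather for {city}"


def dispatch(tool_call: dict) -> str:
    """Execute a tool call emitted by a model,
    e.g. {"name": "get_weather", "arguments": {"city": "Berlin"}}."""
    fn = TOOLS[tool_call["name"]]
    return fn(**tool_call["arguments"])


print(dispatch({"name": "get_weather", "arguments": {"city": "Berlin"}}))
```

The chatbot experience is just this loop repeated: the model chooses a tool, the dispatcher calls the API, and the result is fed back into the conversation.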

Various kinds of AI have been around for a while, but it’s generative AI (and large language models [LLMs]) that completely changed the risk landscape. GenAI has the ability to create something in endless ways, and this creativity is either controlled by humans or — in the case of artificial general intelligence (AGI) — will be beyond our current ability to control.

This last idea provides a clear dichotomy for our future efforts around AI regulation, as it raises issues as to what is specifically being regulated and who is responsible for a given incident.

What exactly are we regulating?

New regulatory initiatives will first target areas where AIs perform specific actions driven by human intent. The challenges related to these activities include misinformation, cybercrime, copyright infringement, and more. Here, a lot of regulation is actively emerging; perhaps the most far-reaching example is the EU AI Act.

Strictly speaking, it’s not the AI itself that should be regulated here; it is more about how different people and organizations use AI capabilities, what intent they have, and whether that usage benefits society.

If we compare this with the recent developments and regulations in the API industry, it is safe to say that a lot of “human-controlled AI” regulations will be connected to data privacy as a whole and to the banking and financial sectors in particular.

However, the most intriguing and perhaps near-impossible part will be the attempt to regulate the AI instances themselves. Regardless of whether we consider any AI instance a true AGI, it still has the “creativity” component, which, combined with APIs, can reach almost anywhere with an internet connection and a machine to execute code.

AI and APIs combined: Problem scenarios

To understand the complexity of these regulations and controls, let’s explore some instances where API and AI are intertwined:

  • API integration between two software systems has always been difficult, and many companies have invested heavily in developer experience to make it easier for software engineers to use their APIs. However, soon we will observe machine-to-machine APIs, where an AI bot can connect to any API and switch between them seamlessly.
  • AI bots will increasingly be able to solve technical tasks fully autonomously. They can learn from their mistakes, replicate themselves, and follow the mission that drives their existence. One of the most fascinating and scary recent examples is ChaosGPT, whose stated goal is to do as much harm as possible.
  • AI can be trained to create any other programming language or API, because an API is essentially a technical, artificial language. This means new languages could be developed by AIs that only they can understand.

Combining all of these paints a gloomy picture, in which an autonomous AI can spread itself via APIs and create as many other APIs as it wants. These creations would be understandable only by other instances of that AI, which could find security holes and exploit them to work toward any goal — set by either a human or an AI component.
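The machine-to-machine scenario above is already mechanically simple: an agent that reads an OpenAPI-style description can enumerate every operation a service exposes and pick one without human involvement. A small sketch, using a hypothetical spec fragment:

```python
# Hypothetical minimal OpenAPI-style fragment an agent might consume.
spec = {
    "paths": {
        "/payments": {"post": {"operationId": "createPayment"}},
        "/payments/{id}": {"get": {"operationId": "getPayment"}},
    }
}


def list_operations(openapi: dict):
    """Enumerate callable operations so an agent can pick one autonomously."""
    ops = []
    for path, methods in openapi["paths"].items():
        for method, meta in methods.items():
            ops.append((meta["operationId"], method.upper(), path))
    return ops


for op in list_operations(spec):
    print(op)
```

Once an agent can turn any machine-readable spec into a menu of callable operations, "switching between APIs seamlessly" is just a lookup — which is exactly what makes the scenario hard to fence in.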

How to deal with a regulatory nightmare

So can AI using APIs be regulated at all? This problem is part of the AI alignment discussion, which can provide a framework for efficient AI control. However, it’s the API sector that makes this risk grow dramatically and requires a more sophisticated approach to possible regulations.

There are definitely a lot of security practices and regulatory controls we need to put in place when creating new AI systems, and wherever these systems can be used with APIs. For example, certain technical standards and capabilities should be developed to detect unwanted and potentially harmful activity by AIs of any kind.
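One plausible shape for such detection is behavioral monitoring on the API provider's side — for example, flagging callers whose traffic deviates sharply from expected volumes. A deliberately crude sketch (the caller names and threshold are invented for illustration):

```python
from collections import Counter


def flag_suspicious(requests_per_caller: Counter, threshold: int = 100) -> list:
    """Return callers whose request volume exceeds the threshold.

    A crude stand-in for the behavioral detection the text argues for;
    real systems would inspect patterns and payloads, not just volume.
    """
    return sorted(c for c, n in requests_per_caller.items() if n > threshold)


traffic = Counter({"human-app": 40, "ai-bot-7": 5000})
print(flag_suspicious(traffic))
```

The hard regulatory question is not computing such a signal but agreeing on what counts as "unwanted" activity and who is obliged to act on the flag.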

There should be a way to trace back who might be responsible for these kinds of activities and hold them liable when they break the law. Potentially, there might be a technical solution that allows us to embed an “AI alignment” component into any possible AI instance and thus ensure it always stays within the existing legal/regulatory frameworks.
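On the traceability side, one hypothetical mechanism is an API gateway that refuses anonymous AI callers and writes every call to an audit log, so activity can later be attributed to a responsible party. The identity scheme below is an assumption, not an existing standard:

```python
from datetime import datetime, timezone
from typing import List, Tuple

# Append-only record of (timestamp, caller identity, endpoint).
AUDIT_LOG: List[Tuple[str, str, str]] = []


def call_api(agent_id: str, endpoint: str) -> str:
    """Gate an API call on a declared caller identity and record it.

    'agent_id' is a hypothetical credential identifying the AI instance
    (or the human or organization operating it) for later accountability.
    """
    if not agent_id:
        raise PermissionError("anonymous AI callers are rejected")
    stamp = datetime.now(timezone.utc).isoformat()
    AUDIT_LOG.append((stamp, agent_id, endpoint))
    return f"OK: {endpoint} called by {agent_id}"


print(call_api("agent-123", "/v1/transfer"))
```

The design choice here is to push accountability to the point of use: every API call carries an identity, and liability questions reduce to reading the log.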

Inventing and enforcing these new mechanisms might be one of our biggest challenges in the coming decades.
