It’s critical to regulate AI within the multi-trillion-dollar API economy

Application programming interfaces (APIs) power the modern internet, including most of the websites, mobile apps, and IoT devices we use. And thanks to the near-universal reach of the internet, APIs give people the power to connect to almost any functionality they want. This phenomenon, often referred to as the “API economy,” is projected to reach a total market value of $14.2 trillion by 2027.

Given the rising relevance of APIs in our daily lives, they have caught the attention of multiple authorities, who have introduced key regulations. The first level is defined by organizations like IEEE and W3C, which set the standards for technical capabilities and limitations that define the technology of the whole internet.

Security and data privacy are covered by internationally recognized frameworks such as ISO 27001, GDPR, and others. Their main goal is to provide a framework for the areas underpinned by APIs.

But now, with AI in the picture, this landscape has become much more complicated to regulate.

How AI integration changed the API landscape

Various kinds of AI have been around for a while, but it’s generative AI (gen AI) and large language models (LLMs) that completely changed the risk landscape.

Many AI companies harness API technologies to bring their products into every home and workplace. The most prominent example here is OpenAI’s early release of its API to the public. This combination would not have been possible just two decades ago, when neither APIs nor AI were at the level of maturity we began observing in 2022.

Code creation or co-creation with AI has quickly become the norm in software development, especially in the complicated process of API creation and deployment. Tools like GitHub Copilot and ChatGPT can write the code to integrate with almost any API, and soon they will shape the patterns most software engineers use to create APIs, sometimes without those engineers understanding the generated code deeply enough.
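
To make that concrete, here is a minimal sketch of the kind of integration boilerplate such tools routinely generate. The endpoint, payload, and environment variable are hypothetical, not taken from any real service:

```python
# A minimal sketch of AI-generated REST integration code.
# The service, endpoint, and EXAMPLE_API_KEY variable are hypothetical.
import os
import requests

API_BASE = "https://api.example.com/v1"  # hypothetical service

def create_order(item_id: str, quantity: int) -> dict:
    """Create an order via a hypothetical REST API and return the parsed response."""
    response = requests.post(
        f"{API_BASE}/orders",
        headers={"Authorization": f"Bearer {os.environ['EXAMPLE_API_KEY']}"},
        json={"item_id": item_id, "quantity": quantity},
        timeout=10,
    )
    response.raise_for_status()  # surface HTTP errors instead of silently failing
    return response.json()

if __name__ == "__main__":
    print(create_order("sku-123", 2))
```

Code like this is easy to generate and easy to ship, which is exactly why engineers may deploy it without fully understanding its error handling or security implications.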

We also see companies like Superface and Blobr innovating in the field of API integration, using AI to let you connect to any API you want the way you would talk to a chatbot.

Gen AI has the ability to create in endless ways, and this creativity is either controlled by humans or, in the case of artificial general intelligence (AGI), will be beyond our current ability to control.

This last idea presents a clear dichotomy for our future efforts around AI regulation, as it raises the questions of what, specifically, is being regulated and who is responsible for a given incident.

What exactly are we regulating?

New regulatory initiatives will first target areas where AI performs specific actions driven by human intent. The challenges related to these activities include misinformation, cybercrime, copyright, and other areas, and many regulations are actively emerging here. Perhaps the most far-reaching of them is the EU AI Act.

Strictly speaking, it’s not the AI itself that should be regulated here; it is more about how different people and organizations use AI capabilities, what intent they have, and whether that usage aligns with what is beneficial for society.

If we compare this with recent developments and regulations in the API industry, it is safe to say that many “human-controlled AI” regulations will be connected to data privacy in general and to the banking and financial sectors in particular.

However, the most intriguing and perhaps near-impossible part will be the attempt to regulate AI instances themselves. Regardless of whether we consider any AI instance a true AGI, it still has the “creativity” component, which, combined with APIs, can reach almost anywhere there is an internet connection and a machine to execute code.

AI and APIs combined: Problem scenarios

To understand the complexity of these regulations and controls, let’s explore some instances where APIs and AI are intertwined:

  • API integration between two software systems has always been difficult, and many companies have invested heavily in developer experience to make their APIs easier for software engineers to use. Soon, however, we will see machine-to-machine APIs, where an AI bot can connect to any API and switch between them seamlessly (see the sketch after this list).
  • AI bots will be able to solve any technical task, and to do it in a fully autonomous way. They can learn from their mistakes, replicate themselves, and follow the mission that drives their existence. One of the most fascinating and scary recent examples is ChaosGPT, whose stated goal is to do as much harm as possible.
  • AI can be trained to create new programming languages or APIs, because an API is essentially a technical, artificial language. This means there may be new languages, developed by AIs, that can be understood only by them.
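
Here is a speculative sketch of what the machine-to-machine pattern in the first bullet might look like: an agent reads an OpenAPI description and calls operations it has never seen before. The spec URL and operation names are hypothetical:

```python
# A speculative sketch of machine-to-machine API use: an agent discovers
# operations from an OpenAPI 3.x document and invokes them generically.
# The spec URL and operation names are hypothetical.
import requests

def discover_operations(spec_url: str) -> dict:
    """Map operationId -> (HTTP method, path template) from an OpenAPI document."""
    spec = requests.get(spec_url, timeout=10).json()
    ops = {}
    for path, methods in spec.get("paths", {}).items():
        for method, details in methods.items():
            if isinstance(details, dict) and "operationId" in details:
                ops[details["operationId"]] = (method.upper(), path)
    return ops

def invoke(base_url: str, ops: dict, operation_id: str, **path_params) -> dict:
    """Call a discovered operation; an AI agent would pick operation_id itself."""
    method, path = ops[operation_id]
    response = requests.request(
        method, base_url + path.format(**path_params), timeout=10
    )
    response.raise_for_status()
    return response.json()

# Usage against a hypothetical service:
# ops = discover_operations("https://api.example.com/openapi.json")
# print(invoke("https://api.example.com", ops, "getOrder", orderId="42"))
```

Nothing here requires a human in the loop: once an agent can read a machine-readable spec, switching between APIs is a lookup, not an engineering project.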

Combining all of these might paint a gloomy picture, in which an autonomous AI spreads itself through APIs and creates as many new APIs as it wants, creations understandable only by other instances of that AI. It could then find security holes and use them to work toward any goal, whether set by a human or by an AI component.

How to deal with a regulatory nightmare

So can AI’s use of APIs be regulated at all? This problem is part of the AI alignment discussion, which can provide a framework for effective AI control. However, it is the API sector that makes this risk grow dramatically, and it requires a more sophisticated approach to possible regulation.

There are many security practices and regulatory controls we need to put in place, not only when creating new AI systems but everywhere those systems can be used with APIs. For example, technical standards and capabilities should be developed to detect unwanted and potentially harmful activity by AI of any kind.
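
No such standard exists yet, but one hypothetical direction is API-gateway middleware that flags traffic patterns typical of autonomous clients. The thresholds and heuristics below are illustrative assumptions, not an established control:

```python
# A hypothetical sketch of one detection control: flag clients whose request
# rate and endpoint breadth look machine-driven. Thresholds are assumptions.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10
MAX_REQUESTS = 100       # assumed rate threshold within the window
MAX_DISTINCT_PATHS = 50  # assumed breadth threshold (endpoint scanning)

_history = defaultdict(deque)  # client_id -> deque of (timestamp, path)

def looks_autonomous(client_id: str, path: str) -> bool:
    """Return True if this client's recent traffic resembles an autonomous agent."""
    now = time.monotonic()
    history = _history[client_id]
    history.append((now, path))
    # Drop requests that fell out of the sliding window.
    while history and now - history[0][0] > WINDOW_SECONDS:
        history.popleft()
    too_fast = len(history) > MAX_REQUESTS
    too_broad = len({p for _, p in history}) > MAX_DISTINCT_PATHS
    return too_fast or too_broad
```

Real standards would need far richer signals than rate and breadth, but even a crude filter like this illustrates where such controls would live: at the API boundary, not inside the model.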

There should be a way to trace who might be responsible for these kinds of activities and to hold them liable when they break the law. There might even be a technical solution that lets us embed an “AI alignment” component into any possible AI instance, ensuring it always stays within existing legal and regulatory frameworks.
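
One speculative shape for the traceability half of that idea: every AI-originated API call carries a signed provenance token naming the model and its responsible operator. The token format, fields, and key handling below are assumptions for illustration only:

```python
# A hypothetical provenance mechanism for AI-originated API calls.
# Token format, claim fields, and key distribution are illustrative assumptions.
import hashlib
import hmac
import json
import time

def sign_provenance(secret: bytes, operator: str, model_id: str) -> str:
    """Build a tamper-evident token identifying the operator behind an AI agent."""
    claims = {"operator": operator, "model": model_id, "ts": int(time.time())}
    payload = json.dumps(claims, sort_keys=True)
    signature = hmac.new(secret, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{signature}"

def verify_provenance(secret: bytes, token: str) -> dict | None:
    """Return the claims if the signature checks out, else None."""
    payload, _, signature = token.rpartition(".")
    expected = hmac.new(secret, payload.encode(), hashlib.sha256).hexdigest()
    return json.loads(payload) if hmac.compare_digest(signature, expected) else None
```

An API provider that required such a token could refuse anonymous machine traffic outright, giving regulators the accountability trail that today simply does not exist.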

Inventing and enforcing these new mechanisms might be one of our biggest challenges in the coming decades.