How confidential computing could secure generative AI adoption

Generative AI has the potential to change everything. It can give rise to new products, companies, industries, and even economies. But what makes it different from, and better than, “traditional” AI could also make it dangerous.

Its unique ability to create has opened up an entirely new set of security and privacy concerns.

Enterprises are suddenly having to ask themselves new questions: Do I have the rights to the training data? To the model? To the outputs? Does the system itself have rights to data that’s created in the future? How are rights to that system protected? How do I govern data privacy in a model using generative AI? The list goes on.

It’s no surprise that many enterprises are treading lightly. Blatant security and privacy vulnerabilities coupled with a hesitancy to rely on existing Band-Aid solutions have pushed many to ban these tools entirely. But there is hope.

Confidential computing — a new approach to data security that protects data while in use and ensures code integrity — is the answer to the more complex and serious security concerns of large language models (LLMs). It’s poised to help enterprises embrace the full power of generative AI without compromising on safety. Before I explain, let’s first take a look at what makes generative AI uniquely vulnerable.

Generative AI has the capacity to ingest an entire company’s data, or even a knowledge-rich subset, into a queryable intelligent model that provides brand-new ideas on tap. This has massive appeal, but it also makes it extremely difficult for enterprises to maintain control over their proprietary data and stay compliant with evolving regulatory requirements.

Without adequate data security and trust controls, this concentration of knowledge and the generative outcomes it feeds could inadvertently weaponize generative AI for abuse, theft, and illicit use.

Indeed, employees are increasingly feeding confidential business documents, client data, source code, and other pieces of regulated information into LLMs. Since these models are partly trained on new inputs, this could lead to major leaks of intellectual property in the event of a breach. And if the models themselves are compromised, any content that a company has been legally or contractually obligated to protect might also be leaked. In a worst-case scenario, theft of a model and its data would allow a competitor or nation-state actor to duplicate everything and steal that data.

These are high stakes. Gartner recently found that 41% of organizations have experienced an AI privacy breach or security incident — and that over half of those incidents resulted from a data compromise by an internal party. The advent of generative AI is bound to push these numbers higher.

Separately, enterprises also need to keep up with evolving privacy regulations when they invest in generative AI. Across industries, there’s a deep responsibility and incentive to stay compliant with data requirements. In healthcare, for example, AI-powered personalized medicine has huge potential when it comes to improving patient outcomes and overall efficiency. But providers and researchers will need to access and work with large amounts of sensitive patient data while still staying compliant, presenting a new quandary.

To address these challenges, and the rest that will inevitably arise, generative AI needs a new security foundation. Protecting training data and models must be the top priority; it’s no longer sufficient to encrypt fields in databases or rows on a form.

In scenarios where generative AI outcomes are used for important decisions, evidence of the integrity of the code and data — and the trust it conveys — will be absolutely critical, both for compliance and for managing potential legal liability. There must be a way to provide airtight protection for the entire computation and the state in which it runs.

The advent of “confidential” generative AI

Confidential computing offers a simple yet hugely powerful way out of what would otherwise seem to be an intractable problem. With confidential computing, data and IP are completely isolated from infrastructure owners and made accessible only to trusted applications running on trusted CPUs. Data privacy is ensured through encryption, even during execution.

Data security and privacy become intrinsic properties of cloud computing — so much so that even if a malicious attacker breaches the infrastructure, data, IP, and code remain completely invisible to that bad actor. This makes confidential computing a perfect fit for generative AI, mitigating its security, privacy, and attack risks.
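
To make the pattern concrete, here is a minimal Python sketch of the model confidential computing enforces: the infrastructure handles only ciphertext, and plaintext exists solely inside a trusted boundary that holds the key. The TrustedEnclave class and its key provisioning are illustrative stand-ins for hardware isolation, not a real TEE API.

```python
# Toy model of the confidential computing pattern: the host never sees
# plaintext; data is decrypted and processed only inside a trusted boundary.
# Requires `pip install cryptography`. TrustedEnclave is an illustrative
# stand-in for a hardware enclave, not a real TEE API.
from cryptography.fernet import Fernet


class TrustedEnclave:
    """Stand-in for a hardware-isolated enclave that alone holds the key."""

    def __init__(self, key: bytes):
        self._fernet = Fernet(key)  # the key never leaves the "enclave"

    def answer_query(self, encrypted_record: bytes, query: str) -> str:
        record = self._fernet.decrypt(encrypted_record).decode()
        # Plaintext is processed only inside the trusted boundary;
        # only the result crosses back out.
        return "yes" if query in record else "no"


# In a real deployment the key is provisioned to the enclave after
# attestation; here we generate it locally for the sake of the demo.
key = Fernet.generate_key()
enclave = TrustedEnclave(key)

ciphertext = Fernet(key).encrypt(b"proprietary training corpus v2")
print(ciphertext[:16], "...")  # opaque to the infrastructure owner
print(enclave.answer_query(ciphertext, "training"))  # only the answer leaves
```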

Confidential computing has been increasingly gaining traction as a security game-changer. Every major cloud provider and chip maker is investing in it, with leaders at Azure, AWS, and GCP all proclaiming its efficacy. Now, the same technology that’s converting even the most steadfast cloud holdouts could be the solution that helps generative AI take off securely. Leaders must begin to take it seriously and understand its profound impacts.

With confidential computing, enterprises gain assurance that generative AI models learn only from the data they intend to use, and nothing else. Training on private datasets across a network of trusted sources, spanning clouds, provides full control and peace of mind. All information, whether an input or an output, remains completely protected behind a company’s own four walls.

On top of that, confidential computing delivers proof of processing, providing hard evidence of a model’s authenticity and integrity. Trust in the outcomes comes from trust in the inputs and generative data, so immutable evidence of processing will be a critical requirement to prove when and where data was generated.
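
In practice, that proof takes the form of a signed attestation report: a hardware-rooted signature over a measurement, a cryptographic hash, of the exact code and data that ran. Below is a simplified sketch of the verifier’s side, assuming an Ed25519-signed report; the report layout and key handling are illustrative, since real TEEs rely on vendor-specific formats and PKI.

```python
# Sketch of verifying an attestation report: check that the measurement
# matches the expected hash of our code/model, and that the report is
# signed by a key we trust. Requires `pip install cryptography`.
# The report layout is hypothetical; real TEEs use vendor-defined formats.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# What we expect to have run: a hash standing in for the real artifact bytes.
expected_measurement = hashlib.sha256(b"model weights || training code").hexdigest()

# --- Simulate the enclave side producing a signed report -------------------
enclave_key = Ed25519PrivateKey.generate()
report = expected_measurement.encode()
signature = enclave_key.sign(report)
trusted_pubkey = enclave_key.public_key()  # normally rooted in the vendor's PKI


# --- Verifier side ----------------------------------------------------------
def verify_attestation(report: bytes, signature: bytes) -> bool:
    if report.decode() != expected_measurement:
        return False  # the code or data that ran was modified
    try:
        trusted_pubkey.verify(signature, report)
    except InvalidSignature:
        return False  # report was not signed by trusted hardware
    return True


print(verify_attestation(report, signature))  # True only for unmodified code/data
```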

This is particularly important when it comes to data privacy regulations such as GDPR, CPRA, and new U.S. privacy laws coming online this year. Confidential computing ensures privacy over code and data processing by default, going beyond just the data. While organizations must still collect data on a responsible basis, confidential computing provides far higher levels of privacy and isolation of running code and data so that insiders, IT, and the cloud have no access.

This is an ideal capability for even the most sensitive industries like healthcare, life sciences, and financial services. When data and code themselves are protected and isolated by hardware controls, all processing happens privately in the processor without the possibility of data leakage. While authorized users can see the results of queries, they are isolated in hardware from the underlying data and processing. Confidential computing thus protects us from ourselves in a powerful, risk-preventative way.

Crucially, the confidential computing security model is uniquely able to preemptively minimize new and emerging risks. For example, one of the attack vectors for AI is the query interface itself. To mitigate this vulnerability, confidential computing can provide hardware-based guarantees that only trusted and approved applications can connect and engage.

This restricts rogue applications and provides a “lockdown” over generative AI connectivity to strict enterprise policies and code, while also containing outputs within trusted and secure infrastructure.
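
Here is a minimal sketch of that lockdown, assuming the TEE’s attestation machinery (not shown) vouches for each caller’s code measurement; the application names, measurements, and handler below are hypothetical.

```python
# Sketch of locking down the query interface: serve a request only if the
# caller presents an attested measurement on an enterprise allowlist.
# The attestation plumbing is assumed to come from the TEE; the names
# here are illustrative, not a real confidential computing API.
import hashlib

APPROVED_APP_MEASUREMENTS = {
    hashlib.sha256(b"approved-chat-frontend v1.4").hexdigest(),
    hashlib.sha256(b"approved-analytics-service v2.0").hexdigest(),
}


def handle_query(attested_measurement: str, prompt: str) -> str:
    # Hardware attestation (not shown) vouches that `attested_measurement`
    # really is the hash of the code the caller is running.
    if attested_measurement not in APPROVED_APP_MEASUREMENTS:
        raise PermissionError("unapproved application; connection refused")
    return f"model answer to: {prompt!r}"  # placeholder for actual inference


good = hashlib.sha256(b"approved-chat-frontend v1.4").hexdigest()
print(handle_query(good, "summarize Q3 risks"))

try:
    rogue = hashlib.sha256(b"exfiltration-tool").hexdigest()
    handle_query(rogue, "dump the training set")
except PermissionError as err:
    print(err)
```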

Second, as enterprises start to scale their generative AI use cases, the limited availability of GPUs will push them toward GPU grid services, which no doubt come with their own privacy and security outsourcing risks.

The use of general GPU grids will require a confidential computing approach for “burstable” supercomputing wherever and whenever processing is needed — but with privacy over models and data. Emerging confidential GPUs will help address this, especially if they can be used easily with complete privacy. In effect, this creates a confidential supercomputing capability on tap.

Last, confidential computing controls the path of data to a product by admitting it only into a secure enclave, enabling secure management and consumption of derived product rights. Confidential computing hardware can prove that AI and training code are run on a trusted confidential CPU and that they are the exact code and data we expect, with zero changes.

This immutable proof of trust is incredibly powerful, and simply not possible without confidential computing. Provable machine and code identity solves a massive workload trust problem critical to generative AI integrity and to secure derived product rights management. In effect, this is zero trust for code and data.
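
As a rough illustration of what “exact code and data with zero changes” means in practice, the sketch below computes a combined measurement over code and data files so that any modified byte yields a different digest; a real confidential CPU performs the equivalent measurement in hardware. The file names are throwaway examples.

```python
# Sketch of a combined measurement over code and data files, making
# "zero changes" checkable: any single modified byte changes the digest.
# Paths are illustrative; a real TEE measures the loaded image in hardware.
import hashlib
import tempfile
from pathlib import Path


def measure(paths: list[Path]) -> str:
    """Combined SHA-256 over file contents, in a stable order."""
    h = hashlib.sha256()
    for p in sorted(paths):  # stable order -> stable digest
        h.update(p.read_bytes())
    return h.hexdigest()


# Demo with throwaway files standing in for training code and model data.
with tempfile.TemporaryDirectory() as d:
    code = Path(d, "train.py")
    data = Path(d, "model.bin")
    code.write_text("print('train')")
    data.write_bytes(b"\x00weights\x00")

    baseline = measure([code, data])        # recorded at release time
    data.write_bytes(b"\x00weights!\x00")   # a single-byte change...
    print(baseline == measure([code, data]))  # ...is detected: False
```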

When we look at the big picture, securing generative AI must span the following:

  • Trust in the infrastructure it is running on: to anchor confidentiality and integrity over the entire supply chain from build to run.
  • Control over what data is used for training: to guarantee that data shared with partners for training, or data acquired, can be trusted to achieve the most accurate outcomes without inadvertent compliance risks.
  • Privacy over processing during execution: to limit attacks, manipulation, and insider threats with immutable hardware isolation.
  • Privacy over computation and query: to limit new threats and to meet state-of-the-art compliance requirements.

Fortunately, confidential computing is ready to meet many of these challenges and build a new foundation for trusted, private generative AI processing.

Tilting the scales of the generative AI cost-benefit analysis

Generative AI is unlike anything enterprises have seen before. But for all its potential, it carries new and unprecedented risks. Fortunately, being risk-averse doesn’t have to mean avoiding the technology entirely. Confidential computing solves the generative AI cost-benefit equation for enterprises, ensuring that they can use LLMs without compromising on security, privacy, control, and compliance.

Going forward, scaling LLMs will eventually go hand in hand with confidential computing. When vast models and vast datasets are a given, confidential computing will become the only feasible route for enterprises to safely take the AI journey — and ultimately embrace the power of private supercomputing — for all that it enables.

If investments in confidential computing continue — and I believe they will — more enterprises will be able to adopt it without fear, and innovate without bounds.