AI

Securing generative AI across the technology stack

Comment

Pixelated data representation with red physical cube shapes on a black artificial background, tile-able composition
Image Credits: matejmo / Getty Images

Connie Qian

Contributor

Connie Qian is a vice president at Forgepoint Capital. She focuses on early-stage enterprise software companies in security and adjacent sectors, including AI/ML, infrastructure software, and fintech.

Research shows that by 2026, over 80% of enterprises will be leveraging generative AI models, APIs, or applications, up from less than 5% today.

This rapid adoption raises new considerations regarding cybersecurity, ethics, privacy, and risk management. Among companies using generative AI today, only 38% mitigate cybersecurity risks, and just 32% work to address model inaccuracy.

My conversations with security practitioners and entrepreneurs have concentrated on three key factors:

  1. Enterprise generative AI adoption brings additional complexities to security challenges, such as overprivileged access. For instance, while conventional data loss prevention tools effectively monitor and control data flows into AI applications, they often fall short with unstructured data and more nuanced factors such as ethical rules or biased content within prompts.
  2. Market demand for various GenAI security products is closely tied to the trade-off between ROI potential and inherent security vulnerabilities of the underlying use cases for which the applications are employed. This balance between opportunity and risk continues to evolve based on the ongoing development of AI infrastructure standards and the regulatory landscape.
  3. Much like traditional software, generative AI must be secured across all architecture levels, particularly the core interface, application, and data layers. Below is a snapshot of various security product categories within the technology stack, highlighting areas where security leaders perceive significant ROI and risk potential.
Table showing data for securing GenAI tech stack
Image Credits: Forgepoint Capital

Interface layer: Balancing usability with security

Businesses see immense potential in leveraging customer-facing chatbots, particularly customized models trained on industry and company-specific data. The user interface is susceptible to prompt injections, a variant of injection attacks aimed at manipulating the model’s response or behavior.

In addition, chief information security officers (CISOs) and security leaders are increasingly under pressure to enable GenAI applications within their organizations. While the consumerization of the enterprise has been an ongoing trend, the rapid and widespread adoption of technologies like ChatGPT has sparked an unprecedented, employee-led drive for their use in the workplace.

Widespread adoption of GenAI chatbots will prioritize the ability to accurately and quickly intercept, review, and validate inputs and corresponding outputs at scale without diminishing user experience. Existing data security tooling often relies on preset rules, resulting in false positives. Tools like Protect AI’s Rebuff and Harmonic Security leverage AI models to dynamically determine whether or not the data passing through a GenAI application is sensitive.

Due to the inherently non-deterministic nature of GenAI tools, a security vendor would need to understand the model’s expected behavior and tailor its response based on the type of data it seeks to protect, such as personal identifiable information (PII) or intellectual property. These can be highly variable by use case as GenAI applications are often specialized for particular industries, such as finance, transportation, and healthcare.

Like the network security market, this segment could eventually support multiple vendors. However, in this area of significant opportunity, I expect to see a competitive rush to establish brand recognition and differentiation among new entrants initially.

Application layer: An evolving enterprise landscape

Generative AI processes are predicated on sophisticated input and output dynamics. Yet they also grapple with threats to model integrity, including operational adversarial attacks, decision bias, and the challenge of tracing decision-making processes. Open source models benefit from collaboration and transparency but can be even more susceptible to model evaluation and explainability challenges.

While security leaders see substantial potential for investment in validating the safety of ML models and related software, the application layer still faces uncertainty. Since enterprise AI infrastructure is relatively less mature outside established technology firms, ML teams rely primarily on their existing tools and workflows, such as Amazon SageMaker, to test for misalignment and other critical functions today.

Over the longer term, the application layer could be the foundation for a stand-alone AI security platform, particularly as the complexity of model pipelines and multimodel inference increase the attack surface. Companies like HiddenLayer provide detection and response capabilities for open source ML models and related software. Others, like Calypso AI, have developed a testing framework to stress-test ML models for robustness and accuracy.

Technology can help ensure models are fine-tuned and trained within a controlled framework, but regulation will likely play a role in shaping this landscape. Proprietary models in algorithmic trading became extensively regulated after the 2007–2008 financial crisis. While generative AI applications present different functions and associated risks, their wide-ranging implications for ethical considerations, misinformation, privacy, and intellectual property rights are drawing regulatory scrutiny. Early initiatives by governing bodies include the European Union’s AI Act and the Biden administration’s Executive Order on AI.

Data layer: Building a secure foundation

The data layer is the foundation for training, testing, and operating ML models. Proprietary data is regarded as the core asset of generative AI companies, not just the models, despite the impressive advancements in foundational LLMs over the past year.

Generative AI applications are vulnerable to threats like data poisoning, both intentional and unintentional, and data leakage, mainly through vector databases and plug-ins linked to third-party AI models. Despite some high-profile events around data poisoning and leakage, security leaders I’ve spoken with didn’t identify the data layer as a near-term risk area compared to the interface and application layers. Instead, they often compared inputting data into GenAI applications to standard SaaS applications, similar to searching in Google or saving files to Dropbox.

This may change as early research suggests that data poisoning attacks may be easier to execute than previously thought, requiring less than 100 high-potency samples rather than millions of data points.

For now, more immediate concerns around data were closer to the interface layer, particularly around the capabilities of tools like Microsoft Copilot to index and retrieve data. Although such tools respect existing data access restrictions, their search functionalities complicate the management of user privileges and excessive access.

Integrating generative AI adds another layer of complexity, making it challenging to trace data back to its origins. Solutions like data security posture management can aid in data discovery, classification, and access control. However, it requires considerable effort from security and IT teams to ensure the appropriate technology, policies, and processes are in place.

Ensuring data quality and privacy will raise significant new challenges in an AI-first world due to the extensive data required for model training. Synthetic data and anonymization such as Gretel AI, while applicable broadly for data analytics, can help prevent scenarios of unintentional data poisoning through inaccurate data collection. Meanwhile, differential privacy vendors like Sarus can help restrict sensitive information during data analysis and prevent entire data science teams from accessing production environments, thereby mitigating the risk of data breaches.

The road ahead for generative AI security

As organizations increasingly rely on generative AI capabilities, they will need AI security platforms to be successful. This market opportunity is ripe for new entrants, especially as the AI infrastructure and regulatory landscape evolves. I’m eager to meet the security and infrastructure startups enabling this next phase of the AI revolution — ensuring enterprises can safely and securely innovate and grow.

More TechCrunch

StrictlyVC events deliver exclusive insider content from the Silicon Valley & Global VC scene while creating meaningful connections over cocktails and canapés with leading investors, entrepreneurs and executives. And TechCrunch…

Meesho, a leading e-commerce startup in India, has secured $275 million in a new funding round.

Meesho, an Indian social commerce platform with 150M transacting users, raises $275M

Some Indian government websites have allowed scammers to plant advertisements capable of redirecting visitors to online betting platforms. TechCrunch discovered around four dozen “gov.in” website links associated with Indian states,…

Scammers found planting online betting ads on Indian government websites

Around 550 employees across autonomous vehicle company Motional have been laid off, according to information taken from WARN notice filings and sources at the company.  Earlier this week, TechCrunch reported…

Motional cut about 550 employees, around 40%, in recent restructuring, sources say

The deck included some redacted numbers, but there was still enough data to get a good picture.

Pitch Deck Teardown: Cloudsmith’s $15M Series A deck

The company is describing the event as “a chance to demo some ChatGPT and GPT-4 updates.”

OpenAI’s ChatGPT announcement: What we know so far

Unlike ChatGPT, Claude did not become a new App Store hit.

Anthropic’s Claude sees tepid reception on iOS compared with ChatGPT’s debut

Welcome to Startups Weekly — Haje‘s weekly recap of everything you can’t miss from the world of startups. Sign up here to get it in your inbox every Friday. Look,…

Startups Weekly: Trouble in EV land and Peloton is circling the drain

Scarcely five months after its founding, hard tech startup Layup Parts has landed a $9 million round of financing led by Founders Fund to transform composites manufacturing. Lux Capital and Haystack…

Founders Fund leads financing of composites startup Layup Parts

AI startup Anthropic is changing its policies to allow minors to use its generative AI systems — in certain circumstances, at least.  Announced in a post on the company’s official…

Anthropic now lets kids use its AI tech — within limits

Zeekr’s market hype is noteworthy and may indicate that investors see value in the high-quality, low-price offerings of Chinese automakers.

The buzziest EV IPO of the year is a Chinese automaker

Venture capital has been hit hard by souring macroeconomic conditions over the past few years and it’s not yet clear how the market downturn affected VC fund performance. But recent…

VC fund performance is down sharply — but it may have already hit its lowest point

The person who claims to have 49 million Dell customer records told TechCrunch that he brute-forced an online company portal and scraped customer data, including physical addresses, directly from Dell’s…

Threat actor says he scraped 49M Dell customer addresses before the company found out

The social network has announced an updated version of its app that lets you offer feedback about its algorithmic feed so you can better customize it.

Bluesky now lets you personalize main Discover feed using new controls

Microsoft will launch its own mobile game store in July, the company announced at the Bloomberg Technology Summit on Thursday. Xbox president Sarah Bond shared that the company plans to…

Microsoft is launching its mobile game store in July

Smart ring maker Oura is launching two new features focused on heart health, the company announced on Friday. The first claims to help users get an idea of their cardiovascular…

Oura launches two new heart health features

Keeping up with an industry as fast-moving as AI is a tall order. So until an AI can do it for you, here’s a handy roundup of recent stories in the world…

This Week in AI: OpenAI considers allowing AI porn

Garena is quietly developing new India-themed games even though Free Fire, its biggest title, has still not made a comeback to the country.

Garena is quietly making India-themed games even as Free Fire’s relaunch remains doubtful

The U.S.’ NHTSA has opened a fourth investigation into the Fisker Ocean SUV, spurred by multiple claims of “inadvertent Automatic Emergency Braking.”

Fisker Ocean faces fourth federal safety probe

CoreWeave has formally opened an office in London that will serve as its European headquarters and home to two new data centers.

CoreWeave, a $19B AI compute provider, opens European HQ in London with plans for 2 UK data centers

The Series C funding, which brings its total raise to around $95 million, will go toward mass production of the startup’s inaugural products

AI chip startup DEEPX secures $80M Series C at a $529M valuation 

A dust-up between Evolve Bank & Trust, Mercury and Synapse has led TabaPay to abandon its acquisition plans of troubled banking-as-a-service startup Synapse.

Infighting among fintech players has caused TabaPay to ‘pull out’ from buying bankrupt Synapse

The problem is not the media, but the message.

Apple’s ‘Crush’ ad is disgusting

The Twitter for Android client was “a demo app that Google had created and gave to us,” says Particle co-founder and ex-Twitter employee Sara Beykpour.

Google built some of the first social apps for Android, including Twitter and others

WhatsApp is updating its mobile apps for a fresh and more streamlined look, while also introducing a new “darker dark mode,” the company announced on Thursday. The messaging app says…

WhatsApp’s latest update streamlines navigation and adds a ‘darker dark mode’

Plinky lets you solve the problem of saving and organizing links from anywhere with a focus on simplicity and customization.

Plinky is an app for you to collect and organize links easily

The keynote kicks off at 10 a.m. PT on Tuesday and will offer glimpses into the latest versions of Android, Wear OS and Android TV.

Google I/O 2024: How to watch

For cancer patients, medicines administered in clinical trials can help save or extend lives. But despite thousands of trials in the United States each year, only 3% to 5% of…

Triomics raises $15M Series A to automate cancer clinical trials matching

Welcome back to TechCrunch Mobility — your central hub for news and insights on the future of transportation. Sign up here for free — just click TechCrunch Mobility! Tap, tap.…

Tesla drives Luminar lidar sales and Motional pauses robotaxi plans

The newly announced “Public Content Policy” will now join Reddit’s existing privacy policy and content policy to guide how Reddit’s data is being accessed and used by commercial entities and…

Reddit locks down its public data in new content policy, says use now requires a contract