
Advancing generative AI exploration safely and securely


Single wire holding together frayed ends of model wiring harness
Image Credits: Ray Massey / Getty Images

Jason Rader

Contributor

Jason Rader is chief information security officer for Insight Enterprises, a Fortune 500 solutions integrator accelerating digital transformation by unlocking the power of people and technology.

Security concerns are inextricably intertwined with the exploration and implementation of generative AI. According to a recent report featuring data we commissioned, 49% of business leaders consider safety and security risks a top concern, while 38% cited human error or human-caused data breaches arising from a lack of understanding of how to use GPT tools.

While these concerns are valid, the benefits early adopters stand to gain far outweigh the risks, while limiting integration carries downsides of its own.

I want to share what I have learned from helping our teammates and clients alike understand why security should not be an afterthought but a prerequisite for integrating AI into the business, and some best practices for doing so.

The AI conversation starts with a safe-use policy

Companies understand the urgency with which they need to respond to the new security risks AI presents. In fact, according to the report referenced above, 81% of business leaders said their company already has implemented or was in the process of establishing user policies around generative AI.

However, because of the rapidly evolving nature of the technology — with new applications and use cases emerging every day — the policy should be continuously updated to address emerging risks and challenges.

Guardrails for testing and learning are essential to accelerating exploration while minimizing security risks. The policy also should not be created in a silo. Representation from across the business is important to understand how the technology is being or could be used by each function to account for unique security risks.

Importantly, skunkworks exploration of AI should not be banned altogether. Companies that resist it out of fear will no longer have to worry about competitors eroding their market share; they’ve already done that for themselves.

Enabling citizen developers

To ensure we use AI in a safe manner, we first gave our citizen developers carte blanche to use a private instance of our large language model, Insight GPT. This has not only helped us identify potential use cases but also allowed us to stress test its outputs, enabling us to make continued refinements.

One extraordinary use case popped up when one of our warehouse teammates found a way to increase order-fulfillment productivity by asking Insight GPT to write a script in SAP that automated part of their workload. While the result was fantastic, it could easily have become an incident had we not had the proper guardrails in place. What if the worker had accidentally fulfilled an order and generated a transaction that didn’t exist?
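To make that kind of guardrail concrete, here is a minimal sketch of the pattern: any AI-generated automation step runs in a dry-run mode by default and can only touch live transactions after a named human approves it. The code below is purely illustrative; the post_order function and the order fields are hypothetical stand-ins, not a real SAP integration.

```python
# Illustrative guardrail for AI-generated automation scripts.
# post_order() is a hypothetical stand-in, not a real SAP call.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Order:
    order_id: str
    quantity: int


def post_order(order: Order) -> None:
    """Stand-in for the real order-fulfillment call."""
    print(f"Posting order {order.order_id} (qty={order.quantity})")


def run_ai_generated_step(order: Order,
                          dry_run: bool = True,
                          approved_by: Optional[str] = None) -> None:
    """Run an AI-generated fulfillment step behind a dry-run and approval gate."""
    if dry_run:
        # Preview only: show what would happen, touch no live data.
        print(f"[DRY RUN] Would post order {order.order_id} (qty={order.quantity})")
        return
    if not approved_by:
        # Block live execution unless a named human has signed off.
        raise PermissionError("Live execution requires a named approver.")
    post_order(order)


if __name__ == "__main__":
    order = Order(order_id="4711", quantity=3)
    run_ai_generated_step(order)  # safe preview of the AI-written step
    run_ai_generated_step(order, dry_run=False, approved_by="warehouse.lead")
```

The specifics will differ by system; the point is that nothing a model writes can create a live transaction on its own.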

To enable citizen development while minimizing risk, you need to have:

  • Review boards that establish clear guidelines, conduct risk assessments and enforce transparency for AI systems.
  • Appropriate training to educate employees on how AI can be incorporated into their workloads responsibly, covering key topics such as ethical standards, bias, human oversight and data privacy.
  • Open internal forums that encourage teammates to share their discoveries — and errors — among a group of company innovators.

Minimizing risks due to hallucinations

A big reason generative AI can be risky is its propensity to occasionally output hallucinations. According to the Insight report, a common theme across business leaders’ biggest concerns is how hallucinations could lead to bad business decisions. However, the risk of hallucinations is not always the same and can be higher or lower depending on what you’re trying to generate.

While GPT tools are certainly capable of outputting something objectively wrong, we quickly learned how they can also give an almost-right answer to a poorly worded question. For instance, in an early test, we asked Insight GPT which song Michael Jackson and Eddie Van Halen worked on together. It said “Thriller” when the correct answer is “Beat It.” However, “Beat It” appears on the album Thriller, so it wasn’t completely off base.

This perfectly illustrates how the risk of hallucinations varies, particularly when dealing with more subjective workloads. Addressing this risk from a security standpoint means creating and enforcing a policy that all AI-generated content requires human oversight, and that any work product assisted by AI is clearly labeled as such. This needs to be done ubiquitously as content flows through the internal and external value chains.
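One way to keep that labeling policy enforceable rather than aspirational is to attach provenance metadata to content and block anything unlabeled or unreviewed from moving downstream. The sketch below is a hypothetical illustration of that pattern, assuming a simple in-house publishing step; it does not describe any specific tool.

```python
# Hypothetical provenance gate: AI-assisted content must be labeled
# and human-reviewed before it moves downstream.

from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class ContentItem:
    body: str
    ai_assisted: bool = False
    reviewed_by: Optional[str] = None
    labels: List[str] = field(default_factory=list)


def release(item: ContentItem) -> str:
    """Release content only if AI assistance is labeled and a human signed off."""
    if item.ai_assisted:
        if item.reviewed_by is None:
            raise ValueError("AI-assisted content requires human review before release.")
        if "AI-assisted" not in item.labels:
            item.labels.append("AI-assisted")  # keep the label visible downstream
    tags = ", ".join(item.labels) or "human-only"
    return f"RELEASED [{tags}] reviewed_by={item.reviewed_by}"


if __name__ == "__main__":
    draft = ContentItem(body="Quarterly summary draft...", ai_assisted=True)
    try:
        release(draft)  # blocked: no human reviewer yet
    except ValueError as err:
        print("Blocked:", err)

    draft.reviewed_by = "j.rader"
    print(release(draft))  # labeled and released
```

The same check can run at every handoff, so the label travels with the content through internal and external value chains rather than being applied once and lost.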

The industry is still nascent, and embracing its responsible — and secure — adoption will help organizations achieve competitive advantage while reducing vulnerabilities to data leaks, misinformation, biases and other risks. Companies need to keep their AI policies in sync with the industry’s continuous changes to ensure compliance, consider hallucinations and ultimately build user trust.
