Advancing generative AI exploration safely and securely

Security concerns are inextricably intertwined with the exploration and implementation of generative AI. According to a recent report we commissioned, 49% of business leaders consider safety and security risks a top concern, while 38% cited human error or human-caused data breaches stemming from a lack of understanding of how to use GPT tools.

While these concerns are valid, the benefits early adopters stand to gain far outweigh the potential downsides, and limiting integration carries risks of its own.

I want to share what I have learned from helping our teammates and clients alike understand why security should be a prerequisite for integrating AI into the business, not an afterthought, along with some best practices for doing so.

The AI conversation starts with a safe-use policy

Companies understand the urgency with which they need to respond to the new security risks AI presents. In fact, according to the report referenced above, 81% of business leaders said their company had already implemented, or was in the process of establishing, user policies around generative AI.

However, because of the rapidly evolving nature of the technology — with new applications and use cases emerging every day — the policy should be continuously updated to address emerging risks and challenges.

Guardrails for testing and learning are essential to accelerating exploration while minimizing security risks. The policy also should not be created in a silo. Representation from across the business matters: it helps you understand how each function is using, or could use, the technology and account for its unique security risks.

Importantly, skunkworks exploration of AI should not be banned altogether. Companies that resist it out of fear will no longer have to worry about competitors eroding their market share; they’ve already done that for themselves.

Enabling citizen developers

To ensure we use AI safely, we first gave our citizen developers carte blanche to experiment with a private instance of our large language model, Insight GPT. This has not only helped us identify potential use cases but also allowed us to stress test its outputs, enabling us to make continued refinements.

One extraordinary use case popped up when one of our warehouse teammates found a way to increase order-fulfillment productivity by asking Insight GPT to write a script for SAP that automated part of their workload. While the result was fantastic, it could easily have become an incident had the proper guardrails not been in place. What if the worker had accidentally fulfilled an order and generated a transaction that didn’t exist?
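
To make that kind of guardrail concrete, here is a minimal sketch in Python, assuming a hypothetical internal approval workflow; none of the names below reflect a real SAP interface or our actual implementation. The idea is simply that any AI-generated automation stays in dry-run mode and refuses to execute until a human reviewer signs off.

```python
# Hypothetical approval gate for AI-generated automation scripts.
# Illustrative only: every generated script runs as a dry run until a
# human reviewer has approved it, so nothing can post a real transaction.

from dataclasses import dataclass, field


@dataclass
class GeneratedScript:
    author: str                 # the citizen developer who prompted the model
    description: str            # what the script is supposed to do
    code: str                   # the AI-generated script body
    approved_by: list[str] = field(default_factory=list)


def execute(script: GeneratedScript, run_action, dry_run: bool = True):
    """Run the script's action only if a human reviewer has approved it.

    `run_action` is a callable supplied by the surrounding system;
    in dry-run mode we log what would happen instead of doing it.
    """
    if not script.approved_by:
        raise PermissionError(
            f"'{script.description}' has no human approval; refusing to run."
        )
    if dry_run:
        print(f"[DRY RUN] Would execute: {script.description}")
        return None
    return run_action(script)


if __name__ == "__main__":
    script = GeneratedScript(
        author="warehouse-teammate",
        description="Auto-fill order confirmation fields",
        code="...generated code...",
        approved_by=["review-board-member"],
    )
    # The first pass is always a dry run; a real run requires dry_run=False.
    execute(script, run_action=lambda s: print(f"Executing: {s.description}"))
```

The specific mechanism matters less than the pattern: generated code is treated as untrusted until a person has reviewed it and explicitly allowed it to act.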

To enable citizen development while minimizing risk, you need to have:

  • Review boards that establish clear guidelines, conduct risk assessments and enforce transparency for AI systems.
  • Appropriate training to educate employees on how AI can be incorporated into their workloads responsibly, covering key topics such as ethical standards, bias, human oversight and data privacy, to name a few.
  • Open internal forums that encourage teammates to share their discoveries — and errors — among a group of company innovators.

Minimizing risks due to hallucinations

A big reason generative AI can be risky is its occasional propensity to hallucinate. According to the Insight report, a common theme across business leaders’ biggest concerns is that hallucinations could lead to bad business decisions. However, the risk posed by hallucinations is not uniform; it can be higher or lower depending on what you’re trying to generate.

While GPT tools are certainly capable of outputting something objectively wrong, we quickly learned they can also give a technically right answer to a poorly worded question. For instance, in an early test, we asked Insight GPT when Michael Jackson and Eddie Van Halen were on a song together. It said “Thriller,” when the correct answer is “Beat It.” However, “Beat It” appears on the album Thriller, so it wasn’t completely off base.

This perfectly illustrates the varying risk of hallucinations, particularly when dealing with more subjective workloads. Addressing this risk from a security standpoint means creating and enforcing a policy that all AI-generated content requires human oversight and, beyond that, that any work product assisted by AI must be clearly labeled as such. This labeling needs to happen ubiquitously as content flows through internal and external value chains.
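
As a rough sketch of what ubiquitous labeling could look like in practice, the example below uses hypothetical field names (not an industry standard or our production system) to attach AI-assistance provenance to a piece of content and to block release until a named human reviewer has approved it.

```python
# Illustrative provenance labeling for AI-assisted content.
# Field names are assumptions for the sake of the example.

from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional


@dataclass
class ContentRecord:
    body: str
    ai_assisted: bool
    model: Optional[str] = None        # e.g., the internal GPT instance used
    reviewed_by: Optional[str] = None  # human who verified the output


def publish(record: ContentRecord) -> dict:
    """Return the content plus its label; refuse unreviewed AI-assisted work."""
    if record.ai_assisted and not record.reviewed_by:
        raise ValueError("AI-assisted content requires human review before release.")
    label = {
        "ai_assisted": record.ai_assisted,
        "model": record.model,
        "reviewed_by": record.reviewed_by,
        "labeled_at": datetime.now(timezone.utc).isoformat(),
    }
    return {"content": record.body, "label": label}


if __name__ == "__main__":
    draft = ContentRecord(
        body="Quarterly summary drafted with internal GPT assistance.",
        ai_assisted=True,
        model="internal-gpt",
        reviewed_by="editor@example.com",
    )
    print(publish(draft))
```

Because the label travels with the content, downstream teams and external recipients can see both that AI was involved and who vouched for the output.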

The industry is still nascent, and embracing responsible, secure adoption of generative AI will help organizations gain a competitive advantage while reducing their exposure to data leaks, misinformation, bias and other risks. Companies need to keep their AI policies in sync with the industry’s continuous changes to stay compliant, account for hallucinations and, ultimately, build user trust.