Critical 2024 AI policy blueprint: Unlocking potential and safeguarding against workplace risks

Many have described 2023 as the year of AI, and the term made several “word of the year” lists. While AI has boosted productivity and efficiency in the workplace, it has also introduced a number of emerging risks for businesses.

For example, a recent Harris Poll survey commissioned by AuditBoard revealed that roughly half of employed Americans (51%) currently use AI-powered tools for work, undoubtedly driven by ChatGPT and other generative AI solutions. At the same time, however, nearly half (48%) said they enter company data into AI tools not supplied by their business to aid them in their work.

This rapid integration of generative AI tools at work presents ethical, legal, privacy, and practical challenges, creating a need for businesses to implement new, robust policies governing their use. As it stands, most have yet to do so: a recent Gartner survey revealed that more than half of organizations lack an internal policy on generative AI, and the Harris Poll found that just 37% of employed Americans say their employer has a formal policy on the use of non-company-supplied AI-powered tools.

While it may sound like a daunting task, developing a set of policies and standards now can save organizations from major headaches down the road.

AI use and governance: Risks and challenges

Generative AI’s rapid adoption has made it difficult for businesses to keep pace with AI risk management and governance, and there is a distinct disconnect between adoption and formal policy. The same Harris Poll found that 64% of respondents perceive using AI tools as safe, indicating that many workers and organizations could be overlooking real risks.

These risks and challenges can vary, but three of the most common include:

  1. Overconfidence. The Dunning–Kruger effect is a cognitive bias in which people overestimate their own knowledge or abilities. A similar pattern has emerged around AI: many users overestimate the capabilities of AI tools without understanding their limitations. The results can be relatively harmless, such as incomplete or inaccurate output, but they can also be far more serious, such as output that violates legal usage restrictions or creates intellectual property risk.
  2. Security and privacy. AI tools need access to large amounts of data to be fully effective, and that data sometimes includes personal or otherwise sensitive information. Unvetted AI tools carry inherent risks, so organizations must ensure that any tool they use meets their data security standards.
  3. Data sharing. Just about every technology vendor has launched, or will soon launch, AI capabilities to augment their core product offerings, and many of these additions are self-service or user-enabled. Free-to-use solutions often operate by monetizing user-provided data, and in these cases there is one thing to remember: if you are not paying for the product, you likely are the product. Organizations should ensure that the models they rely on were not trained on personal or third-party data without consent, and that their own data is not used to train models without permission.

There are also risks and challenges associated with developing products that include AI capabilities, such as defining the acceptable use of customer data for model training. As AI infiltrates every facet of business, these and many other considerations are bound to follow.

Developing comprehensive AI usage policies

Integrating AI into business processes and strategies has become imperative, but it requires developing a framework of policies and guidelines for responsible deployment and use. How this looks may vary based on an organization’s specific needs and use cases, but four overarching pillars can help organizations leverage AI for innovation while mitigating risks and upholding ethical standards.

Integrating AI into strategic organizational plans

Embracing AI requires aligning its deployment with the strategic objectives of the business. It’s not about adopting cutting-edge technology for its own sake; AI applications that resonate with the organization’s defined mission and objectives should enhance operational efficiency and drive growth.

Mitigating overconfidence

Acknowledging the potential of AI should not equate to unwavering trust. Cautious optimism (with an emphasis on “cautious”) should always prevail, as organizations need to account for the limitations and potential biases of AI tools. Finding a calculated balance between leveraging AI’s strengths and remaining aware of its current and future constraints is pivotal.

Defining guidelines and best practices in AI tool usage

Defining protocols for data privacy, security measures, and ethical considerations ensures consistent and ethical utilization across all departments. This process includes:

  • Involving diverse teams in policy creation: Teams including legal, HR, and information security should participate to create a holistic perspective, integrating both legal and ethical dimensions into operational frameworks.
  • Defining parameters on usage and restricting harmful applications: Articulate policies for AI usage in practical and technology applications, identify areas where AI can be employed beneficially, and prevent the use of potentially harmful applications, while establishing processes to evaluate new AI use cases that may align with the business’s strategic interests.
  • Performing regular policy updates and employee education: AI evolves continuously, and this evolution may only accelerate — policy frameworks need to adapt in tandem. Regular updates ensure that policies align with the quickly changing AI landscape, and comprehensive employee education ensures compliance and responsible use.

Implementing monitoring and detection for unauthorized AI use

Deploying strong endpoint or SASE/CASB-based detections and data loss prevention (DLP) mechanisms is central to identifying unauthorized AI usage within the organization and mitigating potential breaches or misuse. Scanning for intellectual property within open source AI models is also crucial: meticulous inspection safeguards proprietary information and prevents unintended (and costly) infringements.
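
To make the first of these concrete, the sketch below is a minimal Python example of the kind of detection logic involved: scanning a web proxy log for requests to a watchlist of generative AI services. The log format, column names, and domain list are illustrative assumptions, and in practice this detection would typically live in an organization’s SASE/CASB or DLP platform rather than a standalone script.

```python
# Minimal sketch: flag outbound requests to generative AI services in a
# web proxy log export. The domain watchlist and the CSV columns
# ("user", "dest_host") are illustrative assumptions, not a standard.
import csv
from collections import Counter

# Hypothetical watchlist of generative AI domains to monitor.
GENAI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}

def flag_genai_usage(log_path: str) -> Counter:
    """Count requests per (user, domain) for watchlisted AI services."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = (row.get("dest_host") or "").strip().lower()
            if host in GENAI_DOMAINS:
                hits[(row.get("user") or "unknown", host)] += 1
    return hits

if __name__ == "__main__":
    # Report the heaviest users of watchlisted services first.
    for (user, domain), count in flag_genai_usage("proxy_log.csv").most_common():
        print(f"{user} -> {domain}: {count} requests")
```

Output from a check like this could feed an alerting workflow, or simply highlight the teams where education on the organization’s AI policy is most needed.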

As businesses delve deeper into AI integration, formulating clear yet extensive policies enables them to harness the potential of AI while also mitigating its risks.

Effective policy design also fosters ethical AI usage and creates organizational resilience in a world that will only become more AI-driven. Make no mistake: This is an urgent matter. Organizations that embrace AI with well-defined policies will give themselves the best opportunity to effectively navigate this transformation while also upholding ethical standards and achieving their strategic goals.