Company executives can ensure generative AI is ethical with these steps

It’s becoming increasingly clear that businesses of all sizes and across all sectors can benefit from generative AI. From code generation and content creation to data analytics and chatbots, the possibilities are vast — and the rewards abundant.

McKinsey estimates generative AI will add $2.6 trillion to $4.4 trillion in value annually across numerous industries. That's just one reason why over 80% of enterprises will be working with generative AI models, APIs, or applications by 2026. Businesses acting now to reap the rewards will thrive; those that don't won't remain competitive. However, simply adopting generative AI doesn't guarantee success.

Success requires the right implementation strategy. Modern business leaders must prepare for a future in which they manage both people and machines, with AI integrated into every part of the business. They need a long-term strategy that harnesses generative AI's immediate advantages while mitigating potential future risks.

Businesses that don't address concerns around generative AI from day one risk serious consequences, including system failure, copyright exposure, privacy violations, and social harms such as the amplification of bias. Yet only 17% of businesses are addressing generative AI risks, which leaves the rest vulnerable.


Businesses must also ensure they are prepared for forthcoming regulations. President Biden signed an executive order to create AI safeguards, the U.K. hosted the world's first AI Safety Summit, and the EU brought forward its own legislation. Governments across the globe are alive to the risks. C-suite leaders must be too, and that means their generative AI systems must adhere to current and future regulatory requirements.

So how do leaders balance the risks and rewards of generative AI?

Businesses that leverage three principles are poised to succeed: human-first decision-making, robust governance over large language model (LLM) content, and a universal connected AI approach. Making good choices now will allow leaders to future-proof their business and reap the benefits of AI while boosting the bottom line.

Prioritize human-first decision-making

The future for many businesses is a world where humans and machines work together. Pretending otherwise simply ignores the power and potential of AI.

But the critical point is that AI should support people in making decisions, not supplant them. Humans should always be in total control of what an AI system is doing. Its goals should be set by humans, and its output continually monitored and tracked by humans.

For C-suite leaders, this means constant, explainable oversight of the generative AI systems they're using, such as customer service chatbots or text transcription services. When explainability is built in both structurally and algorithmically, staff across an organization can understand what these systems are doing and why, and make informed decisions accordingly. There should also be a triage system in place so that complex or contentious issues are allocated to humans for sign-off. For example, generative AI could produce a first draft of a sales pitch for a salesperson to then adapt and personalize.
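A triage rule like the one described above can be surprisingly simple to express. The sketch below is a minimal, illustrative example of routing AI drafts either to auto-approval or to human sign-off; the topic list, confidence threshold, and all names are assumptions for the sake of the example, not a prescribed implementation.

```python
# Minimal sketch of human-in-the-loop triage: a generated draft is
# auto-approved only when the model reports high confidence AND the
# topic is not on a sensitive list. Everything else goes to a person.
# The topic list and threshold below are illustrative assumptions.

from dataclasses import dataclass

SENSITIVE_TOPICS = {"legal", "medical", "pricing"}  # assumed policy list
CONFIDENCE_THRESHOLD = 0.9                          # assumed cutoff

@dataclass
class Draft:
    text: str
    topic: str
    confidence: float  # model-reported confidence, 0.0 to 1.0

def triage(draft: Draft) -> str:
    """Return 'auto_approve' or 'human_review' for a generated draft."""
    if draft.topic in SENSITIVE_TOPICS:
        return "human_review"
    if draft.confidence < CONFIDENCE_THRESHOLD:
        return "human_review"
    return "auto_approve"
```

The point of the sketch is that the routing logic lives in one auditable place: leaders can tighten the threshold or expand the sensitive-topic list without touching the model itself.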

Such an approach gives C-suite leaders total control over the output of generative AI, enabling biased, harmful, or false information to be stopped at the source and ensuring models that are both high-performing and ethical.

Implement a robust governance framework

While human-first decision-making relies on individual judgment, a governance framework sets system-wide rules and standards for how AI is developed, deployed, and managed. These frameworks serve as strict guidelines that ensure compliance, consistency of output, and accountability when using generative AI.

In practice, this can take the form of deploying automated monitoring of LLM content for inappropriate, confidential, or biased information. Custom policies, such as specific keyword blocking, help prevent rogue content from ever being produced. Beyond this, regularly auditing and analyzing the data used to train generative AI systems will help highlight and mitigate any biases that could lead to prejudiced outcomes.
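The custom-policy idea above, such as keyword blocking and screening for confidential information, can be sketched as a small filter that runs over LLM output before it reaches users. The keywords and patterns below are assumptions invented for illustration; a production system would layer trained classifiers and audit logging on top.

```python
# Illustrative policy layer for screening LLM output: simple keyword
# blocking plus a regex check for confidential-looking strings.
# The blocklist and pattern below are assumed examples, not a real policy.

import re

BLOCKED_KEYWORDS = {"internal only", "do not distribute"}  # assumed blocklist
CONFIDENTIAL_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # SSN-shaped strings, as an example
]

def violates_policy(text: str) -> bool:
    """Return True if generated text trips a keyword or pattern rule."""
    lowered = text.lower()
    if any(keyword in lowered for keyword in BLOCKED_KEYWORDS):
        return True
    return any(pattern.search(text) for pattern in CONFIDENTIAL_PATTERNS)
```

Because the rules are explicit data rather than model behavior, they can be audited, versioned, and updated as regulations evolve.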

Finally, those who overlook "shadow AI" do so at their peril. The security risks of shadow IT have been widely understood (if not always mitigated) for some time now. Staff who use personal laptops and tools like Dropbox without the oversight of IT teams increase any organization's risk profile, often without the C-suite ever knowing. Now, as generative AI becomes more accessible, the threat of shadow AI looms larger.

Creating sensible technical governance frameworks from the outset, paired with human-first decision-making, helps prevent shadow AI from bleeding across your business and into your customer experience.

Ensure full connectivity across the business

No human is an island, and the same should be true of AI models. Today, most businesses deploy machine learning models in isolation, but the true power of AI comes from connecting them. This integrated approach allows businesses to identify causal relationships between completely different parts of a business. For example, an LLM might help a research company analyze historic interview transcripts, but far greater insight would come from connecting that data to another model tracking current public perceptions, allowing deeper analysis and causal relationships to be identified.

To this end, computational twins are a great way of increasing connectivity between generative AI systems. These are slightly different from digital twins, which are virtual representations of a physical system, like a manufacturing plant. A computational twin is a simulation: a model that captures an organization's entire operations, telling leaders what's happening inside their business in real time by analyzing multiple data sources. Commercial benefits include demand intelligence, inventory optimization, risk monitoring, and workforce management.
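At its core, the computational-twin idea is about fusing live feeds from different parts of the business into one queryable view. The toy sketch below illustrates that fusion step only; the feed names and metrics are assumptions for the example, and a real twin would add simulation and forecasting on top.

```python
# Toy sketch of the data-fusion core of a "computational twin":
# one object ingests snapshots from separate business systems and
# merges them into a single operational view leaders can query.
# Feed names and metrics are illustrative assumptions.

class ComputationalTwin:
    def __init__(self) -> None:
        self.feeds: dict[str, dict] = {}

    def ingest(self, source: str, snapshot: dict) -> None:
        """Record the latest snapshot from one part of the business."""
        self.feeds[source] = snapshot

    def operational_view(self) -> dict:
        """Merge all feeds into one namespaced, real-time picture."""
        view = {}
        for source, snapshot in self.feeds.items():
            for metric, value in snapshot.items():
                view[f"{source}.{metric}"] = value
        return view
```

Even in this simplified form, the design choice matters: each system keeps its own data model, and the twin only namespaces and merges, which avoids forcing every team onto one schema.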

Crucially, a computational twin is not a one-off exercise. Rather than being fixed, it's an ongoing replica of processes that must be continually adjusted and adapted by humans to optimize results. Executed wisely, computational twins are a striking example of augmented intelligence: humans and machines working together harmoniously.

Such a holistic approach gives every team in a company a complete operational view of their generative AI systems' capabilities and limitations. Stand-alone tools can't bring context to a decision, which is why leaders must ensure models are connected across the business to prevent silos.

Unlocking value and future-proofing generative AI

Generative AI can produce immense value for businesses. But to navigate the hype cycle, and avoid becoming obsolete, C-suite leaders must ensure they've got the right technology, governance, and culture in place.

By following these guidelines, leaders can ensure the generative AI tools they use complement business activity and goals without compromising on ethics — a winning combination.