A quick guide to ethical and responsible AI governance


Phani Dasari

Contributor

Phani Dasari is the chief information security officer at Hinduja Global Solutions (HGS), a global company specializing in digital-led customer experience for hundreds of world-class brands. Phani has over 18 years of experience across domains such as governance, risk, compliance, client security management, data privacy, and regulatory compliance, among others.

The rapid advancement of artificial intelligence (AI) technologies, fueled by breakthroughs in machine learning (ML) and data management, has propelled organizations into a new era of innovation and automation.

As AI applications continue to proliferate across industries, they hold the promise of revolutionizing customer experience, optimizing operational efficiency, and streamlining business processes. However, this transformative journey comes with a crucial caveat: the need for robust AI governance.

In recent years, concerns about ethical, fair, and responsible AI deployment have gained prominence, highlighting the necessity for strategic oversight throughout the AI life cycle.

The rising tide of AI applications and ethical concerns

The proliferation of AI and ML applications has been a hallmark of recent technological advancement. Organizations increasingly recognize the potential of AI to enhance customer experience, revolutionize business processes, and streamline operations. However, this surge in AI adoption has triggered a corresponding rise in concerns regarding the ethical, transparent, and responsible use of these technologies. As AI systems assume roles in decision-making traditionally performed by humans, questions about bias, fairness, accountability, and potential societal impacts loom large.

The imperative of AI governance

AI governance has emerged as the cornerstone for responsible and trustworthy AI adoption. Organizations must proactively manage the entire AI life cycle, from conception to deployment, to mitigate unintentional consequences that could tarnish their reputation and, more importantly, harm individuals and society. Strong ethical and risk-management frameworks are essential for navigating the complex landscape of AI applications.

The World Economic Forum encapsulates the essence of responsible AI by defining it as the practice of designing, building, and deploying AI systems in a manner that empowers individuals and businesses while ensuring equitable impacts on customers and society. This ethos serves as a guiding principle for organizations seeking to instill trust and scale their AI initiatives confidently.

Key components of AI governance

Ensuring the ethical and responsible use of AI technologies, which establishes a foundation of trust, accountability, and transparency in AI systems, is paramount. To achieve responsible AI initiatives and foster ethical practices, consider the following components.

AI ownership: Defining accountability and responsibility

Determining the ownership of AI systems and models within an organization is a critical starting point. The AI owner, often a senior business leader, assumes ultimate accountability for the responsible, ethical, transparent, and fair deployment of AI. This involves understanding risks, addressing potential pitfalls, and fostering alignment across business processes.

The AI Governance Alliance: Ultimate approval and decision-making

The AI Governance Alliance serves as the apex body for AI decision-making. Its responsibilities include aligning AI goals with business objectives, prioritizing AI projects, overseeing risk assessments, approving data and model usage, and ensuring compliance with regulations and guidelines.

AI Center of Excellence: Promoting responsible AI practices

The AI Center of Excellence plays a pivotal role in standardizing AI architecture, developing guidelines, building guardrails, and collaborating with AI teams to ensure responsible AI implementation. It also fosters alignment with enterprise architectural practices, conducts training, and develops prototypes to share insights with the broader community.

AI/data science team: Implementing responsible AI solutions

The AI/data science team designs, deploys, and governs AI solutions. Responsibilities include aligning data usage with governance, conducting compliance assessments, collaborating with the AI Center of Excellence, and implementing access controls for AI systems and models.

AI governance process: Formalizing oversight mechanisms

The AI governance process includes formal data use approval and model review processes along with monitoring and oversight mechanisms. These processes ensure that policies and standards are followed, AI risks are addressed, and models remain compliant throughout their life cycles.
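As a concrete illustration, the formal review step can be modeled as a release gate that blocks deployment until every required approval is in place. This is a minimal sketch; the check names below are hypothetical examples, not drawn from any specific framework.

```python
def ready_for_deployment(review: dict) -> tuple[bool, list[str]]:
    """Return (approved, missing_checks) for a model release.

    The required checks are illustrative examples of the formal
    approvals an AI governance process might mandate.
    """
    required = [
        "data_use_approved",       # formal data use approval
        "model_review_passed",     # model review process
        "risk_assessment_signed",  # documented risk assessment
        "owner_signoff",           # accountable AI owner approval
    ]
    missing = [check for check in required if not review.get(check)]
    return (not missing, missing)

# A release with outstanding checks is blocked, and the gate reports why
ok, missing = ready_for_deployment({
    "data_use_approved": True,
    "model_review_passed": True,
    "risk_assessment_signed": False,
})
```

Encoding the gate as code makes the oversight mechanism auditable: the list of required approvals lives in version control rather than in an informal checklist.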

Policies and procedures for AI governance

Formal policies, such as an AI Governance Policy, lay the foundation for AI governance by defining roles, frameworks, and components. Organizations should review existing policies and update them to cover AI-specific scenarios, ensuring alignment with responsible AI practices.

Model governance: Data and model accountability

Model governance entails understanding and documenting the datasets used, data limitations, ownership, and compliance with regulations. It also involves detailing model creation, testing, deployment, and monitoring processes, as well as maintaining model performance, accuracy, and versioning.
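One lightweight way to operationalize this documentation is a structured record kept alongside each deployed model. The sketch below assumes Python 3.9+; the schema and field values are invented for illustration, and real-world model cards are typically far richer.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelRecord:
    """Minimal model-governance record (hypothetical schema)."""
    model_name: str
    version: str                   # model versioning
    owner: str                     # accountable AI owner
    training_datasets: list[str]   # datasets used, with provenance notes
    known_limitations: list[str]   # documented data limitations
    approved_uses: list[str]       # use cases cleared by governance review
    metrics: dict[str, float] = field(default_factory=dict)
    last_reviewed: date = date(2024, 1, 1)

# Example record for a hypothetical model
card = ModelRecord(
    model_name="churn-predictor",
    version="2.1.0",
    owner="vp-customer-analytics",
    training_datasets=["crm_2023_q4 (consent obtained; PII removed)"],
    known_limitations=["underrepresents customers under 25"],
    approved_uses=["retention-campaign targeting"],
    metrics={"auc": 0.87},
)
```

Because the record is plain data, it can be serialized into a model registry and checked automatically during the model review process described above.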

Tools and technologies for AI governance

Utilizing appropriate tools and technologies is crucial for effective governance of AI. These tools should encompass data analysis, data visualization, model management, MLOps, and role-based access control to facilitate responsible and transparent AI deployment.
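Role-based access control, for instance, reduces to a mapping from roles to permitted actions on AI assets. The roles and permissions below are invented for illustration; production deployments would use an identity provider rather than an in-memory table.

```python
# Hypothetical role-to-permission mapping for AI assets
ROLE_PERMISSIONS = {
    "ai_owner":       {"approve_model", "view_model", "view_data"},
    "data_scientist": {"train_model", "view_model", "view_data"},
    "auditor":        {"view_model", "view_audit_log"},
}

def can(role: str, action: str) -> bool:
    """Return True if the given role is granted the requested action."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

In this scheme a data scientist can train and inspect models but cannot approve them for production, mirroring the separation of duties between the AI/data science team and the AI owner described earlier.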

Monitoring AI systems in production

Continuous monitoring of AI systems in production is vital for ensuring ongoing performance, fairness, and compliance. This involves detecting data drift, addressing adversarial attacks, and maintaining model robustness, while safeguarding ethical and responsible AI use.
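Data drift detection, mentioned above, can be approximated with a simple statistic such as the Population Stability Index (PSI), which compares a feature's distribution at training time with its distribution in production. This is a minimal pure-Python sketch; production systems typically rely on dedicated monitoring tooling, and the 0.25 threshold is a common convention, not a standard.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between baseline and live samples.

    Rule of thumb: PSI < 0.1 is stable, 0.1-0.25 is moderate drift,
    and > 0.25 signals significant drift.
    """
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def bucket_fractions(sample):
        counts = [0] * bins
        for x in sample:
            i = min(max(int((x - lo) / width), 0), bins - 1)
            counts[i] += 1
        n = len(sample)
        # small epsilon avoids log(0) for empty buckets
        return [max(c / n, 1e-6) for c in counts]

    e, a = bucket_fractions(expected), bucket_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]    # training-time feature values
live = [0.3 + i / 200 for i in range(100)]  # shifted production values
drift_score = psi(baseline, live)           # well above 0.25: drift flagged
```

A monitor like this would run on a schedule per feature, alerting the AI/data science team when the score crosses the agreed threshold.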

AI governance framework chart
Image Credits: InfoTech Research Group

The AI journey is no longer solely concerned with technological innovation; it is intrinsically tied to ethical, fair, and responsible AI deployment. AI governance serves as a linchpin that enables organizations to navigate this complex landscape, instill trust, and scale AI initiatives with confidence.

By embracing AI ownership, establishing robust governance frameworks, fostering collaboration across AI teams, and leveraging cutting-edge tools, organizations can realize the transformative potential of AI, while safeguarding individuals, society, and their own reputation. In a world increasingly shaped by AI, responsible AI governance is the compass that guides organizations toward a future where innovation and ethics coexist harmoniously.
