IEEE puts out a first draft guide for how tech can achieve ethical AI design

One of the barriers to ethically designed AI systems that benefit humanity as a whole and avoid the pitfalls of embedded algorithmic bias is the tech industry’s lack of ownership of, and responsibility for, ethics, according to the IEEE, a technical professional association.

The organization has today published the first version of a framework document it hopes will guide the industry toward the light — and help technologists build benevolent and beneficial autonomous systems, rather than assuming ethics is not something they need to worry about.

The document, called Ethically Aligned Design, includes a series of detailed recommendations based on the input of more than 100 “thought leaders” working in academia, science, government and corporate sectors, in the fields of AI, law and ethics, philosophy and policy.

The IEEE is hoping it will become a key reference work for AI/AS technologists as autonomous technologies find their way into more and more systems in the coming years. It’s also inviting feedback on the document from interested parties — there are Submission Guidelines on The IEEE Global Initiative’s website. It says all comments and input will be made publicly available, and should be sent no later than March 6, 2017.

The wider hope, in time, is for the initiative to generate recommendations for IEEE Standards based on its notion of Ethically Aligned Design — by creating consensus and contributing to the development of methodologies to achieve ethical ends.

“By providing technologists with peer-driven, practical recommendations for creating ethically aligned autonomous and intelligent products, services, and systems, we can move beyond the fears associated with these technologies and bring valued benefits to humanity today and for the future,” says Konstantinos Karachalios, managing director of the IEEE Standards Association, in a statement.

The 136-page document is divided into a series of sections, starting with some general principles — such as the need to ensure AI respects human rights, operates transparently and that automated decisions are accountable — before moving on to more specific areas such as how to embed relevant “human norms or values” into systems, tackle potential biases, achieve trust and enable external evaluation of value alignment.

Another section considers methodologies to guide ethical research and design — and here the tech industry’s lack of ownership or responsibility for ethics is flagged as a problem, along with other issues, such as ethics not being routinely part of tech degree programs. The IEEE also notes the lack of an independent review organization to oversee algorithmic operation, and the use of “black-box components” in the creation of algorithms, as further obstacles to achieving ethical AI.

One suggestion to help overcome the tech industry’s ethical blind spots is to ensure those building autonomous technologies are “a multidisciplinary and diverse group of individuals” so that all potential ethical issues are covered, the IEEE writes.

It also argues for the creation of standards providing “oversight of the manufacturing process of intelligent and autonomous technologies” in order to ensure end users are not harmed by autonomous outcomes.

And for the creation of “an independent, internationally coordinated body” to oversee whether products meet ethical criteria — both at the point of launch, and thereafter as they evolve and interact with other products.

“When systems are built that could impact the safety or wellbeing of humans, it is not enough to just presume that a system works. Engineers must acknowledge and assess the ethical risks involved with black-box software and implement mitigation strategies where possible,” the IEEE writes. “Technologists should be able to characterize what their algorithms or systems are going to do via transparent and traceable standards. To the degree that we can, it should be predictive, but given the nature of AI/AS systems it might need to be more retrospective and mitigation oriented.

“Similar to the idea of a flight data recorder in the field of aviation, this algorithmic traceability can provide insights on what computations led to specific results ending up in questionable or dangerous behaviors. Even where such processes remain somewhat opaque, technologists should seek indirect means of validating results and detecting harms.”

Ultimately, it concludes that engineers should deploy black-box software services or components “only with extraordinary caution and ethical care,” given the opacity of their decision making process and the difficulty in inspecting or validating these results.

Another section of the document — on safety and beneficence of artificial general intelligence — also warns that as AI systems become more capable “unanticipated or unintended behavior becomes increasingly dangerous,” while retrofitting safety into more generally capable future AI systems may be difficult.

“Researchers and developers will confront a progressively more complex set of ethical and technical safety issues in the development and deployment of increasingly autonomous and capable AI systems,” it suggests.

The document also touches on concerns about the asymmetry inherent in AI systems that are fed by individuals’ personal data — yet gains derived from the technology are not equally distributed.

“The artificial intelligence and autonomous systems (AI/AS) driving the algorithmic economy have widespread access to our data, yet we remain isolated from gains we could obtain from the insights derived from our lives,” it writes.

“To address this asymmetry there is a fundamental need for people to define, access, and manage their personal data as curators of their unique identity. New parameters must also be created regarding what information is gathered about individuals at the point of data collection. Future informed consent should be predicated on limited and specific exchange of data versus long-term sacrifice of informational assets.”

The full IEEE document can be downloaded here.

The issue of AI ethics and accountability has been rising up the social and political agenda this year, fueled in part by high-profile algorithmic failures such as Facebook’s inability to filter out fake news.

The White House has also put out its own reports on AI and R&D. And this fall a U.K. parliamentary committee warned the government of the need to act proactively to ensure AI accountability.
