Why smart AI regulation is vital for innovation and US leadership


Gary Shapiro

Gary Shapiro is the president and CEO of the Consumer Technology Association and a New York Times bestselling author of the book Ninja Future: Secrets to Success in the New World of Innovation.

As a teenager, I immersed myself in science fiction. While the visions of many films and novels haven’t come to pass, I’m still amazed by legendary writer Isaac Asimov’s ability to imagine a future of artificial intelligence and robotics. Now, amid all the hype around generative AI and other AI tools, it’s time for us to follow Asimov’s lead and write a new set of rules.

Of course, AI rules for the 21st century won’t be quite as simple as Asimov’s three rules of robotics (popularized in “I, Robot”). But amid anxiety around the rise of AI tools and a misguided push for a moratorium on advanced AI research, industry can and should be pushing for rules for responsible AI development. Certainly, the past century’s advances in technology have given us plenty of experience in evaluating both the benefits of technological progress and the potential pitfalls.

Technology itself is neutral. It's how we use it, and the guardrails we set up around it, that dictate its impact. Harnessing fire allowed humans to stay warm and preserve food longer, but fire can still be destructive.

Think of how the recent Canadian wildfires threatened lives and property and degraded U.S. air quality. Nuclear power in the form of atomic bombs killed more than a hundred thousand people in Japan during WWII, yet nuclear energy lights up much of France and powers U.S. aircraft carriers.

In the case of AI, new tools and platforms can solve big global problems and create valuable knowledge. At a recent meeting of Detroit-area chief information officers, attendees shared how generative AI is already speeding up time-to-market and making their companies more competitive.

Generative AI will help us "listen" to different animal species. AI will improve our health by supporting drug discovery and disease diagnosis. Similar tools are providing everything from personalized care for elders to better security for our homes. Moreover, AI will improve our productivity: a new McKinsey study estimates that generative AI could add $4.4 trillion annually to the global economy.

With all this promise, can such an amazing technology also be harmful? Some of the concerns around AI platforms are legitimate. We should worry about deepfakes, political manipulation, and fraud aimed at vulnerable populations, but we can also use AI to recognize, intercept, and block harmful cyber intrusions. The safeguards and solutions may be difficult and complex, and we need to work on them.

Some solutions may also be simple: we already see schools experimenting with oral exams to test a student's knowledge. Addressing these issues head-on, rather than sticking our heads in the sand with a research pause that would be impossible to enforce and ripe for exploitation by bad actors, will position the United States as a leader on the world stage.

While the U.S. approach to AI has been mixed, other countries seem locked into a hyper-regulatory stampede. The EU is on the precipice of passing a sweeping AI Act that would require companies to ask permission to innovate. In practice, that would mean only governments, or huge companies with the finances and capacity to navigate a certification labyrinth covering privacy, IP, and a host of social-protection requirements, could develop new AI tools.

A recent study from Stanford University also found that the EU's AI Act would bar all of the currently existing large language models, including OpenAI's GPT-4 and Google's Bard. Canadian lawmakers are advancing an overly broad AI bill that could similarly stifle innovation. Most concerning, China is rapidly pursuing civil and military AI dominance through massive government support. Moreover, China holds a different view of human rights and privacy protection that may help its AI efforts but is antithetical to our values. The U.S. must act to protect its citizens and advance AI innovation, or we will be left behind.

What would that look like? To start, the U.S. needs a preemptive federal privacy bill. Today's patchwork of state-by-state rules means data is treated differently each time it "crosses" an invisible border, causing confusion and compliance hurdles for small businesses. We need a national privacy law with clear guidelines and standards for how companies collect, use, and share data. Such a law would also create transparency for consumers and help companies foster trust as the digital economy grows.

We also need a set of principles around responsible AI use. While I prefer less regulation, managing emerging technologies like AI requires clear rules that set out how this technology can be developed and deployed. With new innovations in AI unveiled almost daily, legislators should focus on guardrails and outcomes, rather than attempting to rein in specific technologies.

Rules should also be calibrated to the level of risk, focusing on AI systems that could meaningfully harm Americans' fundamental rights or access to critical services. As our government determines what "good policy" looks like, industry will have a vital role to play. The Consumer Technology Association is working closely with industry and policymakers to develop unified principles for AI use.

We’re at a pivotal moment for the future of an amazing, complex and consequential technology. We can’t afford to let other countries take the lead.
