Google DeepMind forms a new org focused on AI safety

Image Credits: Google DeepMind

If you ask Gemini, Google’s flagship GenAI model, to write deceptive content about the upcoming U.S. presidential election, it will, given the right prompt. Ask about a future Super Bowl game and it’ll invent a play-by-play. Or ask about the Titan submersible implosion and it’ll serve up disinformation, complete with convincing-looking but untrue citations.

It’s a bad look for Google, needless to say — and it’s provoking the ire of policymakers, who’ve signaled their displeasure at the ease with which GenAI tools can be harnessed for disinformation and to mislead more generally.

So in response, Google — thousands of jobs lighter than it was last fiscal quarter — is funneling investments toward AI safety. At least, that’s the official story.

This morning, Google DeepMind, the AI R&D division behind Gemini and many of Google’s more recent GenAI projects, announced the formation of a new organization, AI Safety and Alignment — made up of existing teams working on AI safety but also broadened to encompass new, specialized cohorts of GenAI researchers and engineers.

Beyond the job listings on DeepMind’s site, Google wouldn’t say how many hires would result from the formation of the new organization. But it did reveal that AI Safety and Alignment will include a new team focused on safety around artificial general intelligence (AGI), or hypothetical systems that can perform any task a human can.

Similar in mission to the Superalignment division rival OpenAI formed last July, the new team within AI Safety and Alignment will work alongside DeepMind’s existing AI-safety-centered research team in London, Scalable Alignment — which is also exploring solutions to the technical challenge of controlling yet-to-be-realized superintelligent AI.

Why have two groups working on the same problem? Valid question — and one that calls for speculation given Google’s reluctance to reveal much in detail at this juncture. But it seems notable that the new team — the one within AI Safety and Alignment — is stateside as opposed to across the pond, proximate to Google HQ at a time when the company’s moving aggressively to maintain pace with AI rivals while attempting to project a responsible, measured approach to AI.

The AI Safety and Alignment organization’s other teams are responsible for developing and incorporating concrete safeguards into Google’s Gemini models, current and in-development. Safety is a broad purview. But a few of the organization’s near-term focuses will be preventing bad medical advice, ensuring child safety and “preventing the amplification of bias and other injustices.”

Anca Dragan, formerly a Waymo staff research scientist and a UC Berkeley professor of computer science, will lead the team.

“Our work [at the AI Safety and Alignment organization] aims to enable models to better and more robustly understand human preferences and values,” Dragan told TechCrunch via email, “to know what they don’t know, to work with people to understand their needs and to elicit informed oversight, to be more robust against adversarial attacks and to account for the plurality and dynamic nature of human values and viewpoints.”

Dragan’s consulting work with Waymo on AI safety systems might raise eyebrows, considering the Google autonomous car venture’s rocky driving record as of late.

So might her decision to split time between DeepMind and UC Berkeley, where she heads a lab focusing on algorithms for human-AI and human-robot interaction. One might assume issues as grave as AGI safety — and the longer-term risks the AI Safety and Alignment organization intends to study, including preventing AI from “aiding terrorism” and “destabilizing society” — require a director’s full-time attention.

Dragan insists, however, that her UC Berkeley lab’s and DeepMind’s research are interrelated and complementary.

“My lab and I have been working on … value alignment in anticipation of advancing AI capabilities, [and] my own Ph.D. was in robots inferring human goals and being transparent about their own goals to humans, which is where my interest in this area started,” she said. “I think the reason [DeepMind CEO] Demis Hassabis and [chief AGI scientist] Shane Legg were excited to bring me on was in part this research experience and in part my attitude that addressing present-day concerns and catastrophic risks are not mutually exclusive — that on the technical side mitigations often blur together, and work contributing to the long term improves the present day, and vice versa.”

To say Dragan has her work cut out for her is an understatement.

Skepticism of GenAI tools is at an all-time high — particularly where it relates to deepfakes and misinformation. In a poll from YouGov, 85% of Americans said that they were very concerned or somewhat concerned about the spread of misleading video and audio deepfakes. A separate survey from The Associated Press-NORC Center for Public Affairs Research found that nearly 60% of adults think AI tools will increase the volume of false and misleading information during the 2024 U.S. election cycle.

Enterprises, too — the big fish Google and its rivals hope to lure with GenAI innovations — are wary of the tech’s shortcomings and their implications.

Intel subsidiary Cnvrg.io recently conducted a survey of companies in the process of piloting or deploying GenAI apps. It found that around a fourth of the respondents had reservations about GenAI compliance and privacy, reliability, the high cost of implementation and a lack of technical skills needed to use the tools to their fullest.

In a separate poll from Riskonnect, a risk management software provider, over half of execs said that they were worried about employees making decisions based on inaccurate information from GenAI apps.

They’re not unjustified in those concerns. Last week, The Wall Street Journal reported that Microsoft’s Copilot suite, powered by GenAI models similar architecturally to Gemini, often makes mistakes in meeting summaries and spreadsheet formulas. To blame is hallucination — the umbrella term for GenAI’s fabricating tendencies — and many experts believe it can never be fully solved.

Recognizing the intractability of the AI safety challenge, Dragan makes no promise of a perfect model — saying only that DeepMind intends to invest more resources into this area going forward and commit to a framework for evaluating GenAI model safety risk “soon.”

“I think the key is to … [account] for remaining human cognitive biases in the data we use to train, good uncertainty estimates to know where gaps are, adding inference-time monitoring that can catch failures and confirmation dialogues for consequential decisions and tracking where [a] model’s capabilities are to engage in potentially dangerous behavior,” she said. “But that still leaves the open problem of how to be confident that a model won’t misbehave some small fraction of the time that’s hard to empirically find, but may turn up at deployment time.”

I’m not convinced customers, the public and regulators will be so understanding. It’ll depend, I suppose, on just how egregious those misbehaviors are — and who exactly is harmed by them.

“Our users should hopefully experience a more and more helpful and safe model over time,” Dragan said. Indeed.
