TRI is developing a new method to teach robots overnight

Toyota Research Institute discusses the work it’s doing to train robots using Large Behavior Models

Image Credits: Toyota Research Institute

Learning may well be the most exciting frontier in the whole of robotics. The field itself dates back decades; the 1980s, for instance, brought exciting breakthroughs in learning from demonstration. Now a slew of research projects out of schools like CMU, MIT and UC Berkeley points to a future in which robots learn much like their human counterparts.

Today at TechCrunch Disrupt’s Hardware Stage, the Toyota Research Institute (TRI) is showcasing advancements in research that can teach a robot a new skill quite literally overnight.

“It’s remarkable how fast it works,” says TRI CEO and Chief Scientist Gill Pratt. “In machine learning, up until quite recently there was a tradeoff, where it works, but you need millions of training cases. When you’re doing physical things, you don’t have time for that many, and the machine will break down before you get to 10,000. Now it seems that we need dozens. The reason for the dozens is that we need to have some diversity in the training cases. But in some cases it’s less.”

The system demonstrated by TRI uses some more traditional robot learning techniques, coupled with diffusion models — similar to the processes that power generative AI models like Stable Diffusion. The automaker’s research wing says it has trained robots on 60 skills and counting using this method. But existing models won’t solve the problem themselves.

“We’ve seen some big progress with the advent of [large language models], using them to impart this high level of cognitive intelligence into robots,” says TRI Senior Research Scientist Benjamin Burchfiel. “If you have a robot that picks up a thing, now instead of having to specify an object, you can tell it to pick up the can of Coke. Or you can tell it to pick up the shiny object, or you can do the same thing and do it in French. That’s really great, but if you want a robot to plug in a USB device or pick up a tissue, those models just don’t work. They’re really useful, but they don’t solve that part of the problem. We’re focused on filling in that missing piece, and the thing we’re really excited about now is that we actually have a system and that the fundamentals are correct.”

Among the advantages of the method is the ability to teach skills that can function in diverse settings. This is important, as robots have a difficult time operating in less-structured or unstructured environments. That’s a big part of the reason why it’s easier for a robot to, say, function in a warehouse than on a road or in a house. Warehouses are generally built to be structured and largely unchanging, aside from moving obstacles like people and forklifts.

Ideally, you want a robot that can roll with the punches. Take the home. One of TRI’s primary focuses has been developing systems that can help older people continue to live independently. That’s an increasingly large concern in places with an aging population, like Toyota’s native Japan. One of the goals is the creation of a system that can both operate in different environments and navigate changes therein.

People move furniture, leave messes and don’t always put things back where they belong. Traditionally, roboticists have had to take a brute-force approach to this, anticipating every edge case and deviation and programming the robot to manage them in advance.

This is important stuff if robots are going to function as advertised in the real world. Equally important is what roboticists deem “general purpose” systems. Those are robots that can learn and adapt to new tasks. It’s a radical shift away from more traditional single-purpose systems that are trained to do one thing well over and over again. It’s worth remembering, however, that we’re still a ways away from anything that can credibly be considered “general purpose.”

Image Credits: Toyota Research Institute

Roboticists at TRI begin by teaching the systems through teleoperation, a common tool in robot learning. Here, that process can take a monotonous couple of hours, wherein the system is made to repeat the same task over and over.

“You can think of it as remotely driving a robot through demonstrations,” says Burchfiel. “Currently that number is usually several dozen. It usually takes you about an hour to teach a basic behavior. The system doesn’t really care how you control a robot. The one that we’ve been using most recently, which has enabled a lot more of these more dexterous behaviors, is a teleop device that’s actually transmitting force between the robot and person. This means that the person can feel what the robot is doing as it’s interacting with the world. It lets you do other things that you can’t otherwise coordinate.”
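To make the demonstration step concrete, here’s a minimal sketch of what teleoperated data collection can look like in code. The robot and teleop objects and their methods are hypothetical stand-ins rather than TRI’s actual interfaces; the point is simply that each demonstration is a logged sequence of observation/action pairs.

```python
import time

def record_demonstration(robot, teleop, hz=10, max_steps=600):
    """Drive the robot through one demonstration, logging (obs, action) pairs."""
    trajectory = []
    for _ in range(max_steps):
        obs = robot.get_observation()    # e.g. camera frames, joint positions, force readings
        action = teleop.read_command()   # the operator's commanded motion
        if action is None:               # operator ended the demonstration
            break
        robot.apply_action(action)
        trajectory.append((obs, action))
        time.sleep(1.0 / hz)             # sample at a fixed control rate
    return trajectory

# A behavior is typically taught from a few dozen such demonstrations:
# demos = [record_demonstration(robot, teleop) for _ in range(30)]
```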

The system utilizes all the data presented to it, including sight and force feedback, to produce a fuller picture of the task. As long as there is some overlap between the collected data (say, associating sight with touch), it’s able to replicate that activity using its built-in sensors. Force feedback is the key to understanding that you are, say, holding a tool correctly.
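As a rough illustration of how sight and force feedback can be fused into a single signal conditioning the policy, here’s a minimal PyTorch sketch. The architecture is an illustrative assumption, not TRI’s actual model.

```python
import torch
import torch.nn as nn

class MultimodalEncoder(nn.Module):
    """Fuse a camera image and a 6-axis force/torque reading into one vector."""
    def __init__(self, force_dim=6, embed_dim=256):
        super().__init__()
        # Small CNN for the image; a real system might use a pretrained backbone.
        self.vision = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )
        # Small MLP for the force/torque reading.
        self.force = nn.Sequential(
            nn.Linear(force_dim, 64), nn.ReLU(),
            nn.Linear(64, embed_dim),
        )
        self.fuse = nn.Linear(2 * embed_dim, embed_dim)

    def forward(self, image, force):
        z = torch.cat([self.vision(image), self.force(force)], dim=-1)
        return self.fuse(z)  # one embedding conditioning the policy

encoder = MultimodalEncoder()
obs = encoder(torch.randn(1, 3, 96, 96), torch.randn(1, 6))  # -> shape (1, 256)
```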

TRI says its initial experiments with tactile sensing “have been extremely promising.” Pancake flipping, for example, succeeded on 27 of 30 attempts (90%), a slight improvement over the non-tactile trials, which scored 83%. The contrast is far starker for dough rolling (96%) and food serving (90%): without tactile sensing, those numbers drop to 0% and 10%, respectively.

Once that aspect of the training is completed, the systems are left alone, as their neural networks get to work training overnight. If things go as planned, the skill will have been fully learned by the time researchers return to the lab the next morning.
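Given that the policy is diffusion-based, the overnight step plausibly amounts to fitting a noise-prediction network on the day’s demonstrations: noise is added to the demonstrated action sequences, and the network learns to predict it. Below is a deliberately simplified sketch of that objective; the noise schedule and the denoiser’s signature are assumptions, not TRI’s code.

```python
import torch
import torch.nn.functional as F

def train_overnight(denoiser, optimizer, demo_actions, demo_obs,
                    epochs=10_000, steps=50):
    """Fit denoiser(noisy_actions, t, obs) -> predicted noise on demo data."""
    for _ in range(epochs):
        # Pick a random noise level t for each demonstration in the batch.
        t = torch.randint(0, steps, (demo_actions.shape[0],))
        noise = torch.randn_like(demo_actions)
        # Blend clean demonstrated actions with noise according to t.
        # (A real DDPM schedule uses cumulative alphas; this is simplified.)
        alpha = 1.0 - t.float().view(-1, 1, 1) / steps
        noisy = alpha.sqrt() * demo_actions + (1 - alpha).sqrt() * noise
        loss = F.mse_loss(denoiser(noisy, t, demo_obs), noise)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```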

Image Credits: Toyota Research Institute

The system relies on diffusion policy, which the researchers behind it describe as “a new way of generating robot behavior by representing a robot’s visuomotor policy as a conditional denoising diffusion process.” In simpler terms, the policy generates actions by starting from random noise and iteratively removing that “noise,” conditioned on what the robot observes. It’s similar to much of what we’ve seen in the generative AI world, but here the process is used to produce robot behaviors rather than images.
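At inference time, that denoising process runs in reverse: the policy starts from pure noise over a short action sequence and cleans it up step by step, conditioned on the current observation. Here’s a heavily simplified sketch of that loop; the update rule is a stand-in for the proper DDPM/DDIM step used in the published Diffusion Policy work.

```python
import torch

@torch.no_grad()
def sample_actions(denoiser, obs, horizon=16, action_dim=7, steps=50):
    """Generate an action sequence by iteratively denoising pure noise."""
    actions = torch.randn(1, horizon, action_dim)  # start from random noise
    for t in reversed(range(steps)):
        t_batch = torch.full((1,), t)
        eps = denoiser(actions, t_batch, obs)      # predict the remaining noise
        # One simplified denoising step; a real update also rescales by the
        # noise schedule and may re-inject a small amount of noise.
        actions = actions - eps / steps
    return actions  # a short action sequence for the robot to execute
```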

I recognized recently that I was thinking about robotic learning wrong. I had previously considered different methods of teaching robots to be in conflict with one another, assuming that one superior method would ultimately crowd out the rest. It’s now clear to me that the way forward will be a combination of different methods, in much the same way that humans learn. Another important facet in all of this is fleet learning: effectively a centrally accessible, cloud-based system that robots can use to teach and learn from one another’s experiences.
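As a conceptual sketch only, fleet learning can be pictured as a shared skill store that any robot can push to and pull from; the store and its API below are hypothetical, purely to illustrate the idea.

```python
class SkillStore:
    """Toy stand-in for a cloud-hosted repository of learned skills."""
    def __init__(self):
        self._skills = {}  # skill name -> policy weights

    def upload(self, name, weights):
        self._skills[name] = weights   # one robot shares what it learned

    def download(self, name):
        return self._skills.get(name)  # any other robot can reuse it

store = SkillStore()
store.upload("flip_pancake", {"policy_weights": "..."})
weights = store.download("flip_pancake")  # a second robot acquires the skill
```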

One of the key next steps is the creation of Large Behavior Models to help robots learn. “We’re trying to scale,” says Vice President of Robotics Research Russ Tedrake. “We’ve trained 60 skills already, 100 skills by the end of the year, thousands of skills by the end of next year. We don’t really know the scaling laws yet. How many skills are we going to have to train where something completely new comes out the other end? We’re studying that. We’re in the regime now where we can start asking these pretty fundamental questions and start looking for the laws to know what kind of timeline we’re on.”

Image Credits: Toyota Research Institute

Further down the road, the team hopes such findings will lead to more capable robots that can interact with novel objects in new settings, creating actions on the fly based on trained behaviors. In many cases, tasks are composed of smaller behaviors that can be strung together and executed. All in due time, of course.

In the meantime, Pratt is set to join Boston Dynamics AI Institute Executive Director Marc Raibert on Thursday as part of Disrupt’s Hardware Stage. The pair will discuss these breakthroughs and more.
