How to fake a robotics demo for fun and profit

As robot videos rack up ever more viral views, it’s important to develop a critical eye

In March 2008, a roboticist in winter wear gave Big Dog a big kick for the camera. The buzzing DARPA-funded robot stumbled but quickly regained its footing in the snowy parking lot. “PLEASE DO NOT KICK THE WALKING PROTOTYPE DEATH MECH,” pleads the video’s top comment. “IT WILL REMEMBER.”

“Creepy as hell,” notes another. “Imagine if you were taking a walk in the woods one day and saw that thing coming towards you.” Gadget blogs and social media accounts variously tossed out words like “terrifying” and “robopocalypse,” in those days before Black Mirror gave the world an even more direct shorthand. Boston Dynamics had a hit. The video currently stands at 17 million views. It was the first of countless viral hits that continue to this day.

It’s hard to overstate the role such virality has played in Boston Dynamics’ subsequent development into one of the world’s most instantly identifiable robotics companies. Big Dog and its descendants, like Spot and Atlas, have been celebrated, demonized and parodied, and have even appeared in a Sam Adams beer ad. Along with developing some of the world’s most advanced mechatronics, the Boston Dynamics team has proven itself extremely savvy at marketing.

There’s much to be said for the role such videos have played in spreading the gospel of robotics.

It seems likely that videos like these have inspired the careers of countless roboticists now thriving in the field. It’s a model many subsequent startups have adopted, with varying degrees of success. Boston Dynamics certainly can’t be held responsible for any of those companies that might have taken a few shortcuts along the way.

In recent decades, viral robot videos have grown from objects of curiosity among the technorati to headline-grabbing hits filtered through TikTok and YouTube. As the potential rewards have increased, so too has the desire to soften the edges. Further complicating matters is the state of CGI, which has become indistinguishable from reality for many viewers. Confirmation bias, attraction to novelty and a lack of technical expertise all play key roles in our tendency to believe fake news and videos.

You can forgive the average TikTok viewer, for instance, for not understanding the intricacies of generalization. Many roboticists have — perhaps unintentionally — added fuel to that fire by implying that the systems we’re seeing in videos are “general purpose.” Multi-purpose, perhaps, but we’re still some ways off from robots that can perform any task their hardware allows.

More often than not, the videos you see are the product of months or years of work. Somewhere on a hard drive sit the hours of footage that didn’t make the final cut, featuring a robot stumbling, sputtering or stopping short. This is precisely why I’ve encouraged companies to share some of these videos with the TechCrunch audience. Perhaps unsurprisingly, few have taken me up on the offer. I suspect much of this comes down to how people perceive such information. Among roboticists, the hours and days of trial and failure are an indication of how hard you’ve worked to get to the final product. Among the general public, however, such robot failures may be seen as a failure on the part of the roboticists themselves.

Back in a 2023 issue of Actuator (RIP), I praised Boston Dynamics for the “blooper reel” they published featuring Atlas losing its footing and falling in between successful parkour moves. As usual, a lot more ended up on the cutting room floor than made the final cut. Even when not dealing with robots, that’s just how things go.

A few weeks back, I attended a talk by director Kelly Reichardt following a screening of her wonderful new(ish) film, “Showing Up.” She reiterated that old W.C. Fields chestnut about never working with children or animals. In most cases, I would probably add advanced mechatronics to that list.

Along with CG/renders, creative editing is just one of many potential ways to sweeten a robotics demo. More often than not, the intent is not malicious. A sentiment musicians frequently share with me on my podcast is that once a song is released into the world, you no longer have control over it. To a certain extent, I believe the same can be true with video. Choices are made to tighten things up and sweeten the presentation. These are an essential part of making consumable online videos. Especially in the age of TikTok, however, context is the first casualty.

There’s no rulebook for what information one needs to include in a robotics demo. The more I think about it, however, the more I believe there should be — at the very least — some well-defined guidelines. I am not a roboticist. I’m just a nerd with a BA in creative writing. I do, however, regularly speak with people far smarter than myself about the subject.

Just ahead of CES, a LinkedIn post caught my eye (as well as, it seems, the eyes of much of the robotics community). It was penned by Brad Porter, the Collaborative Robotics founder and CEO who formerly headed Amazon’s industrial robotics efforts. I rarely recommend LinkedIn follows, but if you care about the space at all, he’s a good one.

In the piece, Porter notes that CES would likely be lousy with cool robotics demos (it was), but adds, “there are also a lot of amazing trick-shot videos out there. Separating reality from stagecraft is hard.” The executive wasn’t implying any of the negative baggage that a word like “stagecraft” might have in this context. He was instead simply suggesting that viewers approach such videos with a discerning and — perhaps — skeptical eye.

I’ve been covering this space for a number of years and have developed some of the skills to spot robotic kayfabe. But I still often lean on experts in the field like Porter when a demo feels off. Of course, not every viewer has my experience or access to these folks. They can, however, equip themselves with the knowledge of how such videos are sweetened — maliciously or otherwise.

Porter identifies five different points. The first is “stop-motion.” This refers to a succession of rapid edits that make it appear as though the robot is moving in ways it’s incapable of in real life.

“If you see a robotics video with a lot of frame skips or camera cuts, [be] wary,” he writes. “You’ll notice Boston Dynamics videos are often one cut with no camera cuts, that’s impressive.”
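
If you want to go beyond eyeballing, a rough version of this check is easy to automate. Below is a minimal Python/OpenCV sketch (my own illustration, not a tool Porter or anyone else uses) that flags likely hard cuts by comparing color histograms of consecutive frames. The 0.5 threshold and the “demo.mp4” filename are placeholder assumptions.

```python
# Crude hard-cut detector: flags frames that differ sharply from the previous one.
# Heuristic only: lighting changes and very fast pans can also trigger it.
import cv2

def find_hard_cuts(path, threshold=0.5):
    """Return timestamps (seconds) where consecutive frames differ sharply."""
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    cuts, prev_hist, frame_idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [50, 60], [0, 180, 0, 256])
        cv2.normalize(hist, hist)
        if prev_hist is not None:
            # Correlation near 1.0 means similar frames; a sharp dip suggests a cut.
            similarity = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL)
            if similarity < threshold:
                cuts.append(frame_idx / fps)
        prev_hist, frame_idx = hist, frame_idx + 1
    cap.release()
    return cuts

print(find_hard_cuts("demo.mp4"))  # placeholder filename
```

A genuinely continuous take should return few or no timestamps; a heavily edited “single shot” will light up.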

The second is simulation. This is, in practice, the CG example I gave above. Simulation has become a foundational tool in robotic deployment. It allows people to run thousands of scenarios simultaneously in seconds. Along with other computer graphics, robotic simulation has grown increasingly photorealistic in recent years. Creating and sharing a realistic simulation isn’t a problem in and of itself. The issue, rather, arises when you pass off such things as reality.
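
For a sense of why simulation scales the way it does, here’s a deliberately simplified sketch. It steps 10,000 toy “robots” in parallel with plain NumPy, as a stand-in for real simulators like MuJoCo or Isaac Sim, which apply the same vectorized principle to far richer physics. Every number here is invented for illustration.

```python
# Toy illustration of why simulation scales: stepping 10,000 scenarios
# in parallel is just vectorized math.
import numpy as np

rng = np.random.default_rng(0)
n_scenarios = 10_000

# Each scenario: a 1D cart at position x with velocity v, randomized starts.
x = rng.uniform(-1.0, 1.0, n_scenarios)
v = np.zeros(n_scenarios)
dt = 0.01

for _ in range(1_000):             # 10 simulated seconds per scenario
    force = -2.0 * x - 0.5 * v     # a simple PD controller pushing toward x = 0
    v += force * dt                # semi-implicit Euler integration
    x += v * dt

# One pass, all 10,000 rollouts: fraction that ended up near the goal.
print(f"success rate: {(np.abs(x) < 0.05).mean():.1%}")
```

The toy physics is beside the point; what matters is that evaluating 10,000 rollouts costs barely more than evaluating one, which is why polished sim footage can be produced at scale.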

Issue three has a fun name. Wizard of Oz demos are so named because of the heavy lifting done by the [person] behind the curtain (pay no attention). Porter cites Stanford’s Mobile ALOHA demo as an example. I strongly believe there was no malice involved in the decision to run the (still extremely impressive) demo via off-screen teleop. In fact, the “robot operator,” Tony Zhao, appears in both the video and end credits.

Unfortunately, the appearance occurs two-and-a-half minutes into a three-and-a-half-minute demo. These days, however, we have to assume that:

  1. No one actually has the attention span to sit through two-and-a-half minutes of incredible robot footage anymore.
  2. This thing is going to get sliced up and stripped of all context.
  3. Your average TikTok or X (Twitter) viewer isn’t going to hunt down the video’s source.

For another example that arrived shortly after Porter’s post, take a look at Elon Musk’s X video of the Optimus humanoid robot folding laundry. The video ran with the text “Optimus folds a shirt.” Eagle-eyed viewers such as myself spotted something interesting in the lower right-hand corner: a gloved hand that occasionally popped partially into frame, matching the robot’s movements.

“Framing the Optimus laundry video just a few more inches to the left and you would have missed what looks like a tele-op hand controlling Tesla Bot,” I noted at the time. “Nothing wrong with tele-op, of course. It has some excellent applications, including training, troubleshooting and executing highly specialized tasks like surgery. But it’s nice to know what we are (and are not) seeing. This strikes me as an obvious case of the original poster omitting key information, understanding that his audiences/fans will fill in the gaps with what they believe they’re seeing based on their feelings about the messenger.”

It may well be wrong to accuse Musk of intentionally obfuscating the truth here. Twenty-three minutes after the initial tweet, he added, “Important note: Optimus cannot yet do this autonomously, but certainly will be able to do this fully autonomously and in an arbitrary environment (won’t require a fixed table with box that has only one shirt).”

As not-Mark Twain famously noted, “a lie can travel halfway around the world while the truth is still putting on its shoes.” A similar principle can be applied to online video. The initial tweet isn’t exactly a lie, of course, but it can certainly be classified as an omission. It’s the old newspaper thing of hiding your corrections on page A12. Far more people will be exposed to the initial error.

Again, I’m not here to tell you whether or not that initial omission was intentional (if you choose to apply the benefit of the doubt here, you can absolutely see the follow-up tweet as a genuine clarification of incomplete context). In this specific instance, I suspect most opinions on the matter will be directly correlated with one’s personal feelings about its author.

Porter’s next example is “Single-task Reinforcement Learning.” You can do a deeper dive on reinforcement learning here, but for the sake of brevity in a not-at-all brief article, let’s just say it’s a way to teach robots to perform tasks with repetitive real-world trial and error.

“Open a door, stack a block, turn a crank,” writes Porter. “Learning these tasks is impressive and they look impressive and they are impressive. But a good RL engineer can make this work in a couple of months. One step harder is to make it robust to different subtle variations. But generalizing to multiple similar tasks is very hard. In order to be able to tell if it can generalize, look for multiple trained tasks.”
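
To make “single-task” concrete, here’s a toy tabular Q-learning sketch. The task (walking to the “door” at the end of a five-state corridor), along with the rewards and hyperparameters, is invented for illustration; real robot RL involves continuous observations and large neural networks. Note what the learned policy can’t do: it won’t transfer to a six-state corridor, let alone an actual door.

```python
# Tabular Q-learning on a toy 5-state corridor: the agent must walk right
# to "the door" at state 4. A stand-in for single-task RL.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.95, 0.1

for episode in range(500):
    s = 0
    while s != n_states - 1:        # episode ends at the "door"
        # Epsilon-greedy: mostly exploit, occasionally explore.
        a = rng.integers(n_actions) if rng.random() < epsilon else int(np.argmax(q[s]))
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        reward = 1.0 if s_next == n_states - 1 else -0.01  # small step cost
        # Standard Q-learning update.
        q[s, a] += alpha * (reward + gamma * np.max(q[s_next]) - q[s, a])
        s = s_next

# Learned policy per state, e.g. [1 1 1 1 0]: always go right
# (the terminal state's row is never updated, so it stays at 0).
print(q.argmax(axis=1))
```

Run it and the printed policy is the entire “skill”: five numbers, valid for exactly this corridor.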

Like teleop, there’s absolutely nothing wrong with reinforcement learning. These are both invaluable tools for training and operating robots. You just need to disclose them as clearly as possible.

Porter’s final tip is monitoring the environment for potential omissions. He cites the then-recent video of Figure’s humanoid making coffee. “Fluid, single-cut, shows robustness to failure modes,” he writes. “Still just a single task, so claims of robotics’ ChatGPT moment aren’t in evidence here. Production quality is great. But you’ll notice the robot doesn’t lift anything heavier than a Keurig cup. Picking up mugs has been done, but they don’t show that. Maybe the robot doesn’t have that strength?”

When I spoke with Porter about the intricacies of the post today, he was once again quick to point out that these observations don’t detract from what is genuinely impressive technology. The issue, however, is that our brains have a tendency to fill in gaps. We anthropomorphize robots and assume they learn the way we do, when in reality, watching a robot open one door absolutely doesn’t guarantee that it can open another — or even the same door under different lighting. TV and movies have also given us unrealistic expectations of what robots can — and can’t — do in 2024.

One last point that didn’t make it into the post is speed. The technology can be painfully slow at times, so it’s common to speed things up. For the most part, universities and other research facilities do a good job noting this via a text overlay. This is the way to do it. Add the pertinent information on screen in a way that is difficult for a click-hungry influencer to crop out. In fact, this phenomenon is how 1X got its name.
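
For what it’s worth, burning the disclosure into the pixels is trivial. Here’s a minimal OpenCV sketch (filenames and styling are placeholders of my own) that doubles playback speed by dropping alternate frames and stamps “2x SPEED” on every frame it keeps, so the label survives re-cropping and reposting far better than a caption would.

```python
# Speed up a demo video 2x and burn a disclosure into every frame.
import cv2

cap = cv2.VideoCapture("demo.mp4")   # placeholder input filename
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter("demo_2x.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

i = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if i % 2 == 0:                   # keep every other frame at the same fps = 2x speed
        cv2.putText(frame, "2x SPEED", (20, h - 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.2, (255, 255, 255), 3)
        out.write(frame)
    i += 1

cap.release()
out.release()
```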

A recent video from the company showcasing its use of neural networks draws attention to this fact. “This video contains no teleoperation, no computer graphics, no cuts, no video speedups, no scripted trajectory playback,” the company explains. “It’s all controlled via neural networks.” The result is a three-minute video that can feel almost painfully slow compared to other humanoid demos.

As with the blooper videos, I applaud this — and any — form of transparency. For truly slow-moving robots, there’s nothing wrong with speeding things up, so long as you stick to three important rules:

  1. Disclose
  2. Disclose
  3. Disclose

Much like the songwriter, companies have to acknowledge that they can’t control what happens to a video once it belongs to the world. But they should ask themselves: Did I do everything within my power to stem the spread of potential fakery?

It’s probably too much to hope that such videos will be governed by the same truth-in-advertising legislation that governs television commercials. I would, however, love to see a group of roboticists join forces to standardize how such disclosures can — and should — work.