It was genuinely a bit surreal seeing deep green hills in the South Bay last week. Growing up in Fremont, I know the change from brown to green is about as close as we get to having seasons, but it’s been so long since I’ve seen them, I’d genuinely forgotten they can exist.
It’s an understatement to say that the return of rain has been something of a mixed blessing in Northern California. I know several people who are still reeling from the recent floods, but this brave new world to which we all belong seems to only operate in extremes when it comes to the weather.
Since returning home to Queens for a few days (I fly out for Mobile World Congress on Friday), several people have commented about how nice it must have been to get out of the cold in February. These are all people who, presumably, have never been to Northern California. It’s a place that will happily turn your notions of sunny beach weather completely on their head — and I love it for that. Certainly, it beats the rolling-blackout-inducing 105-degree heat wave I experienced in Cupertino over the summer.
Mark Twain, who moved out west to San Francisco as a young journalist in the 1860s, reportedly never actually said, “The coldest winter I ever spent was a summer in San Francisco.” But it’s a sentiment as clever as it is true, and that’s why we keep attributing it to the man more than 100 years later. The equally accurate “If you don’t like the weather in New England, just wait a few minutes” also appears to be of dubious origins.
Apologies to the friends and family who are first learning about this trip through this newsletter — it was a 48-hour trip. I was in town for the TRI (Toyota Research Institute) event in Los Altos, the little town next to Palo Alto and Mountain View that was home to the garage where Apple was founded by a pair of Steves.
As the Uber driver approached the address I’d given him, I assumed something was wrong. We were dead in the center of a massive shopping center. But no, the Mountain View Hyatt is right there, in between a Trader Joe’s and a Walmart — a perfect setting if you want the real South Bay experience the locals get.
Wednesday was TRI day. I wrote a bit about it in last week’s Actuator, and you can read a more in-depth write-up of the experience over here. I find the world of corporate research institutes a fascinating one. The nature of the organization is clearly dependent on the governing corporate body. It’s a question of both resources and focus. All big companies are, on a fundamental level, driven by the same central forces of capitalism — namely revenue and shareholder profit. It’s a crass way of looking at the world, but it’s also the most honest.
That said, organizations have different expectations when it comes to research. When we traditionally think of R&D, it’s work that’s directly plugged into a company’s roadmap. Those engineers are often working on one or two product iterations down the road. Larger companies with more resources are afforded the ability to take on a broader definition of what research entails under their umbrella.
Consider Alphabet X one of the most prominent examples of this model. The 13-year-old “moonshot factory” has given us Waymo, Google Glass, Loon, Wing and — more recently — robotics firms Intrinsic and Mineral. Your individual mileage will vary on those projects, of course, but the driving force is clear: It’s a venue to take the sorts of risks that are traditionally verboten in a large corporation. We tend to think of X as more of an in-house accelerator. Obviously not every team graduates to the status of startup, but that’s presumably the goal.
At the other end of the spectrum is the Boston Dynamics AI Institute. This is top of mind for me at the moment, mostly because I’ll be speaking with executive director Marc Raibert as part of Monday’s TC City Spotlight: Boston event.
Here’s a quick preview of what Raibert has to say with regard to the organization’s relationship to productizing research:
We have a multi-prong plan. We can do spinouts. For some, spinouts are seen as a way to commercialize. For me, it’s a way to protect the institute from products.
There’s a sense in which the Institute exists as a direct result of Boston Dynamics’ move to commercialize research as part of Hyundai. Granted, the whole thing is quite new, but its work is separate from Boston Dynamics’ own internal R&D. The revelation of potential spinouts is an interesting one as well.
Obviously, TRI is its own beast, but I would generally position it somewhere between these two approaches. Clearly there is research being done on things like autonomy and EVs that are destined for Toyota’s automotive products, but the teams also appear to have a lot of control over the direction of their research. The question of milestones was important, given the layoffs going around the tech industry now. The standard for measuring success is often different from what you’ll find in a university setting, for instance.
It can be especially difficult for researchers to quantify progress in the context of a moneymaking enterprise. Last week I had a great conversation with TRI SVP Max Bajracharya about the ups and downs of the institute’s work in robotics. Here’s part of that conversation. And below is another chunk of my conversation with him — read the full interview here:
TC: How do you measure milestones? What does success look like for your team?
MB: Moving from the home to the grocery store is a great example of that. We were making progress on the home but not as fast and not as clearly as when we move to the grocery store. When we move to the grocery store, it really becomes very evident how well you’re doing and what the real problems are in your system. And then you can really focus on solving those problems. When we toured both logistics and manufacturing facilities of Toyota, we saw all of these opportunities where they’re basically the grocery shopping challenge, except a little bit different. Now, instead of the parts being grocery items, the parts are all the parts in a distribution center.
You hear from 1,000 people that, you know, home robots are really hard, but then you feel like you have to try it for yourself, and then you really do make all the same mistakes that they did.
I think I’m probably just as guilty as everybody else. It’s like, now our GPUs are better. Oh, we got machine learning and now you know we can do this. Oh, okay, maybe that was harder than we thought.
Something has to tip it at some point.
Maybe. I think it’s going to take a long time. Just like automated driving, I don’t think there’s a silver bullet. There’s not just like this magical thing, that’s going to be ‘okay, now we solved it.’ It’s going to be chipping away, chipping away, incrementally. That’s why it’s important to have that kind of roadmap with the shorter timelines, you know, shorter or shorter milestones that give you the little wins, so you can keep working at it to really achieve that long-term vision.
What’s the process for actually productizing any of these technologies?
That’s a very good question that we are ourselves trying to answer. I believe we kind of understand the landscape now. Maybe I was naïve in the beginning thinking that, okay, we just need to find this person that we’re going to throw the technology over to — a third party or somebody inside of Toyota. But I think we’ve learned that, whatever it is — whether it’s a business unit, or a company, or like a startup or a unit inside of Toyota — they don’t seem to exist. So, we are trying to find a way of creating one, and I think that’s the story of TRI-AD, a little bit as well. It was created to take the automated driving research that we were doing and translate it into something that was more real. We have the same problem in robotics, and in many of the advanced technologies that we work on.
I recommend reading the full interview for more insight into the state of home robots, and why we’re all so driven to find out firsthand how difficult things are. It was a candid conversation of a kind we don’t see often enough. Like so:
The problem is we couldn’t measure how well we were doing. Let’s say we were a little better at tidying this one house — we don’t know if that’s because our capabilities got better or if that house was a little easier. We were doing the standard, “show a demo, show a cool video. We’re not good enough yet, here’s a cool video.” We didn’t know whether we were making good progress or not. That’s where the grocery challenge task came from: we said, we need an environment that’s as hard as a home, or has the same representative problems as a home, but where we can measure how much progress we’re making.
As edifying and interesting as the conversation was, I’m not at a point in my life where I’m into doing a 12-hour roundtrip flight for a 24-hour trip (shoutout to my emergency rabbit sitters). Thankfully, the South Bay was more than happy to oblige. I bookended Thursday with a pair of 90-minute Palo Alto VC meetings. In the morning, I chatted with Rohit Sharma of True Ventures and ended the afternoon talking to Bruce Leak and Peter Barrett 10 minutes away at Playground.
True Ventures was a nice little bit of kismet. I reached out because I happened to be in the process of writing up two of their portfolio companies, Bigscreen and Current Surgical, which each recently raised rounds. Neither one is a robotics firm, so I won’t bore you with the details here.
There’s a lot of great insight I’m still going over, but I wanted to share a handful of interesting quotes from the three very smart VCs I spoke with.
We’re not looking at general robotics or a humanoid robot walking around and assisting. That’s very far away. That’s a good mental construct to have, but it’s not product or tech.
Manual labor on the edges of all robotics is always going to be there. Even in the exceptions — let’s say I could invent a machine that prepares plants and perhaps the field and handles the produce — even in that scenario, in every 100-foot operation, there are going to be exceptions that need to be manually handled. Everybody, when they pitch their strawberry-picking robot or their favorite picking robot, shows 10 people clustered around it.
Bruce Leak: Is it too soon? It’s all about timing. In the limit, there will be a humanoid robot-ish thing that is everywhere. That there’s an app store for. You want that thing to clean your garage or C-3PO shoveling the snow?
Peter Barrett: The Rosie the Robot thing is just a ’60s cartoon fantasy. And it will continue to be so. […] I think the art of investing, in essence, is to know where to put the money. We can achieve something that is practical, deployable and robust.
BL: Over the last four years, everyone’s rotated out of autonomous taxis to: can I do that on a sidewalk? Can I do that in a planned community? Can I use that for both a lawn and a golf course? Can I use that on an agricultural field? What don’t I want? I don’t want regulatory, I don’t want people in the way. I don’t like high speed.
PB: The difference between unloading a truck at Amazon and then climbing out of an autonomous vehicle to deliver a package is a software problem. That thing is physically capable of doing all kinds of dexterous things, but we don’t have software that can deal with grass and hoses and dogs and all that sort of stuff. The mechatronics are well ahead of the intelligence.
PB: People are capable of doing things that robots haven’t the slightest idea how to do. That will be true for decades.
Swap Robotics, which made the Battlefield finals at last year’s Disrupt, just announced a $7 million seed. The company turned heads for its modular systems, which can mow lawns and clear snow, coupled with the narrow focus of solar farms. CEO Tim Lichti told TechCrunch:
The solar market is a beachhead market for us. Starting with solar vegetation cutting solves a massive problem for customers, since cutting the grass and vegetation is the biggest ongoing expense once utility-scale solar installations are built. Large solar sites are fenced off from the public, and that allows our 100% electric robots to safely run 24/7 and maximize acreage cut per month. The global market for solar cuts is expected to reach tens of billions of dollars annually by the 2030s.
And here’s $15 million in Series C3 (yeah, I dunno) funding for Pudu. The Shenzhen firm has really squeezed all it can out of the C round here, having raised a $77 million Series C2 in 2021. Presumably the C4 will be truly explosive. The company makes a broad range of robots, largely focused on the food service industry. Per a press release:
Felix Zhang, founder and CEO of Pudu Robotics, said the company is developing steadily in the delivery segment, with growth in overseas shipments continuing to accelerate. The company derived 90% of its revenue from sales in 2022. Pudu Robotics has steadfastly refused to chase fast growth by rolling out low-price models or allowing leased products to take up too large a proportion of revenue, ensuring solid operating indicators in terms of payments and collections. In 2022, cash revenue from operations increased by nearly 40% year on year, with shipments exceeding 20,000 units. High-end models, most notably the BellaBot, are widely recognized across global markets. Japan’s leading restaurant group Skylark ordered 3,000 BellaBots in one go, setting an industry record.
One topic that was unavoidable last week (and, frankly, every week now) was ChatGPT. I suppose we’ve made positive progress from the days of blockchain robotics in the metaverse (hard to believe January was only three weeks ago). I wrote a big, long thing about hype cycles a couple of weeks ago, so keep that firmly in mind as I share this story from Microsoft about using ChatGPT to control robotic systems. The paper describes the familiar bottleneck of generating code to operate robotic systems:
ChatGPT unlocks a new robotics paradigm, and allows a (potentially non-technical) user to sit on the loop, providing high-level feedback to the large language model (LLM) while monitoring the robot’s performance. By following our set of design principles, ChatGPT can generate code for robotics scenarios. Without any fine-tuning, we leverage the LLM’s knowledge to control different robot form factors for a variety of tasks. In our work we show multiple examples of ChatGPT solving robotics puzzles, along with complex robot deployments in the manipulation, aerial, and navigation domains.
It’s a problem a lot of people are spending a lot of money to solve. While I like the idea of creating a system that can effectively go all WYSIWYG on robotic programming, we’re at the stage in the process where these things need to be viewed as (a) interesting ideas and (b) hype. At the very least, though, these are interesting avenues to explore as we work to make robots more accessible to nonroboticists.
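For the curious, the human-on-the-loop pattern the paper describes can be sketched roughly like this: a model proposes candidate control code, and a (potentially non-technical) user approves it or sends back high-level feedback before anything runs on hardware. This is a minimal illustration with a stubbed stand-in for the LLM; every function and parameter name here is hypothetical, not from the Microsoft paper.

```python
# Sketch of a human-on-the-loop code-generation cycle. The "LLM" is a
# stub; in practice you'd call a real model. All names are hypothetical.

def generate_robot_code(task: str, feedback: list[str]) -> str:
    """Stand-in for an LLM call: returns candidate control code as text,
    folding any accumulated user feedback into the next attempt."""
    if "slow down" in feedback:
        return f"move_arm(target='{task}', speed=0.2)"
    return f"move_arm(target='{task}', speed=1.0)"

def human_on_the_loop(task: str, max_rounds: int = 3) -> str:
    """The user reviews each candidate and either approves it or replies
    with a high-level, non-technical note for the next round."""
    feedback: list[str] = []
    for _ in range(max_rounds):
        code = generate_robot_code(task, feedback)
        # Simulated review: the user rejects full-speed motion once.
        if "speed=1.0" in code:
            feedback.append("slow down")
            continue
        return code  # approved: only now would it run on the robot
    raise RuntimeError("no approved code within round limit")

approved = human_on_the_loop("red block")
print(approved)  # move_arm(target='red block', speed=0.2)
```

The point of the pattern is that nothing executes until a person signs off, and that the feedback channel is plain language rather than code — which is what makes it plausible for nonroboticists.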
Robotics Companies That Are Hiring:
Righthand Robotics (7 Roles)
Chef Robotics (13 Roles)
Impossible Metals (2 Roles)
All right, I’m off to Catalonia. Meantime, why not subscribe to Actuator?