
Description: Rodney Brooks brought us the Roomba, then turned his attention to robots for manufacturing, and now says they will revolutionize elder care, among other things. He says they need not only dexterity but common sense. He is not as alarmed as you are about the singularity.
Speaker: Rodney Brooks, MIT
Interviewer: Sophia Stuart
(Transcription by RA Fisher Ink)
Stuart: So now it’s my pleasure to introduce Dr. Rodney Brooks. He’s Professor Emeritus at MIT, the founder of iRobot, the founder of Rethink Robotics, and we’re going to talk about robots without fear. So nice to see you again. When I interview people about the robots that they make or they’ve seen, I usually start by asking which robot they first met. But with you, it was the first robot that you made, back in Australia. So would you talk to us about that?
Brooks: Yeah. So I was fascinated by computers, which I’d never seen, and robots, and started trying to build them when I was about eight years old. But robots were much too hard, because the mechanical parts had to work together. So I finally built my first robot when I was about 16.
Stuart: And what did you call it?
Brooks: It was called Norman.
Stuart: Because?
Brooks: Because there was a silly TV show called Norman Gunston, and it was named after that. But it was really based on some robots that Dr. Grey Walter had built in Bristol in the 1940s.
Stuart: Oh wow.
Brooks: With vacuum tubes. And mine were transistorized versions. So Norman would wander around the floor, follow lights, bump into things and back away, but it would tend to get stuck under four-legged chairs. It would find its way in and then couldn’t find its way out.
Stuart: So almost there—well, we’ll get to the Micro Rovers, but I just want to deal with MIT. So you built many, many robots at MIT. Is there one in particular—I remember when I first came to the lab, sadly after you had gone, and I met Kismet for the first time and had a real response. That was my understanding of real human-robot interaction.
Brooks: Yeah. We built lots and lots of robots. I want to say one thing. People here saw Art Schectman, who talked about people on the spectrum. He actually was in my lab. He built an autonomous colon-crawling robot. We called it butt-bot. But that wasn’t my favorite.
[LAUGHTER]
Stuart: What’s your favorite?
Brooks: My favorites were really Cog and Kismet, which were humanoid in form, and what I liked about them was that we discovered how easy it was to get people to engage in social interaction without content. So Kismet would sit like you, nod, respond to—
Stuart: I remember the blinking.
Brooks: Take turns, and extract, from the prosody of the voice, some emotional content, provide emotional content, and to people who didn’t know better, it seemed like they were having an actual conversation. And when someone said, “I’d like to show you my watch,” Kismet, which had very simple things about tracking moving objects, would go and look at the watch.
Stuart: And literally sort of move. I remember that. Sort of move with real intent.
Brooks: Yeah. And so I likened it to a conversation in a bar at 2:00 a.m.
Stuart: Okay [LAUGHS]
[LAUGHTER]
Brooks: Not actual content, but you’re sort of going through the—
Stuart: Yeah. We’re on the same wavelength somewhere. I love that. Okay. So whenever we ask people, “Would you have a robot in your house?” most people do that kind of horrified look and say no, and yet 20 million people have a Roomba, which you were behind with iRobot. But I really wanted to focus on the fact that your original Micro Rover concept was much, much earlier than that. And I’m not sure many people know this, but we should talk about it. It was deployed within hours of the Fukushima disaster. Would you talk about what the original intent behind that was and how it’s been used?
Brooks: Actually, iRobot started out as a space exploration company. We were going to send private missions to the moon and to Mars. And that was our original model. We had graphics built around that. We were going to send little six-legged rovers to the moon, multiple ones, covered in advertising decals, and then we were going to sell advertising on the moon. And that was inspired by a robot that Colin Angle and I had built in 1988 called Genghis, a six-legged walking robot. And it spent many years in the Smithsonian. It’s now in my living room. Everyone should have a museum robot in their living room. But Colin then went and worked at JPL for a summer. He was an undergrad at MIT at the time. He’s now the CEO of iRobot. And out of that came a Micro Rover project, and we were really excited. We thought maybe with NASA we could send it to the moon or Mars. Things were slow though, so we teamed up with the Ballistic Missile Defense Organization, BMDO, who had recently lost their reason for being when the Soviet Union collapsed. And we built a little rover that was launched at Edwards Air Force Base, did a soft landing, and was going to go to the moon, ultimately, but then NASA got involved and, ultimately, they took the prototypes that Colin had worked on at JPL, which had since been modified, and that’s how the first rover, Sojourner, got to Mars in 1997.
Stuart: So there’s some of that within Pathfinder, right?
Brooks: Yup, Pathfinder was absolutely based on that. And then Fukushima. We had 14 failed business models at iRobot. But we had two successful ones: the Roomba and the PackBot, which was used in Afghanistan and Iraq for roadside bombs. When Fukushima happened, March 11, 2011, they didn’t know what was going on in the buildings. They knew there was high radiation. We got a call about a week later saying, “It would be perhaps helpful if you could send some robots.” We got them there within 48 hours, and we sent out those robots. They went into the very high radiation areas. It was a 40-year-old GE nuclear power plant. And there was nothing digital there. There were just analog dials. So we sent out one robot that acted as a WiFi hotspot; the next robot went out, put a camera on the dials, the pressure gauges, etcetera, sent those images back, and that’s how they knew what was going on. And we were able to get to cold shutdown.
Stuart: Amazing.
Brooks: And those robots are still there.
Stuart: Yeah. I was going to say you can’t extract them, can you?
Brooks: No. And they’ve bought many more of them, and I’ve been to Fukushima. It was the most sobering day of my life, going in there.
Stuart: Yeah. It must have felt amazing actually being able to be part of the solution then.
Brooks: Very gratifying. Yeah.
Stuart: Yeah. I’m sure. I’m sure. We should briefly talk about Rethink Robotics, and then I want to get onto robots in eldercare. So sadly, it is no more. But it was just ahead of its time, and it’s still very important. You know, I met many Baxters and Sawyers. Do you want to just talk about that?
Brooks: Yeah. So Rethink Robotics was trying to—it was a fantastic artistic success. It changed what people expect about robots, that you can have them this close to a person—
Stuart: They were very personable. Especially Sawyer.
Brooks: In a factory, that you no longer have to use a 1980s scripting language, but you can show them what to do. But now I’ve started to think of them as what the Segway was to personal transportation, but for industrial robots. The Segway had a critical idea: a personal, electric vehicle that you could drive around in the city. But it didn’t get everything right. One thing was that the individual owned the Segway.
Stuart: Right. And the form factor has some issues.
Brooks: And the form factor. But now we see the little scooters.
Stuart: Yeah. The Bird.
Brooks: And everyone’s like, “Yeah, that’s sort of the right thing.” But they had to adjust a whole bunch of other things which were beyond that original idea of a personal electric vehicle.
Stuart: I always liked the Albert Einstein quote, “If we knew what we were doing, it wouldn’t be called research.”
[LAUGHTER]
Brooks: Yeah. And it wouldn’t be called startups either.
Stuart: Yeah. Exactly. Exactly. So in the last five minutes, I really want to talk about something that I know you’re very passionate about. I’ve read many of your writings on this, and I was at the VA hospital recently where they’re using robots for cognitive decline. But you’re talking about robots that are dexterous, that have common sense. Can you explain that?
Brooks: Yeah. So a few different speakers have talked about the demographic change in the world. In North America; in Japan; in China, it’s coming; in Europe, definitely; Italy is already there. Many, many more older people relative to the number of younger people. Ratios are going from something like nine younger people for each elderly person to two younger people for each elderly person. So what does that mean for eldercare? How are people going to be looked after as they decline? And one of the things about decline and health costs is that, I think, as people get older they mostly want independence, and they want dignity. And going into managed care, you sort of give up both those things, and it costs a lot. So if people can stay in their own homes longer and maintain their independence and their dignity, I think there’ll be a great pull for that, especially since there won’t be enough workers in managed care. Most places have relied on immigrants for that, and that’s not going to work. So how can robots help people in the home? And here’s a simple example: getting the right form factor and getting it to work is difficult, but many people have to go into managed care when they can no longer get into and out of bed by themselves, when they can no longer go to the bathroom by themselves. That’s the thing that breaks down. So a robot that can manipulate people—
Stuart: Can lift.
Brooks: Can lift them. And now, a robot that can lift them has to be able to sense force just as my robots at Rethink Robotics did, for safety.
Stuart: I could see a reformed Baxter being able to do that.
Brooks: Yes, but more than that, you know, everyone will expect to be able to talk to these robots. And I think we’ve all probably had frustrating interactions with Alexa, the Echo, or with Google Home. But even more than that, as someone gets older, you can’t give them a frustrating experience as they’re saying, “Get me into the bed. No! It’s over there.” The speech understanding systems also will have to understand, “Oh, today this person is not making their sentences so well. Maybe there’s a problem here?” So it’s got to be a much closer interaction.
Stuart: So more sort of sentiment analysis and voice recognition, as well as some facial recognition.
Brooks: And changing over time the way they interact with a person as the person declines. Or just, in my case I hope, I’ll just get grumpier and not decline.
Stuart: Me too.
Brooks: I’ll never decline, but I certainly will get grumpier. So I think that’s a much more personal version of an AI with common sense, with an understanding of people. Most of the neural net-based systems we see today do not have any of that level of understanding. I loved it when Kai-Fu Lee, sitting here, pointed out that the last big breakthrough in AI was deep learning, nine years ago.
Stuart: Right. It’s kind of depressing.
Brooks: There’s room for a lot more breakthroughs, and the need for a lot more breakthroughs as we go into this much more elderly population than we’ve had.
Stuart: So how do you—in the last couple of minutes—how do you teach robots common sense? Like, where do you start with that kind of concept?
Brooks: Well, you know, John McCarthy, one of the founders of AI, published his first paper about that in 1958. We’ve been working on it for a little while. DARPA has just announced a $2 billion program to try to get to the level of common sense that an 18-month-old has. I think they’re being realistic about how much work there is to be done. And I think it’s something we really need. Amongst the researchers in AI who are getting pulled into these big companies with high salaries to do exploitation of existing techniques, I think it’s going to be important for the world for those researchers to go back to exploration of new techniques.
Stuart: Right. Redirect their energies toward saving the world.
Brooks: Yes. But there aren’t quite enough high salaries for that at the moment.
Stuart: Yes. Yeah.
Brooks: I hope that blip changes.
Stuart: But your point is that many populations in the world are aging rapidly, so it is a huge business opportunity.
Brooks: It is going to be—for people who have something that works there, it’s going to be a tremendous business opportunity.
Stuart: And so, just in the last minute: I read you online. You’ve been writing prodigiously. Are you doing a book? You’re developing chapters online? I read all your stuff on AI.
Brooks: Yeah. So I have a blog. And originally it was going to be a book, but then I talked to my agent, and he said, “Yeah. You’ll work on it for two years and then we’ll have a big”—and I thought, “No, I have to get these ideas out there quickly, because the world is changing very fast.” But I have recently figured out what book I am going to write, which is not quite so time sensitive. And it’s about the impact across technology of Moore’s Law, which people sort of understand at one level, but I think there are much deeper implications of Moore’s Law that have changed our attitude to technology, perhaps made us a little too overoptimistic, because Moore’s Law just kept delivering, all the time, for 50 years. Other technologies are not going to deliver on that timescale so regularly.
Stuart: Right.
Brooks: So I think there’s a lot to be explored about—
Stuart: So you’re going to bring a welcome note of pragmatism to the field?
Brooks: Yeah. Some people would call me a skeptic, but I—
Stuart: Well, at least you know what you’re talking about, which is always refreshing.
[LAUGHTER]
Brooks: Yeah. I’ve been involved for quite a while. Yeah.
Stuart: Well, I have to wrap up. This was such a pleasure. Thank you.
Brooks: Thank you.