To be fair, PR2 and HERB are research robots, and I wouldn’t trust any research that didn’t cost hundreds of thousands of dollars and fail to be useful in the real world. In their further defense, the seemingly simple things we’d like robots to do around the house—like towel folding—turn out to be surprisingly complicated. Because towels are floppy, it’s difficult to program a robot to recognize their shape or to know where the corners are so it can spread them out flat. When you pick up a mug, you rely on the ability to recognize exactly where the handle is and how it’s oriented, in spite of the fact that mugs come in all sorts of shapes and sizes and can be presented to you at dozens of different angles.
Then the mug problem gets even harder, notes Sven Wachsmuth, a computer scientist at the University of Bielefeld in Germany who helps run an annual worldwide competition in feats of home robotics. “We humans rely on tactile feedback from a five-fingered hand to make sure we have a good grip and to adjust our fingers in complex ways if we sense any slipping or imbalance,” he says. “Those are critical capabilities for doing any sophisticated manipulation of objects in the home, like opening a jar, and right now it’s difficult and expensive to have even crude versions of them in robots.”
And climbing stairs? Forget it. “You don’t see many home robots with legs,” notes Wachsmuth. (The good news: It’s only near the entrance to the basement stairs that you have to watch your back when your robot is around.)
Roboticists are working on all this. “One of the final frontiers of robotics is getting robots to understand the complex structure of the home environment so they can perform useful tasks,” says robotics researcher Siddhartha Srinivasa, who founded and runs the Carnegie Mellon lab that birthed HERB. The perceptual and physical acts we humans mindlessly pull off every minute of the day turn out to be the real challenges of artificial intelligence, Srinivasa notes. Chess, solving equations, and other acts of supposed high-level intelligence are relative pieces of cake for machines. Dusting and coffee-mug cleaning—now that takes brains.
The key strategy at Carnegie Mellon now is to try to get HERB to figure out how to use its arms and grippers deftly by having it tackle a task over and over again to learn from its mistakes, rather than having a programmer spell out all the details. That way the robot picks up strategies that can be applied to similar challenges in other conditions. “If a robot learns how to slide objects in the refrigerator aside so it can get at something at the back of the shelf, then it should be able to do it not just in our refrigerator but in your refrigerator, too,” says Srinivasa.
To save time and pickle jars, HERB doesn’t have to actually try out every dumb idea that pops into its CPU; it can quickly run through computer simulations to eliminate most of the worst strategies before it lifts a pincer.
By the time affordable, fully capable versions of HERB- and PR2-like home robots eventually become available—in 2032, by my calculations—I’ll already be looking forward to the robot class of 2052. That’s when robots won’t merely be useful around the home, they’ll be socially competent. I’m not talking about the many research efforts to add facial expressions to robot heads, which to me are a step in the wrong direction; I’m already surrounded by humans who are all too happy to make annoyed faces at me. I’m much more excited by current efforts to enable robots to read my facial expressions, body language, gestures, and tone of voice, so that my robot will understand my needs without my spelling them out.
“If something bad is happening, you’re much more likely to gesture than to issue a detailed verbal command to a robot,” notes Alan Winfield, a roboticist at the University of the West of England, Bristol. Even better, my robot will be able to improve in that regard, as in everything it does, by tapping into a networked database of the experience of other robots—a database that’s already under construction as part of the RoboEarth collaboration of robotics researchers around the world.
Too much help could have its downsides. Americans are already getting fat from not doing enough physical tasks, and having robots take over the few we still do could leave us all morbidly obese. But I predict that in the end, all robots will become highly fitness-promoting. Eventually one of them will figure out it would be better off without its obnoxiously gesturing, beer-guzzling owner and will share its homicidal schemes via the robot grapevine.
That ought to get us running. Even more invigorating, it will get us climbing stairs, the one place robots can’t follow. Unless, that is, you’re dumb enough to buy a robot with legs.
David H. Freedman is a freelance journalist, author, and longtime contributor to DISCOVER. You can follow him on Twitter: @dhfreedman