By Paul Miller December 11, 2014

AI for robots

Why software needs a body

I began this series with the non-committal headline: "Robots will / won't kill us all." But mostly I've been talking about general applications of AI and machine intelligence. The sad truth is that our best AI isn't in robots, and our best robots aren't very smart at all.

The most exciting advances in actual "robots" are largely advances in remote control. Drones and telepresence robots, two booming industries, are controlled directly through desktop and mobile apps. The emphasis is on wireless reliability, responsive controls, and high-quality video streaming. Meanwhile, most AI these days is about finding meaning in large data sets, which is often a slow, long-running process. Neural networks, machine learning, and pattern recognition have little place in the robots we interact with in the physical world.

But I actually think we're in a good place right now. Robotics has long suffered from a chicken-and-egg problem: robots have to be useful before we'll buy them, but they have to be smart before they'll be useful, and because we don't buy them there's no money to build the software that would make them smart. By putting "dumb" robotic hardware into the world, we're actually providing a canvas upon which we can finally build smart software.

For instance, an aerial drone is perfectly suited to remote control. Even Amazon, which would undoubtedly like to automate its upcoming drone fleet, is hiring pilots for now. But what happens when you lose your connection? Ideally, the drone will autonomously fly safely to the ground, or, even better, fly toward its last known signal. The same goes for telepresence robots. The next step after that is to tell a robot to go to a spot on the map, and then tune in for manual control once it arrives.
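Sketched out in code, that kind of connection-loss failsafe might look something like this. It's a minimal sketch: the drone object, its navigate_to() and land() methods, and the timeout value are all made up for illustration, not taken from any real drone SDK.

```python
import time

LINK_TIMEOUT_S = 2.0  # seconds without a control packet before we assume the link is lost


class Failsafe:
    """Hypothetical connection-loss failsafe for a remote-controlled drone."""

    def __init__(self, drone):
        self.drone = drone  # assumed to expose navigate_to() and land()
        self.last_packet_time = time.monotonic()
        self.last_good_position = None  # where the drone was when the link was last healthy

    def on_control_packet(self, position):
        # Called whenever a control packet arrives from the operator.
        self.last_packet_time = time.monotonic()
        self.last_good_position = position

    def tick(self):
        # Called periodically by the flight loop.
        if time.monotonic() - self.last_packet_time < LINK_TIMEOUT_S:
            return  # link is healthy, the operator stays in control
        if self.last_good_position is not None:
            # Better: head back toward the last spot where we had signal.
            self.drone.navigate_to(self.last_good_position)
        else:
            # Fallback: get to the ground safely where we are.
            self.drone.land()
```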

That's how iRobot's Ava telepresence robot works, for instance. The robot maps its environment in real time, which an installer matches up with a blueprint of the building, letting users just tap a place on the map they'd like Ava to go. Less smart telepresence robots are likely to get stranded and need human intervention, so this sort of intelligence is a real competitive advantage.
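Here's a rough sketch of that tap-to-go flow, assuming the installer's alignment boils down to a scale and an origin offset between blueprint pixels and map coordinates. The numbers and the send_goal() call are invented for illustration, not iRobot's actual API.

```python
def blueprint_to_map(px, py, scale=0.05, origin=(12.0, 4.5)):
    """Convert a tapped blueprint pixel into map coordinates, in meters.

    scale:  meters per blueprint pixel, set when the installer aligned the map.
    origin: map coordinates of the blueprint's top-left corner.
    """
    return origin[0] + px * scale, origin[1] + py * scale


def on_tap(robot, px, py):
    # The robot plans its own route; the user just tunes in once it arrives.
    x, y = blueprint_to_map(px, py)
    robot.send_goal(x, y)
```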

I think DARPA is showing a great understanding of the limitations and appropriateness of remote control in its ongoing Robotics Challenge. DARPA's goal is what they call "task-level autonomy." An operator tells the robot to open a door, and the robot figures out the actual mechanical specifics of completing that task. To incentivize autonomy in competition, DARPA degrades communications between the operator and the robot: latency, packet loss, random disconnects. You can still pass every step-by-step instruction to your robot; it's just going to be very slow and very error-prone. At the very least, the robot needs to be able to keep its balance and not freak out when the connection drops.
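In code, task-level autonomy looks less like a joystick stream and more like a single short command that the robot expands into motion primitives on its own. Here's a toy sketch of that idea; the task name, primitives, and robot methods are all invented for illustration.

```python
# The operator sends one short command ("open_door") across the flaky link;
# everything below it runs onboard, so lag and dropouts don't matter mid-task.
TASKS = {
    "open_door": [
        "locate_handle",    # perception runs on the robot, not over the link
        "reach_to_handle",
        "grasp",
        "turn_handle",
        "pull_door_open",
    ],
}


def execute_task(robot, task_name):
    for primitive in TASKS[task_name]:
        if not robot.run_primitive(primitive):
            robot.hold_balance()  # don't freak out: stop safely and report the failure
            return False
    return True
```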

DARPA's Robotics Challenge was inspired by the Fukushima nuclear disaster, and the problems robot operators faced in tele-operation. Radiation, specifically, is a whole bag of hurt for robots. Not only does it disrupt wireless communication, but it can actually interfere with modern computer processors.

Which brings us back to why mechanical engineering is hugely important for robots. No matter how smart you make the software, the hardware has to be reliable, efficient, and effective. This is why wheels are still the most popular form of robot locomotion: legs are cool, but walking is hard. If there's a mechanical solution to a difficult robotics problem, it's almost always preferable to a software solution. Just think about how much dexterity you lose when you're wearing mittens. Sure, you could probably still fold your laundry while wearing mittens, because you're very smart and resourceful, but taking off the mittens is always option #1.

Some theorists don't think we can make a truly intelligent system until we have a competent robot body to put it in. A lot is still unknown about how the human brain works, but it clearly wires (and rewires) itself based on the multitude of senses it's connected to.

My favorite theorist along these lines is Dr. Pete Markiewicz, who has been blogging for over a decade on this topic. He writes:

No animal – or plant (which do a surprising amount of computing) has ever evolved with a tiny number of sensors and a large brain. In contrast, the opposite is always true – animals with tiny brains always have comparably rich sensation.

Instead of taking the Ava approach, where the robot attempts to make a blueprint-style map of the world and navigate accordingly, robots should be designed to react more instinctively to situations.

Walking is another example: most robots are designed to stay in perfect balance at all times, while humans continually compensate for imbalance on the fly. The "Zero Moment Point" solution, as it's called, needs a powerful central processor to churn through all the math and position the entire robot accordingly. The lazy mammal way of standing and walking, by contrast, requires almost none of our conscious thought. Most of the work is done in short loops between our senses and our unconscious mind; even predictions like how high the next stair step will be are rarely questioned unless they fail.
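To give a sense of the math a ZMP controller has to keep re-running, here's the core calculation for the simplest possible model, a point mass on a massless leg (the "linear inverted pendulum"). Real controllers run this across every joint of the whole body; this sketch is just the one-dimensional toy version.

```python
G = 9.81  # gravity, m/s^2


def zmp(com_pos, com_acc, com_height):
    """Zero Moment Point along one axis for a linear inverted pendulum model.

    x_zmp = x_com - (z_com / g) * a_com

    com_pos:    horizontal position of the center of mass (m)
    com_acc:    horizontal acceleration of the center of mass (m/s^2)
    com_height: height of the center of mass above the ground (m)
    """
    return com_pos - (com_height / G) * com_acc


def balanced(zmp_x, foot_min, foot_max):
    # The robot keeps its footing as long as the ZMP stays inside the support
    # polygon, reduced here to a one-dimensional foot span for simplicity.
    return foot_min <= zmp_x <= foot_max
```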

Maybe that's why Google spent unspecified millions on Boston Dynamics, the company that built PETMAN, one of the first humanoid bipeds to ever walk dynamically. Google also snapped up Japanese robotics company SCHAFT after it outscored the competition by a wide margin in the 2013 DARPA Robotics Challenge trials — most of the competition was, by the way, using the ATLAS platform... which was also built by Boston Dynamics.

So I guess Google is into robots. And, most importantly, they're into every single aspect of robots. From software to pneumatics. Oh, and I haven't even talked about cars — one of the few places where large scale machine learning (resulting in a vast, detailed, live-updated map of a city) is combined with lightning-quick reactions in the physical world.

Maybe we'll have to wait a few years to see what Google wants to do with all this technology. Maybe they're melting all those PhDs down into web servers so we can view more Google AdWords. But either way, I think we're over the robotics hump. It's getting better, fast.

