I'm good at predicting what people will do. Or, at least I think I am; maybe it's confirmation bias. But I try anyway. I read their faces, I look at their past actions, I look at the situation and watch them take in facts, then I make a snap judgement about what will happen next. Humans actually do this all the time. We fill in the gaps with predictions. We can finish each other's sentences, or fill in the middle part we missed.

Also, as a human, I tend to act in ways that are predictable to other humans. It's a self-reinforcing cycle — prediction and predictability — that makes human society possible.

Which is what makes robots scary to people who don't build robots, and benign to those who do. It's what makes Siri intensely frustrating to me, as a user, because she never gives the answer I want, while also being a real achievement for her creators, because of how many answers she could give if humans like me didn't ask such stupid questions.

Recently, two diametrically opposed takes on the common "robots will kill us all" theme hit the internet, from two undeniable geniuses.

Baxter, hard at work

Rodney Brooks is a co-founder of Rethink Robotics, and a co-founder of iRobot before that. Rethink Robotics makes Baxter, a collaborative robot (an industrial robot designed to work safely near people, instead of in a cage) that's a true achievement in the practical applications of robotics and AI. He says people need to stop freaking out about the coming robopocalypse:

In order for there to be a successful volitional AI, especially one that could be successfully malevolent, it would need a direct understanding of the world, it would need to have the dexterous hands and/or other tools that could out manipulate people, and to have a deep understanding of humans in order to outwit them.

He doesn't think that these problems will be solved merely by the exponential growth of computer power, or plugging one smart thing into another:

It is going to take a lot of deep thought and hard work from thousands of scientists and engineers. And, most likely, centuries.

On the opposite side of the fence is Elon Musk, the PayPal / Tesla / SpaceX guy. In a comment he wrote on Edge.org, which was later deleted, he says we should be scared:

The pace of progress in artificial intelligence (I'm not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast: it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five year timeframe. 10 years at most. This is not a case of crying wolf about something I don't understand.

I am not alone in thinking we should be worried. The leading AI companies have taken great steps to ensure safety. They recognize the danger, but believe that they can shape and control the digital superintelligences and prevent bad ones from escaping into the Internet. That remains to be seen...

I don't want to pick on Musk, especially if he's officially retracting this statement, but it's a good reflection of what makes a smart person — who's not just thinking about the Terminator movies — scared of AI.

My own concern with the topic is that of a generally optimistic person who wants robots and AI to be a useful part of daily life. And what worries me is that if Rodney Brooks is right, we might be "centuries" away from cool robot pals.

Because it's exactly this "deep understanding of humans" that seems so lacking in our existing AI, and why the most ambitious projects seem to fail the most spectacularly.

As Brooks explains, Roomba "doesn’t know that houses exist." Roomba has no concept of the people who own it, or that they might be responsible for the dirt it's tasked with cleaning. It's just running an algorithm, matching input to output.
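For the curious, here's a minimal sketch of what "matching input to output" looks like in practice. It's not iRobot's actual code (the sensor names and commands are invented), but the shape is the point: readings in, motor commands out, and no concept of a house or an owner anywhere in the loop.

```python
# Hypothetical sketch of a purely reactive vacuum controller.
# Not iRobot's code; the sensors and commands are made up for illustration.

import random

def next_command(bumper_hit: bool, dirt_detected: bool, cliff_detected: bool) -> str:
    """Map the current sensor readings directly to a motor command."""
    if cliff_detected:
        return "back_up"                                     # don't fall down the stairs
    if bumper_hit:
        return "turn_" + random.choice(["left", "right"])    # bounce off obstacles
    if dirt_detected:
        return "spot_clean"                                  # linger where the dirt sensor fires
    return "drive_forward"                                   # default behavior

# Every cycle is the same: read sensors, emit a command. Nowhere in here is
# there a representation of "house," "owner," or why the dirt exists at all.
```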

Deepmind is chillingly good at Breakout

Deepmind, the machine learning project Musk refers to, which is "growing at a pace close to exponential," knows just as little as Roomba does about people and houses. Deepmind's biggest claim to fame is playing retro arcade games by only looking at the pixels on screen. Success is playing the game well, not understanding it. Matching an input to an output.
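To make that concrete, here's a heavily simplified, hypothetical sketch of a pixels-to-joystick policy, in the spirit of the deep reinforcement learning behind Deepmind's Atari results; the names and structure are mine, not theirs. A trained function scores every possible joystick action for the current screen, and the agent picks the highest-scoring one. "Understanding" the game never enters into it.

```python
# Heavily simplified sketch of a pixels-to-joystick policy, in the spirit of
# deep Q-learning. Names and structure are illustrative, not Deepmind's code.

from typing import Callable, List, Sequence

ACTIONS: List[str] = ["noop", "left", "right", "fire"]

def choose_action(screen_pixels: Sequence[float],
                  q_network: Callable[[Sequence[float]], Sequence[float]]) -> str:
    """Score every joystick action for this screen and take the best one.

    q_network stands in for a trained network that maps raw pixels to one
    estimated future score per action. Nothing in here represents "ball,"
    "paddle," or "opponent" -- only which output has the highest score.
    """
    scores = q_network(screen_pixels)
    best_index = max(range(len(ACTIONS)), key=lambda i: scores[i])
    return ACTIONS[best_index]

# Example with a stand-in "network" that just happens to like moving right:
if __name__ == "__main__":
    fake_q_network = lambda pixels: [0.1, 0.2, 0.9, 0.3]
    print(choose_action([0.0] * 84 * 84, fake_q_network))  # -> "right"
```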

But I have a hard time imagining a complex, broadly useful machine intelligence that could interact with people "successfully" with zero understanding of them. Humans are a lot more than a set of pixels on the screen. We can't even be fully described by a comprehensive list of our external actions and appearance.

I recently read an incredible book about the neocortex by Jeff Hawkins, the creator of the Palm Pilot. Called On Intelligence, the book puts forward a theory of a "memory-prediction framework" as the foundation of human-type intelligence. Hawkins even thinks this process can be converted into computer algorithms, and his company Numenta was founded around that purpose.

But Hawkins mirrors Brooks in his dismissal of the imminent dangers of AI. As he points out in his book, even a perfectly modeled human-size neocortex would lack all the other instinctual, mammalian parts of the brain that make us people. Without the instincts and motives that seem to stem from these parts of the brain, a machine could be "intelligent" in a useful sense, but so different from us that the idea of it being "volitional" and "malevolent" in the human sense seems absurd to Hawkins.

Machine intelligence can still be hugely important, of course. It already is. IBM's Watson is sort of a twist on Google, capable of answering Jeopardy questions and searching medical databases for a diagnosis. Google itself keeps getting better, in large part due to machine learning techniques. Numenta has a product which tracks and predicts anomalies in Amazon Web Services. And Brooks' Baxter can automate repetitive tasks with minimal "training" from a human — actually expressing its intelligence in the physical world.

It's not surprising that any sufficiently advanced software along these lines can seem scary to someone who can't see its inner workings, but the engineers who build it know it's all just math and silicon underneath. Put the same inputs in and you get the same outputs out, like with any machine. Watson gives you an answer for a question, Google gives you a list of results for a query, Numenta gives you predictions for a data set, Baxter gives you movement for an instruction, Deepmind gives you joystick moves for pixels. The algorithms in between are elegant and incredible, but not self-aware, and the output has a strictly defined impact on the outside world which the algorithm itself is incapable of expanding.
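One way to see that "strictly defined impact" is to notice that each of these systems exposes what amounts to a function signature: one fixed kind of input, one fixed kind of output. Here's a hypothetical sketch (my names and rules, not any real product's API) of a spam filter's entire interface to the world:

```python
# Hypothetical sketch of a spam filter's entire interface to the world.
# The names and rules are invented; the point is the fixed signature.

from enum import Enum

class Verdict(Enum):
    SPAM = "spam"
    NOT_SPAM = "not_spam"

def classify(message_text: str) -> Verdict:
    """Same input in, same output out, every time.

    Whatever math happens inside, the only thing this function can ever do
    is return one of two labels. It has no channel through which to do
    anything else, and no ability to widen its own output type.
    """
    spammy_words = {"viagra", "lottery", "prince"}
    words = set(message_text.lower().split())
    return Verdict.SPAM if words & spammy_words else Verdict.NOT_SPAM
```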

The state of the art in AI works by recognizing patterns through intensely applied CPU-driven math. To move beyond that, the software would need to somehow understand the motivations of the forces behind those patterns. That's an expansion of scope, not merely of efficacy.

In a recent talk, Musk mused about an AI that's built to remove spam, and determines the best way to get rid of spam is to get rid of humans.

But the long-running ingenuity of human attempts to dodge spam filters shows exactly where this analogy falls short. As Brooks makes clear, a malevolent AI would have to understand humans well enough to outsmart them. Then it would need enough agency to act on this new agenda. Simply deciding, as a binary decision, to delete humanity in order to get rid of spam is light-years away from actually devising and executing a method.

I can write killAll(humans) in a computer program, but that doesn't kill all humans. HAL, in 2001: A Space Odyssey, comes to the perfectly logical conclusion that the humans on board are a risk to the mission, which would be simple to represent in code as a logical construct. But the scope of intelligence required to monitor, judge, and kill the actual humans on board is still far-out science fiction.
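To belabor my own example, here's what killAll(humans) actually amounts to when you write it down (a toy sketch, obviously): the name means nothing by itself, and the function has no sensors, no actuators, and no way to grant itself any.

```python
# A function name is just a label; it confers no capability by itself.
# Toy illustration only.

class Human:
    pass

def killAll(targets: list) -> None:
    """Declaring intent in code does nothing on its own.

    The only effect this function can have is whatever its body is wired up
    to do, and this body touches nothing outside the program. It has no way
    to expand its own scope.
    """
    for _ in targets:
        pass  # no connection to the physical world exists here

humans = [Human(), Human()]
killAll(humans)  # runs to completion; everyone is fine
```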

We'll keep being disappointed — and, pleasantly, alive — if we expect robots to see us and treat us the way humans do. But when we can embrace them as merely tools — Google, not Johnny Five — I think we'll be continually surprised by how much they can help us.

In the coming weeks I'll examine the ways modern machine intelligence is a clear path for the improvement of wearables, smart homes, and, naturally, robots. All the delightful little ways software can make our lives better, while hardly ever murdering us all in our sleep.

