AI: Rumors of my death have been greatly exaggerated
Today the New York Times reports on more than 58,000 people from all over the world signing up for Peter Norvig and Sebastian Thrun's online course in AI. That's artificial intelligence, by the way. And the count is over 72,000 already.
That number is in great contrast to the number of philosophers who like to point out that computers are useless because they lack that certain je ne sais quoi that carbon-based life forms effortlessly possess. See Searle, Dreyfus, and others.
Of course the numbers themselves are not a good argument for AI against detractors. After all, millions of people worldwide believe that their religion is the only true one and everybody else is mistaken and will burn for all eternity after they die. Doesn’t make any of it true.
But it warms my robotic heart to see these numbers because that means that people are still interested in making computers smarter, and in figuring out how intelligence works.
Despite (or because of) the fact that it turned out to be an incredibly difficult problem. Forty-five years ago Seymour Papert thought his group of students could solve the problem of visual scene understanding in a summer project. Only four years ago autonomous cars driving 55 miles in the Urban Challenge barely used cameras for perception, relying instead on 3D information gathered from much more precise and efficient LIDARs. We have chipped and chipped away at the problem of visual scene understanding, but we haven't yet solved it.
This chipping away produced some pretty impressive machines and software. It also produced something that should be of interest to philosophers: machines that don’t just “follow instructions”. One of the biggest arguments against the claim that machines can be intelligent is that they will always just follow the programmer’s instructions and therefore that they lack meaning. No semantics, all syntax. No understanding, just pushing symbols around.
But there are no specific instructions — beyond rules of the road such as stopping at a stop sign — that an autonomous car can follow and still “survive” in a fairly unpredictable urban driving environment.
What the car’s program does is perceive the world (using a whole slew of sensors — radars, LIDARs, cameras, gyroscopes, etc.), build a model of what is happening based on that perception, its history, and the rules of the road, and make very fast decisions about how to act (stop, turn, speed up) based on that model and its goal of completing the driving course as fast and safely as possible. It isn’t a giant table of possibilities with a code for what to do next in each one. There could never be such a table for the many possible things that can go slightly wrong, or even right, in the dynamic world of driving at 30mph (maximum speed in the Urban Challenge).
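That perceive–model–decide loop can be sketched in a few lines of code. This is a deliberately toy illustration, not Boss's actual software: the sensor fields, thresholds, and action names are all hypothetical, and a real system fuses many sensor tracks over time rather than reading one dictionary. The point it demonstrates is the structure: decisions come from a model of the world, not from a giant table indexed by every possible situation.

```python
from dataclasses import dataclass

@dataclass
class WorldModel:
    """A (hypothetical) minimal model of the driving scene."""
    obstacle_ahead: bool
    distance_m: float
    at_stop_sign: bool

def perceive(sensor_readings: dict) -> WorldModel:
    # Fuse raw readings into a world model. Real systems combine
    # radar, LIDAR, camera, and gyroscope data over many time steps.
    return WorldModel(
        obstacle_ahead=sensor_readings["lidar_min_range_m"] < 20.0,
        distance_m=sensor_readings["lidar_min_range_m"],
        at_stop_sign=sensor_readings["stop_sign_detected"],
    )

def decide(model: WorldModel) -> str:
    # Choose an action from the current model rather than looking up
    # a pre-stored answer for this exact situation.
    if model.at_stop_sign or (model.obstacle_ahead and model.distance_m < 5.0):
        return "stop"
    if model.obstacle_ahead:
        return "slow"
    return "speed_up"

# One tick of the loop: sense -> model -> act.
readings = {"lidar_min_range_m": 12.0, "stop_sign_detected": False}
print(decide(perceive(readings)))  # prints "slow"
```

In a real car this loop runs many times per second, and the model also carries history and the rules of the road; but even the toy version shows why "following instructions" is a misleading description: the instructions specify how to build and use a model, not what to do in each of the countless situations the world can present.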
And guess what — human drivers do the same thing! Using different sensors (eyes, ears, proprioception), and probably significantly different ways of perceiving and modeling the world, as well as decision-making, we still act to the best of our understanding and ability to further our goals. So if a system like Boss, CMU's winning car in the 2007 Urban Challenge, is just following the programmer's instructions, then why isn't a human driver just following instructions (maybe from whoever taught him to drive) too? If there's intelligence in driving a car in the city, then Boss has that intelligence.
And so little by little… 72,000 more people will learn how to make machines more intelligent this fall. It will take them more than one course to build their own autonomous car, but that's just cool.