NVIDIA decided that their autonomous car could skip the usual steps of rule-based instruction and just watch good drivers.
(NVIDIA's DAVE2 Autonomous Car Learns From Drivers)
A team of engineers from NVIDIA based in our New Jersey office — a former Bell Labs office that also happens to be the birthplace of the deep learning revolution currently sweeping the technology industry — decided that they would use deep learning to teach an autonomous car to drive. They used a convolutional neural network (CNN) to learn the entire processing pipeline needed to steer an automobile.
The project, called DAVE2, is part of an effort kicked off nine months ago at NVIDIA to build on the DARPA Autonomous Vehicle (DAVE) research and create a robust system for driving on public roads. We wanted to bypass the need to hardcode detection of specific features — such as lane markings, guardrails or other cars — and avoid creating a near-infinite number of "if, then, else" statements, an approach that is impractical when trying to account for the randomness that occurs on the road.
So how did our test car learn to drive?
Using the NVIDIA DevBox and Torch 7 (a machine learning framework) for training, and an NVIDIA DRIVE PX self-driving car computer to run the trained network in the vehicle, our team trained a CNN on time-stamped video from a front-facing camera, synced with the steering wheel angle applied by the human driver.
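The core idea is behavioral cloning: supervised regression from camera frames to the driver's recorded steering angle. As a minimal sketch (not NVIDIA's actual code — DAVE2 used a CNN in Torch 7, and all names and data here are invented), a linear model trained by gradient descent on synthetic "frames" shows the frame-to-angle learning loop:

```python
import numpy as np

# Illustrative only: a linear model stands in for DAVE2's CNN, and the
# dataset is synthetic. The structure — pair each camera frame with the
# human driver's steering angle, then minimize prediction error — is
# the essence of learning to steer "by watching good drivers".

rng = np.random.default_rng(0)

# Fake dataset: 200 "frames" of 64 pixels each; steering angles come
# from a hidden target mapping plus sensor noise.
true_w = rng.normal(size=64)
frames = rng.normal(size=(200, 64))
angles = frames @ true_w + 0.01 * rng.normal(size=200)

# Supervised regression: minimize mean squared error between the
# model's predicted angle and the angle the human actually applied.
w = np.zeros(64)
lr = 0.3
for _ in range(500):
    pred = frames @ w
    grad = frames.T @ (pred - angles) / len(angles)
    w -= lr * grad

mse = float(np.mean((frames @ w - angles) ** 2))
```

After training, `mse` drops to roughly the noise floor of the synthetic data; in the real system the same objective drives a deep convolutional network over raw video instead of a linear map over toy vectors.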
We collected the majority of the road data in New Jersey, including two-lane roads with and without lane markings, residential streets with parked cars, tunnels and even unpaved pathways. We also gathered data in clear, cloudy, foggy, snowy and rainy weather, both day and night.
This very process was foreseen in Anthony Boucher's 1943 short story Q.U.R. Can a robot bartender make a perfect Three Planets drink? Only by watching high-speed film of a master at work, down to the smallest detail:
Quinby said, "Three Planets," and he [the robot] went into action. He had tentacles, and the motions were exactly like Guzub's...
...I got one of those new electronic cameras - you know, one thousand exposures per second... So we took pictures of Guzub making a Three Planets, and I could construct this one to do it exactly right down to the thousandth of a second. The proper proportion of vuzd, in case you're interested, works out to three-point-six-five-four-seven-eight-two-three drops. It's done with a flip of the third joint of the tentacle on the down beat.
(Read more about Boucher's robot bartender)