Researchers at Brown University have announced a robot that accepts gesture commands. The group demonstrated that infrared time-of-flight depth sensing, combined with standard perception algorithms, can enable mobile, peer-to-peer interaction between humans and robots. The gesture-command robot is built on iRobot's PackBot platform; iRobot also participated in funding the work, along with DARPA.
"We have created a novel system where the robot will follow you at a precise distance, where you don't need to wear special clothing, you don't need to be in a special environment, and you don't need to look backward to track it," said Chad Jenkins, assistant professor of computer science at Brown University and the team's leader.
Other contributors to the research include Matthew Loper, a Brown graduate student and lead author of the paper announcing the results; former Brown graduate student Nathan Koenig, now at the University of Southern California; former Brown graduate student Sonia Chernova; and Chris Jones, a researcher at the Massachusetts-based robotics maker iRobot Corp.
The researchers made two key advances with their robot. The first involved what scientists call visual recognition. Applied to robots, it means helping them to orient themselves with respect to the objects in a room. "Robots can see things," Jenkins explained, "but recognition remains a challenge."
The team overcame this obstacle by writing software that recognized a human by extracting a silhouette, as if the person were a virtual cutout. This allowed the robot to home in on the human and receive commands without being distracted by other objects in the space.
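The report doesn't spell out the segmentation step, but a minimal sketch of the idea, assuming a per-pixel depth image and nearest-blob extraction, might look like the following. The function name, thresholds, and SciPy-based labeling are illustrative assumptions, not the team's actual code:

```python
import numpy as np
from scipy import ndimage

def extract_silhouette(depth, max_range=5.0, band=0.6):
    """Cut out the nearest person-sized blob from a depth image.

    depth: 2-D array of per-pixel distances in meters (0 = no return).
    Returns a boolean mask (the 'virtual cutout'), or None if nothing is found.
    """
    valid = (depth > 0) & (depth < max_range)
    if not valid.any():
        return None
    nearest = depth[valid].min()
    # Keep pixels within a depth band around the nearest return, so the
    # person separates cleanly from the background.
    candidate = valid & (depth < nearest + band)
    labels, n = ndimage.label(candidate)
    if n == 0:
        return None
    # Take the largest connected component as the person's silhouette.
    sizes = ndimage.sum(candidate, labels, index=range(1, n + 1))
    return labels == (int(np.argmax(sizes)) + 1)
```

Working in depth rather than color is what lets this run without special clothing or a controlled environment: the silhouette falls out of the scene's geometry, not the person's appearance.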
"It's really being able to say, 'That's a person I'm looking at, I'm going to follow that person,'" Jenkins said.
The second advance involved the depth-imaging camera. The team used a CSEM SwissRanger, which uses infrared light to detect objects and to establish the distance between the camera and the target object and, just as important, between the camera and any other objects in the area. The distinction is key, Jenkins explained, because it enabled the Brown robot to stay locked on the human commander, essential for maintaining a set distance while following the person.
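The announcement doesn't give the control law, but the "follow at a precise distance" behavior can be sketched as a simple proportional controller on the measured range and bearing to the tracked silhouette. Everything below, the gains, target distance, and clamp limits, is an illustrative assumption rather than the Brown team's implementation:

```python
# Hypothetical constants; these would be tuned for the actual platform.
TARGET_DIST = 1.5   # desired following distance, meters
K_LIN = 0.8         # forward-speed gain, (m/s) per meter of range error
K_ANG = 1.2         # turn-rate gain, (rad/s) per radian of bearing error

def follow_step(range_m, bearing_rad):
    """One control tick: map (range, bearing) to (linear, angular) velocity."""
    linear = K_LIN * (range_m - TARGET_DIST)   # close or open the gap
    angular = K_ANG * bearing_rad              # keep the person centered
    # Clamp to safe limits before sending commands to the drive system.
    linear = max(-0.5, min(0.5, linear))
    angular = max(-1.0, min(1.0, angular))
    return linear, angular
```

Because the depth camera reports the person's range directly, the error term needs no stereo matching or apparent-size heuristics; the hard part is keeping the lock on the right target, not the distance-keeping itself.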
Douglas Adams wrote about the idea of a gesture-controlled system in his 1979 blockbuster The Hitchhiker's Guide to the Galaxy, and illustrated some of the potential problems with such a system:
The machine was rather difficult to operate. For years radios had been operated by means of pressing buttons and turning dials; then as the technology became more sophisticated the controls were made touch-sensitive--you merely had to brush the panels with your fingers; now all you had to do was wave your hand in the general direction of the components and hope. It saved a lot of muscular expenditure, of course, but meant that you had to sit infuriatingly still if you wanted to keep listening to the same program.