So Patrick Buehler and Andrew Zisserman at the University of Oxford, along with Mark Everingham at the University of Leeds, started by designing an algorithm that could let an artificially intelligent computer system identify individual signs.
Then they let the system watch TV shows broadcast with both text subtitles and British Sign Language. After about ten hours of viewing (well, watch the video and see for yourself), the software correctly learned about 65% of the signs it was exposed to.
Would this have been enough to betray Bowman and Poole in the famous HAL 9000 lip-reading incident in 2001: A Space Odyssey? Hopefully, we'll never know.
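The clever part is that the subtitles act only as weak supervision: a subtitle tells the system that the sign for a word probably appears somewhere in that stretch of video, but not exactly when. Below is a minimal, purely illustrative sketch of that weak-supervision idea in Python, using fabricated symbolic "sign tokens" in place of real video features; the scoring rule, names, and data are assumptions for the example, not the researchers' actual algorithm.

```python
from collections import Counter

# Toy weakly-supervised association, in the spirit of learning signs from
# subtitled TV: a subtitle only says "the sign for this word appears
# somewhere in this clip", not where.  All data here is fabricated for
# illustration; the real system works on video features, not symbolic tokens.

def score_candidates(clips, target_word):
    """Rank candidate sign tokens for `target_word` by how much more often
    they occur in clips whose subtitle mentions the word ("positive bags")
    than in clips that do not ("negative bags")."""
    pos, neg = Counter(), Counter()
    n_pos = n_neg = 0
    for subtitle, signs in clips:
        if target_word in subtitle.lower().split():
            pos.update(set(signs))
            n_pos += 1
        else:
            neg.update(set(signs))
            n_neg += 1
    scores = {}
    for sign in set(pos) | set(neg):
        # Frequency in positive bags minus frequency in negative bags.
        scores[sign] = pos[sign] / max(n_pos, 1) - neg[sign] / max(n_neg, 1)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Fabricated "clips": (subtitle text, sign tokens seen in that clip).
clips = [
    ("the cat sat on the mat", ["SIGN_CAT", "SIGN_SIT", "SIGN_MAT"]),
    ("a cat and a dog",        ["SIGN_CAT", "SIGN_DOG"]),
    ("the dog ran home",       ["SIGN_DOG", "SIGN_RUN", "SIGN_HOME"]),
    ("rain is expected today", ["SIGN_RAIN", "SIGN_TODAY"]),
]

print(score_candidates(clips, "cat"))  # SIGN_CAT should come out on top
```

Running it ranks SIGN_CAT highest for the word "cat", which is the kind of association the real system has to tease out of hours of noisy, loosely aligned footage.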