Say you want to know what sort of image would result in “Banana.” Start with an image full of random noise, then gradually tweak the image towards what the neural net considers a banana (see related work in [1], [2], [3], [4]). By itself, that doesn’t work very well, but it does if we impose a prior constraint that the image should have similar statistics to natural images, such as neighboring pixels needing to be correlated.
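To make that concrete, here is a rough sketch of such an optimization loop written against Caffe's Python interface; the model files, the 'prob' output layer, the class index, and the Gaussian blur standing in for the natural-image prior are all illustrative assumptions, not the exact procedure from the papers cited above:

    import numpy as np
    import caffe
    from scipy.ndimage import gaussian_filter

    # Placeholder model files -- substitute a real deploy prototxt and caffemodel.
    net = caffe.Classifier('deploy.prototxt', 'model.caffemodel')

    img = np.random.normal(0.5, 0.1, (3, 224, 224)).astype(np.float32)  # start from noise
    banana = 954  # assumed ImageNet class index for "banana"

    for _ in range(200):
        net.blobs['data'].data[0] = img
        net.forward(end='prob')
        # Ask for the gradient of the "banana" score with respect to the image.
        net.blobs['prob'].diff[...] = 0
        net.blobs['prob'].diff[0, banana] = 1
        net.backward(start='prob')
        g = net.blobs['data'].diff[0]
        img += 1.5 / np.abs(g).mean() * g                 # normalized gradient-ascent step
        img = gaussian_filter(img, sigma=(0, 0.5, 0.5))   # crude prior: keep neighboring pixels correlated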
So here’s one surprise: neural networks that were trained to discriminate between different kinds of images have quite a bit of the information needed to generate images too.
Instead of exactly prescribing which feature we want the network to amplify, we can also let the network make that decision. In this case we simply feed the network an arbitrary image or photo and let the network analyze the picture. We then pick a layer and ask the network to enhance whatever it detected. Each layer of the network deals with features at a different level of abstraction, so the complexity of features we generate depends on which layer we choose to enhance. For example, lower layers tend to produce strokes or simple ornament-like patterns, because those layers are sensitive to basic features such as edges and their orientations...
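In code, a single "enhance whatever you detected" step can be sketched roughly as follows, modeled on Caffe's Python interface; the objective simply sets the chosen layer's gradient equal to its own activations, so backpropagation strengthens whatever the layer is already responding to (the layer name is just an example from GoogLeNet, and details differ from the released notebook):

    def amplify_step(net, end='inception_4c/output', step_size=1.5):
        # One gradient-ascent step on the input image that boosts the chosen layer.
        src, dst = net.blobs['data'], net.blobs[end]
        net.forward(end=end)
        dst.diff[:] = dst.data          # objective: maximize the L2 norm of the layer's activations
        net.backward(start=end)
        g = src.diff[0]
        src.data[0] += step_size / np.abs(g).mean() * g   # take a normalized step on the image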
We have seen a lot of interest and received some great questions, from programmers and artists alike, about the details of how these visualizations are made. We have decided to open source the code we used to generate these images in an IPython notebook, so now you can make neural network inspired images yourself!
The code is based on Caffe and other available open source packages, and is designed to have as few dependencies as possible. To get started, you will need the following (full details in the notebook):
NumPy, SciPy, PIL and IPython, or a scientific Python distribution such as Anaconda or Canopy.
Caffe deep learning framework (Installation instructions)
Once you’re set up, you can supply an image and choose which layers in the network to enhance, how many iterations to apply and how far to zoom in. Alternatively, different pre-trained networks can be plugged in.
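As a rough illustration of what those knobs look like in practice, a driver loop might resemble the sketch below; deepdream() stands in for the notebook's main helper, and the layer name, iteration counts, and 5% zoom factor are just example values:

    import numpy as np
    import PIL.Image
    import scipy.ndimage as nd

    img = np.float32(PIL.Image.open('sky.jpg'))   # any starting photo

    for frame in range(10):
        # Enhance a mid-level layer for a few iterations over several octaves (scales).
        img = deepdream(net, img, end='inception_4c/output', iter_n=10, octave_n=4)
        PIL.Image.fromarray(np.uint8(np.clip(img, 0, 255))).save('frame_%04d.jpg' % frame)
        # Zoom in by ~5% before dreaming again, so successive frames dive deeper into the image.
        h, w = img.shape[:2]
        s = 0.05
        img = nd.affine_transform(img, [1 - s, 1 - s, 1], [h * s / 2, w * s / 2, 0], order=1)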
It'll be interesting to see what imagery people are able to generate. If you post images to Google+, Facebook, or Twitter, be sure to tag them with #deepdream so other researchers can check them out too.