Google Inceptionism: Generating Images by Class

In the famous Google Inceptionism article (http://googleresearch.blogspot.jp/2015/06/inceptionism-going-deeper-into-neural.html), they show images generated for each class, such as "banana" or "ant". I want to do the same for other datasets.

The article describes how these images were obtained, but I don't think the explanation is detailed enough.

The article links to this code: https://github.com/google/deepdream/blob/master/dream.ipynb

but that notebook starts from an existing image and turns it into a dream-like one; it does not let you specify a class and visualize what the network has learned for that class, as shown in the article above.

Can someone give a more specific overview, or code / a tutorial, on how to create images for a particular class? (preferably using Caffe)

1 answer

I think this code is a good starting point for reproducing the images the Google team posted. The procedure is fairly clear:

  • Start with a pure noise image and a target class (say, "cat")
  • Run a forward pass, then backpropagate the error against the imposed class label
  • Update the input image with the gradient computed at the data layer
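The loop above can be sketched framework-agnostically. Below is a minimal NumPy toy in which a single random linear layer stands in for the trained network; the toy model, learning rate, and iteration count are all illustrative assumptions. With a real Caffe model you would replace `forward`/`backward` with `net.forward()` and `net.backward()` and read the gradient from the data blob's `diff`.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained network: one linear layer mapping a
# flattened 8x8 "image" to 10 class scores. In practice this would be
# a real convnet, not a random matrix.
W = rng.standard_normal((10, 64))

def forward(x):
    return W @ x          # class scores

def backward(target):
    # Gradient of the target class score w.r.t. the input pixels.
    # For a linear layer this is just the corresponding weight row.
    return W[target]

target_class = 3
x = rng.standard_normal(64) * 0.01    # start from near-zero noise

lr = 0.5
for _ in range(100):
    grad = backward(target_class)     # d(score[target]) / d(input)
    x += lr * grad                    # gradient *ascent* on the class score

# The optimized "image" now maximizes the target class score.
print(int(np.argmax(forward(x))))
```

The key point is that the network weights stay frozen and only the input pixels are optimized, which is exactly the inversion of ordinary training.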

There are a few extra tricks, which can be found in the original paper.

The main difference seems to be that the Google team tried to obtain more "realistic"-looking images:

By itself, that doesn't work very well, but it does if we impose a prior constraint that the image should have similar statistics to natural images, such as neighboring pixels needing to be correlated.
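One common simplified way to impose such a prior is to regularize the image toward smoothness between gradient steps. This is a sketch under my own assumptions: the blog post only states the statistical constraint, and the actual DeepDream code uses jitter and multi-scale "octaves" rather than the naive neighbor-averaging used here.

```python
import numpy as np

def smooth(img, alpha=0.3):
    # Blend each pixel with the mean of its 4-neighborhood (wrapping at
    # the edges via np.roll). This pushes neighboring pixels toward being
    # correlated, a crude stand-in for the natural-image prior.
    neighbors = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
                 np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 4.0
    return (1 - alpha) * img + alpha * neighbors

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))

# Inside the optimization loop you would interleave this with the ascent:
#   img += lr * grad
#   img = smooth(img)
before = np.abs(np.diff(img, axis=1)).mean()   # roughness of raw noise
for _ in range(5):
    img = smooth(img)
after = np.abs(np.diff(img, axis=1)).mean()
print(after < before)   # smoothing increases neighbor correlation
```

Without some regularization of this kind, pure gradient ascent tends to produce high-frequency adversarial-looking noise rather than recognizable objects.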


Source: https://habr.com/ru/post/1232582/