How do we get / define filters in convolutional neural networks?

How can I learn filters for a convolutional neural network (CNN)? My idea is something like this: take input images (28x28) and extract random patches (8x8). Then use an autoencoder to learn the general features of the patches (features = hidden units, for example, about 100). Then apply those learned features as filters to the input images and convolve. Is this correct?

I got confused because the literature sometimes uses only, say, 8 filters, but in my case I have 100. Also, how can I implement deep autoencoding (e.g. 2 or 3 layers)? Any ideas or resources?

+5
1 answer

You can follow the tutorial: http://ufldl.stanford.edu/wiki/index.php/UFLDL_Tutorial

The tutorial includes lectures on both autoencoders and the basics of CNNs (convolution and pooling). When you finish it, you will have implemented both an autoencoder and a stacked autoencoder, which is the "deep autoencoding" you describe.
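As a rough illustration of what training an autoencoder on patches looks like, here is a minimal NumPy sketch. It uses tied weights, sigmoid activations, and plain gradient descent on squared reconstruction error; note the tutorial itself uses a *sparse* autoencoder trained with L-BFGS, so the hyperparameters and the training procedure here are simplifying assumptions, not the tutorial's method.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def reconstruct(X, W, b_enc, b_dec):
    """Encode then decode a batch of flattened patches."""
    H = sigmoid(X @ W + b_enc)
    return sigmoid(H @ W.T + b_dec)

def train_autoencoder(X, n_hidden=100, lr=0.5, epochs=200, seed=0):
    """X: (n_patches, n_visible) rows of flattened 8x8 patches in [0, 1].
    Returns (W, b_enc, b_dec); the columns of W are the learned filters."""
    rng = np.random.default_rng(seed)
    n = len(X)
    W = rng.normal(scale=0.1, size=(X.shape[1], n_hidden))
    b_enc = np.zeros(n_hidden)
    b_dec = np.zeros(X.shape[1])
    for _ in range(epochs):
        H = sigmoid(X @ W + b_enc)           # encoder activations
        R = sigmoid(H @ W.T + b_dec)         # reconstruction (tied weights)
        dR = (R - X) * R * (1 - R)           # grad at decoder pre-activation
        dH = (dR @ W) * H * (1 - H)          # grad at encoder pre-activation
        W -= lr * (X.T @ dH + dR.T @ H) / n  # tied-weight gradient (both paths)
        b_enc -= lr * dH.sum(axis=0) / n
        b_dec -= lr * dR.sum(axis=0) / n
    return W, b_enc, b_dec
```

With 100 hidden units on 8x8 patches, each column of `W` is a 64-dimensional vector you can reshape back into an 8x8 filter.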

The tutorial covers exactly what you are asking about:

  • 28x28 MNIST Images

  • extracting 8x8 patches and training filters on them with an autoencoder

  • convolving the images with these 8x8 filters

  • pooling the resulting feature maps

  • feeding the pooled vectors/features into a softmax classifier to learn the 10 classes of the MNIST dataset.
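The convolution and pooling steps above can be sketched as follows. The random filters here stand in for autoencoder-learned ones, and the 3x3 non-overlapping mean-pooling size is an arbitrary choice for illustration:

```python
import numpy as np

def convolve_valid(image, filt):
    """'Valid' 2-D filtering: a 28x28 image with an 8x8 filter gives a
    21x21 map. (Technically cross-correlation; flip the filter for true
    convolution, which makes no difference for learned filters.)"""
    fh, fw = filt.shape
    ih, iw = image.shape
    out = np.empty((ih - fh + 1, iw - fw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + fh, j:j + fw] * filt)
    return out

def mean_pool(fmap, size=3):
    """Non-overlapping mean pooling: a 21x21 map with size=3 gives 7x7."""
    h, w = fmap.shape
    return fmap[:h - h % size, :w - w % size] \
        .reshape(h // size, size, w // size, size).mean(axis=(1, 3))

rng = np.random.default_rng(0)
image = rng.random((28, 28))         # stand-in for one MNIST image
filters = rng.random((100, 8, 8))    # stand-ins for 100 learned filters
features = np.stack([mean_pool(convolve_valid(image, f)) for f in filters])
# features.shape == (100, 7, 7); flatten to one vector per image for softmax
```

Having 100 filters instead of 8 only changes the length of that flattened vector (here 100 x 7 x 7 = 4900 values per image), which the softmax classifier handles the same way.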

+4

Source: https://habr.com/ru/post/1205298/
