Visualize the output of each layer in a Theano CNN

I am reading a tutorial on convolutional neural networks. I want to visualize the output of each layer after training the model. For example, in the function evaluate_lenet5 I want to pass an instance (an image) to the network and see the output of each layer, as well as the class the trained network predicts for that input. I thought it would be easy, like taking the dot product of the image and the weight vector of each layer, but that doesn't work at all.

I have the layer objects defined as follows:

# Reshape matrix of rasterized images of shape (batch_size, 28 * 28)
# to a 4D tensor, compatible with our LeNetConvPoolLayer
# (28, 28) is the size of MNIST images.
layer0_input = x.reshape((batch_size, 1, 28, 28))

# Construct the first convolutional pooling layer:
# filtering reduces the image size to (28-5+1, 28-5+1) = (24, 24)
# maxpooling reduces this further to (24/2, 24/2) = (12, 12)
# 4D output tensor is thus of shape (batch_size, nkerns[0], 12, 12)
layer0 = LeNetConvPoolLayer(
    rng,
    input=layer0_input,
    image_shape=(batch_size, 1, 28, 28),
    filter_shape=(nkerns[0], 1, 5, 5),
    poolsize=(2, 2)
)

# Construct the second convolutional pooling layer
# filtering reduces the image size to (12-5+1, 12-5+1) = (8, 8)
# maxpooling reduces this further to (8/2, 8/2) = (4, 4)
# 4D output tensor is thus of shape (batch_size, nkerns[1], 4, 4)
layer1 = LeNetConvPoolLayer(
    rng,
    input=layer0.output,
    image_shape=(batch_size, nkerns[0], 12, 12),
    filter_shape=(nkerns[1], nkerns[0], 5, 5),
    poolsize=(2, 2)
)

# the HiddenLayer being fully-connected, it operates on 2D matrices of
# shape (batch_size, num_pixels) (ie matrix of rasterized images).
# This will generate a matrix of shape (batch_size, nkerns[1] * 4 * 4),
# or (500, 50 * 4 * 4) = (500, 800) with the default values.
layer2_input = layer1.output.flatten(2)

# construct a fully-connected sigmoidal layer
layer2 = HiddenLayer(
    rng,
    input=layer2_input,
    n_in=nkerns[1] * 4 * 4,
    n_out=500,
    activation=T.tanh
)

# classify the values of the fully-connected sigmoidal layer
layer3 = LogisticRegression(input=layer2.output, n_in=500, n_out=10)

So, can you suggest a way to visualize how a sample image is processed, step by step, after the network is trained?

1 answer

It is not that difficult. If you use the same LeNetConvPoolLayer class definition from the Theano deep learning tutorial, you just need to compile a function with x as the input and [LayerObject].output as the output, where [LayerObject] can be any layer object (layer0, layer1, etc.) whose output you want to visualize:

vis_layer1 = theano.function([x], [layer1.output])

Pass one sample (or many, in the same form as you fed the input tensor during training), and you will get the output of the layer for which the function was compiled.

Note: this way you get the outputs in the same shape the model used during computation. You can reshape them however you like by changing the output variable, e.g. to layer1.output.flatten(n).
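Once the compiled function returns the 4D activations, you still need to lay the feature maps out for inspection. Below is a minimal, framework-independent sketch (plain NumPy; the random array stands in for what a compiled function like vis_layer1 would return for one sample) that tiles a (n_kernels, h, w) activation block into a single 2D grid you can hand to matplotlib's imshow. The helper name tile_feature_maps is my own, not from the tutorial.

```python
import numpy as np

def tile_feature_maps(acts, pad=1):
    """Tile the feature maps of one sample into a single 2D grid.

    acts: array of shape (n_kernels, h, w), e.g. one sample taken from
    the (batch_size, n_kernels, h, w) output of a compiled layer function.
    Returns a 2D array suitable for plt.imshow(..., cmap='gray').
    """
    n, h, w = acts.shape
    cols = int(np.ceil(np.sqrt(n)))          # near-square layout
    rows = int(np.ceil(n / cols))
    grid = np.zeros((rows * (h + pad) - pad, cols * (w + pad) - pad))
    for k in range(n):
        r, c = divmod(k, cols)
        grid[r * (h + pad):r * (h + pad) + h,
             c * (w + pad):c * (w + pad) + w] = acts[k]
    return grid

# e.g. layer1's 50 feature maps of size 4x4 for one MNIST digit
maps = np.random.rand(50, 4, 4)
grid = tile_feature_maps(maps)
print(grid.shape)  # (34, 39): 7 rows x 8 cols of 4x4 maps, 1px padding
```

In the real pipeline you would replace the random array with something like vis_layer1(sample_batch)[0][i] and then call plt.imshow(grid) per layer to watch the image transform stage by stage.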


Source: https://habr.com/ru/post/1241851/

