It is not that difficult. If you use the same LeNetConvPoolLayer class definition from the Theano deep learning tutorial, you just need to compile a function with x as the input and [LayerObject].output as the output (where LayerObject is any layer object, such as layer0, layer1, etc., whichever layer you want to visualize):
vis_layer1 = function([x], [layer1.output])
Pass it a sample (shaped just like the input tensor you supplied during training), and you will get the output of the layer for which the function was compiled.
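For example, assuming x and layer1 already exist from the tutorial code, calling the compiled function on a numpy batch might look like this (some_images is a placeholder for your own data, shaped like the training input):

import numpy
import theano

# convert your data to the dtype Theano expects
batch = numpy.asarray(some_images, dtype=theano.config.floatX)

# the compiled function returns a list with one array: layer1's activations
layer1_out, = vis_layer1(batch)
print(layer1_out.shape)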
Note: this gives you the output in the same shape the model uses internally during computation. You can reshape it however you want by changing the output variable, e.g. to layer1.output.flatten(n).
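For instance, assuming the same x and layer1, compiling against the flattened output gives one row of feature-map values per example instead of a 4-D array:

vis_layer1_flat = function([x], [layer1.output.flatten(2)])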