Reuse Layer Weights in TensorFlow

I am using tf.slim to implement an autoencoder. It is fully convolutional, with the following architecture:

[conv, outputs = 1] => [conv, outputs = 15] => [conv, outputs = 25] =>
[conv_transpose, outputs = 25] => [conv_transpose, outputs = 15] =>
[conv_transpose, outputs = 1]

It has to be fully convolutional, and I cannot use pooling (a limitation of the bigger problem). I want to use tied weights, so

encoder_W_3 = decoder_W_1_Transposed 

(that is, the weights of the first decoder layer are the weights of the last encoder layer, transposed).
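
For concreteness, the untied version of that stack might look roughly like this in tf.slim (the scope names and input are my own illustration; only the cnn_block_3 name and the 21x11 kernel size echo the error message below):

import tensorflow as tf
slim = tf.contrib.slim

def autoencoder(x):
    # Encoder: fully convolutional, stride 1, no pooling.
    net = slim.conv2d(x, 1, [21, 11], scope='cnn_block_1')
    net = slim.conv2d(net, 15, [21, 11], scope='cnn_block_2')
    net = slim.conv2d(net, 25, [21, 11], scope='cnn_block_3')
    # Decoder: a mirrored stack of transposed convolutions.
    net = slim.conv2d_transpose(net, 25, [21, 11], scope='cnn_block_4')
    net = slim.conv2d_transpose(net, 15, [21, 11], scope='cnn_block_5')
    net = slim.conv2d_transpose(net, 1, [21, 11], scope='cnn_block_6')
    return net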

If I reuse the weights the regular way tf.slim allows, i.e. passing reuse=True and the scope name of the layer I want to reuse, I get a shape problem:

ValueError: Trying to share variable cnn_block_3/weights, but specified shape (21, 11, 25, 25) and found shape (21, 11, 15, 25).
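
For reference, the mismatch can be reproduced in a few lines (a minimal sketch; the placeholder shape is my own assumption, while the kernel size and channel counts come from the error message):

import tensorflow as tf
slim = tf.contrib.slim

# Hypothetical 15-channel input.
x = tf.placeholder(tf.float32, [None, 64, 64, 15])

# Encoder layer: creates 'cnn_block_3/weights' with shape (21, 11, 15, 25),
# laid out as [kernel_h, kernel_w, in_channels, out_channels].
encoded = slim.conv2d(x, 25, [21, 11], scope='cnn_block_3')

# Reusing the same scope from a transposed convolution asks for a kernel
# laid out as [kernel_h, kernel_w, out_channels, in_channels], here
# (21, 11, 25, 25), so the variable shape check fails with the ValueError.
decoded = slim.conv2d_transpose(encoded, 25, [21, 11],
                                scope='cnn_block_3', reuse=True)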

This makes sense if you don't transpose the weights of the previous layer. Does anyone have an idea how I can transpose these weights?

Tags: api, tfslim



You can transpose the weights like this:

new_weights = tf.transpose(weights, perm=[0, 1, 3, 2])

This swaps the input and output channel dimensions of the kernel.
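
To actually get hold of those weights and use the transposed tensor, one option (my own sketch, not part of the answer) is to re-open the encoder's variable scope and feed the result to the low-level tf.nn.conv2d_transpose, since slim layers always create their own kernel variables:

import tensorflow as tf
slim = tf.contrib.slim

# Encoder layer with the shapes from the error message; the input
# placeholder and all sizes besides the kernel are illustrative.
inputs = tf.placeholder(tf.float32, [1, 64, 64, 15])
encoded = slim.conv2d(inputs, 25, [21, 11], scope='cnn_block_3')

# Re-open the layer's scope and fetch the kernel slim created for it.
with tf.variable_scope('cnn_block_3', reuse=True):
    weights = tf.get_variable('weights')              # (21, 11, 15, 25)

# The swap suggested above: [h, w, in, out] -> [h, w, out, in].
new_weights = tf.transpose(weights, perm=[0, 1, 3, 2])  # (21, 11, 25, 15)

# tf.nn.conv2d_transpose reads its filter as [h, w, output_channels,
# in_channels], so this kernel maps a 15-channel input to 25 channels;
# choose the permutation to match the channel counts of the layers you tie.
tied_input = tf.placeholder(tf.float32, [1, 64, 64, 15])  # hypothetical
decoded = tf.nn.conv2d_transpose(tied_input, new_weights,
                                 output_shape=[1, 64, 64, 25],
                                 strides=[1, 1, 1, 1], padding='SAME')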

Thanks @Seven, that worked.


Source: https://habr.com/ru/post/1015628/

