I was given a trained neural network in Torch and I need to rebuild it exactly in TensorFlow. I believe I have defined the network's architecture correctly in TensorFlow, but I am having trouble transferring the weight and bias tensors. Using a third-party package, I converted all the weight and bias tensors from the Torch network to numpy arrays and wrote them to disk. I can load them back into my Python program, but I cannot figure out how to assign them to the corresponding layers in my TensorFlow network.
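For reference, this is roughly how the arrays are written and read back; the file names are just placeholders, not the actual export script:

    import numpy as np

    # Exported from the Torch side (via the third-party converter) as plain
    # numpy arrays, one .npy file per tensor. File names are placeholders.
    np.save('conv1_weights.npy', conv1_weights)
    np.save('conv1_biases.npy', conv1_biases)

    # On the TensorFlow side, loading them back works fine:
    conv1_weights = np.load('conv1_weights.npy')
    conv1_biases = np.load('conv1_biases.npy')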
For example, I have a convolution layer defined in TensorFlow as
    kernel_1 = tf.Variable(tf.truncated_normal([11, 11, 3, 64], stddev=0.1))
    conv_kernel_1 = tf.nn.conv2d(input, kernel_1, [1, 4, 4, 1], padding='SAME')
    biases_1 = tf.Variable(tf.zeros([64]))
    bias_layer_1 = tf.nn.bias_add(conv_kernel_1, biases_1)
According to the TensorFlow documentation, the tf.nn.conv2d operation builds its filter tensor from the shape defined by the kernel_1 variable. However, I cannot figure out how to access this weight tensor so that I can set it to the weight array I loaded from file.
Is it possible to explicitly set the weight tensor? And if so, how?
(The same question applies to the bias tensor.)
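To make the question concrete, this is the kind of thing I am hoping is possible (a sketch of the intent only, assuming an assign-style API and a session named sess; I have not gotten this to work):

    import numpy as np
    import tensorflow as tf

    # Arrays exported from Torch (placeholder file names). Note: the Torch
    # weight layout may need a transpose to match TF's [H, W, in, out] filter shape.
    np_kernel = np.load('conv1_weights.npy')   # want shape (11, 11, 3, 64)
    np_biases = np.load('conv1_biases.npy')    # shape (64,)

    # What I would like: overwrite the variables' values with the loaded arrays.
    assign_kernel = kernel_1.assign(np_kernel)
    assign_biases = biases_1.assign(np_biases)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        sess.run([assign_kernel, assign_biases])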