I have a neural network currently implemented in TensorFlow, and I am running into a problem when making predictions after training: the network contains conv2d_transpose operations, and the output shapes of these ops depend on the batch size. I have a layer that requires output_shape as an argument:
def deconvLayer(input, filter_shape, output_shape, strides):
    W1_1 = weight_variable(filter_shape)  # helper that creates the filter variable
    output = tf.nn.conv2d_transpose(input, W1_1, output_shape, strides, padding="SAME")
    return output
This is actually used in a larger model that I built as follows:
conv3 = layers.convLayer(conv2['layer_output'], [3, 3, 64, 128], use_pool=False)
conv4 = layers.deconvLayer(conv3['layer_output'],
                           filter_shape=[2, 2, 64, 128],
                           output_shape=[batch_size, 32, 40, 64],
                           strides=[1, 2, 2, 1])
The problem is that when I make predictions with the trained model, my test data must have the same batch size as the training data, otherwise I get the following error:
tensorflow.python.framework.errors.InvalidArgumentError: Conv2DBackpropInput: input and out_backprop must have the same batch size
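To make the failure easier to reproduce, here is a minimal sketch of the same situation as I understand it (assuming the TensorFlow 1.x graph API; the placeholder shape, the value of train_batch_size, and the variable names are illustrative, not taken from my actual model). The batch dimension baked into output_shape no longer matches the batch dimension of the input fed at prediction time:

import numpy as np
import tensorflow as tf

train_batch_size = 10  # batch size the graph was built with (illustrative value)

# The placeholder's batch dimension is left dynamic ...
x = tf.placeholder(tf.float32, [None, 16, 20, 128])
W = tf.Variable(tf.truncated_normal([2, 2, 64, 128], stddev=0.1))
# ... but output_shape hard-codes the training batch size, so the op
# only accepts inputs with exactly that many examples.
y = tf.nn.conv2d_transpose(x, W, output_shape=[train_batch_size, 32, 40, 64],
                           strides=[1, 2, 2, 1], padding="SAME")

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(y, {x: np.zeros([10, 16, 20, 128], np.float32)})  # batch of 10: works
    sess.run(y, {x: np.zeros([1, 16, 20, 128], np.float32)})   # batch of 1: raises
    # InvalidArgumentError: Conv2DBackpropInput: input and out_backprop
    # must have the same batch size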
Is there a way around this, so that the trained model can make predictions for inputs with an arbitrary batch size?