Unfortunately, TensorFlow does not accept tensors for the strides argument of conv2d; the strides must be static Python integers. The approach I used is to run conv2d with stride 1 and then subsample the result at the required step with tf.strided_slice, which does accept tensors. It may not be the optimal approach, but it works. In your case it would look something like:
s = tf.placeholder(np.int32, [4])
image = tf.placeholder(np.float32, [None, 3, 32, 32])
convoluted = tf.layers.conv2d(image, filters=32, kernel_size=[3, 3],
                              strides=[1, 1], padding='same',
                              data_format='channels_first')
result = tf.strided_slice(convoluted, [0, 0, 0, 0], tf.shape(convoluted), s)
At run time you then feed four step sizes into s, one per dimension of the convolution output (here batch, channels, height, width, since data_format is channels_first).
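To see why the trick works, here is a minimal NumPy sketch (no TensorFlow needed): a naive 1-D cross-correlation with VALID padding, where subsampling the stride-1 output with step 2 reproduces the direct stride-2 result. The helper conv1d_valid is hypothetical, written just for this illustration. Note that with 'same' padding the two can differ slightly at the borders, because the padding amount depends on the stride.

```python
import numpy as np

def conv1d_valid(x, k, stride=1):
    # naive 1-D cross-correlation with VALID padding (illustration only)
    n = (len(x) - len(k)) // stride + 1
    return np.array([np.dot(x[i * stride:i * stride + len(k)], k)
                     for i in range(n)])

x = np.arange(10, dtype=float)
k = np.array([1.0, -1.0, 2.0])

dense = conv1d_valid(x, k, stride=1)    # full stride-1 output
strided = conv1d_valid(x, k, stride=2)  # direct stride-2 output

# taking every 2nd element of the dense output matches the strided conv
assert np.allclose(dense[::2], strided)
```

The same equivalence is what tf.strided_slice exploits above: compute once at stride 1, then slice out every s-th element along each axis.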