How to dynamically select convolution?

Using placeholders doesn't work:

    import numpy as np
    import tensorflow as tf

    s = tf.placeholder(np.int32)
    image = tf.placeholder(np.float32, [None, 3, 32, 32])
    tf.layers.conv2d(image, filters=32, kernel_size=[3, 3], strides=[s, s],
                     padding='same', data_format='channels_first')

This gives a TypeError.

Similar difficulties arise when the pool_size and strides of a pooling layer are passed as placeholders, as in the sketch below.
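For illustration, a minimal sketch of the analogous pooling case (my own reproduction, assuming the same image placeholder as above; the placeholder name p is hypothetical):

    p = tf.placeholder(np.int32)
    # Fails for the same reason: pool_size and strides must be plain Python ints
    tf.layers.max_pooling2d(image, pool_size=[p, p], strides=[p, p],
                            padding='same', data_format='channels_first')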

1 answer

Unfortunately, TensorFlow does not allow passing tensors as the strides argument of conv2d. The approach I have used is to run conv2d with a stride of 1 and then slice the result with the desired strides. It may not be the optimal approach, but it works, and tf.strided_slice does accept tensors. So in your case it would be something like:

    s = tf.placeholder(np.int32, [4])
    image = tf.placeholder(np.float32, [None, 3, 32, 32])
    # Convolve with stride 1, then subsample the result with the dynamic strides
    convoluted = tf.layers.conv2d(image, filters=32, kernel_size=[3, 3],
                                  strides=[1, 1], padding='same',
                                  data_format='channels_first')
    result = tf.strided_slice(convoluted, [0, 0, 0, 0], tf.shape(convoluted), s)

You can then feed 4 stride values to s at run time, where each entry corresponds to the stride along the matching dimension of the convolved input.
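For example, a minimal run-time sketch (the batch size, dummy input, and stride values here are illustrative assumptions, not part of the original answer):

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        out = sess.run(result, feed_dict={
            image: np.zeros((8, 3, 32, 32), dtype=np.float32),
            s: [1, 1, 2, 2],  # stride 1 over batch/channels, stride 2 over height/width
        })
        print(out.shape)  # (8, 32, 16, 16)

The trade-off is that the stride-1 convolution computes every output position and the slice then discards most of them, which is the extra cost the answer alludes to with "may not be the optimal approach".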


Source: https://habr.com/ru/post/1273068/

