How to properly implement dropout for convolutional layers in TensorFlow

According to the original dropout paper, this regularization method can be applied to convolutional layers, and often improves their performance. The TensorFlow function tf.nn.dropout supports this with a noise_shape parameter that lets the user choose which parts of the tensor are dropped out independently. However, neither the paper nor the documentation gives a clear explanation of which dimensions should be dropped independently, and TensorFlow's explanation of how noise_shape works is rather unclear:

only dimensions with noise_shape[i] == shape(x)[i] will make independent decisions.

I would assume that for a typical CNN layer output of shape [batch_size, height, width, channels] we do not want individual rows or columns to be dropped out on their own, but rather entire channels (which would be equivalent to a node in a fully connected NN), independently for each example (i.e. different channels can be dropped for different examples in the batch). Am I correct in this assumption?

If so, how would dropout with this behaviour be implemented using the noise_shape parameter? Would it be:

noise_shape=[batch_size, 1, 1, channels]

or

noise_shape=[1, height, width, 1]
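
In code, the two candidates would look roughly like this (a sketch with made-up shapes, assuming the TF 1.x API where tf.nn.dropout takes keep_prob):

import tensorflow as tf

# hypothetical conv-layer output: batch of 28x28 feature maps with 64 channels
x = tf.placeholder(tf.float32, [None, 28, 28, 64])
keep_prob = 0.7

# option 1: drop entire channels, independently for each example in the batch
drop_channels = tf.nn.dropout(
    x, keep_prob, noise_shape=[tf.shape(x)[0], 1, 1, 64])

# option 2: drop individual spatial positions, shared across batch and channels
drop_positions = tf.nn.dropout(
    x, keep_prob, noise_shape=[1, 28, 28, 1])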
1 answer

From here:

For example, if shape(x) = [k, l, m, n] and noise_shape = [k, 1, 1, n], each batch and channel component will be kept independently and each row and column will be kept or not kept together.

The implementation code below may help to understand this.

# use the full shape of x unless a noise_shape was given
noise_shape = noise_shape if noise_shape is not None else array_ops.shape(x)
# uniform [keep_prob, 1.0 + keep_prob)
random_tensor = keep_prob
random_tensor += random_ops.random_uniform(noise_shape,
                                           seed=seed,
                                           dtype=x.dtype)
# 0. if [keep_prob, 1.0) and 1. if [1.0, 1.0 + keep_prob)
binary_tensor = math_ops.floor(random_tensor)
# the multiplication broadcasts binary_tensor (shape = noise_shape) against x
ret = math_ops.div(x, keep_prob) * binary_tensor
ret.set_shape(x.get_shape())
return ret

The line random_tensor += random_ops.random_uniform(noise_shape, ...) relies on broadcasting. When noise_shape[i] is set to 1, every element along that dimension shares the same random value, and therefore ends up with the same 0 or 1 after the floor. So with noise_shape = [k, 1, 1, n], each row and column within a feature map is kept or dropped together, while each example in the batch and each channel gets its own random value and is dropped independently. In other words, your first option, noise_shape=[batch_size, 1, 1, channels], is the one that drops whole channels independently per example.
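
As a concrete illustration of that conclusion, here is a minimal sketch (assuming the TF 1.x API where tf.nn.dropout takes keep_prob; in TF 2.x the equivalent argument is rate = 1 - keep_prob). The noise shape is built from the dynamic shape of the tensor, so it also works when batch_size is unknown at graph construction time:

import tensorflow as tf

def channel_dropout(x, keep_prob):
    # x: conv output of shape [batch_size, height, width, channels].
    # noise_shape=[batch_size, 1, 1, channels] gives every spatial position of a
    # given (example, channel) pair the same keep/drop decision, while different
    # examples and different channels are dropped independently.
    dyn_shape = tf.shape(x)  # dynamic shape, works when batch_size is None
    noise_shape = [dyn_shape[0], 1, 1, dyn_shape[3]]
    return tf.nn.dropout(x, keep_prob, noise_shape=noise_shape)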


Source: https://habr.com/ru/post/1681780/

