What is the correct batch normalization function in TensorFlow?

In TensorFlow 1.4, I found two functions that perform standard batch normalization, and they look the same:

  • tf.layers.batch_normalization
  • tf.contrib.layers.batch_norm

What function should I use? Which one is more stable?


To add to the list, there are several ways to do batch norm in TensorFlow:

  • tf.nn.batch_normalization is a low-level op. The caller is responsible for handling the mean and variance tensors themselves.
  • tf.nn.fused_batch_norm is another low-level op, similar to the previous one. The difference is that it is optimized for 4D input tensors, which is the common case in convolutional neural networks. tf.nn.batch_normalization accepts tensors of any rank greater than 1.
  • tf.layers.batch_normalization is a high-level wrapper over the previous ops. The biggest difference is that it takes care of creating and managing the running mean and variance tensors, and calls the fast fused op when possible. Usually this should be the default choice for you (see the sketch after this list).
  • tf.contrib.layers.batch_norm is an early implementation of batch norm, from before it graduated to the core API (i.e. tf.layers). Its use is not recommended because it may be dropped in future releases.
  • tf.nn.batch_norm_with_global_normalization is another deprecated op. It currently delegates to tf.nn.batch_normalization, but is likely to be removed in the future.
  • Finally, there is also the Keras layer keras.layers.BatchNormalization, which, in the case of the TensorFlow backend, calls tf.nn.batch_normalization.
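
As a rough sketch of the difference between the low-level and high-level APIs (the tensor shapes and epsilon value below are illustrative assumptions, not from the question):

    import tensorflow as tf

    # Hypothetical 4-D activation tensor (NHWC), as in a conv net.
    x = tf.placeholder(tf.float32, [None, 28, 28, 64])

    # Low-level op: the caller computes the statistics and owns the
    # scale/offset variables.
    mean, variance = tf.nn.moments(x, axes=[0, 1, 2])
    gamma = tf.Variable(tf.ones([64]))   # scale
    beta = tf.Variable(tf.zeros([64]))   # offset
    y_low = tf.nn.batch_normalization(x, mean, variance, beta, gamma,
                                      variance_epsilon=1e-3)

    # Fused low-level op: 4-D inputs only; in training mode it computes
    # and returns the batch statistics itself.
    y_fused, batch_mean, batch_var = tf.nn.fused_batch_norm(
        x, scale=gamma, offset=beta, epsilon=1e-3, is_training=True)

    # High-level layer: creates and updates the moving mean/variance
    # for you and uses the fused kernel when possible.
    is_training = tf.placeholder(tf.bool)
    y_high = tf.layers.batch_normalization(x, training=is_training)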

As stated in the docs, tf.contrib is a contrib module containing volatile or experimental code. When a function is complete, it will be removed from this module. For now both functions exist, for backward compatibility with earlier versions.

So, it is recommended to use the former, tf.layers.batch_normalization. Note that when training with it, you must also run the ops that update its moving statistics; see the sketch below.
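
A minimal training sketch showing that requirement (the toy network, loss, and learning rate here are illustrative assumptions, not from the answer):

    import tensorflow as tf

    # Illustrative toy network; only the batch-norm wiring matters here.
    x = tf.placeholder(tf.float32, [None, 10])
    labels = tf.placeholder(tf.float32, [None, 1])
    training = tf.placeholder(tf.bool)

    h = tf.layers.dense(x, 32)
    h = tf.layers.batch_normalization(h, training=training)
    h = tf.nn.relu(h)
    logits = tf.layers.dense(h, 1)
    loss = tf.losses.mean_squared_error(labels, logits)

    # The moving mean/variance are updated by ops placed in the
    # UPDATE_OPS collection; they must be attached to the train step,
    # otherwise the statistics used at inference time never update.
    update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
    with tf.control_dependencies(update_ops):
        train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)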

