I am confused by tf.layers.batch_normalization in TensorFlow.
My code is as follows:
def my_net(x, num_classes, phase_train, scope):
    x = tf.layers.conv2d(...)
    x = tf.layers.batch_normalization(x, training=phase_train)
    x = tf.nn.relu(x)
    x = tf.layers.max_pooling2d(...)
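For context, this is roughly how I wire up phase_train and feed it at train and test time (a simplified sketch; the variable names and feed_dict contents are illustrative, not my exact code):

import tensorflow as tf

# phase_train is a boolean placeholder fed at run time, so the same graph
# can switch batch normalization between training and inference behavior
phase_train = tf.placeholder(tf.bool, name='phase_train')

# logits = my_net(images, num_classes, phase_train, scope='my_net')  # names are illustrative

# Training step: BN uses per-batch statistics
# sess.run(train_op, feed_dict={images: batch_x, labels: batch_y, phase_train: True})

# Test step: BN should use the stored moving mean/variance
# sess.run(accuracy_op, feed_dict={images: test_x, labels: test_y, phase_train: False})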
Training seems to work well, and val_accuracy is reasonable (say 0.70). The problem is this: when I use the trained model to run a test (i.e. a test function), if phase_train is set to False, test_accuracy is very low (for example, 0.000270), but when phase_train is set to True, test_accuracy looks correct (say 0.69).
As far as I understand, phase_train should be False at the testing stage, right? I am not sure what the problem is. Am I misunderstanding batch normalization?