Does anyone know whether TensorFlow normalizes its input by default?
I have grayscale images whose pixel values range from approximately 20,000 to 28,000. When I normalized the data, something strange happened: the network trained for a couple of hundred iterations and was making accurate predictions, but then all of the predictions suddenly became NaN. It could not recover from that, of course, since TensorFlow cannot optimize from NaN.
When I didn't normalize, training went fine and converged.
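For reference, this is roughly what I mean by normalizing: a minimal min-max scaling sketch with NumPy (the array shapes and the 20,000–28,000 bounds here are illustrative, not my actual pipeline).

```python
import numpy as np

# Hypothetical batch of grayscale images in the ~20,000-28,000 range.
images = np.random.uniform(20_000, 28_000, size=(4, 64, 64)).astype(np.float32)

# Min-max scale into [0, 1] using the known value range.
lo, hi = 20_000.0, 28_000.0
normalized = (images - lo) / (hi - lo)
```

After this scaling, every value lies in [0, 1] before being fed to the network.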
Any ideas?