A learning rate greater than 0.001 causes an error

I tried to adapt the code from the Udacity Deep Learning course (Assignment 3 - Regularization) and the TensorFlow mnist_with_summaries.py tutorial. My code looks fine:

https://github.com/llevar/udacity_deep_learning/blob/master/multi-layer-net.py

but something strange is happening. All the assignments use a learning rate of 0.5 and at some point introduce exponential decay. However, my code only works when I set the learning rate to 0.001 (with or without decay). If I set the initial rate to 0.1 or higher, I get the following error:

Traceback (most recent call last):
  File "/Users/siakhnin/Documents/workspace/udacity_deep_learning/multi-layer-net.py", line 175, in <module>
    summary, my_accuracy, _ = my_session.run([merged, accuracy, train_step], feed_dict=feed_dict)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 340, in run
    run_metadata_ptr)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 564, in _run
    feed_dict_string, options, run_metadata)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 637, in _do_run
    target_list, options, run_metadata)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 659, in _do_call
    e.code)
tensorflow.python.framework.errors.InvalidArgumentError: Nan in summary histogram for: layer1/weights/summaries/HistogramSummary
     [[Node: layer1/weights/summaries/HistogramSummary = HistogramSummary[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"](layer1/weights/summaries/HistogramSummary/tag, layer1/weights/Variable/read)]]
Caused by op u'layer1/weights/summaries/HistogramSummary', defined at:
  File "/Users/siakhnin/Documents/workspace/udacity_deep_learning/multi-layer-net.py", line 106, in <module>
    layer1, weights_1 = nn_layer(x, num_features, 1024, 'layer1')
  File "/Users/siakhnin/Documents/workspace/udacity_deep_learning/multi-layer-net.py", line 79, in nn_layer
    variable_summaries(weights, layer_name + '/weights')
  File "/Users/siakhnin/Documents/workspace/udacity_deep_learning/multi-layer-net.py", line 65, in variable_summaries
    tf.histogram_summary(name, var)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/ops/logging_ops.py", line 113, in histogram_summary
    tag=tag, values=values, name=scope)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/ops/gen_logging_ops.py", line 55, in _histogram_summary
    name=name)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/ops/op_def_library.py", line 655, in apply_op
    op_def=op_def)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2154, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1154, in __init__
    self._traceback = _extract_stack()

If I set the rate to 0.001, the code finishes with an accuracy of 0.94.

Using TensorFlow 0.8 RC0 on Mac OS X.
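For reference, the exponential decay mentioned above corresponds to tf.train.exponential_decay. A pure-Python sketch of the schedule it computes (the function name and arguments here are illustrative, not taken from the question's code):

```python
def exponential_decay(initial_rate, decay_rate, global_step, decay_steps):
    # learning_rate = initial_rate * decay_rate ** (global_step / decay_steps),
    # the formula tf.train.exponential_decay applies with staircase=False
    return initial_rate * decay_rate ** (global_step / decay_steps)

# starting at 0.5 and halving every 1000 steps:
print(exponential_decay(0.5, 0.5, 0, 1000))     # 0.5
print(exponential_decay(0.5, 0.5, 1000, 1000))  # 0.25
```

Note that decay only shrinks the rate over time; it does not protect the first steps, which is where the divergence below occurs.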


Your model is diverging (producing NaNs). With a learning rate that high, the weights grow without bound, and once a weight becomes NaN the histogram summary op fails.

Note that the error appears after 17 steps, when a NaN first reaches the Histogram summary. That means the NaN originates in the training computation itself, not in the summary code; NaNs typically come from an indeterminate operation such as 0/0. Lower the learning rate, or fix the operation that produces the NaN, and the error goes away.
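A toy illustration (not taken from the question's network) of how too large a step size makes the weights blow up, using plain gradient descent on f(w) = w**2:

```python
def descend(learning_rate, steps=20, w=1.0):
    # gradient descent on f(w) = w**2, whose gradient is 2*w;
    # each update multiplies w by the factor (1 - 2*learning_rate)
    for _ in range(steps):
        w -= learning_rate * 2 * w
    return w

print(descend(0.1))  # shrinks toward 0: factor 0.8 per step
print(descend(1.5))  # factor -2 per step: |w| doubles every step and diverges
```

In a real network the blow-up eventually overflows to inf and then NaN, which is exactly what the histogram summary reports.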

To confirm that the summaries themselves are not the problem, replace

merged = tf.merge_all_summaries()

with

merged = tf.constant(1)

and comment out test_writer.add_summary(summary).

The problem is this line:

diff = y_ * tf.log(y)

which can evaluate 0 * log(0) and produce a NaN.
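A quick check of why that expression is dangerous: in IEEE floating-point arithmetic log(0) evaluates to -inf, and 0 * -inf is defined as NaN, which then propagates through the gradients:

```python
import math

neg_inf = float("-inf")     # the value log(0) takes in IEEE arithmetic
product = 0.0 * neg_inf     # 0 * -inf is NaN, not 0
print(math.isnan(product))  # True
```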

The fix is to clip the predictions away from zero before taking the log:

cross_entropy = -tf.reduce_sum(y_*tf.log(tf.clip_by_value(y_conv,1e-10,1.0)))
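To see why the clipping keeps the loss finite, here is the same cross-entropy written in plain Python (a hypothetical helper mirroring tf.clip_by_value, not code from the repository):

```python
import math

def clipped_cross_entropy(y_true, y_pred, eps=1e-10):
    # clamp each prediction into [eps, 1.0] before the log, as
    # tf.clip_by_value(y_pred, 1e-10, 1.0) does, so log(0) never occurs
    return -sum(t * math.log(min(max(p, eps), 1.0))
                for t, p in zip(y_true, y_pred))

# a confident wrong prediction of exactly 0 now yields a large but
# finite penalty, -log(1e-10) ~= 23.03, instead of NaN/inf:
print(clipped_cross_entropy([1.0, 0.0], [0.0, 1.0]))
```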

See also: Tensorflow NaN bug?


Source: https://habr.com/ru/post/1693837/

