TensorFlow: placeholder error when using tf.merge_all_summaries()

I get a placeholder error.

I do not know what this means, because I feed the placeholders correctly via sess.run(..., feed_dict={_X: X, _y: y}). I provide here a fully functional MWE that reproduces the error:

    import tensorflow as tf
    import numpy as np
    from sklearn.metrics import accuracy_score

    def init_weights(shape):
        return tf.Variable(tf.random_normal(shape, stddev=0.01))

    class NeuralNet:
        def __init__(self, hidden):
            self.hidden = hidden

        def __del__(self):
            self.sess.close()

        def fit(self, X, y):
            self._X = tf.placeholder('float', [None, None])
            self._y = tf.placeholder('float', [None, 1])
            w0 = init_weights([X.shape[1], self.hidden])
            b0 = tf.Variable(tf.zeros([self.hidden]))
            w1 = init_weights([self.hidden, 1])
            b1 = tf.Variable(tf.zeros([1]))

            self.sess = tf.Session()
            self.sess.run(tf.initialize_all_variables())

            h = tf.nn.sigmoid(tf.matmul(self._X, w0) + b0)
            self.yp = tf.nn.sigmoid(tf.matmul(h, w1) + b1)

            C = tf.reduce_mean(tf.square(self.yp - y))
            o = tf.train.GradientDescentOptimizer(0.5).minimize(C)

            correct = tf.equal(tf.argmax(self._y, 1), tf.argmax(self.yp, 1))
            accuracy = tf.reduce_mean(tf.cast(correct, "float"))

            tf.scalar_summary("accuracy", accuracy)
            tf.scalar_summary("loss", C)
            merged = tf.merge_all_summaries()

            import shutil
            shutil.rmtree('logs')
            writer = tf.train.SummaryWriter('logs', self.sess.graph_def)

            for i in xrange(1000 + 1):
                if i % 100 == 0:
                    res = self.sess.run([o, merged], feed_dict={self._X: X, self._y: y})
                else:
                    self.sess.run(o, feed_dict={self._X: X, self._y: y})
            return self

        def predict(self, X):
            yp = self.sess.run(self.yp, feed_dict={self._X: X})
            return (yp >= 0.5).astype(int)

    X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]])
    y = np.array([[0], [1], [1], [0]])

    m = NeuralNet(10)
    m.fit(X, y)
    yp = m.predict(X)[:, 0]
    print accuracy_score(y, yp)

Error:

    I tensorflow/core/common_runtime/local_device.cc:40] Local device intra op parallelism threads: 8
    I tensorflow/core/common_runtime/direct_session.cc:58] Direct session inter op parallelism threads: 8
    0.847222222222
    W tensorflow/core/common_runtime/executor.cc:1076] 0x2340f40 Compute status: Invalid argument: You must feed a value for placeholder tensor 'Placeholder_1' with dtype float
         [[Node: Placeholder_1 = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
    W tensorflow/core/common_runtime/executor.cc:1076] 0x2340f40 Compute status: Invalid argument: You must feed a value for placeholder tensor 'Placeholder' with dtype float
         [[Node: Placeholder = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
    Traceback (most recent call last):
      File "neuralnet.py", line 64, in <module>
        m.fit(X[tr], y[tr, np.newaxis])
      File "neuralnet.py", line 44, in fit
        res = self.sess.run([o, merged], feed_dict={self._X: X, _y: y})
      File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 368, in run
        results = self._do_run(target_list, unique_fetch_targets, feed_dict_string)
      File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 444, in _do_run
        e.code)
    tensorflow.python.framework.errors.InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder_1' with dtype float
         [[Node: Placeholder_1 = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
    Caused by op u'Placeholder_1', defined at:
      File "neuralnet.py", line 64, in <module>
        m.fit(X[tr], y[tr, np.newaxis])
      File "neuralnet.py", line 16, in fit
        _y = tf.placeholder('float', [None, 1])
      File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/array_ops.py", line 673, in placeholder
        name=name)
      File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_array_ops.py", line 463, in _placeholder
        name=name)
      File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/op_def_library.py", line 664, in apply_op
        op_def=op_def)
      File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1834, in create_op
        original_op=self._default_original_op, op_def=op_def)
      File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1043, in __init__
        self._traceback = _extract_stack()

If I remove tf.merge_all_summaries(), or remove merged from self.sess.run([o, merged], ...), then it works fine.

This is similar to this post: Error computing summaries in TensorFlow. However, I am not using IPython ...

1 answer

The tf.merge_all_summaries() function is convenient, but also somewhat dangerous: it merges all of the summaries in the default graph, which includes any summaries from previous, apparently unconnected, invocations of code that also added summary nodes to the default graph. If old summary nodes depend on an old placeholder, you will get errors like the ones you showed in your question (and like those in previous questions as well).
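To see how a stale summary node triggers exactly this error, here is a minimal sketch of the failure mode against the 0.x API (the scalar placeholders x1 and x2 are hypothetical stand-ins, not names from the question's code):

    import tensorflow as tf

    # Two apparently unconnected invocations both add summary nodes to the
    # default graph.
    x1 = tf.placeholder('float', [])   # "old" placeholder from a previous call
    tf.scalar_summary('x1', x1)        # summary node that depends on x1

    x2 = tf.placeholder('float', [])   # placeholder from the current call
    tf.scalar_summary('x2', x2)

    merged = tf.merge_all_summaries()  # merges BOTH summaries

    with tf.Session() as sess:
        # Raises InvalidArgumentError: "You must feed a value for placeholder
        # tensor ..." because `merged` also depends on x1, which is not fed.
        sess.run(merged, feed_dict={x2: 1.0})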

There are two independent workarounds:

  • Make sure that you explicitly collect the summaries that you want to compute. In your example, that is as simple as using the explicit tf.merge_summary() op:

      accuracy_summary = tf.scalar_summary("accuracy", accuracy)
      loss_summary = tf.scalar_summary("loss", C)
      merged = tf.merge_summary([accuracy_summary, loss_summary])
  • Make sure that every time you create a new set of summaries, you do so in a new graph. The recommended style is to use an explicit default graph:

      with tf.Graph().as_default():
        # Build the model and create the session in this scope.
        #
        # Only summary nodes created in this scope will be returned by a call
        # to `tf.merge_all_summaries()`.

    Alternatively, if you are using the latest open-source TensorFlow (or the upcoming 0.7.0 release), you can call tf.reset_default_graph() to reset the graph state and remove all of the old summary nodes.
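To make the second workaround concrete, here is a self-contained sketch against the 0.x API (run_once is a hypothetical helper standing in for one fit() call):

    import tensorflow as tf

    def run_once(value):
        # Build the summaries in a fresh graph each time, so this call can
        # never pick up a placeholder-dependent summary from an earlier one.
        with tf.Graph().as_default():
            x = tf.placeholder('float', [])
            tf.scalar_summary('x', x)
            merged = tf.merge_all_summaries()  # sees only this graph's summaries
            with tf.Session() as sess:
                return sess.run(merged, feed_dict={x: value})

    run_once(1.0)
    run_once(2.0)  # no "must feed a value for placeholder" error this time

In the question's NeuralNet class, the same effect would come from wrapping the body of fit() in a with tf.Graph().as_default(): block (or from calling tf.reset_default_graph() at the top of fit()).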

