Forcing a copy of a tensor when enqueuing

First of all, I'm not sure the title is very good, but it was the best I could come up with given my understanding of the situation.

As background, I am trying to understand how queues work in TensorFlow, and I ran into the following problem, which puzzled me.

I have a variable n that I enqueue to a tf.FIFOQueue and then increment. This is repeated several times, and one would expect a result like 0, 1, 2, ... However, when emptying the queue, all the values are the same.

More precisely, the code is as follows:

from __future__ import print_function

import tensorflow as tf

q = tf.FIFOQueue(10, tf.float32)

n = tf.Variable(0, trainable=False, dtype=tf.float32)
inc = n.assign(n+1)
enqueue = q.enqueue(n)

init = tf.global_variables_initializer()

sess = tf.Session()
sess.run(init)

# Enqueue the current value of n, then increment it; repeated three times.
sess.run(enqueue)
sess.run(inc)

sess.run(enqueue)
sess.run(inc)

sess.run(enqueue)
sess.run(inc)

print(sess.run(q.dequeue()))
print(sess.run(q.dequeue()))
print(sess.run(q.dequeue()))

I expected this to print:

0.0
1.0
2.0

Instead, I get the following result:

3.0
3.0
3.0

It seems that, instead of the values at enqueue time, only the last value of n ends up stored in the queue. Can someone with a good understanding of TensorFlow explain what is happening here, and whether there is a way to fix it?

EDIT: I also tried replacing

enqueue = q.enqueue(n)

with

enqueue = q.enqueue(tf.identity(n))

since answers to other Stack Overflow questions about TensorFlow suggest that tf.identity makes a copy. It did not solve the problem. I also tried tf.control_dependencies(), and again all the dequeued values were the same.
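(For concreteness, the control-dependency attempt might have looked like the following sketch; the exact code is not given in the post, so this is an assumption.)

# Plausible reconstruction of the failed attempt: explicitly order the
# enqueue after reading n. The post reports that the dequeued values
# were still all the same.
with tf.control_dependencies([n]):
    enqueue = q.enqueue(tf.identity(n))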

Answer: whether a copy is made when the value is enqueued is not something TensorFlow guarantees here; it depends on how the ops are placed on devices. In particular, you can get different results depending on whether you run with CUDA_VISIBLE_DEVICES="" or with CUDA_VISIBLE_DEVICES="0".


q.enqueue(tf.add(n, 0))

The tf.add produces a new tensor, which forces a copy of the current value to be made before it is enqueued.
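Putting that fix into the question's program gives a minimal end-to-end version (identical to the code above except for the enqueue line and a loop):

from __future__ import print_function

import tensorflow as tf

q = tf.FIFOQueue(10, tf.float32)

n = tf.Variable(0, trainable=False, dtype=tf.float32)
inc = n.assign(n + 1)
# tf.add produces a new tensor, so the queue receives a copy of the
# current value rather than a reference to the variable's buffer.
enqueue = q.enqueue(tf.add(n, 0))

init = tf.global_variables_initializer()

sess = tf.Session()
sess.run(init)

for _ in range(3):
    sess.run(enqueue)
    sess.run(inc)

print(sess.run(q.dequeue()))  # expected: 0.0
print(sess.run(q.dequeue()))  # expected: 1.0
print(sess.run(q.dequeue()))  # expected: 2.0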

In any case, it is better not to rely on this behavior. For a variable v, the recommended workaround is q.enqueue(v.read_value()), which works as of TF 0.12rc1.

On a GPU machine, when the variable lives on the GPU and the queue on the CPU, the enqueue op involves a GPU -> CPU transfer, which produces a copy as a side effect.
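Applied to the example above, the recommended variant changes only one line (a sketch, assuming TF 0.12rc1 or newer as stated in the answer):

# read_value() returns the variable's current value as an ordinary tensor,
# so the queue receives a snapshot rather than a reference.
enqueue = q.enqueue(n.read_value())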


To complement the accepted answer, since this may be useful to others: which trick is appropriate depends on the dtypes involved.

The trick of adding zero works when n is a float or an int, but not for other dtypes:

q.enqueue(tf.add(n, 0))

The same goes for this variant (again, it works for ints and floats):

q.enqueue_many([[n]])

A workaround that, as far as I can tell, works for all value dtypes is:

q.enqueue(tf.add(n, tf.zeros_like(n)))

and, for a tuple of tensors t:

q.enqueue([tf.add(n, tf.zeros_like(n)) for n in t])

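To make the dtype-agnostic trick concrete, here is a minimal sketch with an int32 variable, mirroring the setup from the question:

import tensorflow as tf

q = tf.FIFOQueue(10, tf.int32)
n = tf.Variable(0, trainable=False, dtype=tf.int32)
inc = n.assign(n + 1)
# zeros_like matches n's dtype automatically, so the same line works for
# any numeric dtype; the add produces a fresh tensor, forcing a copy.
enqueue = q.enqueue(tf.add(n, tf.zeros_like(n)))

sess = tf.Session()
sess.run(tf.global_variables_initializer())
for _ in range(3):
    sess.run(enqueue)
    sess.run(inc)

print(sess.run(q.dequeue()))  # expected: 0
print(sess.run(q.dequeue()))  # expected: 1
print(sess.run(q.dequeue()))  # expected: 2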

Hope this helps!


EDIT: it turns out that tf.bool is not supported by tf.zeros_like(), so this trick does not cover boolean tensors.
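For booleans, one possible workaround (my own suggestion, not from the original thread) is to force the copy with a logical op instead of an arithmetic one:

# Hypothetical workaround for tf.bool: OR-ing a boolean tensor with itself
# yields a new tensor with the same values, which should force a copy on
# enqueue just like tf.add does for numeric dtypes.
q = tf.FIFOQueue(10, tf.bool)
n = tf.Variable(False, trainable=False, dtype=tf.bool)
enqueue = q.enqueue(tf.logical_or(n, n))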


Source: https://habr.com/ru/post/1661134/

