Why do I need to initialize variables in TensorFlow?

I primarily develop my models in R and am currently studying TensorFlow. I am reading a tutorial with the following code:

import tensorflow as tf

sess = tf.InteractiveSession()

raw_data = [1., 2., 8., -1., 0., 5.5, 6., 13]
spike = tf.Variable(False)
spike.initializer.run()

for i in range(1, len(raw_data)):
    if raw_data[i] - raw_data[i-1] > 5:
        updater = tf.assign(spike, True)
        updater.eval()
    else:
        tf.assign(spike, False).eval()
    print("Spike", spike.eval())

sess.close()

From a layman's point of view, why do I need to explicitly create and initialize variables in TensorFlow? I know this may be a basic question, but nothing like this comes up in R.

1 answer

Let's see what the script does:

spike = tf.Variable(False)

This line creates a symbolic variable, i.e. a node in the computation graph, with a constant initializer. No memory has been allocated for this variable at this point, and it is not yet known on which device (CPU or GPU) it will be placed.
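To make the "symbolic node, no value yet" idea concrete, here is a toy pure-Python analogy. This is not TensorFlow's actual implementation; the `ToyVariable` class and its methods are invented for illustration. The object records its initializer when it is created, but computes nothing until you explicitly run it, and reading it earlier is an error (mirroring TensorFlow's `FailedPreconditionError`):

```python
class ToyVariable:
    """Toy analogy for a graph variable: creation records an
    initializer function but does not execute it."""

    def __init__(self, init_fn):
        self._init_fn = init_fn      # recorded, not yet executed
        self._value = None
        self._initialized = False

    @property
    def initializer(self):
        return self

    def run(self):
        # Only now is the initial value actually computed and stored.
        self._value = self._init_fn()
        self._initialized = True

    def eval(self):
        if not self._initialized:
            raise RuntimeError("Attempting to use uninitialized variable")
        return self._value


spike = ToyVariable(lambda: False)   # creation: nothing computed yet
try:
    spike.eval()                     # fails: not initialized
except RuntimeError as e:
    print(e)
spike.initializer.run()              # explicit initialization step
print(spike.eval())                  # now readable
```

The explicit `initializer.run()` call in the tutorial code plays the same role as `run()` here.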

Next:

spike.initializer.run()

This line actually runs the initializer for spike: memory is allocated on the chosen device and the variable is given its initial value (False).

Why split creation and initialization into two steps? There are several reasons. First, building the graph is separate from executing it: defining the variable only adds a node to the graph, and it is the session that later allocates memory and places the variable on a device. The session may even run in a different process, or on a different machine, than the Python client that built the graph.

Second, the initializer is not always a constant. It can itself be a computation, for example a Xavier (Glorot) random initializer, which must be executed to produce the initial values.

Third, it fits the TensorFlow philosophy that everything is an op. Making initialization an op means TensorFlow can execute it inside the graph, on whatever device holds the variable, rather than in Python.
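As a sketch of the second point, here is roughly what a Xavier/Glorot-style uniform initializer computes. This is a hand-rolled illustration, not TensorFlow's own initializer: the sampling range depends on the layer's fan-in and fan-out, so the initial values must be produced by running code, not copied from a stored constant:

```python
import math
import random


def xavier_uniform(fan_in, fan_out, seed=0):
    """Glorot/Xavier-style uniform init: draw weights from
    U(-limit, limit), where limit depends on the layer shape."""
    rng = random.Random(seed)
    limit = math.sqrt(6.0 / (fan_in + fan_out))
    return [[rng.uniform(-limit, limit) for _ in range(fan_out)]
            for _ in range(fan_in)]


# A 64 -> 32 layer: the weight matrix is computed at initialization time.
weights = xavier_uniform(fan_in=64, fan_out=32)
limit = math.sqrt(6.0 / (64 + 32))
print(all(-limit <= w <= limit for row in weights for w in row))
```

Deferring this computation to an initializer op lets TensorFlow run it once, on the right device, when the session starts.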

If you want to skip the explicit initialization step, you can enable eager execution (available in TensorFlow 1.x through contrib):

import tensorflow.contrib.eager as tfe
tfe.enable_eager_execution()

With eager execution enabled, variables are initialized immediately when they are created, much as in R or plain Python.


Source: https://habr.com/ru/post/1691600/

