Understanding the Variable Scope Example in TensorFlow

I was looking at the mechanics section of the TensorFlow documentation, in particular the part on sharing variables. In the “Problem” section, they deal with a convolutional neural network and provide the following code (which runs an image through the model):

# First call creates one set of variables.
result1 = my_image_filter(image1)
# Another set is created in the second call.
result2 = my_image_filter(image2)

If the model were implemented this way, wouldn't it be impossible to learn/update the parameters, because a new set of parameters would be created for each image in my training set?
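For instance, here is a minimal sketch of what I mean (using a hypothetical one-layer stand-in for the docs' my_image_filter and the old-style tf.Variable API; the shapes and names below are illustrative, not from the docs). Printing the variable names after two calls shows that TensorFlow deduplicates them, i.e. two distinct parameter sets exist:

import tensorflow as tf

def my_image_filter(input_images):
    # tf.Variable creates a brand-new variable on every call
    conv1_weights = tf.Variable(tf.random_normal([5, 5, 32, 32]),
                                name="conv1_weights")
    return tf.nn.conv2d(input_images, conv1_weights,
                        strides=[1, 1, 1, 1], padding="SAME")

image1 = tf.placeholder("float", [None, 28, 28, 32])
image2 = tf.placeholder("float", [None, 28, 28, 32])
result1 = my_image_filter(image1)
result2 = my_image_filter(image2)

# Two separate weight variables now live in the graph:
print([v.name for v in tf.all_variables()])
# e.g. ['conv1_weights:0', 'conv1_weights_1:0']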

Edit: I also tried reproducing the “problem” in a simple linear regression example, and there seemed to be no issue with this implementation method. Training works, as the last line of code shows. So I am wondering whether there is a subtle inconsistency between the TensorFlow documentation and what I am doing.

import tensorflow as tf
import numpy as np

trX = np.linspace(-1, 1, 101)
trY = 2 * trX + np.random.randn(*trX.shape) * 0.33 # create a y value which is approximately linear but with some random noise

X = tf.placeholder("float") # create symbolic variables
Y = tf.placeholder("float")


def model(X):
    with tf.variable_scope("param"):
        w = tf.Variable(0.0, name="weights") # create a shared variable (like theano.shared) for the weight matrix

    return tf.mul(X, w) # lr is just X*w so this model line is pretty simple


y_model = model(X)

cost = (tf.pow(Y-y_model, 2)) # use sqr error for cost function

train_op = tf.train.GradientDescentOptimizer(0.01).minimize(cost) # construct an optimizer to minimize cost and fit line to my data

sess = tf.Session()
init = tf.initialize_all_variables() # you need to initialize variables (in this case just variable W)
sess.run(init)

with tf.variable_scope("train"):
    for i in range(100):
        for (x, y) in zip(trX, trY):
            sess.run(train_op, feed_dict={X: x, Y: y})

print sess.run(y_model, feed_dict={X: np.array([1,2,3])})
1 answer

The set of variables needs to be created only once, for the whole training (and testing) set. The purpose of variable scopes is to allow for modularization of subsets of parameters, such as those belonging to layers (for example, when a layer architecture is repeated, the same names can be used within each layer's scope).
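As an illustrative sketch (the scope names are made up, not from your code), the same variable name can safely be reused inside different layer scopes:

with tf.variable_scope("layer1"):
    w1 = tf.get_variable("weights", [1])
with tf.variable_scope("layer2"):
    w2 = tf.get_variable("weights", [1])

print(w1.name, w2.name)  # layer1/weights:0 layer2/weights:0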

In your example you create parameters only in the model function. You can print out the variable names to check that they are assigned to the scope you expect:

from __future__ import print_function

X = tf.placeholder("float") # create symbolic variables
Y = tf.placeholder("float")
print("X:", X.name)
print("Y:", Y.name)

def model(X):
    with tf.variable_scope("param"):
        w = tf.Variable(0.0, name="weights") # create a shared variable (like theano.shared) for the weight matrix
    print("w:", w.name)
    return tf.mul(X, w) 

The call sess.run(train_op, feed_dict={X: x, Y: y}) only evaluates the value of train_op given the provided values of X and Y. No new variables (including parameters) are created there; therefore, it has no effect on the variable set. You can make sure the variable names stay the same by printing them out again:

with tf.variable_scope("train"):
    print("X:", X.name)
    print("Y:", Y.name)
    for i in range(100):
        for (x, y) in zip(trX, trY):
            sess.run(train_op, feed_dict={X: x, Y: y})

You will see that the variable names stay the same, because the variables have already been created and sess.run only evaluates the existing graph.
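As a quick sanity check (a sketch that assumes the session and ops defined above), you can count the variables before and after a training step:

n_before = len(tf.all_variables())
sess.run(train_op, feed_dict={X: trX[0], Y: trY[0]})
n_after = len(tf.all_variables())
print(n_before == n_after)  # True: running train_op creates no variables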

Note that if you want to retrieve a variable through its scope (and share it between calls), you need to create it with get_variable inside a tf.variable_scope block:

with tf.variable_scope("param"):
    w = tf.get_variable("weights", [1])
print("w:", w.name)
