TensorFlow: how to ensure that tensors are on the same graph

I am trying to get started with TensorFlow in Python by building a simple feed-forward NN. I have one class that holds the network weights (variables that are updated during training and should remain constant at run time) and a separate script that trains the network: it receives the training data, splits it into batches, and trains the network batch by batch. When I try to train the network, I get an error saying that the data tensor is not on the same graph as the NN tensors:

ValueError: Tensor("Placeholder:0", shape=(10, 5), dtype=float32) must be from the same graph as Tensor("windows/embedding/Cast:0", shape=(100232, 50), dtype=float32).

Relevant parts of the training script:

def placeholder_inputs(batch_size, ner):
    windows_placeholder = tf.placeholder(tf.float32, shape=(batch_size, ner.windowsize))
    labels_placeholder = tf.placeholder(tf.int32, shape=(batch_size))
    return windows_placeholder, labels_placeholder

with tf.Session() as sess:
    windows_placeholder, labels_placeholder = placeholder_inputs(batch_size, ner)
    logits = ner.inference(windows_placeholder)

And the corresponding parts of the network class:

class WindowNER(object):
    def __init__(self, wv, windowsize=3, dims=[None, 100, 5], reg=0.01):
        self.reg = reg
        self.windowsize = windowsize
        self.vocab_size = wv.shape[0]
        self.embedding_dim = wv.shape[1]
        with tf.name_scope("embedding"):
            self.L = tf.cast(tf.Variable(wv, trainable=True, name="L"), tf.float32)
        with tf.name_scope('hidden1'):
            self.W = tf.Variable(
                tf.truncated_normal([windowsize * self.embedding_dim, dims[1]],
                                    stddev=1.0 / math.sqrt(float(windowsize * self.embedding_dim))),
                name='weights')
            self.b1 = tf.Variable(tf.zeros([dims[1]]), name='biases')
        with tf.name_scope('output'):
            self.U = tf.Variable(
                tf.truncated_normal([dims[1], dims[2]],
                                    stddev=1.0 / math.sqrt(float(dims[1]))),
                name='weights')
            self.b2 = tf.Variable(tf.zeros([dims[2]]), name='biases')

    def inference(self, windows):
        with tf.name_scope("embedding"):
            embedded_words = tf.reshape(
                tf.nn.embedding_lookup(self.L, windows),
                [windows.get_shape()[0], self.windowsize * self.embedding_dim])
        with tf.name_scope("hidden1"):
            h = tf.nn.tanh(tf.matmul(embedded_words, self.W) + self.b1)
        with tf.name_scope('output'):
            t = tf.matmul(h, self.U) + self.b2

Why are there two graphs in the first place, and how can I ensure that the data placeholder tensors are on the same graph as the NN tensors?

Thanks!!

2 answers

You should be able to create all tensors under the same graph by doing something like this:

g = tf.Graph()
with g.as_default():
    windows_placeholder, labels_placeholder = placeholder_inputs(batch_size, ner)
    logits = ner.inference(windows_placeholder)

with tf.Session(graph=g) as sess:
    pass  # Run the session etc.

More details about graphs in TF can be found here: https://www.tensorflow.org/versions/r0.8/api_docs/python/framework.html#Graph
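Note that in the code from the question, the WindowNER variables are created when the object is constructed, so the constructor call itself also has to happen inside the same graph context. Below is a minimal sketch of how the training script could be arranged, assuming wv, batch_size, the WindowNER class, and placeholder_inputs are defined as in the question (the initializer name is the TF 0.x one):

g = tf.Graph()
with g.as_default():
    # Constructing the network here places its Variables (L, W, b1, U, b2)
    # on graph g, the same graph the placeholders below will belong to.
    ner = WindowNER(wv)
    windows_placeholder, labels_placeholder = placeholder_inputs(batch_size, ner)
    logits = ner.inference(windows_placeholder)

with tf.Session(graph=g) as sess:
    sess.run(tf.initialize_all_variables())  # TF 0.x name for the variable initializer
    # Then train in batches, e.g.:
    # sess.run(train_op, feed_dict={windows_placeholder: batch_windows,
    #                               labels_placeholder: batch_labels})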


Sometimes when you get an error like this, the actual mistake (often using the wrong variable from another graph) may have happened much earlier and simply propagated to the operation that finally raised the error. So you might be looking only at that line and concluding that those tensors must be from the same graph, while the real error lies somewhere else.

The easiest way to check is to print out which graph each variable/op belongs to. You can do this simply with:

 print(variable_name.graph) 
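For example, a minimal check using the names from the question (assuming ner and the placeholders have already been created):

print(windows_placeholder.graph)
print(ner.L.graph)
# If this prints False, the placeholder and the embedding variable
# were created on different graphs.
print(windows_placeholder.graph is ner.L.graph)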

Source: https://habr.com/ru/post/1011758/

