Restoring a model trained with a variable input length in TensorFlow results in an InvalidArgumentError

I am new to TensorFlow and am currently experimenting with models of varying complexity. I have a problem with the save and restore functionality. As far as I understood the tutorials, I should be able to restore a saved graph and run it with some new input at a later point. However, when I try to do this, I get the following error:

InvalidArgumentError (see above for traceback): Shape [-1,10] has negative dimensions [[Node: Placeholder = Placeholder[dtype=DT_FLOAT, shape=[?,10], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]

My understanding of the message is that the restored graph does not like one dimension being left arbitrary, which, in turn, is necessary in practice because I do not know in advance how large my input will be. A minimal code snippet that reproduces the error can be found below. I know how to restore each tensor individually, but that becomes impractical pretty quickly as models grow in complexity. I am grateful for any help and apologize if my question is stupid.

import numpy as np
import tensorflow as tf

def generate_random_input():
    alist = []
    for _ in range(10):
        alist.append(np.random.uniform(-1, 1, 100))
    return np.array(alist).T

def generate_random_target():
    return np.random.uniform(-1, 1, 100)

x = tf.placeholder('float', [None, 10])
y = tf.placeholder('float')

# the model 
w1 = tf.get_variable('w1', [10, 1], dtype=tf.float32, initializer=tf.contrib.layers.xavier_initializer(seed=1))
b1 = tf.get_variable('b1', [1], dtype=tf.float32, initializer=tf.contrib.layers.xavier_initializer(seed=1))

result = tf.add(tf.matmul(x, w1), b1, name='result')

loss = tf.reduce_mean(tf.losses.mean_squared_error(predictions=result, labels=y))
optimizer = tf.train.AdamOptimizer(0.03).minimize(loss)
saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run([optimizer, loss], feed_dict={x: generate_random_input(), y: generate_random_target()})
    saver.save(sess, 'file_name')

# now load the model in another session:
sess2 = tf.Session()
saver = tf.train.import_meta_graph('file_name.meta')
saver.restore(sess2, tf.train.latest_checkpoint('./'))
graph = tf.get_default_graph()
pred = graph.get_operation_by_name('result')
test_result = sess2.run(pred, feed_dict={x: generate_random_input()})
2 answers

The error means that a placeholder is not being fed. When a graph is serialized, a placeholder's unspecified dimension (your None, and likewise the unspecified shape of the label placeholder y) is stored as -1, and that -1 only becomes a concrete size when the placeholder actually receives data. So "Shape [-1,10] has negative dimensions" is TensorFlow's confusing way of saying that the [None, 10] input placeholder was never fed: the tensor in your feed_dict has to be the placeholder of the restored graph, not the Python variable x from the graph you built earlier.
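
For reference, here is a minimal sketch of one way to act on that (TF 1.x, assuming the default op names produced by the question's snippet: the unnamed x placeholder becomes 'Placeholder', and the output tensor is 'result:0'; this may not be the answerer's exact intent):

import numpy as np
import tensorflow as tf

tf.reset_default_graph()  # start clean so the imported names don't get suffixed
sess2 = tf.Session()
saver = tf.train.import_meta_graph('file_name.meta')
saver.restore(sess2, tf.train.latest_checkpoint('./'))

graph = tf.get_default_graph()
x_restored = graph.get_tensor_by_name('Placeholder:0')  # the restored input placeholder
pred = graph.get_tensor_by_name('result:0')             # the tensor, not the operation

# any batch size works, since the first dimension is still None
test_result = sess2.run(pred, feed_dict={x_restored: np.random.uniform(-1, 1, (7, 10))})

Giving the placeholder an explicit name at creation time (tf.placeholder('float', [None, 10], name='x')) makes the lookup less fragile than relying on the default 'Placeholder' name.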


I had the same problem, in my case with a CNN. The way I solved it was to put the graph construction into a function, so that exactly the same model can be rebuilt before restoring the weights. Roughly like this:

def create_model():
    x = tf.placeholder('float', [None, 10])
    y = tf.placeholder('float')   
    w1 = tf.get_variable('w1', [10, 1], dtype=tf.float32, initializer=tf.contrib.layers.xavier_initializer(seed=1))
    b1 = tf.get_variable('b1', [1], dtype=tf.float32, initializer=tf.contrib.layers.xavier_initializer(seed=1))
    result = tf.add(tf.matmul(x, w1), b1, name='result')
    return x, y, result

x, y, result = create_model()
loss = tf.reduce_mean(tf.losses.mean_squared_error(predictions=result, labels=y))
optimizer = tf.train.AdamOptimizer(0.03).minimize(loss)
saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run([optimizer, loss], feed_dict={x: generate_random_input(), y: generate_random_target()})
    saver.save(sess, 'file_name')

# now load the model in another session:
tf.reset_default_graph()  # start from an empty graph so get_variable doesn't complain about duplicates
sess2 = tf.Session()
# Rebuilding is only needed in a fresh graph/process; in the same process you could
# skip the reset and the rebuild and reuse the existing x, y, result with the new session
x, y, result = create_model()
saver = tf.train.Saver()
# loss = ... if you want loss
# Now just restore the weights and run
saver.restore(sess2, 'file_name')
test_result = sess2.run(result, feed_dict={x: generate_random_input()})

With this approach you never need import_meta_graph: the graph is rebuilt from the same code, and tf.train.Saver matches the checkpoint values to the variables by name. The placeholders keep their free None dimension, so the restored model still accepts input of any length.
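
As a quick sanity check (a sketch reusing the names from the snippet above), the rebuilt placeholder really does accept batches of different sizes:

print(sess2.run(result, feed_dict={x: np.random.uniform(-1, 1, (5, 10))}).shape)    # (5, 1)
print(sess2.run(result, feed_dict={x: np.random.uniform(-1, 1, (500, 10))}).shape)  # (500, 1)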

