TensorFlow input data switching: train / validation

I have data flowing into my graph through queue runners, after switching away from the convenient but slow feed_dict placeholders.

After each training epoch, I want to run a validation pass. The validation pass uses different data than training, with no augmentation and no shuffling.

The question is simple: how do I switch between the two?

A few observations:

  • I cannot toggle the shuffle option of tf.train.string_input_producer with a tf.placeholder boolean.
  • The only examples I found use a placeholder to separate training from validation data, and they do not use queue runners.
  • I managed to do the above with tf.cond(), checking an is_training tf.placeholder boolean that I pass in through feed_dict. Is this the best solution? How expensive is tf.cond()?
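The tf.cond() approach from the last bullet can be sketched as follows. This is a minimal illustration, not the asker's actual pipeline: train_batch and val_batch are constants standing in for the queue-runner outputs, and tf.compat.v1 is used so the graph-mode snippet runs on current TensorFlow installs.

```python
# Sketch: switching between two input pipelines with tf.cond() and an
# is_training placeholder fed through feed_dict.
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

train_batch = tf.constant([[1.0, 2.0], [3.0, 4.0]])  # stand-in for the training queue output
val_batch = tf.constant([[5.0, 6.0], [7.0, 8.0]])    # stand-in for the validation queue output

is_training = tf.placeholder(tf.bool, shape=[], name="is_training")

# tf.cond builds BOTH branches into the graph, but at run time only the
# selected branch's ops execute, so the per-step cost is small. (Note that
# with real queue inputs there are known subtleties about ops created
# outside the branch lambdas being run unconditionally.)
batch = tf.cond(is_training, lambda: train_batch, lambda: val_batch)

with tf.Session() as sess:
    train_out = sess.run(batch, feed_dict={is_training: True})
    val_out = sess.run(batch, feed_dict={is_training: False})
```

The predicate itself is cheap; the main caveat with queue-fed branches is making sure the dequeue ops live inside the branch functions.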
+4
2 answers

A method that works well for me is to use tf.placeholder_with_default:

images_train, labels_train = train_data_pipeline(fnlist_train, ref_grid)
images_val, labels_val = val_data_pipeline(fnlist_val, ref_grid)
images = tf.placeholder_with_default(images_train, shape=[None, FLAGS.nx_image, FLAGS.ny_image, FLAGS.nz_image])
labels = tf.placeholder_with_default(labels_train, shape=[None, label_length])

During training you fetch images and labels directly, without passing a feed_dict to sess.run(), so the defaults pull batches from the training queue. For a validation step, you first call sess.run([images_val, labels_val]) to pull a batch from the validation queue out as numpy arrays, and then feed those arrays back in through feed_dict for images and labels. The extra hop (queue ==> numpy ==> feed_dict) adds a little copying overhead, but it only happens on validation steps.
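A minimal end-to-end sketch of that loop, with the pipelines replaced by constants so it runs standalone (the FLAGS-derived shapes are simplified to [None, 2], and the "model" is a stand-in multiply):

```python
# Sketch: tf.placeholder_with_default defaults to the training pipeline;
# validation batches are pulled out as numpy and fed back in.
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

images_train = tf.constant([[0.1, 0.2]])  # stand-in for the training pipeline
images_val = tf.constant([[0.9, 0.8]])    # stand-in for the validation pipeline

# With no feed_dict, `images` evaluates its default, i.e. the training input.
images = tf.placeholder_with_default(images_train, shape=[None, 2])
model_out = images * 2.0  # stand-in for the model

with tf.Session() as sess:
    # Training step: no feed_dict, the default (training) input is used.
    train_result = sess.run(model_out)
    # Validation step: queue ==> numpy ==> feed_dict.
    val_batch = sess.run(images_val)
    val_result = sess.run(model_out, feed_dict={images: val_batch})
```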

+3

Another option is tf.make_template, which lets the training and test parts of the graph share the same variables (see the tests at https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/kernel_tests/template_test.py); for example:

training_input, training_output = ([1., 2., 3., 4.], [2.8, 5.1, 7.2, 8.7])
test_input, test_output = ([5., 6., 7., 8.], [11, 13, 15, 17])

tf.set_random_seed(1234)

def test_line(x):
  m = tf.get_variable("w", shape=[],
                      initializer=tf.truncated_normal_initializer())
  b = tf.get_variable("b", shape=[],
                      initializer=tf.truncated_normal_initializer())
  return x * m + b

line_template = tf.make_template("line", test_line)

train_prediction = line_template(training_input)
test_prediction = line_template(test_input)

train_loss = tf.reduce_mean(tf.square(train_prediction - training_output))
test_loss = tf.reduce_mean(tf.square(test_prediction - test_output))

optimizer = tf.train.GradientDescentOptimizer(0.1)
train_op = optimizer.minimize(train_loss)

with tf.Session() as sess:
  sess.run(tf.global_variables_initializer())
  initial_test_loss = sess.run(test_loss)
  sess.run(train_op)
  final_test_loss = sess.run(test_loss)

# Parameters are tied, so the loss should have gone down when we trained it.
assert final_test_loss < initial_test_loss
+1

Source: https://habr.com/ru/post/1655332/