I have a TensorFlow-based neural network and a set of variables.
The learning function is as follows:
def train(load=True, step=1):  # 'step' needs a default since it follows a defaulted argument
    """
    Defining the neural network is skipped here
    """
    train_step = tf.train.AdamOptimizer(1e-4).minimize(mse)
    saver = tf.train.Saver()
    if not load:
        sess.run(tf.initialize_all_variables())
    else:
        saver.restore(sess, 'Variables/map.ckpt')
        print 'Model Restored!'
    for i in xrange(step):
        # 'train_data' is the training batch; it must not be named 'train',
        # or it would shadow (and here refer to) this function itself
        train_step.run(feed_dict={x: train_data, y_: label})
    save_path = saver.save(sess, 'Variables/map.ckpt')
    print 'Model saved in file: ', save_path
    print 'Training Done!'
I called the training function as follows:
train(False, 1)
for i in xrange(10):
    train(True, 10)
I trained this way because I needed to feed different data to my model on each call. However, when I call the train function like this, TensorFlow raises an error saying it cannot read the saved model from the file.
After some experiments, I found that this is caused by the checkpoint being saved slowly: the next call to train starts reading the checkpoint before the file has been fully written to disk, which produces the error.
I tried using time.sleep() to add a delay between the calls, but that did not work.
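If the checkpoint file really is still being flushed when the next call starts, one alternative to a fixed sleep is to poll for the file before restoring. The sketch below is a hypothetical helper (`wait_for_checkpoint` is not a TensorFlow API, and the path is assumed to match the one passed to `saver.save`):

```python
import os
import time

def wait_for_checkpoint(path, timeout=10.0, interval=0.1):
    """Poll until the checkpoint file appears on disk, or give up.

    Returns True if the file exists within `timeout` seconds,
    False otherwise. Hypothetical workaround, not a TensorFlow API.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        if os.path.exists(path):
            return True
        time.sleep(interval)
    return False
```

The caller would then check `wait_for_checkpoint('Variables/map.ckpt')` before invoking `train(True, 10)`, rather than sleeping for a fixed interval and hoping the write has finished.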