I am training a Generative Adversarial Network (GAN) in TensorFlow, where we essentially have two different networks, each with its own optimizer.
    self.G, self.layer = self.generator(self.inputCT, batch_size_tf)
    self.D, self.D_logits = self.discriminator(self.GT_1hot)

    ...

    self.g_optim = tf.train.MomentumOptimizer(self.learning_rate_tensor, 0.9) \
                           .minimize(self.g_loss, global_step=self.global_step)
    self.d_optim = tf.train.AdamOptimizer(self.learning_rate, beta1=0.5) \
                           .minimize(self.d_loss, var_list=self.d_vars)
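For context: in TF 1.x, tf.train.AdamOptimizer adds its own bookkeeping variables to the graph (beta1_power, beta2_power, and an m/v slot for every variable in var_list), and a tf.train.Saver built without a var_list will try to restore all of them. A minimal sketch (assuming the graph above has already been built) to list everything such a Saver would look for:

    import tensorflow as tf

    # Every variable in the graph, including Adam's beta1_power/beta2_power
    # and the per-variable m/v slots created by d_optim; a Saver built with
    # no var_list will try to restore all of these from the checkpoint.
    for v in tf.global_variables():
        print(v.name)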
The problem is that I first train one of the networks (g) on its own, and then I want to train g and d together. However, when I call the load function:
    self.sess.run(tf.initialize_all_variables())
    self.sess.graph.finalize()
    self.load(self.checkpoint_dir)

    def load(self, checkpoint_dir):
        print(" [*] Reading checkpoints...")
        ckpt = tf.train.get_checkpoint_state(checkpoint_dir)
        if ckpt and ckpt.model_checkpoint_path:
            ckpt_name = os.path.basename(ckpt.model_checkpoint_path)
            self.saver.restore(self.sess, ckpt.model_checkpoint_path)
            return True
        else:
            return False
I get an error similar to this (with a long traceback):
    Tensor name "beta2_power" not found in checkpoint files checkpoint/MR2CT.model-96000
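To confirm what the checkpoint actually contains (the g weights, but none of the Adam variables), the file can be inspected directly. A minimal sketch, assuming TF 1.x and the checkpoint path from the error message above:

    import tensorflow as tf

    # List every tensor stored in the checkpoint; since it was written
    # before d_optim existed, beta1_power/beta2_power should be absent.
    reader = tf.train.NewCheckpointReader("checkpoint/MR2CT.model-96000")
    for name in sorted(reader.get_variable_to_shape_map()):
        print(name)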
I can restore the g network and continue training with this function, but when I want to train d from scratch and g from the saved model, I get this error.
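What I think is needed is a partial restore: a second Saver restricted to the generator's variables, run after initializing everything. A minimal sketch, assuming the generator variables can be filtered by a 'g_' name prefix (hypothetical; adjust the filter to however self.d_vars is actually collected):

    import tensorflow as tf

    # Saver limited to the generator variables, so restore() never asks the
    # old checkpoint for d's weights or Adam's beta2_power.
    # The 'g_' prefix is an assumption; use your actual variable scopes.
    g_vars = [v for v in tf.global_variables() if v.name.startswith('g_')]
    g_saver = tf.train.Saver(var_list=g_vars)

    # Initialize d, the Adam slots, and g from scratch first, then overwrite
    # only the generator with the saved weights.
    sess.run(tf.global_variables_initializer())
    g_saver.restore(sess, "checkpoint/MR2CT.model-96000")

The full self.saver can still be used to save complete checkpoints afterwards; the one constraint is that g_saver must be created before self.sess.graph.finalize() is called, since constructing a Saver adds ops to the graph.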