I tried using tf.Print debug statements to better understand the format of the gradients and variables reported by compute_gradients(), but ran into an unexpected problem. The training procedure and the debugging procedure (gvdebug) are as follows:
    def gvdebug(g, v):
        g2 = tf.zeros_like(g, dtype=tf.float32)
        v2 = tf.zeros_like(v, dtype=tf.float32)
        #g2 = tf.Print(g2, [g2], 'G: ')
        #v2 = tf.Print(v2, [v2], 'V: ')
        g2 = g
        v2 = v
        return g2, v2
    def training(loss, global_step, learning_rate=0.1):
        optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
        grads_and_vars = optimizer.compute_gradients(loss)
        gv2 = [gvdebug(gv[0], gv[1]) for gv in grads_and_vars]
        train_op = optimizer.apply_gradients(gv2, global_step=global_step)
        return train_op
This code works fine (but doesn't print anything). However, if I uncomment the two tf.Print lines in gvdebug(), I get an error from apply_gradients: 'TypeError: Variable should be tf.Variable'. I thought tf.Print simply passed tensors through - what am I doing wrong?
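For anyone hitting the same error, here is a minimal sketch of the distinction the traceback complains about (written against TF 2.x purely for illustration, with tf.identity standing in for tf.Print above; both are ops that return a new tf.Tensor rather than the original tf.Variable):

    import tensorflow as tf

    # Any op applied to a variable returns a plain tf.Tensor, so wrapping the
    # variable half of a (gradient, variable) pair hands apply_gradients a
    # Tensor where it requires a tf.Variable.
    v = tf.Variable([1.0, 2.0])
    wrapped = tf.identity(v)  # stand-in for tf.Print(v2, [v2], 'V: ')

    print(isinstance(v, tf.Variable))        # True
    print(isinstance(wrapped, tf.Variable))  # False

This suggests the variable slot must be returned untouched; only the gradient can safely be routed through a tf.Print op.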