Running multiple pre-trained TensorFlow networks at once

What I would like to do is run several pre-trained TensorFlow networks at the same time. Since some variable names may be identical across networks, a common solution is to wrap each network in its own name scope when building it. The problem is that I already trained these models and saved their variables in several checkpoint files. If I add a name scope when building a network, I can no longer load its variables from the checkpoint files.

For example, I trained AlexNet, and I would like to compare two sets of variables: one from epoch 10 (saved in epoch_10.ckpt) and the other from epoch 50 (saved in epoch_50.ckpt). Since the two are exactly the same network, the variable names inside are identical. I can create two networks using

with tf.name_scope("net1"):
    net1 = CreateAlexNet()
with tf.name_scope("net2"):
    net2 = CreateAlexNet()

However, I cannot load the trained variables from the .ckpt files, because I did not use a name scope when I trained the network. Even if I set the name scope to "net1" during training, that still prevents me from loading the variables into net2.

I tried:

with tf.name_scope("net1"):
    mySaver.restore(sess, 'epoch_10.ckpt')
with tf.name_scope("net2"):
    mySaver.restore(sess, 'epoch_50.ckpt')

This does not work.

What is the best way to solve this problem?

2 answers

The easiest solution is to give each network its own graph and its own session:

# Build a graph containing `net1`.
with tf.Graph().as_default() as net1_graph:
  net1 = CreateAlexNet()
  saver1 = tf.train.Saver(...)
sess1 = tf.Session(graph=net1_graph)
saver1.restore(sess1, 'epoch_10.ckpt')

# Build a separate graph containing `net2`.
with tf.Graph().as_default() as net2_graph:
  net2 = CreateAlexNet()
  saver2 = tf.train.Saver(...)
sess2 = tf.Session(graph=net2_graph)
saver2.restore(sess2, 'epoch_50.ckpt')

If, for some reason, you need both networks in the same graph and the same tf.Session (for example, because you want to combine tensors from the two networks), keep your name scopes and build one tf.train.Saver per network:

The constructor argument var_list can be a dictionary mapping the names used in the checkpoint (i.e., the names without the name-scope prefix) to the tf.Variable objects you created.

You can build var_list programmatically, for example like this:

with tf.name_scope("net1"):
  net1 = CreateAlexNet()
with tf.name_scope("net2"):
  net2 = CreateAlexNet()

# Strip off the "net1/" prefix to recover the variable names used in the
# checkpoint. Note: slice by prefix length on v.op.name (which has no ":0"
# output suffix); str.lstrip would strip a character set, not the prefix.
net1_varlist = {v.op.name[len("net1/"):]: v
                for v in tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope="net1/")}
net1_saver = tf.train.Saver(var_list=net1_varlist)

# Strip off the "net2/" prefix in the same way.
net2_varlist = {v.op.name[len("net2/"):]: v
                for v in tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope="net2/")}
net2_saver = tf.train.Saver(var_list=net2_varlist)

# ...
net1_saver.restore(sess, "epoch_10.ckpt")
net2_saver.restore(sess, "epoch_50.ckpt")
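One pitfall when stripping the scope prefix: str.lstrip does not remove a literal prefix, it removes a *set of characters* from the left, so lstrip("net1/") keeps eating any run of the characters n, e, t, 1, / and will mangle variable names that happen to start with the same letters (e.g. "net1/net1_weights"). Slicing by the prefix length is the safe approach. A minimal pure-Python illustration (the helper name strip_scope_prefix is mine, not a TensorFlow API):

```python
def strip_scope_prefix(name, scope):
    """Drop a leading "<scope>/" from a variable name by length, not with lstrip."""
    prefix = scope + "/"
    return name[len(prefix):] if name.startswith(prefix) else name

# lstrip over-strips: the leading "net1" of "net1_weights" is also
# made of characters from the set {"n", "e", "t", "1", "/"}.
print("net1/net1_weights".lstrip("net1/"))              # "_weights"  (wrong)
print(strip_scope_prefix("net1/net1_weights", "net1"))  # "net1_weights"
```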

One more thing to keep in mind: a Saver in TensorFlow is bound to the graph it was created in, and to the variables that existed when it was constructed.

tf.train.Saver() adds save and restore ops to the graph. By default it covers every variable in the graph, which is why restoring two same-named networks needs either separate graphs or an explicit var_list per Saver, as in the answer above.


Source: https://habr.com/ru/post/1652660/
