How to share / reuse layers / variables across different graphs in TensorFlow?

I am building a complex neural network model in which two networks share several layers. My approach was to create two TensorFlow graphs and share the layers / variables between them. However, I get an error while building the networks.

    import tensorflow as tf

    def create_network(self):
        self.state_tensor = tf.placeholder(tf.float64, [None, self.state_size], name="state")
        self.action_tensor = tf.placeholder(tf.float64, [None, self.action_size], name="action")

        self.actor_graph = tf.Graph()
        with self.actor_graph.as_default():
            print tf.get_variable_scope()
            state_h1 = tf.layers.dense(inputs=self.state_tensor, units=64,
                                       activation=tf.nn.relu, name="state_h1", reuse=True)
            state_h2 = tf.layers.dense(inputs=state_h1, units=32,
                                       activation=tf.nn.relu, name="state_h2", reuse=True)
            self.policy_tensor = tf.layers.dense(inputs=state_h2, units=self.action_size,
                                                 activation=tf.nn.softmax, name="policy")

        self.critic_graph = tf.Graph()
        with self.critic_graph.as_default():
            print tf.get_variable_scope()
            state_h1 = tf.layers.dense(inputs=self.state_tensor, units=64,
                                       activation=tf.nn.relu, name="state_h1", reuse=True)
            state_h2 = tf.layers.dense(inputs=state_h1, units=32,
                                       activation=tf.nn.relu, name="state_h2", reuse=True)
            action_h1 = tf.layers.dense(inputs=self.action_tensor, units=64,
                                        activation=tf.nn.relu, name="action_h1")
            action_h2 = tf.layers.dense(inputs=action_h1, units=32,
                                        activation=tf.nn.relu, name="action_h2")
            fc = tf.layers.dense(inputs=[state_h2, action_h2], units=32,
                                 activation=tf.nn.relu, name="fully_connected")
            self.value_tensor = tf.layers.dense(inputs=fc, units=1,
                                                activation=None, name="value")

Here is the error message:

    Traceback (most recent call last):
    <tensorflow.python.ops.variable_scope.VariableScope object at 0x1101c3790>
      File "/Users/niyan/code/routerRL/test.py", line 16, in <module>
        model = DPGModel(state_dim, action_dim)
      File "/Users/niyan/code/routerRL/DPGModel.py", line 10, in __init__
        self.create_network()
      File "/Users/niyan/code/routerRL/DPGModel.py", line 37, in create_network
        reuse=True)
      File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/layers/core.py", line 216, in dense
        return layer.apply(inputs)
      File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/layers/base.py", line 303, in apply
        return self.__call__(inputs, **kwargs)
      File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/layers/base.py", line 269, in __call__
        self.build(input_shapes[0])
      File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/layers/core.py", line 123, in build
        trainable=True)
      File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 988, in get_variable
        custom_getter=custom_getter)
      File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 890, in get_variable
        custom_getter=custom_getter)
      File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 341, in get_variable
        validate_shape=validate_shape)
      File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/layers/base.py", line 258, in variable_getter
        variable_getter=functools.partial(getter, **kwargs))
      File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/layers/base.py", line 208, in _add_variable
        trainable=trainable and self.trainable)
      File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 333, in _true_getter
        caching_device=caching_device, validate_shape=validate_shape)
      File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 657, in _get_single_variable
        "VarScope?" % name)
    ValueError: Variable state_h1/kernel does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=None in VarScope?
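If it helps, I think the error can be reproduced in isolation: calling tf.layers.dense with reuse=True in a graph where that layer's variables were never created triggers the same ValueError. A minimal sketch (the shape and names here are just placeholders I made up):

    import tensorflow as tf

    g = tf.Graph()
    with g.as_default():
        x = tf.placeholder(tf.float64, [None, 4], name="x")
        # reuse=True tells get_variable() to look up an existing "state_h1/kernel",
        # but no layer named "state_h1" was ever built in this graph,
        # so it raises: Variable state_h1/kernel does not exist ...
        h = tf.layers.dense(inputs=x, units=64, activation=tf.nn.relu,
                            name="state_h1", reuse=True)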

So it seems that you cannot reuse TensorFlow layers / variables from another graph. However, these really are two networks, and they are best treated as separate graphs. Any suggestions?
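For what it's worth, sharing within a single graph via variable scopes does seem to work (a minimal sketch; the scope name "shared" and the placeholder shapes are just examples):

    import tensorflow as tf

    graph = tf.Graph()
    with graph.as_default():
        state = tf.placeholder(tf.float64, [None, 4], name="state")
        other_state = tf.placeholder(tf.float64, [None, 4], name="other_state")

        # First call creates shared/state_h1/kernel and shared/state_h1/bias.
        with tf.variable_scope("shared"):
            actor_h1 = tf.layers.dense(inputs=state, units=64,
                                       activation=tf.nn.relu, name="state_h1")

        # Second call reuses those same variables instead of creating new ones.
        with tf.variable_scope("shared", reuse=True):
            critic_h1 = tf.layers.dense(inputs=other_state, units=64,
                                        activation=tf.nn.relu, name="state_h1")

But I don't see how to do the equivalent across two separate tf.Graph objects, which is what I want here.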
