TensorFlow performs a limited amount of caching, but it probably does not apply to the case you are describing.
If you create your tf.Session with the following options, constant folding will be enabled:

    config = tf.ConfigProto(graph_options=tf.GraphOptions(
        optimizer_options=tf.OptimizerOptions(opt_level=tf.OptimizerOptions.L2)))
    sess = tf.Session(config=config)
When you call sess.run() with this configuration, TensorFlow will determine which nodes it needs to run, identify the subgraph of those nodes whose outputs are constant, evaluate that subgraph once, and cache the results. It thereby avoids re-performing redundant computations.
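As a minimal sketch of what this enables (using the TF 1.x API from the snippet above; the tensor names here are purely illustrative), a graph containing a subexpression built only from constants will have that subexpression folded and cached rather than recomputed on every run:

    import tensorflow as tf

    config = tf.ConfigProto(graph_options=tf.GraphOptions(
        optimizer_options=tf.OptimizerOptions(opt_level=tf.OptimizerOptions.L2)))

    a = tf.constant(2.0)
    b = tf.constant(3.0)
    c = a * b                        # constant subgraph: folded to 6.0 once
    x = tf.placeholder(tf.float32)
    y = c + x                        # only this addition depends on the feed

    with tf.Session(config=config) as sess:
        print(sess.run(y, feed_dict={x: 1.0}))   # 7.0; a * b is not recomputed
        print(sess.run(y, feed_dict={x: 4.0}))   # 10.0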
However, in your question you note that F is a function of some trained variables. From TensorFlow's point of view, these variables are volatile: they can change at any time, so it does not cache values computed from them. If you want to reuse the same value of F several times, you could consider storing it in a tf.constant() so that the constant-folding optimization is more useful.
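Here is a sketch of that idea, assuming F is a tensor in your existing graph and sess is the session in which its variables were trained (both names are taken from your question; the final line is a hypothetical downstream use):

    # After training, evaluate F once with the current variable values...
    f_value = sess.run(F)

    # ...then freeze the result so later graph construction sees a constant.
    f_const = tf.constant(f_value)

    # Anything built from f_const is now part of a constant subgraph and is
    # eligible for the constant folding described above.
    y = f_const * 2.0

Note that the frozen value will not track subsequent updates to the variables; you would have to re-run sess.run(F) and rebuild the constant to refresh it.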