TensorFlow Compute Caching

Is there a canonical way to reuse calculations from a previously supplied placeholder in TensorFlow? My specific use case:

  • feed multiple inputs (through one placeholder) at the same time, all of which are passed through the network to produce smaller representations
  • compute losses based on various combinations of these smaller representations
  • train one batch at a time, where each batch uses a subset of the inputs, without recomputing the smaller representations

Here is the goal expressed in code, but it is defective because the same computations are performed over and over:

X_in = some_fixed_data
combinations_in = large_set_of_combination_indices
for combination_batch_in in batches(combinations_in, batch_size=128):
    # The forward pass over X_in is recomputed on every run() call,
    # even though X_in never changes between batches.
    session.run(train_op, feed_dict={X: X_in, combinations: combination_batch_in})

Thanks.

2 answers

The canonical way to reuse a computed value across sess.run() calls is to store it in a tf.Variable. Run the sub-graph that produces the smaller representations once, assign the result to the variable with an assign op, and make the loss read from the variable instead of the placeholder; subsequent steps then skip the expensive forward pass. Keep in mind that gradients do not flow through the cached variable back into the ops that produced its value.
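A minimal sketch of this Variable-caching idea in TF 1.x-style graph mode (via tf.compat.v1). The shapes, the single-matmul stand-in for "the network", and the pair-based loss are illustrative assumptions, not from the question:

```python
import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

n_inputs, in_dim, embed_dim = 16, 8, 4

# Placeholder for all inputs at once, as in the question.
X = tf.placeholder(tf.float32, [n_inputs, in_dim], name="X")

# Stand-in for "the network": one projection to a smaller representation.
W = tf.Variable(tf.random_normal([in_dim, embed_dim], seed=0))
embeddings = tf.matmul(X, W)

# Non-trainable Variable that caches the representations between run() calls.
cache = tf.Variable(tf.zeros([n_inputs, embed_dim]), trainable=False)
update_cache = tf.assign(cache, embeddings)

# Per-batch loss over pairs of cached representations; reading `cache`
# does not re-run the forward pass through X.
combinations = tf.placeholder(tf.int32, [None, 2], name="combinations")
left = tf.gather(cache, combinations[:, 0])
right = tf.gather(cache, combinations[:, 1])
loss = tf.reduce_mean(tf.reduce_sum(tf.square(left - right), axis=1))

losses = []
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    X_in = np.random.RandomState(0).randn(n_inputs, in_dim).astype(np.float32)
    sess.run(update_cache, feed_dict={X: X_in})  # forward pass runs once
    for batch in (np.array([[0, 1], [2, 3]]), np.array([[4, 5]])):
        # X is NOT fed here; the loss reads the cached representations.
        losses.append(sess.run(loss, feed_dict={combinations: batch}))
print(losses)
```

Because no gradient flows through the assigned cache back into W, this only trains things downstream of the cache; if the network's own weights change, update_cache has to be re-run afterwards, so the caching pays off when many batches share one forward pass.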


What you are asking for is essentially common subexpression elimination (CSE). When constructing a graph, TensorFlow can perform CSE: the Graph option optimizer_do_cse defaults to false and can be set to true through GraphConstructorOptions. However, GraphConstructorOptions is only available in the C++ API; it is not exposed in Python.

That said, simply enabling CSE is unlikely to give you what you want here: it deduplicates identical subgraphs within a single graph, rather than caching results across separate run() calls.


Source: https://habr.com/ru/post/1620587/

