Since tuning gpu_options.allow_growth and gpu_options.per_process_gpu_memory_fraction to estimate model size is currently a tedious trial-and-error process, I suggest using tf.RunMetadata() in combination with TensorBoard.
Example:
run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
run_metadata = tf.RunMetadata()
sess.run(train_step, feed_dict=feed_dict,
         options=run_options, run_metadata=run_metadata)
train_writer.add_run_metadata(run_metadata, 'step%d' % i)
Run your model, open TensorBoard, navigate to the node you are interested in under the Graph tab, and read its memory and compute statistics there.
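To make the snippet above self-contained, here is a minimal sketch. It assumes the TensorFlow 1.x-style session API (available as tf.compat.v1 in TF 2.x); the toy model (x, w, loss, train_step) and the log directory /tmp/logdir are illustrative placeholders, not part of the original answer.

```python
# Sketch of collecting per-step run metadata for TensorBoard.
# Assumes TF 1.x-style API; the model below is a toy placeholder.
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

x = tf.placeholder(tf.float32, [None, 4], name="x")
w = tf.Variable(tf.zeros([4, 1]), name="w")
loss = tf.reduce_mean(tf.square(tf.matmul(x, w) - 1.0))
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
run_metadata = tf.RunMetadata()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    writer = tf.summary.FileWriter("/tmp/logdir", sess.graph)
    for i in range(3):
        sess.run(train_step,
                 feed_dict={x: [[1.0, 2.0, 3.0, 4.0]]},
                 options=run_options,
                 run_metadata=run_metadata)
        # Tag the metadata per step so each step is selectable
        # in TensorBoard's Graph tab (Session runs dropdown).
        writer.add_run_metadata(run_metadata, 'step%d' % i)
    writer.close()
```

After running this, `tensorboard --logdir /tmp/logdir` shows memory and compute time per node when you pick a tagged step in the Graph view.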
Source: https://www.tensorflow.org/get_started/graph_viz