Keras: actual GPU memory usage

I use Keras with the TensorFlow backend, and looking at nvidia-smi is not enough to understand how much memory the current network architecture really requires, because TensorFlow simply allocates all of the available GPU memory up front.

So the question is: how do I find out the real GPU memory usage?

+7
2 answers

This can be done with a Timeline, which gives you a complete picture of memory usage over a session run. Something like the code below:

    import tensorflow as tf
    from tensorflow.python.client import timeline
    from keras import backend as K

    run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
    run_metadata = tf.RunMetadata()

    s = K.get_session()
    # ... your fitting code here, executed through `s` with
    # options=run_options and run_metadata=run_metadata ...

    tl = timeline.Timeline(run_metadata.step_stats)
    trace = tl.generate_chrome_trace_format()
    with open('full_trace.json', 'w') as out:
        out.write(trace)
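The comment in the middle stands for your actual training step. As a minimal sketch (not part of the original answer), a single step run directly through the session could pass the profiling options like this, where `train_op`, `loss` and `feed` are hypothetical placeholders for your own graph and data:

    # Hypothetical names: train_op, loss and feed come from your own model.
    _, loss_value = s.run(
        [train_op, loss],
        feed_dict=feed,
        options=run_options,        # collect the FULL_TRACE data
        run_metadata=run_metadata)  # step statistics are written here

The resulting full_trace.json can then be opened in Chrome via chrome://tracing to inspect per-op memory and timing.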

If you want to limit how much GPU memory TensorFlow is allowed to use, this can also be done through gpu_options, as in the following code:

    import tensorflow as tf
    from keras.backend.tensorflow_backend import set_session

    config = tf.ConfigProto()
    config.gpu_options.per_process_gpu_memory_fraction = 0.2
    set_session(tf.Session(config=config))
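Note that if the model needs more memory than the configured fraction allows, the run will typically fail with an out-of-memory error, so gradually lowering the fraction is also a crude way to probe how much memory an architecture actually requires.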

See the TensorFlow documentation for the Timeline object for more details.

Since you are using TensorFlow as the backend, you can also use the tfprof profiling tool.
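The answer stops at naming tfprof. As a minimal sketch of how it could be invoked (assuming TF 1.x, where tfprof is exposed through tf.profiler, and reusing the run_metadata collected with FULL_TRACE above; option names may vary between versions):

    import tensorflow as tf

    # Sketch: summarize per-op time and memory from the collected run_metadata.
    tf.profiler.profile(
        tf.get_default_graph(),
        run_meta=run_metadata,
        cmd='op',
        options=tf.profiler.ProfileOptionBuilder.time_and_memory())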

+12

You can still rely on nvidia-smi if you tell TensorFlow not to reserve all of the GPU memory up front, but to grow its allocation on demand instead:

    import tensorflow as tf
    import keras

    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True
    keras.backend.tensorflow_backend.set_session(tf.Session(config=config))
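With allow_growth enabled, nvidia-smi reflects what the model actually needs rather than the whole card. If you would rather query the allocator from inside TensorFlow, here is a minimal sketch under the assumption that tf.contrib.memory_stats is available (TF 1.x only; not part of the original answer):

    import tensorflow as tf
    from keras import backend as K

    # Sketch: peak number of bytes the TensorFlow allocator has used so far.
    max_bytes = K.get_session().run(tf.contrib.memory_stats.MaxBytesInUse())
    print('Peak GPU memory usage: %.1f MiB' % (max_bytes / 1024.0 / 1024.0))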
0

Source: https://habr.com/ru/post/1014776/

