This can be done with a Timeline, which gives you a complete picture of memory allocation over a run. Similar to the code below:
import tensorflow as tf
from keras import backend as K

with K.get_session() as s:
    # Request a full trace so per-op timing and memory information is recorded
    run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
    run_metadata = tf.RunMetadata()
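To show how those two objects actually produce a timeline, here is a minimal, self-contained sketch using the TF 1.x API; the small matmul is only a hypothetical stand-in for whatever you actually run in your session:

import tensorflow as tf
from tensorflow.python.client import timeline

run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
run_metadata = tf.RunMetadata()

# Stand-in graph; replace with the training/inference step you want to profile
x = tf.random_normal([256, 256])
y = tf.matmul(x, x)

with tf.Session() as sess:
    sess.run(y, options=run_options, run_metadata=run_metadata)

# Convert the collected step stats into a Chrome trace and write it to disk;
# open timeline.json in chrome://tracing to inspect per-op time and memory.
tl = timeline.Timeline(run_metadata.step_stats)
with open('timeline.json', 'w') as f:
    f.write(tl.generate_chrome_trace_format())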
If you want to limit the GPU memory usage, this can also be done through gpu_options, as in the following code:
import tensorflow as tf
from keras.backend.tensorflow_backend import set_session

config = tf.ConfigProto()
# Let this process use at most 20% of the GPU's memory
config.gpu_options.per_process_gpu_memory_fraction = 0.2
set_session(tf.Session(config=config))
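If a fixed fraction is too rigid, a common alternative (still the TF 1.x ConfigProto API, sketched here rather than taken from the original answer) is to let TensorFlow grow its allocation on demand:

import tensorflow as tf
from keras.backend.tensorflow_backend import set_session

config = tf.ConfigProto()
# Allocate GPU memory as needed instead of reserving a fixed fraction up front
config.gpu_options.allow_growth = True
set_session(tf.Session(config=config))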
Check the TensorFlow documentation for the Timeline object for more details.
Since you are using TensorFlow as the backend, you can also use the tfprof profiling tool.
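As a rough, hedged sketch of what that can look like with the TF 1.x tf.profiler API (again using a small matmul as a stand-in for your real graph, and options you may want to adapt):

import tensorflow as tf

run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
run_metadata = tf.RunMetadata()

x = tf.random_normal([256, 256])
y = tf.matmul(x, x)

with tf.Session() as sess:
    sess.run(y, options=run_options, run_metadata=run_metadata)
    # Print per-op time and memory for the traced step
    tf.profiler.profile(
        sess.graph,
        run_meta=run_metadata,
        cmd='op',
        options=tf.profiler.ProfileOptionBuilder.time_and_memory())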