GPU memory usage for cloned workers in Keras

Can Keras be made to use the memory of each individual GPU as fully as possible?

For example, with two GPUs of 10 GB each and a model that needs 1 GB of memory, I would expect up to a 20x speedup in training: split the training set and run 10 clones of the model on each GPU (2 GPUs x 10 clones = 20 replicas training in parallel).

Instead, the only option I found is multi_gpu_model (https://keras.io/utils/#multi_gpu_model), which replicates the model across multiple GPUs. This only improves throughput by a factor of 2, since it places a single replica on each GPU.
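For context, this is the documented multi_gpu_model pattern I am referring to (the tiny Dense model is just illustrative):

```python
import tensorflow as tf
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import multi_gpu_model

# Build the template model on the CPU so its weights live in host memory.
with tf.device('/cpu:0'):
    model = Sequential([Dense(10, activation='softmax', input_shape=(100,))])

# One replica per GPU: each GPU processes a slice of every batch,
# and the outputs are merged on the CPU.
parallel_model = multi_gpu_model(model, gpus=2)
parallel_model.compile(optimizer='sgd', loss='categorical_crossentropy')
```

As the docs describe, this is plain data parallelism with exactly one copy of the model per device, so it cannot exploit the remaining ~9 GB on each GPU.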

Using fit_generator parameters such as use_multiprocessing=True does not help here either.
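This is roughly what I tried, continuing the example above (batch_generator is a placeholder for my own generator yielding (x, y) batches); as far as I understand, these flags only parallelize CPU-side batch preparation, not the GPU computation:

```python
# batch_generator is a placeholder for a generator yielding (x, y) batches.
parallel_model.fit_generator(
    batch_generator,
    steps_per_epoch=1000,
    epochs=10,
    workers=4,                  # CPU worker processes preparing batches
    use_multiprocessing=True,   # speeds up data loading, not GPU compute
)
```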

Is this kind of per-GPU replication, and the corresponding speedup, possible?

Thanks.

