Parallel fit of several Keras models on one GPU

I am trying to fit several small Keras models in parallel on the same GPU. For reasons I need to get them out of a list and train them one step at a time. Since I had no luck with the standard multiprocessing module, I use pathos.

What I tried to do looks something like this:

from pathos.multiprocessing import ProcessPool as Pool
import tensorflow as tf
import keras.backend as K

def multiprocess_step(model):
    # worker function executed in each pathos process
    K.set_session(sess)
    with sess.graph.as_default():
        model = step(model, sess)
        return model

def step(model, sess):
    K.set_session(sess)
    with sess.graph.as_default():
        # data, batch_size, verbose and self.step_num come from the
        # surrounding class; omitted here for brevity
        model.fit(x=data['X_train'], y=data['y_train'],
                  batch_size=batch_size,
                  validation_data=(data['X_test'], data['y_test']),
                  verbose=verbose,
                  shuffle=True,
                  initial_epoch=self.step_num - 1)
        return model

config = tf.ConfigProto()
config.gpu_options.allow_growth = True
config.gpu_options.visible_device_list = "0"
sess = tf.Session(config=config)

K.set_session(sess)
with sess.graph.as_default():
    pool = Pool(8).map
    model_list = pool(multiprocess_step, model_list)

but whatever I try, I keep getting an error telling me that the models do not seem to be on the same graph:

ValueError: Tensor("training/RMSprop/Variable:0", shape=(25, 352), dtype=float32_ref) must be from the same graph as Tensor("RMSprop/rho/read:0", shape=(), dtype=float32).

The exception is raised in the model.fit() line, so I must have done something wrong with assigning the graph to the session, even though I tried to set it in every place I could think of.
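If I read the message correctly, the optimizer tensors created when the model was compiled and the training ops that fit() builds inside the worker process end up in two different graphs. The following contrived sketch (not my actual code, just my reading of the mechanism, with a made-up model and shapes) should fail with the same kind of ValueError, if my understanding is right:

import numpy as np
import tensorflow as tf
import keras.backend as K
from keras.models import Sequential
from keras.layers import Dense

# build and compile the model while graph_a is the default graph;
# the RMSprop tensors (rho, lr, ...) are created in graph_a here
graph_a = tf.Graph()
with graph_a.as_default():
    model = Sequential([Dense(1, input_dim=10)])
    model.compile(optimizer='rmsprop', loss='mse')

# later, fit() runs while a different graph/session is current, so the
# training ops it creates cannot be connected to the tensors in graph_a
graph_b = tf.Graph()
sess_b = tf.Session(graph=graph_b)
with graph_b.as_default():
    K.set_session(sess_b)
    # ValueError: Tensor(...) must be from the same graph as Tensor(...)
    model.fit(np.zeros((8, 10)), np.zeros((8, 1)), verbose=0)

So it looks like the child processes do not actually end up on the graph of the session I create in the parent, despite the set_session / as_default calls.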

Does anyone have experience with something like that?
