Combining the outputs of several models into one model

I'm currently looking for a way to combine the outputs of several models into one model; I need to create a CNN network that does the classification.

[Image: an input image divided into four colored sections, each section feeding one of models 1-4, whose outputs merge into a dense classification layer]

The image is divided into sections (shown by the colors), and each section is fed as the input to a separate model (1, 2, 3, 4). The structure of each model is the same, but each section is assigned to its own model so that the same weights are not applied across the entire image; the intent is to avoid full weight sharing and keep the weight sharing local to each section. Each model performs convolution and max pooling and produces an output that must be fed into a dense layer, which takes the outputs of the previous models (models 1, 2, 3, 4) and performs the classification.
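
For concreteness, here is a minimal sketch of how the four sections could be produced from a batch of images. This is only an illustration; the quadrant split and the function name are assumptions, not part of the original question:

import numpy as np

def split_into_quadrants(images):
    # images: (batch, height, width, channels), height and width assumed even
    h, w = images.shape[1] // 2, images.shape[2] // 2
    return (images[:, :h, :w, :],   # top-left section
            images[:, :h, w:, :],   # top-right section
            images[:, h:, :w, :],   # bottom-left section
            images[:, h:, w:, :])   # bottom-right section

# e.g. train1, train2, train3, train4 = split_into_quadrants(full_images)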

My question is: is it possible to create models 1, 2, 3 and 4, connect them to a fully connected layer, and train all the models together, given the input sections and the output class, without having to define the convolution and pooling layer outputs by hand in Keras?

Yes, this is possible with the Keras functional API. Something like the following should work:

import numpy as np
import keras
from keras.models import Model
from keras.layers import Dense, Flatten, Input, Conv2D, MaxPooling2D, concatenate

# Generate dummy data
train1 = np.random.random((100, 100, 100, 3))
train2 = np.random.random((100, 100, 100, 3))
train3 = np.random.random((100, 100, 100, 3))
train4 = np.random.random((100, 100, 100, 3))

y_train = keras.utils.to_categorical(np.random.randint(10, size=(100, 1)), num_classes=10)

# parallel inputs for the different sections of the image
inp1 = Input(shape=train1.shape[1:])
inp2 = Input(shape=train2.shape[1:])
inp3 = Input(shape=train3.shape[1:])
inp4 = Input(shape=train4.shape[1:])

# parallel conv and pool layers that process each section of the input independently
conv1 = Conv2D(64, (3, 3), activation='relu')(inp1)
conv2 = Conv2D(64, (3, 3), activation='relu')(inp2)
conv3 = Conv2D(64, (3, 3), activation='relu')(inp3)
conv4 = Conv2D(64, (3, 3), activation='relu')(inp4)

maxp1 = MaxPooling2D((3, 3))(conv1)
maxp2 = MaxPooling2D((3, 3))(conv2)
maxp3 = MaxPooling2D((3, 3))(conv3)
maxp4 = MaxPooling2D((3, 3))(conv4)

# more parallel conv/pool layers can be added here to reduce the feature map size
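# e.g. a second (hypothetical) stage for branch 1, and likewise for branches 2-4:
# conv1b = Conv2D(128, (3, 3), activation='relu')(maxp1)
# maxp1b = MaxPooling2D((2, 2))(conv1b)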

flt1 = Flatten()(maxp1)
flt2 = Flatten()(maxp2)
flt3 = Flatten()(maxp3)
flt4 = Flatten()(maxp4)

# concatenate the flattened outputs of the four parallel branches
mrg = concatenate([flt1, flt2, flt3, flt4])

# fully connected layer over the merged features of all four models
dense = Dense(256, activation='relu')(mrg)

op = Dense(10, activation='softmax')(dense)

model = Model(inputs=[inp1, inp2, inp3, inp4], outputs=op)
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit([train1, train2, train3, train4], y_train,
          epochs=10, batch_size=28)
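
If you are using the Keras that ships with TensorFlow 2.x rather than standalone Keras, the same model works with tf.keras imports; a sketch of the equivalent import block (everything else stays the same apart from the module paths):

from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense, concatenate
from tensorflow.keras.utils import to_categorical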

Source: https://habr.com/ru/post/1016229/

