Evaluating high-resolution images from low-resolution ones using a Keras model based on ConvLSTM2D

I am trying to use the following ConvLSTM2D architecture to estimate sequences of high-resolution images from low-resolution ones:

import numpy as np, scipy.ndimage, matplotlib.pyplot as plt
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Convolution2D, ConvLSTM2D, MaxPooling2D, UpSampling2D
from sklearn.metrics import accuracy_score, confusion_matrix, cohen_kappa_score
from sklearn.preprocessing import MinMaxScaler, StandardScaler
np.random.seed(123)

raw = np.arange(96).reshape(8,3,4)
data1 = scipy.ndimage.zoom(raw, zoom=(1,100,100), order=1, mode='nearest') #low res
print (data1.shape)
#(8, 300, 400)

data2 = scipy.ndimage.zoom(raw, zoom=(1,100,100), order=3, mode='nearest') #high res
print (data2.shape)
#(8, 300, 400)

X_train = data1.reshape(data1.shape[0], 1, data1.shape[1], data1.shape[2], 1)
Y_train = data2.reshape(data2.shape[0], 1, data2.shape[1], data2.shape[2], 1)
#(samples,time, rows, cols, channels)

model = Sequential()
input_shape = (data1.shape[0], data1.shape[1], data1.shape[2], 1)
#samples, time, rows, cols, channels
model.add(ConvLSTM2D(16, kernel_size=(3,3), activation='sigmoid',padding='same',input_shape=input_shape))     
model.add(ConvLSTM2D(8, kernel_size=(3,3), activation='sigmoid',padding='same'))

print (model.summary())

model.compile(loss='mean_squared_error',
              optimizer='adam',
              metrics=['accuracy'])

model.fit(X_train, Y_train, 
          batch_size=1, epochs=10, verbose=1)

x,y = model.evaluate(X_train, Y_train, verbose=0)
print (x,y)

This code results in the following ValueError:

ValueError: Input 0 is incompatible with layer conv_lst_m2d_2: expected ndim=5, found ndim=4

How can I fix this ValueError? I think the problem is with the input shapes, but I cannot understand what exactly is wrong. Note that the output should also be a sequence of images, not a classification result.

1 answer

LSTMs in Keras are many-to-one by default, i.e. they return only the output of the last time step, so the first ConvLSTM2D produces a tensor of shape (batch_size, 300, 400, 16) with no time dimension. That is why this stack fails:

model.add(ConvLSTM2D(16, kernel_size=(3,3), activation='sigmoid',padding='same',input_shape=input_shape))     
model.add(ConvLSTM2D(8, kernel_size=(3,3), activation='sigmoid',padding='same'))

The second ConvLSTM2D, however, expects a 5D input of shape (batch_size, 8, 300, 400, 16) (a sequence of feature maps), but receives the 4D output of the first LSTM. To make the first LSTM return the full sequence, set return_sequences=True on it:

model.add(ConvLSTM2D(16, kernel_size=(3,3), activation='sigmoid',padding='same',input_shape=input_shape,
                     return_sequences=True))
model.add(ConvLSTM2D(8, kernel_size=(3,3), activation='sigmoid',padding='same'))

How to continue from here depends on what you want. If, for example, you need a single classification for the whole sequence (many-to-one), you could append something like:

model.add(ConvLSTM2D(16, kernel_size=(3,3), activation='sigmoid',padding='same',input_shape=input_shape,
                     return_sequences=True))
model.add(ConvLSTM2D(8, kernel_size=(3,3), activation='sigmoid',padding='same'))
model.add(GlobalAveragePooling2D())
model.add(Dense(10, activation='softmax'))  # output shape: (None, 10)

If instead you want one classification per time step (many-to-many), you can wrap a classifier in TimeDistributed:

x = Input(shape=(300, 400, 8))
y = GlobalAveragePooling2D()(x)
y = Dense(10, activation='softmax')(y)
classifier = Model(inputs=x, outputs=y)

x = Input(shape=(data1.shape[0], data1.shape[1], data1.shape[2], 1))
y = ConvLSTM2D(16, kernel_size=(3, 3),
               activation='sigmoid',
               padding='same',
               return_sequences=True)(x)
y = ConvLSTM2D(8, kernel_size=(3, 3),
               activation='sigmoid',
               padding='same',
               return_sequences=True)(y)
y = TimeDistributed(classifier)(y)  # output shape: (None, 8, 10)

model = Model(inputs=x, outputs=y)

For more details, see the Keras documentation of ConvLSTM2D.
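To make the shape difference concrete, here is a minimal, self-contained sketch (assuming the same (time, rows, cols, channels) = (8, 300, 400, 1) input as above) that prints the output shape with and without return_sequences:

from keras.models import Sequential
from keras.layers import ConvLSTM2D

# Default: return_sequences=False -> only the last time step is returned (4D output).
many_to_one = Sequential()
many_to_one.add(ConvLSTM2D(16, kernel_size=(3, 3), padding='same',
                           input_shape=(8, 300, 400, 1)))
print(many_to_one.output_shape)   # (None, 300, 400, 16)

# return_sequences=True -> the whole sequence of 8 time steps is kept (5D output).
many_to_many = Sequential()
many_to_many.add(ConvLSTM2D(16, kernel_size=(3, 3), padding='same',
                            input_shape=(8, 300, 400, 1),
                            return_sequences=True))
print(many_to_many.output_shape)  # (None, 8, 300, 400, 16)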


Edit: regarding your second comment about data1...

In that case, X_train should be 1 sample consisting of 8 time steps of shape (300, 400, 1), not 8 samples consisting of 1 time step of shape (300, 400, 1) each.
That is, instead of:

X_train = data1.reshape(data1.shape[0], 1, data1.shape[1], data1.shape[2], 1)
Y_train = data2.reshape(data2.shape[0], 1, data2.shape[1], data2.shape[2], 1)

use:

X_train = data1.reshape(1, data1.shape[0], data1.shape[1], data1.shape[2], 1)
Y_train = data2.reshape(1, data2.shape[0], data2.shape[1], data2.shape[2], 1)

Also note that accuracy makes no sense here, because you are optimizing mse (this is a regression problem, not a classification). If you want an additional metric, use something like mae instead.
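For example, the compile call could then look like this (a minimal sketch of that metric swap):

# mse as the regression loss, mae as an optional monitoring metric instead of accuracy
model.compile(loss='mean_squared_error',
              optimizer='adam',
              metrics=['mae'])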

With those changes, your complete model could look like this (without the classification part, just predicting the image sequence):

model = Sequential()
input_shape = (data1.shape[0], data1.shape[1], data1.shape[2], 1)
model.add(ConvLSTM2D(16, kernel_size=(3, 3), activation='sigmoid', padding='same',
                     input_shape=input_shape,
                     return_sequences=True))
model.add(ConvLSTM2D(1, kernel_size=(3, 3), activation='sigmoid', padding='same',
                     return_sequences=True))

model.compile(loss='mse', optimizer='adam')

model.fit(X_train, Y_train, ...) then produces:

Using TensorFlow backend.
(8, 300, 400)
(8, 300, 400)
Epoch 1/10

1/1 [==============================] - 5s 5s/step - loss: 2993.8701
Epoch 2/10

1/1 [==============================] - 5s 5s/step - loss: 2992.4492
Epoch 3/10

1/1 [==============================] - 5s 5s/step - loss: 2991.4536
Epoch 4/10

1/1 [==============================] - 5s 5s/step - loss: 2989.8523
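Once trained, the predicted high-resolution sequence can be obtained with model.predict; a minimal sketch, assuming the variables from the code above (X_train, data2, plt):

# Predict the full sequence; output shape is (1, 8, 300, 400, 1).
pred = model.predict(X_train)
frames = pred.squeeze()            # -> (8, 300, 400)
print(frames.shape)

# Compare the first predicted frame with the first high-resolution target frame.
fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.imshow(frames[0], cmap='gray')
ax1.set_title('predicted')
ax2.imshow(data2[0], cmap='gray')
ax2.set_title('target (high res)')
plt.show()

Note that with a sigmoid output and unscaled targets the reconstruction will be poor (hence the large loss values above); scaling the data to [0, 1], e.g. with the MinMaxScaler already imported in the question, helps.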