I am trying to train a convolutional neural network with Keras 1.2.2 on top of Theano 0.8.2 (Python 2.7). I can import both keras and theano without getting an error.
The crash occurs only 2-5 minutes after the following code starts running.
# Prepare images etc.
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Convolution2D, MaxPooling2D
from keras.layers.normalization import BatchNormalization

model = Sequential()
model.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1],
                        border_mode='valid',
                        input_shape=input_shape))
model.add(Activation('relu'))
model.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1]))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=pool_size))
model.add(Dropout(0.25))
model.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1]))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=pool_size))
model.add(Dropout(0.25))
model.add(Convolution2D(nb_filters * 2, kernel_size[0], kernel_size[1]))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=pool_size))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128))
model.add(Dense(64))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy',
              optimizer='adadelta',
              metrics=['accuracy'])
model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch,
          verbose=1, validation_data=(X_test, Y_test))
score = model.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score[0])
print('Test accuracy:', score[1])
Which creates the following output:
X_train shape: (984, 1, 1000, 1000)
984 train samples
246 test samples
Train on 984 samples, validate on 246 samples
Epoch 1/4
[1]+ Segmentation fault (core dumped)
So it looks like the model compiled and training began. I previously managed to train the model with fewer, smaller images (shape: (400, 1, 500, 500)). Can changing the number of training images and their size lead to an error like this? I also tried updating Keras to 2.0 and Theano to the current version, but that did not help.
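For what it's worth, here is a back-of-the-envelope sketch of how the flattened feature map (and hence the weight matrix feeding Dense(128)) scales with input size. The values nb_filters=32, kernel_size=(3, 3), pool_size=(2, 2) are placeholders, since the actual hyperparameters are not shown above:

```python
def flatten_size(side, nb_filters=32, k=3, p=2):
    """Spatial side length after the conv/pool stack above, then flattened units.

    Mirrors the layer order in the model: four 'valid' Convolution2D layers
    (each shrinks the side by k - 1) interleaved with three MaxPooling2D
    layers (each halves the side); the last conv has nb_filters * 2 filters.
    """
    side = side - (k - 1)   # Convolution2D, border_mode='valid'
    side = side - (k - 1)   # second Convolution2D
    side = side // p        # MaxPooling2D
    side = side - (k - 1)   # third Convolution2D
    side = side // p        # MaxPooling2D
    side = side - (k - 1)   # fourth Convolution2D (nb_filters * 2)
    side = side // p        # MaxPooling2D
    return 2 * nb_filters * side * side

for s in (500, 1000):
    units = flatten_size(s)
    print(s, units, units * 128)  # input side, Flatten units, Dense(128) weights
```

Under these assumed hyperparameters, going from 500x500 to 1000x1000 input grows the Flatten output from roughly 230k to roughly 968k units, so the Dense(128) weight matrix alone jumps to over 120 million parameters (about 0.5 GB in float32, before the optimizer's accumulators), which made me suspect memory.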
Any suggestions?