Why is my CNN not learning?

I apologize for such a cliché question, but I really don't know why my CNN is not improving.

I am training a CNN on the SVHN dataset (single digits) with 32x32 images.

For preprocessing, I convert RGB to grayscale and normalize all pixel data by standardization, so the data range becomes (-1, 1). To make sure my X and y match each other correctly, I randomly select an image from X and the label from y at the same index, and they do correspond.
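Roughly, the preprocessing looks like this (a minimal sketch; the exact grayscale weights and whole-set standardization here are just one way to do it, the original code isn't shown):

```python
import numpy as np

def preprocess(images):
    """Convert RGB SVHN images (N, 32, 32, 3) to standardized grayscale
    with shape (N, 32, 32, 1)."""
    # Luminance-weighted grayscale conversion (assumed weights)
    gray = np.dot(images[..., :3], [0.299, 0.587, 0.114])
    # Standardize over the whole set: zero mean, unit variance
    gray = (gray - gray.mean()) / gray.std()
    # Add the trailing channel axis Keras expects for input_shape=(32, 32, 1)
    return gray[..., np.newaxis]
```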

Here is my code (Keras, TensorFlow backend):

"""
    Single Digit Recognition
"""

import tensorflow as tf
from keras.models import Sequential
from keras.layers import Dense, Activation, Convolution2D
from keras.layers.pooling import MaxPooling2D
from keras.optimizers import SGD
from keras.layers.core import Dropout, Flatten
model = Sequential()

model.add(Convolution2D(16, 5, 5, border_mode='same', input_shape=(32, 32, 1)))
model.add(Activation("relu"))
model.add(Dropout(0.5))
model.add(MaxPooling2D(pool_size=(2, 2), strides=None, border_mode='same', dim_ordering='default'))
model.add(Convolution2D(32, 5, 5, border_mode='same', input_shape=(16, 16, 16)))
model.add(Activation("relu"))
model.add(Dropout(0.5))
model.add(MaxPooling2D(pool_size=(2, 2), strides=None, border_mode='same', dim_ordering='default'))
model.add(Convolution2D(64, 5, 5, border_mode='same', input_shape=(32, 8, 8)))
model.add(Activation("relu"))
model.add(Dropout(0.5))
model.add(MaxPooling2D(pool_size=(2, 2), strides=None, border_mode='same', dim_ordering='default'))
model.add(Flatten())
model.add(Dense(128, input_dim=1024))
model.add(Activation("relu"))
model.add(Dense(10, input_dim=128))
model.add(Activation('softmax'))

sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy',
              optimizer=sgd,
              metrics=['accuracy'])
model.fit(train_X, train_y,
          validation_split=0.1,
          nb_epoch=20,
          batch_size=64)
score = model.evaluate(test_X, test_y, batch_size=16)

After 10 epochs, the accuracy was still the same as in the first epoch, which is why I stopped training.

Train on 65931 samples, validate on 7326 samples
Epoch 1/20
65931/65931 [==============================] - 190s - loss: 2.2390 - acc: 0.1882 - val_loss: 2.2447 - val_acc: 0.1885
Epoch 2/20
65931/65931 [==============================] - 194s - loss: 2.2395 - acc: 0.1893 - val_loss: 2.2399 - val_acc: 0.1885
Epoch 3/20
65931/65931 [==============================] - 167s - loss: 2.2393 - acc: 0.1893 - val_loss: 2.2402 - val_acc: 0.1885
Epoch 4/20
65931/65931 [==============================] - 172s - loss: 2.2394 - acc: 0.1883 - val_loss: 2.2443 - val_acc: 0.1885
Epoch 5/20
65931/65931 [==============================] - 172s - loss: 2.2393 - acc: 0.1884 - val_loss: 2.2443 - val_acc: 0.1885
Epoch 6/20
65931/65931 [==============================] - 179s - loss: 2.2397 - acc: 0.1881 - val_loss: 2.2433 - val_acc: 0.1885
Epoch 7/20
65931/65931 [==============================] - 173s - loss: 2.2399 - acc: 0.1888 - val_loss: 2.2410 - val_acc: 0.1885
Epoch 8/20
65931/65931 [==============================] - 175s - loss: 2.2392 - acc: 0.1893 - val_loss: 2.2439 - val_acc: 0.1885
Epoch 9/20
65931/65931 [==============================] - 175s - loss: 2.2395 - acc: 0.1893 - val_loss: 2.2401 - val_acc: 0.1885
Epoch 10/20
 9536/65931 [===>..........................] - ETA: 162s - loss: 2.2372 - acc: 0.1909 

Should I just keep waiting patiently, or is there something wrong with my CNN?


Source: https://habr.com/ru/post/1666786/
