How do I prevent loss: nan while training my Keras model?

Here is my code:

from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import EarlyStopping

model = Sequential()
model.add(Dense(50, input_dim=33, init='uniform', activation='relu'))
for u in range(3):  # how can I efficiently add more layers?
    model.add(Dense(33, init='uniform', activation='relu'))
model.add(Dense(122, init='uniform', activation='sigmoid'))

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

# This line was added in an edit to the question and may be the culprit
model.fit(X_train, Y_train, nb_epoch=35, batch_size=20, validation_split=0.2,
          callbacks=[EarlyStopping(monitor='val_loss', patience=10)])

It trained fine for the first epoch and the accuracy improved, but then the loss became nan and the accuracy dropped. When I later called model.predict, it raised an error as well.

Does anyone know a fix?

1 answer

If you use categorical_crossentropy as the loss function, the last layer of the model should be a softmax.

Here you are using sigmoid, which can drive every output value close to 0 independently; the log term in the cross-entropy then overflows, and the loss becomes nan.
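For example, here is a minimal sketch of the fix, reusing the layer sizes and Keras 1.x API from the question (X_train and Y_train, with Y_train one-hot encoded over the 122 classes, are assumed to exist as in the question):

model = Sequential()
model.add(Dense(50, input_dim=33, init='uniform', activation='relu'))
for u in range(3):
    model.add(Dense(33, init='uniform', activation='relu'))
# softmax normalizes the 122 outputs into a probability distribution
# summing to 1, which is what categorical_crossentropy expects
model.add(Dense(122, init='uniform', activation='softmax'))

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

Only the activation of the final layer changes; the rest of the model and the fit call can stay as they are.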


Source: https://habr.com/ru/post/1670883/
