How to Continue Training a Model with Keras ModelCheckpoint

I am a new Keras user. I have a question about Keras training.

Due to the time limit on my server (each job can run for at most 24 hours), I have to train my model in several 10-epoch periods.

In the 1st training period, after 10 epochs, the weights of the best model are saved with Keras's ModelCheckpoint.

from keras.callbacks import ModelCheckpoint

conf = dict()
conf['nb_epoch'] = 10
callbacks = [
    ModelCheckpoint(filepath='/1st_{epoch:d}_{val_loss:.5f}.hdf5',
                    monitor='val_loss', save_best_only=True,
                    save_weights_only=False, verbose=0)
]
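For reference, the placeholders in `filepath` are filled with plain `str.format` at the end of each epoch, using the epoch index and the logged metrics. A minimal illustration (the template string is from the snippet above; the epoch/val_loss values are made up):

```python
# ModelCheckpoint substitutes {epoch} and any logged metric
# (such as {val_loss}) into the filepath via str.format.
filepath = '1st_{epoch:d}_{val_loss:.5f}.hdf5'
name = filepath.format(epoch=10, val_loss=1.0)
print(name)  # → '1st_10_1.00000.hdf5'
```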

Suppose I end up with the best model '1st_10_1.00000.hdf5'. I then continue training for another 10 epochs and save the weights of the best model as follows.

model.load_weights('1st_10_1.00000.hdf5')
model.compile(...)
callbacks = [
    ModelCheckpoint(filepath='/2nd_{epoch:d}_{val_loss:.5f}.hdf5',
                    monitor='val_loss', save_best_only=True,
                    save_weights_only=False, verbose=0)
]

Suppose that after the 1st epoch of the second period the val_loss is 1.20000, so the script saves '2nd_1_1.20000.hdf5'. But this val_loss is worse than the best val_loss of the first period (1.00000). In other words, the saved model '2nd_1_1.20000.hdf5' is worse than '1st_10_1.00000.hdf5'. The second period keeps producing checkpoints like:

'2nd_1_1.20000.hdf5'
'2nd_2_1.15000.hdf5'
'2nd_3_1.10000.hdf5'
'2nd_4_1.05000.hdf5'
...

even though none of these models is actually better than the best model from the first period. Is there a way to make ModelCheckpoint remember the best val_loss from the previous training period? Thanks!


Unfortunately, as far as I know, the Keras API does not support this out of the box.

But there is a workaround. Look at how the ModelCheckpoint callback is implemented.

See the source: https://github.com/fchollet/keras/blob/master/keras/callbacks.py#L390

In `__init__`, the attribute `best` is initialized to `-inf`/`inf` depending on the mode (the `if`/`elif`/`else` chain that also sets `monitor_op`).

It is never restored from anywhere when training resumes.

So after constructing the callback you can simply overwrite its `best` attribute with the best monitored value from the previous period.

Then `save_best_only` will compare each epoch's val_loss against that value instead of against `inf`.
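In real Keras code the fix amounts to setting `checkpoint.best = 1.00000` after creating the callback and before calling `model.fit` (note that `best` is an internal attribute, not a documented argument). A minimal sketch of the effect, using the same simplified hypothetical stand-in for the callback rather than Keras itself:

```python
import math

class BestOnlySaver:
    """Same simplified save_best_only sketch, now seeded with the best
    val_loss carried over from the previous training period."""
    def __init__(self, previous_best=math.inf):
        self.best = previous_best  # e.g. checkpoint.best = 1.00000 in Keras
        self.saved = []

    def on_epoch_end(self, epoch, val_loss):
        if val_loss < self.best:
            self.best = val_loss
            self.saved.append('2nd_{:d}_{:.5f}.hdf5'.format(epoch, val_loss))

# Seed with 1.00000, the best val_loss from the first 10-epoch period.
saver = BestOnlySaver(previous_best=1.00000)
for epoch, loss in enumerate([1.20, 1.15, 1.10, 0.95], start=1):
    saver.on_epoch_end(epoch, loss)
print(saver.saved)
# Only the epoch that genuinely beats 1.00000 produces a checkpoint.
```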


Thanks, this works. It is a pity the callback does not accept the previous best value (e.g. the best val_loss) as an argument. Perhaps this is worth a feature request for Keras?


Source: https://habr.com/ru/post/1675360/
