Just to add to @mtrw's answer, according to the documentation, training stops when any of these conditions occurs:
- The maximum number of epochs (repetitions) has been reached: `net.trainParam.epochs`
- The maximum amount of time has been exceeded: `net.trainParam.time`
- Performance has been minimized to the goal: `net.trainParam.goal`
- The performance gradient falls below min_grad: `net.trainParam.min_grad`
- mu exceeds mu_max: `net.trainParam.mu_max`
- Validation performance has increased more than max_fail times since the last time it decreased (when using validation): `net.trainParam.max_fail`
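Taken together, these criteria amount to a set of checks inside the training loop. Here is a minimal sketch in plain Python on a toy problem: the dictionary keys merely mirror the `net.trainParam` names, nothing here calls the toolbox, and the mu criterion is omitted because it is specific to the Levenberg-Marquardt algorithm.

```python
import time

# Stand-ins for MATLAB's net.trainParam fields (names mirror the toolbox,
# but the loop below is a toy gradient descent, not the toolbox itself).
params = {
    "epochs": 1000,     # net.trainParam.epochs
    "time": 5.0,        # net.trainParam.time (seconds)
    "goal": 1e-8,       # net.trainParam.goal
    "min_grad": 1e-10,  # net.trainParam.min_grad
    "max_fail": 6,      # net.trainParam.max_fail
}

def train(w=10.0, lr=0.1):
    """Minimize f(w) = w^2 and report which criterion stopped training."""
    start = time.time()
    best_val = float("inf")
    fails = 0
    for epoch in range(params["epochs"]):
        grad = 2 * w                 # gradient of f(w) = w^2
        w -= lr * grad
        perf = w * w                 # training performance (error)
        val = perf                   # toy stand-in for validation performance
        if val < best_val:
            best_val, fails = val, 0
        else:
            fails += 1
        if perf <= params["goal"]:
            return "goal", epoch
        if abs(grad) < params["min_grad"]:
            return "min_grad", epoch
        if fails > params["max_fail"]:
            return "max_fail", epoch
        if time.time() - start > params["time"]:
            return "time", epoch
    return "epochs", params["epochs"]

print(train())
```

On this toy quadratic the error shrinks quickly, so the `goal` criterion fires first; with a looser goal the loop would instead run until the epoch cap.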
The epochs and time limits let you put an upper bound on the duration of training.
goal lets you stop once the performance (error) drops below a desired level, i.e., once the network is accurate enough for your purposes.
min_grad ( "" ) , , mingrad, . - , , , , , , .
mu, mu_dec and mu_max are parameters of the Levenberg-Marquardt backpropagation algorithm: mu is the damping factor, mu_dec the factor it is decreased by after a successful step, and training stops if mu grows past mu_max.
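As a rough illustration of that damping schedule (a hand-rolled scalar sketch, not the toolbox implementation; the mu_inc factor and the toy problem are my own assumptions):

```python
import math

def lm_minimize(w=3.0, mu=1e-3, mu_dec=0.1, mu_inc=10.0, mu_max=1e10):
    """Minimize f(w) = sin(w)^2 with a scalar Levenberg-Marquardt step."""
    f = lambda x: math.sin(x) ** 2
    for _ in range(200):
        r = math.sin(w)                  # residual
        j = math.cos(w)                  # Jacobian of the residual
        step = (j * r) / (j * j + mu)    # damped Gauss-Newton step
        if f(w - step) < f(w):
            w -= step                    # step succeeded: trust the model more
            mu *= mu_dec
        else:
            mu *= mu_inc                 # step failed: back off toward gradient descent
        if mu > mu_max:                  # the mu_max stopping criterion
            break
    return w, mu
```

Once progress stalls, failed steps keep inflating mu until it crosses mu_max and training halts.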
max_fail is usually used together with validation data (early stopping): training stops once validation performance has failed to improve max_fail times since it last decreased, which guards against overfitting the training set.
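That validation-failure count can be shown with a small stand-alone check (a sketch of the semantics described above, not MATLAB's actual bookkeeping):

```python
def should_stop(val_history, max_fail=6):
    """Return True once validation performance has failed to improve
    max_fail times since its last recorded minimum."""
    best = float("inf")
    fails = 0
    for v in val_history:
        if v < best:
            best, fails = v, 0   # new minimum: reset the failure counter
        else:
            fails += 1           # no improvement over the best seen so far
        if fails >= max_fail:
            return True
    return False

print(should_stop([5, 4, 3, 3.1, 3.2, 3.3], max_fail=3))  # True
print(should_stop([5, 4, 3, 3.1, 2.9, 3.0], max_fail=3))  # False
```

Note that the counter resets whenever a new best is reached, so occasional upticks in validation error do not stop training by themselves.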
In practice you will usually rely on epochs and time to cap the training run (a hard safety limit), and on goal once you know what error level is acceptable. min_grad keeps you from wasting time on a stalled or plateaued search, while max_fail is the criterion that protects the network's ability to generalize.