I built a multi-layer perceptron in Keras and use scikit-learn for cross-validation. For this, I was inspired by the code found in the cross-validation issue on the Keras repository:
```python
# Note: sklearn.cross_validation was removed in scikit-learn 0.20;
# StratifiedKFold now lives in sklearn.model_selection.
from sklearn.model_selection import StratifiedKFold

def load_data():
    ...
```
In my study of neural networks, I learned that a neural network's knowledge is stored in its synaptic weights, and that during training those weights are updated so as to reduce the network's error and improve its performance. (In my case, I use supervised learning.)
To train a neural network properly and evaluate its performance, a commonly used method is cross-validation, which splits the data set into folds used for training and for model evaluation.
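As a minimal sketch of that splitting step (using toy data I made up for illustration), `StratifiedKFold` yields disjoint train/test index arrays whose class proportions mirror the full data set:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# Toy data: 10 samples with 2 features each, balanced binary labels.
X = np.arange(20).reshape(10, 2)
Y = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

kfold = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)

fold_sizes = []
for train, test in kfold.split(X, Y):
    # train and test are disjoint index arrays into X and Y.
    assert set(train).isdisjoint(test)
    fold_sizes.append(len(test))

print(fold_sizes)  # with 10 samples and 5 splits, each fold holds out 2
```

Each iteration hands you one train/evaluation split; the model is then fit on `X[train]` and scored on `X[test]`.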
My question is...
In this code snippet:
```python
for train, test in kFold.split(X, Y):
    model = None
    model = create_model()
    train_evaluate(model, X[train], Y[train], X[test], Y[test])
```
do we instantiate, train, and evaluate a new neural network for each of the generated folds?
If my goal is to fine-tune the network on the entire data set, why is it wrong to instantiate a single neural network and train it across all the generated folds?
That is, why is the code written like this:
```python
for train, test in kFold.split(X, Y):
    model = None
    model = create_model()
    train_evaluate(model, X[train], Y[train], X[test], Y[test])
```
and not like this?
```python
model = None
model = create_model()
for train, test in kFold.split(X, Y):
    train_evaluate(model, X[train], Y[train], X[test], Y[test])
```
Am I understanding correctly how the code works? And is my understanding of the theory correct?
Thanks!