I am currently planning my first convolutional neural network in TensorFlow, and have read through many of the tutorials on the TensorFlow website to get up to speed.
There seem to be two ways to build a custom CNN:
1) Use the TensorFlow tf.layers module, the "high-level API". With this approach, you define a model function built out of tf.layers calls, and in main() you create a tf.learn.Estimator instance, passing it that model function. The fit() and evaluate() methods can then be called on the Estimator object, which train and validate accordingly. Link: https://www.tensorflow.org/tutorials/layers. The skeleton of its main function:
```python
def main(unused_argv):
```
Full code here.
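As I understand it, the option 1 workflow boils down to something like the sketch below (condensed from the layers tutorial; train_data, train_labels, eval_data, and eval_labels stand in for NumPy arrays loaded elsewhere, with labels as integer class ids):

```python
import tensorflow as tf
from tensorflow.contrib import learn
from tensorflow.contrib.learn.python.learn.estimators import model_fn as model_fn_lib

def cnn_model_fn(features, labels, mode):
    # The network is defined declaratively with tf.layers calls.
    input_layer = tf.reshape(features, [-1, 28, 28, 1])
    conv = tf.layers.conv2d(input_layer, filters=32, kernel_size=[5, 5],
                            padding="same", activation=tf.nn.relu)
    pool = tf.layers.max_pooling2d(conv, pool_size=[2, 2], strides=2)
    dense = tf.layers.dense(tf.reshape(pool, [-1, 14 * 14 * 32]),
                            units=1024, activation=tf.nn.relu)
    logits = tf.layers.dense(dense, units=10)

    loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
    train_op = None
    if mode == learn.ModeKeys.TRAIN:
        train_op = tf.contrib.layers.optimize_loss(
            loss=loss, global_step=tf.contrib.framework.get_global_step(),
            learning_rate=0.001, optimizer="SGD")
    predictions = {"classes": tf.argmax(input=logits, axis=1)}
    return model_fn_lib.ModelFnOps(mode=mode, predictions=predictions,
                                   loss=loss, train_op=train_op)

# fit() and evaluate() own the session and the training loop internally.
classifier = learn.Estimator(model_fn=cnn_model_fn)
classifier.fit(x=train_data, y=train_labels, batch_size=100, steps=20000)
metrics = {"accuracy": learn.MetricSpec(metric_fn=tf.metrics.accuracy,
                                        prediction_key="classes")}
eval_results = classifier.evaluate(x=eval_data, y=eval_labels, metrics=metrics)
```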
2) Use the TensorFlow "low-level API", in which the layers are defined directly in a model definition function. Here the layers are specified by hand, and the user must perform much of the bookkeeping (weight variables, shapes, and so on) manually. In main(), the user launches a tf.Session() and manually orchestrates training and validation with for loop(s). Link: https://www.tensorflow.org/get_started/mnist/pros. The skeleton of its main function:
```python
def main(_):
```
Full code here.
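Condensed, the option 2 style looks something like this (next_batch is a hypothetical helper that yields the next training mini-batch; every layer's variables and wiring are written out by hand):

```python
import tensorflow as tf

# Placeholders for flattened 28x28 images and integer class labels.
x = tf.placeholder(tf.float32, [None, 784])
y_ = tf.placeholder(tf.int64, [None])

# Conv layer built from raw variables and tf.nn ops.
x_image = tf.reshape(x, [-1, 28, 28, 1])
W_conv = tf.Variable(tf.truncated_normal([5, 5, 1, 32], stddev=0.1))
b_conv = tf.Variable(tf.constant(0.1, shape=[32]))
h_conv = tf.nn.relu(tf.nn.conv2d(x_image, W_conv, strides=[1, 1, 1, 1],
                                 padding='SAME') + b_conv)
h_pool = tf.nn.max_pool(h_conv, ksize=[1, 2, 2, 1],
                        strides=[1, 2, 2, 1], padding='SAME')

# Fully connected output layer, again by hand.
W_fc = tf.Variable(tf.truncated_normal([14 * 14 * 32, 10], stddev=0.1))
b_fc = tf.Variable(tf.constant(0.1, shape=[10]))
logits = tf.matmul(tf.reshape(h_pool, [-1, 14 * 14 * 32]), W_fc) + b_fc

loss = tf.losses.sparse_softmax_cross_entropy(labels=y_, logits=logits)
train_step = tf.train.AdamOptimizer(1e-4).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(20000):
        batch_x, batch_y = next_batch(50)  # hypothetical batching helper
        sess.run(train_step, feed_dict={x: batch_x, y_: batch_y})
```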
My dilemma: I like the simplicity of defining a neural network with tf.layers (option 1), but I want the training customization that the "low-level API" (option 2) affords. Specifically, when using a tf.layers implementation, is there a way to report validation accuracy at every nth iteration of training? Or, more generally, can I train/validate with my own tf.Session(), or am I confined to the tf.learn.Estimator's fit() and evaluate() methods? In code, the kind of thing I'm hoping is possible looks like the sketch below.
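(This sketch mixes tf.layers for the model definition with a hand-written session loop; next_train_batch is a hypothetical batching helper, and val_images/val_labels stand for a held-out validation set.)

```python
import tensorflow as tf

# Model defined with tf.layers (option 1 style)...
x = tf.placeholder(tf.float32, [None, 28, 28, 1])
y = tf.placeholder(tf.int64, [None])

conv = tf.layers.conv2d(x, filters=32, kernel_size=5,
                        padding="same", activation=tf.nn.relu)
pool = tf.layers.max_pooling2d(conv, pool_size=2, strides=2)
dense = tf.layers.dense(tf.reshape(pool, [-1, 14 * 14 * 32]),
                        units=1024, activation=tf.nn.relu)
logits = tf.layers.dense(dense, units=10)

loss = tf.losses.sparse_softmax_cross_entropy(labels=y, logits=logits)
train_op = tf.train.AdamOptimizer(1e-4).minimize(loss)
accuracy = tf.reduce_mean(
    tf.cast(tf.equal(tf.argmax(logits, 1), y), tf.float32))

# ...trained with a manual loop (option 2 style).
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(20000):
        batch_x, batch_y = next_train_batch(50)  # hypothetical helper
        sess.run(train_op, feed_dict={x: batch_x, y: batch_y})
        if step % 100 == 0:
            # Report validation accuracy at every nth (here 100th) step.
            val_acc = sess.run(accuracy,
                               feed_dict={x: val_images, y: val_labels})
            print("step %d, validation accuracy %g" % (step, val_acc))
```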
It seems strange that you would only get a single evaluation score after all training has completed, since I thought the whole point of validation is to track the network's progress during training. Otherwise, what would be the difference between validation and testing?
Any help would be appreciated.