Multiple accuracy layers in Caffe

I am trying to classify a large set of images using NVIDIA DIGITS and Caffe. Everything works well when I use the standard networks, as well as networks that I built myself.

However, when I run the GoogLeNet example, the training output shows several accuracy layers. How can a CNN have several accuracies? Having multiple loss layers is understandable, but what do multiple accuracies mean? I get several accuracy graphs during training (see the attached training-curve image).

The lossX-top1 and lossX-top5 curves are accuracy layers. From the prototxt I understand that they report top-1 and top-5 accuracy, but what do the lossX prefixes on these accuracy layers mean?
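For reference, top-1 counts a sample as correct only if the highest-scoring class is the true label, while top-5 counts it as correct if the true label is anywhere among the five highest scores. A minimal NumPy sketch of this metric (the function name and toy data are mine, not from the question):

```python
import numpy as np

def topk_accuracy(scores, labels, k=1):
    """Fraction of samples whose true label is among the k highest scores.

    scores: (N, C) array of per-class scores; labels: (N,) true class indices.
    """
    # indices of the k largest scores in each row (order within top-k irrelevant)
    topk = np.argsort(scores, axis=1)[:, -k:]
    hits = (topk == labels[:, None]).any(axis=1)
    return hits.mean()

# toy example: 3 samples, 4 classes
scores = np.array([[0.10, 0.50, 0.30, 0.10],   # argmax 1, true 1 -> top-1 hit
                   [0.60, 0.20, 0.15, 0.05],   # argmax 0, true 2 -> top-1 miss
                   [0.20, 0.10, 0.40, 0.30]])  # argmax 2, true 3 -> top-1 miss
labels = np.array([1, 2, 3])
print(topk_accuracy(scores, labels, k=1))  # 1/3 of samples are top-1 correct
print(topk_accuracy(scores, labels, k=3))  # every true label is in the top 3
```

Top-5 is always at least as high as top-1, which is why each lossX branch produces two curves with the top-5 one sitting above the top-1 one.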

Although some of these graphs converge to about 98%, when I manually test the trained network on 'validation.txt' , I get a significantly lower value (one that matches the bottom three accuracy graphs instead).

Can someone shed some light on this? How can there be several accuracy layers with different values?

1 answer

If you look closely at 'train_val.prototxt' , you will notice that there are indeed several accuracy layers, attached to the network's main "path" at different depths. loss1 is computed after inception_4a , loss2 after inception_4d , and loss3 is the loss at the top of the network. Attaching loss (and accuracy) layers to intermediate representations of a deep network helps the gradient propagate back to the early layers during training; these are GoogLeNet's auxiliary classifiers. The intermediate accuracies measure how well each intermediate representation already classifies the input, which is why they sit below the final (loss3) accuracy.
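To make this concrete, here is a sketch of how one such auxiliary branch appears in train_val.prototxt. Layer and blob names follow the BVLC GoogLeNet convention; check your own prototxt for the exact names and weights:

```protobuf
# auxiliary classifier branch after inception_4a (the "loss1" prefix)
layer {
  name: "loss1/top-5"
  type: "Accuracy"
  bottom: "loss1/classifier"   # logits of the auxiliary branch
  bottom: "label"
  top: "loss1/top-5"
  include { phase: TEST }
  accuracy_param { top_k: 5 }  # top-1 is the default (top_k: 1)
}
layer {
  name: "loss1/loss"
  type: "SoftmaxWithLoss"
  bottom: "loss1/classifier"
  bottom: "label"
  top: "loss1/loss"
  loss_weight: 0.3             # auxiliary losses count less than the final loss3
}
```

The loss_weight on the auxiliary branches keeps them from dominating training; at deployment time only the final classifier (loss3) is used, which is why your manual validation numbers match the bottom accuracy graphs.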


Source: https://habr.com/ru/post/1233033/
