I saw the GitHub issue below asking the same question; you may want to follow it for future updates.
https://github.com/tensorflow/minigo/issues/37
I can't speak for the developers who made this decision, but I would suggest they enable it by default: it is used quite often, and for most applications where you do not backpropagate to the inputs, the inputs are constant anyway, so it would not hurt.
Two common use cases for backpropagating to the inputs:
- Creating adversarial examples
There is a whole field of research around constructing adversarial examples that fool neural networks. Many of the approaches involve training the network, then fixing its weights and backpropagating to the inputs (the original image) to adjust them (usually under some constraints) until the network is tricked into misclassifying the image (see the first sketch after this list).
- Visualizing the internals of a neural network
I also recommend watching the deepviz toolkit video on YouTube; you will learn a ton about the internal representations learned by a neural network.
https://www.youtube.com/watch?v=AgkfIQ4IGaM
If you dig further into this and track down the original paper, you will find that they also backpropagate to the inputs to generate images that activate particular filters in the network, in order to understand them (see the second sketch below).
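To make the first use case concrete, here is a minimal FGSM-style sketch, assuming TensorFlow 2 and a trained Keras classifier. The names `model`, `image`, and `true_label` are hypothetical placeholders, not anything from the question:

```python
import tensorflow as tf

def adversarial_example(model, image, true_label, epsilon=0.01):
    """FGSM-style sketch: perturb the input so the loss increases."""
    image = tf.convert_to_tensor(image)
    with tf.GradientTape() as tape:
        tape.watch(image)  # inputs are not Variables, so watch them explicitly
        prediction = model(image)
        loss = tf.keras.losses.sparse_categorical_crossentropy(true_label, prediction)
    # Gradient of the loss w.r.t. the input, with the network weights held fixed
    gradient = tape.gradient(loss, image)
    # Step the image in the direction that increases the loss
    return image + epsilon * tf.sign(gradient)
```

The key point is that the gradient is taken with respect to the image rather than the weights, which is exactly the backprop-to-inputs path described above.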
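And a hedged sketch of the second use case, activation maximization in the spirit of the deepviz work: gradient ascent on the input to synthesize an image that excites a chosen filter. Here `layer_name` and `filter_index` are assumptions, and building the feature extractor this way assumes a Keras functional model:

```python
import tensorflow as tf

def visualize_filter(model, layer_name, filter_index, steps=100, step_size=1.0):
    """Gradient-ascent sketch: find an input that maximally activates one filter."""
    layer = model.get_layer(layer_name)
    extractor = tf.keras.Model(inputs=model.inputs, outputs=layer.output)
    image = tf.Variable(tf.random.uniform((1, 224, 224, 3)))  # start from noise
    for _ in range(steps):
        with tf.GradientTape() as tape:
            activation = extractor(image)
            # Mean activation of the chosen filter is the objective to maximize
            objective = tf.reduce_mean(activation[..., filter_index])
        gradient = tape.gradient(objective, image)
        gradient /= tf.norm(gradient) + 1e-8  # normalize for stable step sizes
        image.assign_add(step_size * gradient)  # ascend rather than descend
    return image
```

In both sketches the network is frozen and only the input is updated, which is why having gradients flow to the inputs matters for these workflows.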