Why does "softmax_cross_entropy_with_logits_v2" backpropagate into labels?

I am wondering why, in TensorFlow 1.5.0 and later, softmax_cross_entropy_with_logits_v2 backpropagates into both labels and logits by default. What are some applications/scenarios in which you would want to backprop into the labels?

1 answer

I saw the GitHub issue below asking the same question; you may want to follow it for future updates.

https://github.com/tensorflow/minigo/issues/37

I can't speak for the developers who made this decision, but I would surmise that they made it the default because it is genuinely used often, and for most applications where you do not backpropagate into the labels, the labels are constants anyway and are not adversely affected.
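To make the default concrete, here is a minimal TF 1.x sketch (the variable values are made up for illustration) showing that gradients flow into both arguments of the v2 op, and that wrapping the labels in tf.stop_gradient opts back out:

    import tensorflow as tf  # TF 1.x

    labels = tf.Variable([[0.1, 0.9]], dtype=tf.float32)
    logits = tf.Variable([[2.0, 1.0]], dtype=tf.float32)

    # v2 backpropagates into both labels and logits by default.
    loss = tf.nn.softmax_cross_entropy_with_logits_v2(
        labels=labels, logits=logits)
    grad_labels, grad_logits = tf.gradients(loss, [labels, logits])
    # Both gradients are non-None: there is a path to each input.

    # Stopping the gradient on the labels restores the old behaviour.
    loss_frozen = tf.nn.softmax_cross_entropy_with_logits_v2(
        labels=tf.stop_gradient(labels), logits=logits)
    grad_frozen = tf.gradients(loss_frozen, [labels])[0]
    # grad_frozen is None: no path from the loss back to the labels.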

Two common use cases for backpropagating into the labels:

  • Creating adversarial examples

There is a whole field of research around constructing adversarial examples that fool a neural network. Many of the approaches involve training a network, then holding the network fixed and backpropagating into the labels (the original image) to tweak it (usually under some constraints) until it produces a result that fools the network into misclassifying the image; a sketch of this follows below.
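As one concrete illustration, here is a hedged FGSM-style sketch (the fast gradient sign method). model_fn is a hypothetical function mapping images to logits; the trained weights stay frozen while the gradient is taken with respect to the input:

    import tensorflow as tf  # TF 1.x

    def fgsm_example(model_fn, image, true_label, epsilon=0.01):
        # `model_fn` stands in for an already-trained network that maps
        # a batch of images to logits; its weights are held fixed.
        logits = model_fn(image)
        loss = tf.nn.softmax_cross_entropy_with_logits_v2(
            labels=true_label, logits=logits)
        # Backpropagate into the input rather than the weights.
        grad = tf.gradients(loss, image)[0]
        # Nudge the image in the direction that increases the loss.
        adversarial = image + epsilon * tf.sign(grad)
        return tf.clip_by_value(adversarial, 0.0, 1.0)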

  • Visualizing the internals of a neural network

I also recommend that people check out the deepviz toolkit video on YouTube; you will learn a ton about the internal representations learned by a neural network.

https://www.youtube.com/watch?v=AgkfIQ4IGaM

If you keep digging into that and find the original paper, you will see that they also backpropagate into the labels (here, the input image) to generate images that strongly activate particular filters in the network, in order to understand them.
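For the visualization case the same trick applies: hold the weights fixed and run gradient ascent on the input until a chosen filter fires strongly. A rough TF 1.x sketch, assuming a hypothetical layer_fn that maps an input image to one convolutional layer's activations:

    import tensorflow as tf  # TF 1.x

    def visualize_filter(layer_fn, filter_index, steps=100, lr=1.0):
        # `layer_fn` stands in for a frozen, trained network slice that
        # maps an image to a conv layer's activation map.
        image = tf.Variable(tf.random_uniform([1, 224, 224, 3]))
        activation = tf.reduce_mean(layer_fn(image)[..., filter_index])
        # Gradient ascent on the input: minimize the negative activation,
        # updating only the image variable, never the weights.
        step = tf.train.GradientDescentOptimizer(lr).minimize(
            -activation, var_list=[image])
        with tf.Session() as sess:
            sess.run(tf.variables_initializer([image]))
            for _ in range(steps):
                sess.run(step)
            return sess.run(image)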



