Ok, so I'm ready to run the tf.nn.softmax_cross_entropy_with_logits() function in TensorFlow.
My understanding is that the "logits" should be a tensor of probabilities, each one corresponding to a certain pixel's probability that it is part of an image that will ultimately be a "dog" or a "truck" or whatever: a finite number of things.
These logits will then get plugged into this cross-entropy equation:

    H(p, q) = -Σ_x p(x) log q(x)
As I understand it, the logits correspond to the right side of the equation: they are the q(x) for each x (image). If they were probabilities between 0 and 1, that would make sense to me. But when I run my code and end up with the tensor of logits, I don't get probabilities. Instead I get floats that are both positive and negative:
-0.07264724 -0.15262917 0.06612295 ..., -0.03235611 0.08587133 0.01897052 0.04655019 -0.20552202 0.08725972 ..., -0.02107313 -0.00567073 0.03241089 0.06872301 -0.20756687 0.01094618 ..., etc
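For reference, here is a minimal sketch of roughly what my setup looks like (the variable names, shapes, and the single fully connected layer are just placeholders for illustration, not my exact code):

    import tensorflow as tf

    # illustrative shapes: flattened 32x32x3 images, 10 classes
    images = tf.placeholder(tf.float32, [None, 32 * 32 * 3])
    labels = tf.placeholder(tf.float32, [None, 10])  # one-hot labels

    # one fully connected layer producing a raw score per class
    W = tf.Variable(tf.truncated_normal([32 * 32 * 3, 10], stddev=0.1))
    b = tf.Variable(tf.zeros([10]))
    logits = tf.matmul(images, W) + b  # these are the positive/negative floats I'm seeing

    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits))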
So my question is: is that right? Do I need to somehow convert all my logits into probabilities between 0 and 1 first?
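In other words, should I be doing something like this before passing them to the loss? (Just a guess on my part, using tf.nn.softmax):

    # my guess: squash the raw logits into probabilities myself first?
    probs = tf.nn.softmax(logits)  # each row now sums to 1, values in [0, 1]
    # ...and then feed probs into the cross-entropy instead of the raw logits?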