My predictions are always [0.5, 0.5], my weights bounce around, and my loss starts at ~30 and decreases only slowly.
Model: conv -> pool -> conv -> pool -> fully connected (ReLU) -> fully connected (softmax is applied later, on the logits).
It works great with TFLearn.
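For reference, this is roughly the stack I mean, written with plain TF 1.x layers (a minimal sketch; the input size, filter counts, and hidden-unit count are placeholders, not my actual values):

    import tensorflow as tf

    x = tf.placeholder(tf.float32, [None, 28, 28, 1])  # placeholder input size
    labels = tf.placeholder(tf.int64, [None])          # one integer class index per example

    net = tf.layers.conv2d(x, filters=32, kernel_size=5, activation=tf.nn.relu)
    net = tf.layers.max_pooling2d(net, pool_size=2, strides=2)
    net = tf.layers.conv2d(net, filters=64, kernel_size=5, activation=tf.nn.relu)
    net = tf.layers.max_pooling2d(net, pool_size=2, strides=2)
    net = tf.layers.flatten(net)
    net = tf.layers.dense(net, 256, activation=tf.nn.relu)  # fully connected + ReLU
    logits = tf.layers.dense(net, 2)  # raw logits; softmax is applied by the loss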
How I calculate the loss (it uses sparse_softmax_cross_entropy_with_logits, so how can the predictions end up at [0.5, 0.5] when the loss is sparse, with only one true label per example?):
    import tensorflow as tf

    def calcLoss(logits, train_labels_node, WFully, BFully):
        # train_labels_node holds integer class indices; WFully/BFully are
        # currently unused (no regularization term is added).
        loss = tf.reduce_mean(
            tf.nn.sparse_softmax_cross_entropy_with_logits(
                labels=train_labels_node, logits=logits))
        return loss
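Worth noting: with two classes, a uniform [0.5, 0.5] prediction corresponds to a loss of ln(2) ≈ 0.69, so a starting loss of ~30 means the initial logits are huge (the model is confidently wrong). A quick check of that (my own illustration, not part of my training code):

    import tensorflow as tf

    logits = tf.constant([[0.0, 0.0]])         # softmax([0, 0]) = [0.5, 0.5]
    labels = tf.constant([1], dtype=tf.int64)  # single integer truth label
    loss = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits))
    with tf.Session() as sess:
        print(sess.run(loss))  # ~0.6931 = ln(2)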
How I calculate the optimizer:
    def calcOptimizer(learning_rate, loss):
        return tf.train.AdamOptimizer(learning_rate).minimize(loss)
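And how the two are wired together at train time (a sketch; num_steps and train_feed stand in for my actual batching code):

    loss = calcLoss(logits, train_labels_node, WFully, BFully)
    optimizer = calcOptimizer(0.001, loss)  # 0.001 is a placeholder rate

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for step in range(num_steps):
            _, l = sess.run([optimizer, loss], feed_dict=train_feed)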
[Plot: losses]

Edit: after playing with the options and adding:
    tf.train.create_global_step()
    learningrate = tf.train.exponential_decay(
        0.01, ...
Now it works.
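For completeness, the full pattern I ended up with (decay_steps and decay_rate below are placeholder values; the key detail is passing global_step to minimize(), otherwise the step never increments and the rate never decays):

    global_step = tf.train.create_global_step()
    learning_rate = tf.train.exponential_decay(
        0.01,              # initial learning rate
        global_step,
        decay_steps=1000,  # placeholder: decay interval in steps
        decay_rate=0.96)   # placeholder: multiplicative decay factor
    optimizer = tf.train.AdamOptimizer(learning_rate).minimize(
        loss, global_step=global_step)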