I am building a DNN to predict whether an object is present in an image or not. My network has two hidden layers, and the last layer is as follows:
W_fc2 = weight_variable([2048, 1])
b_fc2 = bias_variable([1])
y = tf.matmul(h_fc1, W_fc2) + b_fc2
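Here weight_variable and bias_variable are the usual helpers from the TensorFlow tutorials (shown so the snippet is self-contained; my versions are essentially these):

import tensorflow as tf

def weight_variable(shape):
    # small truncated-normal initialization for weights
    return tf.Variable(tf.truncated_normal(shape, stddev=0.1))

def bias_variable(shape):
    # small positive constant initialization for biases
    return tf.Variable(tf.constant(0.1, shape=shape))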
Then I have a placeholder for labels:
y_ = tf.placeholder(tf.float32, [None, 1], 'Output')
I train in batches (which is why the first dimension of the Output placeholder's shape is None).
I use the following loss function:
cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(
y[:, :1], y_[:, :1], name='xentropy')
loss = tf.reduce_mean(cross_entropy, name='xentropy_mean')
predict_hand = tf.greater(y, 0.5)
correct_prediction = tf.equal(tf.to_float(predict_hand), y_)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
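For completeness, the training loop is roughly the following (x, next_batch, num_steps and batch_size stand in for my input pipeline; the optimizer choice shouldn't matter here):

train_step = tf.train.AdamOptimizer(1e-4).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(num_steps):
        # next_batch yields images and labels; batch_ys has shape [batch_size, 1]
        batch_xs, batch_ys = next_batch(batch_size)
        sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})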
But at runtime, I got the following error:
Rank mismatch: Rank of labels (received 2) should equal rank of logits minus 1 (received 2).
It seems I need to reshape the labels layer, but I'm not sure what shape it expects. I looked in the documentation, and it says:
logits: Unscaled log probabilities of rank r and shape [d_0, d_1, ..., d_{r-2}, num_classes] and dtype float32 or float64.
labels: Tensor of shape [d_0, d_1, ..., d_{r-2}] and dtype int32 or int64. Each entry in labels must be an index in [0, num_classes).
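As I read it, for a plain batch (r = 2) that would mean logits of shape [batch_size, num_classes] and a rank-1 labels tensor of class indices, something like (my reading of the docs, not code I have working):

# shapes the docs seem to expect for a simple batched case
logits = tf.placeholder(tf.float32, [None, 2])  # [batch, num_classes]
labels = tf.placeholder(tf.int32, [None])       # [batch], entries in [0, num_classes)
xent = tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=labels, logits=logits)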
If I have just one output class, what should my labels look like (right now each label is 0 or 1)?