In the example, the weights look incorrectly initialized: they are all set to zero. But without a hidden layer the model is effectively a linear softmax regression, and the demo is unaffected by this choice. Setting all the weights to zero is safe, but only for a single-layer network.
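For reference, a minimal sketch of such a single-layer model (the placeholder shape and variable names are assumptions, following the classic MNIST softmax-regression demo):

import tensorflow as tf

# Single-layer softmax regression: with no hidden layer, zero-initialized
# weights are harmless, because each weight's gradient depends only on its
# own input pixel and the output error, not on the weights differing.
x = tf.placeholder(tf.float32, [None, 784])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)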
With a hidden layer, however, zero initialization breaks down, and the weights must instead start at small random values. Consider, for example, a network like this:
# Hidden layer: 100 ReLU units, weights drawn uniformly from [-0.01, 0.01]
W = tf.Variable(tf.random_uniform([784, 100], -0.01, 0.01))
b = tf.Variable(tf.zeros([100]))
h0 = tf.nn.relu(tf.matmul(x, W) + b)

# Output layer: 10-way softmax over the hidden activations
W2 = tf.Variable(tf.random_uniform([100, 10], -0.01, 0.01))
b2 = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(h0, W2) + b2)
If W and W2 were initialized to zero here, training would get stuck. With identical incoming weights, every hidden unit computes exactly the same function of the input, so in backpropagation every unit also receives exactly the same gradient. The hidden units then update in lockstep and remain identical to one another throughout training (differing at most in their biases), so the hidden layer never learns 100 distinct features. Initializing the weights to small random values breaks this symmetry.
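The lockstep effect is easy to check numerically. The sketch below is illustrative only: every weight is initialized to the same small constant (0.01 rather than 0, so the ReLUs stay active and the gradients are not trivially zero; the symmetry argument is the same), and the loss and random batch are assumptions made just for this demonstration:

import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 784])
y_ = tf.placeholder(tf.float32, [None, 10])

# Identical (constant) initialization: every hidden unit starts out the same.
W = tf.Variable(tf.fill([784, 100], 0.01))
b = tf.Variable(tf.zeros([100]))
h0 = tf.nn.relu(tf.matmul(x, W) + b)
W2 = tf.Variable(tf.fill([100, 10], 0.01))
b2 = tf.Variable(tf.zeros([10]))
logits = tf.matmul(h0, W2) + b2
loss = tf.reduce_mean(tf.square(logits - y_))  # any loss shows the same effect

grad_W, = tf.gradients(loss, [W])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    g = sess.run(grad_W, feed_dict={
        x: np.random.rand(8, 784).astype(np.float32),
        y_: np.eye(10, dtype=np.float32)[np.random.randint(10, size=8)]})
    # All 100 columns of the gradient are identical: the units move in lockstep.
    print(np.allclose(g, g[:, :1]))  # prints True

Swapping the tf.fill initializers for the tf.random_uniform ones above makes the check print False: different units receive different gradients and can specialize. Keeping the random values small also matters in practice, since it keeps the initial pre-activations, and hence the gradients, in a reasonable range.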