If you want to minimize over a single parameter, you can do the following. (I avoided using a placeholder, since you are trying to train a parameter: placeholders are meant for feeding inputs and hyperparameters, and are not trainable. The value the optimizer should update must be a Variable.)
import tensorflow as tf

# The value to optimize must be a trainable Variable.
x = tf.Variable(10.0, trainable=True)

# Function to minimize: f(x) = 2x^2 - 5x + 4
f_x = 2 * x * x - 5 * x + 4
loss = f_x

# Plain gradient descent with learning rate 0.1
opt = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(100):
        print(sess.run([x, loss]))
        sess.run(opt)
Running this prints the following list of [x, loss] pairs:
[10.0, 154.0]
[6.5, 56.0]
[4.4000001, 20.720001]
[3.1400001, 8.0192013]
[2.3840001, 3.4469128]
[1.9304, 1.8008881]
[1.65824, 1.2083197]
[1.494944, 0.99499512]
[1.3969663, 0.91819811]
[1.3381798, 0.89055157]
[1.3029079, 0.88059855]
[1.2817447, 0.87701511]
[1.2690468, 0.87572551]
[1.2614281, 0.87526155]
[1.2568569, 0.87509394]
[1.2541142, 0.87503386]
[1.2524685, 0.87501216]
[1.2514811, 0.87500429]
[1.2508886, 0.87500143]
[1.2505331, 0.87500048]
[1.2503198, 0.875]
[1.2501919, 0.87500024]
[1.2501152, 0.87499976]
[1.2500691, 0.875]
[1.2500415, 0.875]
[1.2500249, 0.87500024]
[1.2500149, 0.87500024]
[1.2500089, 0.875]
[1.2500054, 0.87500024]
[1.2500032, 0.875]
[1.2500019, 0.875]
[1.2500012, 0.87500024]
[1.2500007, 0.87499976]
[1.2500005, 0.875]
[1.2500002, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
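As a sanity check, the closed-form minimum of f(x) = 2x^2 - 5x + 4 is at x = 5/4 = 1.25 with f(1.25) = 0.875, which matches the values the optimizer converges to (the tiny oscillation around 0.875 is just float32 rounding).

If you are on TensorFlow 2.x, where sessions and tf.train.GradientDescentOptimizer no longer exist, a minimal equivalent sketch (assuming eager execution and tf.keras.optimizers.SGD) could look like this:

import tensorflow as tf

x = tf.Variable(10.0, trainable=True)

# In eager mode the loss is passed as a callable so it can be re-evaluated each step.
def f_x():
    return 2 * x * x - 5 * x + 4

opt = tf.keras.optimizers.SGD(learning_rate=0.1)  # plain gradient descent
for i in range(100):
    print(x.numpy(), f_x().numpy())
    opt.minimize(f_x, var_list=[x])

This converges to the same x ≈ 1.25; each call to opt.minimize computes the gradient of the callable with respect to the listed variables and applies one descent step.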