The easiest way to implement a linear layer in TensorFlow is to use tf.matmul() and tf.add() (or the + operator). Assume you have a matrix of outputs from the previous layer (call it prev_layer) with shape batch_size x prev_units, and the linear layer has linear_units units:
prev_layer = …
# weights drawn from a truncated normal, biases initialized to zero
linear_W = tf.Variable(tf.truncated_normal([prev_units, linear_units], …))
linear_b = tf.Variable(tf.zeros([linear_units]))
# affine transform; result has shape [batch_size, linear_units]
linear_layer = tf.matmul(prev_layer, linear_W) + linear_b
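For completeness, here is a minimal self-contained sketch of the same idea. It assumes the TF1-style API (tf.placeholder, tf.Session, tf.truncated_normal); the shapes and the stddev value are illustrative choices, not part of the original answer. On TensorFlow 2 you would typically use tf.keras.layers.Dense(linear_units) instead, or run this via tf.compat.v1 as shown.

import numpy as np
import tensorflow.compat.v1 as tf  # plain `import tensorflow as tf` on a 1.x install
tf.disable_v2_behavior()

batch_size, prev_units, linear_units = 32, 128, 64  # illustrative sizes

# stands in for the previous layer's output
prev_layer = tf.placeholder(tf.float32, [None, prev_units])

# weights drawn from a truncated normal, biases initialized to zero
linear_W = tf.Variable(tf.truncated_normal([prev_units, linear_units], stddev=0.1))
linear_b = tf.Variable(tf.zeros([linear_units]))

# affine transform; result has shape [batch_size, linear_units]
linear_layer = tf.matmul(prev_layer, linear_W) + linear_b

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    x = np.random.rand(batch_size, prev_units).astype(np.float32)
    print(sess.run(linear_layer, {prev_layer: x}).shape)  # (32, 64)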