How to use the linear activation function in TensorFlow?

In CUDA ConvNet, we can specify the neuron activation function as linear by writing neuron=linear[a,b], so that f(x) = ax + b.
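For reference, the linear[a,b] activation is just an elementwise affine map. A minimal sketch of that mapping, using NumPy in place of a tensor library and with hypothetical example values for a and b:

```python
import numpy as np

# Sketch of the linear[a,b] activation f(x) = a*x + b, applied
# elementwise to the input; a and b are hypothetical example values.
a, b = 0.5, 1.0
x = np.array([-2.0, 0.0, 4.0])
f = a * x + b
print(f)  # [0. 1. 3.]
```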

How to achieve the same result in TensorFlow?

1 answer

The easiest way to write a linear layer in TensorFlow is to use tf.matmul() and tf.add() (or the + operator). Suppose you have the matrix of outputs from the previous layer (call it prev_layer) with shape batch_size x prev_units, and the linear layer has linear_units units:

 prev_layer = …
 linear_W = tf.Variable(tf.truncated_normal([prev_units, linear_units], …))
 linear_b = tf.Variable(tf.zeros([linear_units]))
 linear_layer = tf.matmul(prev_layer, linear_W) + linear_b
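To see the shapes concretely, here is a numeric sketch of the same computation with NumPy standing in for the TensorFlow ops; the sizes and initializers (small random weights, zero bias) are illustrative assumptions:

```python
import numpy as np

# Numeric sketch of the linear layer above: inputs of shape
# batch_size x prev_units, weights prev_units x linear_units.
batch_size, prev_units, linear_units = 4, 3, 2

rng = np.random.default_rng(0)
prev_layer = rng.standard_normal((batch_size, prev_units))
linear_W = 0.1 * rng.standard_normal((prev_units, linear_units))  # stands in for truncated_normal init
linear_b = np.zeros(linear_units)                                 # stands in for tf.zeros init

# No nonlinearity is applied: the layer is just x @ W + b.
linear_layer = prev_layer @ linear_W + linear_b
print(linear_layer.shape)  # (4, 2)
```

Because no activation function wraps the matmul, the output is already the linear (identity-activated) layer that neuron=linear[1,0] would give in CUDA ConvNet.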

Source: https://habr.com/ru/post/1246784/
