I initially had the same question and could not find an answer, so here is how I do it with TensorBoard (this assumes some familiarity with TensorBoard).
```python
activation = tf.nn.relu(layer)
# Inner call: per-neuron activation counts over the batch;
# outer call: number of neurons that fired at least once.
active = tf.count_nonzero(tf.count_nonzero(activation, axis=0))
# Log the fraction of active neurons for this step.
tf.summary.scalar('pct-active-neurons', active / layer.shape[1])
```
In this snippet, `activation` is the output of the ReLU for this particular layer. The inner call, `tf.count_nonzero(activation, axis=0)`, counts, for each neuron, how many training examples in the current step activated it. The outer call, `tf.count_nonzero(...)`, which wraps the inner one, counts how many neurons in the layer had at least one activation across those training examples for this step. Finally, I convert that to a percentage by dividing the number of neurons that activated at least once by the total number of neurons in the layer.
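For completeness, here is a minimal sketch of how this summary might be wired into a TF 1.x training loop. Only the two `tf.count_nonzero` calls and the scalar summary come from the snippet above; the placeholder, layer sizes, log directory, and training loop are hypothetical stand-ins, and I use `layer.shape[1].value` to get the layer width as a plain Python int, which avoids a possible int64/int32 dtype mismatch in the division:

```python
import numpy as np
import tensorflow as tf  # TF 1.x API, matching the snippet above

# Hypothetical placeholder and layer; sizes and names are illustrative only.
x = tf.placeholder(tf.float32, [None, 784], name='x')
layer = tf.layers.dense(x, 128)
activation = tf.nn.relu(layer)

# The measurement from the answer: neurons with at least one nonzero
# activation in the batch, divided by the layer width.
active = tf.count_nonzero(tf.count_nonzero(activation, axis=0))
num_neurons = layer.shape[1].value  # static layer width as a Python int
tf.summary.scalar('pct-active-neurons', active / num_neurons)

merged = tf.summary.merge_all()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Hypothetical log directory; point TensorBoard at it with
    # `tensorboard --logdir ./logs`.
    writer = tf.summary.FileWriter('./logs', sess.graph)
    for step in range(100):
        batch = np.random.rand(32, 784).astype(np.float32)  # stand-in data
        summary = sess.run(merged, feed_dict={x: batch})
        writer.add_summary(summary, step)
    writer.close()
```

If the fraction trends toward zero over training, that is a sign the layer's ReLU units are dying; watching this scalar per layer in TensorBoard makes that easy to spot.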
More information on setting up TensorBoard can be found here.