Using tf.name_scope in TensorBoard with Estimator metric evaluation

I have code for calculating performance metrics in my Estimator's model_fn, written as a function that returns a dictionary of metrics:

def __model_eval_metrics(self, classes, labels, mode):
    if mode == tf.estimator.ModeKeys.TRAIN or mode == tf.estimator.ModeKeys.EVAL:
        return {
                'accuracy': tf.metrics.accuracy(labels=tf.argmax(input=labels, axis=1), predictions=classes),
                'precision': tf.metrics.precision(labels=tf.argmax(input=labels, axis=1), predictions=classes),
                'recall': tf.metrics.recall(labels=tf.argmax(input=labels, axis=1), predictions=classes)
                }
    else:
        return None
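
For context, the helper is called inside model_fn roughly like this (a sketch; the classes and labels variables are the predicted classes and one-hot labels used in the other snippets):

# Sketch of the call site inside model_fn: build the metric ops once,
# then reuse them for both the training summaries and eval_metric_ops.
eval_metrics = self.__model_eval_metrics(classes, labels, mode)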

During training, these metrics are recorded as scalar summaries inside model_fn, grouped under the name scope train_metrics:

if mode == tf.estimator.ModeKeys.TRAIN:
    with tf.name_scope('train_metrics') as scope:
        tf.summary.scalar('model_accuracy', eval_metrics['accuracy'][1])
        tf.summary.scalar('model_precision', eval_metrics['precision'][1])
        tf.summary.scalar('model_recall', eval_metrics['recall'][1])
        tf.summary.scalar('model_loss', loss)

This gives the desired grouping in TensorBoard: [TensorBoard example]

For Estimator evaluation, the metrics are passed as a dictionary to the eval_metric_ops argument of EstimatorSpec, using the result of __model_eval_metrics():

return tf.estimator.EstimatorSpec(
    mode=mode,
    predictions={"predictions": predictions, "classes": classes},
    loss=loss,
    train_op=train_op,
    eval_metric_ops=eval_metrics,
)

The problem is that in TensorBoard these metrics are no longer grouped by name scope, and I cannot figure out where to add the name scope to make this happen. You can see that the scalars are not grouped:

[TensorBoard, ungrouped metrics]
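
As far as I understand, TensorBoard groups scalar tags by the text before the first '/', and the Estimator uses the eval_metric_ops dictionary keys as the summary tags, so one workaround might be to prefix those keys. A minimal sketch of that idea (the eval_metrics prefix is just a placeholder, not something from my current code):

# Assumption: prefixing the dictionary keys so the resulting summary tags
# land under 'eval_metrics/...' in TensorBoard.
eval_metrics = {
    'eval_metrics/accuracy': tf.metrics.accuracy(
        labels=tf.argmax(input=labels, axis=1), predictions=classes),
    'eval_metrics/precision': tf.metrics.precision(
        labels=tf.argmax(input=labels, axis=1), predictions=classes),
    'eval_metrics/recall': tf.metrics.recall(
        labels=tf.argmax(input=labels, axis=1), predictions=classes),
}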

Question

  • How can a name_scope be applied to the metrics passed through eval_metric_ops in an Estimator?
  • How can these evaluation metrics be grouped by name_scope in TensorBoard?

Source: https://habr.com/ru/post/1684494/
