Plotting multiple curves on one chart using TensorBoard

I am using Keras with a TensorFlow backend. My work involves comparing the characteristics of several models, such as Inception, VGG, ResNet, etc., on my dataset. I would like to plot the training accuracy of the different models on one graph. I tried to do this in TensorBoard, but it does not work.

Is there a way to draw multiple curves on one plot in TensorBoard, or is there another way I can do this?

Thanks.

+11
4 answers
  • You can definitely plot scalars, such as the loss and validation accuracy: tf.summary.scalar("loss", cost), where cost is a tensor such as cost = tf.reduce_mean(-tf.reduce_sum(y * tf.log(pred), reduction_indices=1))
  • Now you create a summary op for every value you want to plot, and then you can combine all these summaries into a single op: merged_summary_op = tf.summary.merge_all()
  • The next step is to run this merged summary op in the session: summary = sess.run(merged_summary_op)
  • After you run merged_summary_op, you should write the summary using a summary writer: summary_writer.add_summary(summary, epoch_number), where summary_writer = tf.summary.FileWriter(logs_path, graph=tf.get_default_graph())
  • Now open a terminal or cmd and run the following command: tensorboard --logdir="logpath"
  • Then open http://0.0.0.0:6006/ in your web browser
  • You can go to the following link: https://github.com/jayshah19949596/Tensorboard-Visualization-Freezing-Graph
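Taken together, the steps in this answer can be sketched end to end. This is a minimal sketch, not the asker's setup: it is written against the tf.compat.v1 API (so it also runs under TensorFlow 2), and the scalar name, values, and log directory are illustrative:

```python
import os
import tempfile

import tensorflow as tf

# Graph-mode (TF1-style) summaries; this call is needed when running under TF2.
tf.compat.v1.disable_eager_execution()

x = tf.compat.v1.placeholder(dtype=tf.float32, name='x')
tf.compat.v1.summary.scalar('loss', x)        # step 1: register the scalar
merged = tf.compat.v1.summary.merge_all()     # step 2: merge all summaries into one op

logdir = tempfile.mkdtemp()                   # illustrative; use your own logs_path
with tf.compat.v1.Session() as sess:
    writer = tf.compat.v1.summary.FileWriter(logdir, sess.graph)
    for epoch in range(5):
        # step 3: evaluate the merged summary op with a (dummy) loss value
        summary = sess.run(merged, feed_dict={x: 1.0 / (epoch + 1)})
        # step 4: write it, tagged with the epoch number
        writer.add_summary(summary, epoch)
    writer.close()
```

After this runs, `tensorboard --logdir=<logdir>` shows the 'loss' curve.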


+3

If you use Keras, give each run its own log directory; TensorBoard then overlays all the runs on one chart.

from keras.callbacks import TensorBoard

for i in range(x):
    # one log directory per run -> one curve per run in TensorBoard
    tensorboard = TensorBoard(log_dir='./logs/' + 'run' + str(i), histogram_freq=0,
                              write_graph=True, write_images=False)

    model.fit(X, Y, epochs=150, batch_size=10, callbacks=[tensorboard])

Then start TensorBoard with:

tensorboard --logdir=logs
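TensorBoard treats each subdirectory of --logdir as a separate run and draws one curve per run, which is why the loop above gives every run its own log_dir. A stdlib-only sketch of the resulting layout (the directory names are illustrative; the Keras callback creates them for you):

```python
import os
import tempfile

# TensorBoard shows one curve per subdirectory of --logdir,
# so each training run writes its events into its own folder.
logdir = tempfile.mkdtemp()
run_dirs = [os.path.join(logdir, 'run' + str(i)) for i in range(3)]
for run_dir in run_dirs:
    os.makedirs(run_dir, exist_ok=True)

print(sorted(os.listdir(logdir)))  # ['run0', 'run1', 'run2']
```

Pointing `tensorboard --logdir` at the parent directory then lists run0, run1, and run2 as separate runs on the same chart.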
+2

If you use SummaryWriter from tensorboardX or pytorch (>= 1.2), you can plot several values on one chart with add_scalars:

For example:

my_summary_writer.add_scalars(f'loss/check_info', {
    'score': score[iteration],
    'score_nf': score_nf[iteration],
}, iteration)

which gives both curves on one chart:

[TensorBoard screenshot: 'score' and 'score_nf' drawn on a single plot]


Be aware that add_scalars will clutter the list of runs: it adds a separate run entry for every key (here score and score_nf), so the list grows quickly:

[TensorBoard screenshot: runs list cluttered with one extra entry per key]

If you want to keep the runs list clean, use add_scalar with a shared tag prefix instead:

my_summary_writer.add_scalar(f'check_info/score',    score[iter],    iter)
my_summary_writer.add_scalar(f'check_info/score_nf', score_nf[iter], iter)
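A self-contained version of the add_scalar approach above; this is a sketch that assumes torch is installed, and the log directory and score values are illustrative:

```python
import tempfile

from torch.utils.tensorboard import SummaryWriter

logdir = tempfile.mkdtemp()          # illustrative log directory
writer = SummaryWriter(log_dir=logdir)

score = [0.1, 0.4, 0.7]              # dummy values
score_nf = [0.2, 0.5, 0.6]
for i in range(len(score)):
    # the shared 'check_info/' prefix groups both charts in one section,
    # without adding extra entries to the runs list
    writer.add_scalar('check_info/score', score[i], i)
    writer.add_scalar('check_info/score_nf', score_nf[i], i)
writer.close()
```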
+1

To show several curves on one chart, create one tf.summary.FileWriter per curve, each with its own log directory. Register a single tf.summary.scalar, evaluate the resulting summary once per curve, and write each result with the corresponding tf.summary.FileWriter.

import os

import tqdm
import tensorflow as tf


def tb_test():
    sess = tf.Session()

    x = tf.placeholder(dtype=tf.float32)
    summary = tf.summary.scalar('Values', x)
    merged = tf.summary.merge_all()

    sess.run(tf.global_variables_initializer())

    writer_1 = tf.summary.FileWriter(os.path.join('tb_summary', 'train'))
    writer_2 = tf.summary.FileWriter(os.path.join('tb_summary', 'eval'))

    for i in tqdm.tqdm(range(200)):
        # train
        summary_1 = sess.run(merged, feed_dict={x: i-10})
        writer_1.add_summary(summary_1, i)
        # eval
        summary_2 = sess.run(merged, feed_dict={x: i+10})            
        writer_2.add_summary(summary_2, i)

    writer_1.close()
    writer_2.close()


if __name__ == '__main__':
    tb_test()

Here is the result:

[TensorBoard screenshot: train and eval curves overlaid on one chart]

The orange line shows the evaluation phase, and the green line illustrates the training phase.

0

Source: https://habr.com/ru/post/1694054/

