How to log the learning rate value to TensorBoard in Keras

I use Keras and want to implement a custom learning rate schedule through keras.callbacks.LearningRateScheduler.

How can I get the learning rate during training so that I can monitor it in TensorBoard? ( keras.callbacks.TensorBoard )

I currently have:

    from keras.callbacks import LearningRateScheduler, TensorBoard

    lrate = LearningRateScheduler(lambda epoch: initial_lr * 0.95 ** epoch)

    tensorboard = TensorBoard(log_dir=LOGDIR,
                              histogram_freq=1,
                              batch_size=batch_size,
                              embeddings_freq=1,
                              embeddings_layer_names=embedding_layer_names)

    model.fit_generator(train_generator,
                        steps_per_epoch=n_steps,
                        epochs=n_epochs,
                        validation_data=(val_x, val_y),
                        callbacks=[lrate, tensorboard])
1 answer

I'm not sure how to pass it to TensorBoard, but you can track it from Python.

    from keras.callbacks import Callback

    class LossHistory(Callback):
        def on_train_begin(self, logs=None):
            self.losses = []
            self.lr = []

        def on_epoch_end(self, epoch, logs=None):
            # record the loss and the learning rate used for this epoch;
            # initial_lr is the same variable the scheduler uses
            self.losses.append(logs.get('loss'))
            self.lr.append(initial_lr * 0.95 ** epoch)

    loss_hist = LossHistory()

Then just add loss_hist to your callbacks.
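A minimal usage sketch (reusing the model, generator, and initial_lr names from the question; the print loop at the end is only an illustration of how to read the recorded values back):

    model.fit_generator(train_generator,
                        steps_per_epoch=n_steps,
                        epochs=n_epochs,
                        validation_data=(val_x, val_y),
                        callbacks=[lrate, tensorboard, loss_hist])

    # after training, loss_hist.lr holds one learning-rate value per epoch
    for epoch, (loss, lr) in enumerate(zip(loss_hist.losses, loss_hist.lr)):
        print('epoch %d: loss=%.4f lr=%.6f' % (epoch, loss, lr))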

Update:

Based on this answer:

    import tensorflow as tf
    from keras.callbacks import TensorBoard

    class LRTensorBoard(TensorBoard):
        def __init__(self, log_dir='./logs', **kwargs):
            super(LRTensorBoard, self).__init__(log_dir, **kwargs)
            self.lr_log_dir = log_dir

        def set_model(self, model):
            # a dedicated writer for the learning-rate summaries
            self.lr_writer = tf.summary.FileWriter(self.lr_log_dir)
            super(LRTensorBoard, self).set_model(model)

        def on_epoch_end(self, epoch, logs=None):
            # recompute the scheduled learning rate and log it as a scalar
            lr = initial_lr * 0.95 ** epoch
            summary = tf.Summary(value=[tf.Summary.Value(tag='lr',
                                                         simple_value=lr)])
            self.lr_writer.add_summary(summary, epoch)
            self.lr_writer.flush()
            super(LRTensorBoard, self).on_epoch_end(epoch, logs)

        def on_train_end(self, logs=None):
            super(LRTensorBoard, self).on_train_end(logs)
            self.lr_writer.close()

Just use it like a regular TensorBoard callback.
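For example, a sketch with the same names as in the question, where LRTensorBoard simply replaces the plain TensorBoard callback in the callbacks list:

    lrate = LearningRateScheduler(lambda epoch: initial_lr * 0.95 ** epoch)
    lr_tensorboard = LRTensorBoard(log_dir=LOGDIR,
                                   histogram_freq=1,
                                   batch_size=batch_size)

    model.fit_generator(train_generator,
                        steps_per_epoch=n_steps,
                        epochs=n_epochs,
                        validation_data=(val_x, val_y),
                        callbacks=[lrate, lr_tensorboard])

The 'lr' scalar then shows up in TensorBoard's Scalars tab alongside the usual training metrics.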


Source: https://habr.com/ru/post/1271934/
