Error using data augmentation options in the Object Detection API

I am trying to use data_augmentation_options in the .config file to train a network (ssd_mobilenet_v1 in particular), but when I enable the random_adjust_brightness option, I very quickly get the error message pasted below (I enable the option after step 110000).
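For reference, this is roughly how the option is enabled in the pipeline .config (a sketch with the field names from the Object Detection API's preprocessor.proto, not my exact file):

```
train_config {
  # ... optimizer, batch size, etc. ...
  data_augmentation_options {
    random_adjust_brightness {
      max_delta: 0.2
    }
  }
}
```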

I tried to decrease the default value:

```proto
optional float max_delta = 1 [default = 0.2];
```

But the result was the same.

Any idea why? The images are RGB PNG files (from the Bosch Small Traffic Lights Dataset).

```
INFO:tensorflow:global step 110011: loss = 22.7990 (0.357 sec/step)
INFO:tensorflow:global step 110012: loss = 47.8811 (0.401 sec/step)
2017-11-16 11:02:29.114785: W tensorflow/core/framework/op_kernel.cc:1192] Invalid argument: LossTensor is inf or nan. : Tensor had NaN values
	 [[Node: CheckNumerics = CheckNumerics[T=DT_FLOAT, message="LossTensor is inf or nan.", _device="/job:localhost/replica:0/task:0/device:CPU:0"](total_loss)]]
2017-11-16 11:02:29.114895: W tensorflow/core/framework/op_kernel.cc:1192] Invalid argument: LossTensor is inf or nan. : Tensor had NaN values
	 [[Node: CheckNumerics = CheckNumerics[T=DT_FLOAT, message="LossTensor is inf or nan.", _device="/job:localhost/replica:0/task:0/device:CPU:0"](total_loss)]]
... (the same warning repeats)
```

Edit: here is the workaround I found. The inf/nan appears in the loss, so I checked the function in object_detection/core/preprocessor.py that performs the brightness randomization:

```python
def random_adjust_brightness(image, max_delta=0.2):
  """Randomly adjusts brightness.

  Makes sure the output image is still between 0 and 1.

  Args:
    image: rank 3 float32 tensor contains 1 image -> [height, width, channels]
           with pixel values varying between [0, 1].
    max_delta: how much to change the brightness. A value between [0, 1).

  Returns:
    image: image which is the same shape as input image.
    boxes: boxes which is the same shape as input boxes.
  """
  with tf.name_scope('RandomAdjustBrightness', values=[image]):
    image = tf.image.random_brightness(image, max_delta)
    image = tf.clip_by_value(image, clip_value_min=0.0, clip_value_max=1.0)
    return image
```

The function assumes that pixel values lie between 0.0 and 1.0. Is it possible that the images actually arrive in a different range, not starting at 0 or extending past 1? In that case the clipping would distort them and lead to the failure. In short: I commented out the clipping line, and it works (we will see the results).
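To illustrate the suspicion above (my own sketch, not code from the API): if the image tensor actually holds values in [0, 255] rather than the assumed [0, 1], clipping to [0, 1] saturates almost every pixel, so the augmented image the network sees is nearly all white.

```python
def clip_pixels(pixels, lo=0.0, hi=1.0):
    """Clamp each pixel value into [lo, hi], like tf.clip_by_value does."""
    return [min(max(p, lo), hi) for p in pixels]

# A row of pixels from a [0, 255]-range image, fed through [0, 1] clipping.
pixels = [0.0, 10.0, 64.0, 128.0, 255.0]
print(clip_pixels(pixels))  # -> [0.0, 1.0, 1.0, 1.0, 1.0]
```

Everything above 1.0 collapses to 1.0, which is consistent with the loss blowing up once the augmentation kicks in.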

3 answers

Getting `LossTensor is inf or nan. : Tensor had NaN values` is often due to an error in the bounding boxes/annotations (source: https://github.com/tensorflow/models/issues/1881).

I know that the Bosch Small Traffic Lights dataset contains some annotations that extend beyond the image size. For example, the image height in this dataset is 720 pixels, but some bounding boxes have a y coordinate greater than 720. This is common: whenever the car recording a sequence passes close to a traffic light, only part of the light is visible, and some boxes are cut off.

I know this is not an exact answer to your question, but I hope it points at a possible cause of your problem. Removing the annotations that extend beyond the image size may help; however, I am dealing with the same problem even though I do not use any image preprocessing. On the same dataset I hit the `LossTensor is inf or nan. : Tensor had NaN values` error every ~8000 steps.
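A minimal sketch of the cleanup suggested above (my own code, not from the dataset tools): clamp each box to the image borders before generating TFRecords. The 1280x720 dimensions match the Bosch dataset; adjust for other data.

```python
IMG_WIDTH, IMG_HEIGHT = 1280, 720  # Bosch Small Traffic Lights image size

def clamp_box(box):
    """Clamp a box dict with x_min/x_max/y_min/y_max keys to the image."""
    return {
        'x_min': max(0.0, min(box['x_min'], IMG_WIDTH)),
        'x_max': max(0.0, min(box['x_max'], IMG_WIDTH)),
        'y_min': max(0.0, min(box['y_min'], IMG_HEIGHT)),
        'y_max': max(0.0, min(box['y_max'], IMG_HEIGHT)),
    }

# A traffic light cut off at the bottom of the frame: y_max exceeds 720.
box = {'x_min': 600.0, 'x_max': 610.0, 'y_min': 700.0, 'y_max': 735.0}
print(clamp_box(box))  # y_max is clamped to 720.0
```

Whether clamping or dropping such boxes is better depends on how much of the object survives the crop; either way the coordinates stay inside the image.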


In addition to annotations that extend beyond the image size, the Bosch traffic light training dataset also has at least one image where x_max < x_min and y_max < y_min, which produces a negative width and height. This also causes the `LossTensor is inf or nan. : Tensor had NaN values` error roughly every 8000 steps. I had the same error; fixing the problematic entries resolved the issue.
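A hypothetical helper for that case (mine, not from the dataset): for an annotation with x_max < x_min or y_max < y_min, sorting each coordinate pair restores a positive width and height instead of discarding the box entirely.

```python
def fix_inverted(box):
    """Return a copy of the box with min/max coordinates in the right order."""
    x_min, x_max = sorted((box['x_min'], box['x_max']))
    y_min, y_max = sorted((box['y_min'], box['y_max']))
    return {'x_min': x_min, 'x_max': x_max, 'y_min': y_min, 'y_max': y_max}

# An inverted annotation with negative width (-20) and height (-10).
bad = {'x_min': 540.0, 'x_max': 520.0, 'y_min': 310.0, 'y_max': 300.0}
print(fix_inverted(bad))  # width and height are positive again
```

This assumes the coordinates were merely swapped; if the values themselves are wrong, dropping the entry is the safer fix.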


I also ran into this and ended up writing a quick and dirty script to find the bad eggs. I don't know whether the dataset changes over time, but the copy I downloaded had three badly annotated images:
```
./rgb/train/2015-10-05-11-26-32_bag/105870.png
./rgb/train/2015-10-05-11-26-32_bag/108372.png
./rgb/train/2015-10-05-14-40-46_bag/462350.png
```

And for those interested, here's my script:

```python
import yaml

INPUT_YAML = "train.yaml"

examples = yaml.load(open(INPUT_YAML, 'rb').read())
len_examples = len(examples)
print("Loaded", len_examples, "examples")

for example in examples:
    for box in example['boxes']:
        xmin = float(box['x_min'])
        xmax = float(box['x_max'])
        ymin = float(box['y_min'])
        ymax = float(box['y_max'])
        if xmax < xmin or xmax > 1280 or xmin > 1280:
            print("INVALID IMAGE:", example['path'], "X_MAX =", xmax)
        if ymax < ymin or ymax > 720 or ymin > 720:
            print("INVALID IMAGE:", example['path'], "Y_MAX =", ymax)
```
