FIFOQueue Tensorflow Error: FIFOQueue is closed and has insufficient elements

I am using TensorFlow to write a program that tests a model, and I use a FIFOQueue as the input queue. For example, I have 50,000 images and dequeue 100 images at a time. The program works fine except for the final iteration, where it fails with this error:

E tensorflow/core/client/tensor_c_api.cc:485] FIFOQueue '_0_path_queue' is closed and has insufficient elements (requested 1, current size 0)
[[Node: path_queue_Dequeue = QueueDequeue[_class=["loc:@path_queue"], component_types=[DT_INT32, DT_BOOL, DT_STRING], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"]]]

I think this happens because it is trying to dequeue images 50,001 through 50,100, which do not exist. However, I do not need those extra images and will not use them. How can I avoid this error?

Another question: if I use dequeue_many(100) and the total number of images is not divisible by 100 (say, 45,678), TensorFlow raises an error. How can I solve this?
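For context, here is a minimal sketch of the kind of pipeline described; the placeholder paths, queue setup, and batch size are illustrative assumptions, not the actual code:

import tensorflow as tf

# 50,000 placeholder image paths fed through a filename queue for one epoch.
paths = ['img_%05d.png' % i for i in range(50000)]
path_queue = tf.train.string_input_producer(paths, num_epochs=1, shuffle=False)
path = path_queue.dequeue()
# ...read and decode the image here, then batch 100 at a time:
image_batch = tf.train.batch([path], batch_size=100)
# Once every path has been consumed the queue is closed, and any further
# dequeue raises "FIFOQueue ... is closed and has insufficient elements".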

Thanks.

+5
4 answers

Try dequeue_up_to instead of dequeue_many: https://www.tensorflow.org/versions/r0.10/api_docs/python/io_ops.html
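For example, a minimal sketch (the queue below is an assumed stand-in for the question's path_queue, not code from the question):

import tensorflow as tf

path_queue = tf.FIFOQueue(capacity=1000, dtypes=[tf.string], name='path_queue')
# dequeue_many(100) raises OutOfRangeError once the closed queue holds fewer
# than 100 elements; dequeue_up_to(100) instead returns whatever is left
# (e.g. the final 78 elements when the total is 45,678).
batch = path_queue.dequeue_up_to(100)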

Hope this helps!

+2

You can catch the specific error that is raised when all the examples are exhausted and end training gracefully:

try:
    while True:
        sess.run(train_op)  # Run your training Ops here...

except tf.errors.OutOfRangeError:
    print('Done training -- epoch limit reached')
+1

I ran into the same error when reading .png files. My input pipeline was originally built like this:

input = tensorflow.train.string_input_producer(tensorflow.train.match_filenames_once("/input/*.png"))

Replacing match_filenames_once with an explicit glob.glob list fixed the error for me:

filename_im = tensorflow.train.string_input_producer(glob.glob('/input/*.png'))
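A small sketch of the same idea (the path and variable names are illustrative): check the list before handing it to string_input_producer, since an empty list leaves the filename queue closed with no elements, which produces exactly this error.

import glob
import tensorflow as tf

files = glob.glob('/input/*.png')
# An empty list would leave the filename queue closed with no elements,
# so every dequeue fails with "is closed and has insufficient elements".
assert files, 'no .png files matched /input/*.png'
filename_queue = tf.train.string_input_producer(files)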

+1


Source: https://habr.com/ru/post/1651588/
