As a workaround, you can keep a counter that tracks how many items are in the queue. Here's how to define such a counter:
queue_size = tf.get_variable("queue_size", initializer=0,
                             trainable=False, use_resource=True)
Then, while preprocessing the data (for example, in the function passed to dataset.map), we can increment this counter:
def pre_processing():
    data_size = ...  # number of items this call adds to the queue
    queue_size_op = tf.assign_add(queue_size, data_size)
    with tf.control_dependencies([queue_size_op]):
        # the actual preprocessing goes here; the control dependency
        # guarantees the counter is updated before the result is produced
        return ...
Then we can decrement the counter every time we run the model on a batch of data:
def model():
    queue_size_op = tf.assign_add(queue_size, -batch_size)
    with tf.control_dependencies([queue_size_op]):
        # the actual model goes here; again, the control dependency
        # ensures the counter is updated before the output is computed
        return ...
Finally, whenever we want to know how many items are currently in the queue, we can simply read the value of queue_size, e.g.:
current_queue_size = session.run(queue_size)
(As far as I know, the Dataset API does not expose this information directly, hence the workaround.)
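For completeness, here is a minimal end-to-end sketch of the idea, assuming TensorFlow 1.x; the range dataset, BATCH_SIZE, and the tf.identity/tf.reduce_sum stand-ins for real preprocessing and a real model are illustrative assumptions, not part of the original code:

import tensorflow as tf

BATCH_SIZE = 4  # illustrative batch size

queue_size = tf.get_variable("queue_size", initializer=0,
                             trainable=False, use_resource=True)

def pre_processing(x):
    # each element passing through map adds one item to the queue
    queue_size_op = tf.assign_add(queue_size, 1)
    with tf.control_dependencies([queue_size_op]):
        return tf.identity(x)  # stand-in for real preprocessing

dataset = tf.data.Dataset.range(100)
dataset = dataset.map(pre_processing)
dataset = dataset.batch(BATCH_SIZE).prefetch(2)

# one-shot iterators cannot capture stateful objects such as our
# counter variable, so an initializable iterator is used instead
iterator = dataset.make_initializable_iterator()
batch = iterator.get_next()

def model(inputs):
    # consuming a batch removes BATCH_SIZE items from the queue
    queue_size_op = tf.assign_add(queue_size, -BATCH_SIZE)
    with tf.control_dependencies([queue_size_op]):
        return tf.reduce_sum(inputs)  # stand-in for the real model

output = model(batch)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(iterator.initializer)
    for _ in range(3):
        sess.run(output)
        print("items currently in the queue:", sess.run(queue_size))

Because prefetch keeps producing elements ahead of consumption, the printed value should stay above zero while the pipeline is running; it is an approximation of how many preprocessed items are buffered at that moment.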