TensorFlow-Slim: in-memory dataset provider

I just started using Slim and really like it. I built an MNIST test using slim.dataset_data_provider, but found it much slower than feeding native tensors when all the data fits in memory.

I assume this is because the Slim provider streams data from the hard disk? I am wondering if there is an example of using a data provider to access data that is already in memory.
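For comparison, this is roughly the in-memory pipeline I mean (a minimal sketch; the random arrays just stand in for real MNIST data already loaded into RAM):

import numpy as np
import tensorflow as tf

# stand-ins for data already held in memory (random values, sketch only)
mnist_images = np.random.randint(0, 256, size=(55000, 28, 28, 1), dtype=np.uint8)
mnist_labels = np.random.randint(0, 10, size=(55000,), dtype=np.int64)

all_images = tf.constant(mnist_images)
all_labels = tf.constant(mnist_labels)

# slice_input_producer yields one example at a time straight from the
# in-memory tensors (no disk reads), shuffling the order
image, label = tf.train.slice_input_producer([all_images, all_labels], shuffle=True)
images, labels = tf.train.batch([image, label], batch_size=64,
                                num_threads=4, capacity=128)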

I set num_readers = 10 in DatasetDataProvider and num_threads = 10 in tf.train.batch.

Thanks. This is a great tool.

I have also attached my code for reference:

import sys
import time

import tensorflow as tf
from tensorflow.contrib import slim

# needs the tensorflow/models repo on the path for the slim MNIST dataset
sys.path.append('/home/user/projects/tf_models/slim')
from datasets import mnist

# placeholder values -- set these to whatever you use
data_dir = '/tmp/mnist'      # directory holding the MNIST TFRecords
log_dir = '/tmp/mnist_log'   # where checkpoints and summaries go
batch_size = 64
learning_rate = 0.1
n_steps = 10000

g = tf.Graph()
with g.as_default():
    tf.logging.set_verbosity(tf.logging.DEBUG)
    train_set = mnist.get_split('train', data_dir)
    provider = slim.dataset_data_provider.DatasetDataProvider(
        train_set, num_readers=10, shuffle=True)
    image, label = provider.get(['image', 'label'])
    images, _ = tf.train.batch([image, label], batch_size=batch_size,
                               num_threads=10, capacity=2 * batch_size)
    images = tf.cast(images, tf.float32) / 255

    # inference_ae is my autoencoder (defined elsewhere); it returns the
    # flattened reconstruction and the model end-points
    recon, model = inference_ae(images, 0.5)

    sh = images.get_shape().as_list()
    loss = tf.contrib.losses.log_loss(recon, tf.reshape(images, [sh[0], -1]))

    tf.summary.scalar('loss', loss)
    optimizer = tf.train.AdadeltaOptimizer(learning_rate)
    train_op = slim.learning.create_train_op(loss, optimizer)

    final_loss_value = slim.learning.train(train_op, log_dir,
                                           number_of_steps=n_steps,
                                           log_every_n_steps=10,
                                           save_summaries_secs=300,
                                           save_interval_secs=600)
    print("Final loss value: {}".format(final_loss_value))