How do I get the labels from a minibatch?

I am working on this tutorial:

https://github.com/Microsoft/CNTK/blob/master/Tutorials/CNTK_201B_CIFAR-10_ImageHandsOn.ipynb

The test / train data files are simple section-delimited text files containing the names of the image files and the correct labels, such as:

...\data\CIFAR-10\test\00000.png    3
...\data\CIFAR-10\test\00001.png    8
...\data\CIFAR-10\test\00002.png    8
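
A map file in this format can be generated with a few lines of plain Python. A minimal sketch, assuming a hypothetical image directory and a small list of labels (adjust both to your own CIFAR-10 layout):

```python
import os

# Hypothetical locations and labels for illustration only.
image_dir = "data/CIFAR-10/test"
labels = [3, 8, 8]  # one label per image, in file order

with open("test_map.txt", "w") as f:
    for i, label in enumerate(labels):
        # ImageDeserializer map files are tab-separated: <image path>\t<label>
        f.write("{}\t{}\n".format(os.path.join(image_dir, "%05d.png" % i), label))
```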

How can I extract the source labels from a minibatch?

I tried with this code:

from cntk.io import MinibatchSource, ImageDeserializer, StreamDef, StreamDefs

reader_test = MinibatchSource(ImageDeserializer('test_map.txt', StreamDefs(
    features = StreamDef(field='image', transforms=transforms), # first column in map file is referred to as 'image'
    labels   = StreamDef(field='label', shape=num_classes)      # and second as 'label'
)))

test_minibatch = reader_test.next_minibatch(10)
labels_stream_info = reader_test['labels']
orig_label = test_minibatch[labels_stream_info].value
print(orig_label)

<cntk.cntk_py.Value; proxy of <Swig Object of type 'CNTK::ValuePtr *' at 0x0000000007A32C00> >

But, as you see above, the results are not an array with labels.

What is the correct code to access the labels?

The following code works, but it uses a different file format (CNTKTextFormat), not ImageDeserializer.

File format:

|labels 0 0 1 0 0 0 |features 0
|labels 1 0 0 0 0 0 |features 457
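
A one-hot line in this text format can be produced from an integer label with plain Python. A minimal sketch (the `num_classes` value of 6 is assumed only to match the sample lines above):

```python
def ctf_line(label, feature, num_classes=6):
    # Build a one-hot vector for the label and emit one text-format line,
    # e.g. "|labels 0 0 1 0 0 0 |features 0"
    one_hot = ["1" if i == label else "0" for i in range(num_classes)]
    return "|labels {} |features {}".format(" ".join(one_hot), feature)

print(ctf_line(2, 0))    # |labels 0 0 1 0 0 0 |features 0
print(ctf_line(0, 457))  # |labels 1 0 0 0 0 0 |features 457
```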

Working code:

mb_source = text_format_minibatch_source('test_map2.txt', [
    StreamConfiguration('features', 1),
    StreamConfiguration('labels', num_classes)])

test_minibatch = mb_source.next_minibatch(2)

labels_stream_info = mb_source['labels']
orig_label = test_minibatch[labels_stream_info].value
print(orig_label)

[[[ 0.  0.  1.  0.  0.  0.]]
 [[ 1.  0.  0.  0.  0.  0.]]]
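
Once the one-hot array is available, the integer class labels can be recovered with numpy. A small sketch using the array printed above:

```python
import numpy as np

# The one-hot array returned by test_minibatch[labels_stream_info].value,
# as printed above.
orig_label = np.array([[[0., 0., 1., 0., 0., 0.]],
                       [[1., 0., 0., 0., 0., 0.]]])

# Collapse the one-hot encoding back to integer class indices.
class_indices = np.argmax(orig_label, axis=-1).ravel()
print(class_indices)  # [2 0]
```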

How can I get the labels when using ImageDeserializer?

2 answers

Can you try:

orig_label = test_minibatch[labels_stream_info].value

The labels should be accessible as a numpy array via `.value`. You can check by adding the following prints to the `train_and_evaluate` function from CNTK_201B:

for epoch in range(max_epochs):       # loop over epochs
    sample_count = 0
    while sample_count < epoch_size:  # loop over minibatches in the epoch
        data = reader_train.next_minibatch(min(minibatch_size, epoch_size - sample_count), input_map=input_map) # fetch minibatch.
        print("Features:")
        print(data[input_var].shape)
        print(data[input_var].value.shape)
        print("Labels:")
        print(data[label_var].shape)
        print(data[label_var].value.shape)

This prints:

Training 116906 parameters in 10 parameter tensors.
Features:
(64, 1, 3, 32, 32)
(64, 1, 3, 32, 32)
Labels:
(64, 1, 10)
()

As the empty shape `()` shows, the labels' `.value` does not come back as a `numpy.ndarray` with the expected shape here.


Source: https://habr.com/ru/post/1665845/