Batch_size in TensorFlow? Understanding the concept

My question is short and simple: what determines the batch size during training and prediction of a neural network? And how can I visualize it to get a clear picture of how data is fed to the network?

Suppose I have an autoencoder:

encoder = tflearn.input_data(shape=[None, 41])
encoder = tflearn.fully_connected(encoder, 41, activation='relu')

and my input is a csv file with 41 features per row. As I understand it, when the batch size is 1, each row is read from the csv file and its 41 values are passed to the 41 neurons of the first layer.

But when I increase the batch size to 100, how are the 41 features of each of the 100 samples in a batch fed through the network?

model.fit(test_set, test_labels_set, n_epoch=1, validation_set=(valid_set, valid_labels_set),
          run_id="auto_encoder", batch_size=100, show_metric=True, snapshot_epoch=False)
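As a sketch of what batch_size=100 means for the data layout, here is how a dataset would be cut into batches (plain NumPy; the 250-row array is a hypothetical stand-in for the rows read from the csv file):

```python
import numpy as np

# Hypothetical dataset: 250 samples, 41 features each
# (a stand-in for the rows of the csv file).
data = np.random.rand(250, 41)

batch_size = 100
batches = [data[i:i + batch_size] for i in range(0, len(data), batch_size)]

# Each batch is a 2-D tensor of shape (batch_size, 41);
# the last batch may be smaller if the dataset size is not
# a multiple of batch_size.
for b in batches:
    print(b.shape)
# (100, 41), (100, 41), (50, 41)
```

So with batch_size=100 the network never sees "41 features out of 100 batches"; it sees one (100, 41) tensor at a time, i.e. 100 rows of 41 features processed together.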

Will the batch, or some operation over it, be normalized?

The number of epochs is the same in both cases.


The first dimension of the input shape (None) is the batch size; the second (41) is the number of features per sample.

None means the batch dimension is left unspecified, so the same network accepts 100 samples at once (one batch) just as it accepts a single sample (or any other number).

Hope this helps ;)
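As a sketch of how one batch flows through the first layer (plain NumPy; `W` and `b` are hypothetical stand-ins for the layer's learned parameters):

```python
import numpy as np

# What fully_connected(encoder, 41, activation='relu') does to a batch:
# the same 41x41 weight matrix and bias are applied to every sample.
W = np.random.rand(41, 41)
b = np.random.rand(41)

batch = np.random.rand(100, 41)      # one batch, batch_size = 100
out = np.maximum(batch @ W + b, 0)   # relu(x W + b), applied row-wise

print(out.shape)  # (100, 41): one 41-dim activation per sample
```

Note that nothing here is normalized automatically: the samples in a batch are simply processed in parallel with shared weights. If you want normalization, you add it yourself (e.g. scale your csv features, or insert a normalization layer).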

Thanks, that makes it clear!


Source: https://habr.com/ru/post/1673116/
