I concatenate the data into a numpy array as follows:
xdata_test = np.concatenate((xdata_test,additional_X))
This is done a thousand times. Arrays have dtype float32, and their sizes are shown below:
xdata_test.shape : (x1,40,24,24) (x1 : [500~10500])
additional_X.shape : (x2,40,24,24) (x2 : [0 ~ 500])
The problem is that when x1 exceeds roughly 2000-3000, concatenation takes much longer.
(A graph, omitted here, showed the concatenation time as a function of size x2.)

Is this a memory issue, or is it inherent to how numpy concatenation works?
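
A likely explanation: `np.concatenate` allocates a fresh buffer and copies both inputs into it every call, so growing an array in a loop costs time proportional to everything accumulated so far (quadratic overall). A common workaround, sketched below with hypothetical chunk sizes mirroring the shapes in the question, is to collect the chunks in a Python list and concatenate once at the end:

```python
import numpy as np

# Hypothetical chunks shaped like additional_X in the question:
# 5 chunks of 200 samples each, dtype float32.
chunks = [np.full((200, 40, 24, 24), i, dtype=np.float32) for i in range(5)]

# Slow pattern: every concatenate re-copies the whole accumulated array.
acc = chunks[0]
for c in chunks[1:]:
    acc = np.concatenate((acc, c))

# Faster pattern: keep references in a list, copy once at the end.
result = np.concatenate(chunks)

assert result.shape == (1000, 40, 24, 24)
assert np.array_equal(acc, result)
```

Both patterns produce the same array; the second performs a single allocation and one copy per element instead of recopying the prefix on every iteration.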