I have a program for training an artificial neural network, and it needs a 2-dimensional matrix as training data. The dataset I want to use is about 300,000 x 400 floats. I can't chunk it, because the library I'm using (DeepLearningTutorials) expects the training data as a single numpy array.
The code raises a MemoryError when the process is using about 1.6 GB of RAM (I checked this in the system monitor), even though the machine has 8 GB of RAM in total. The system is 32-bit Ubuntu 12.04.
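A minimal sketch of the allocation that fails is below; np.zeros here just stands in for however my real code builds the array before handing it to DeepLearningTutorials, the shape and dtype are the ones I actually use:

    import numpy as np

    # 300,000 x 400 float64 values: 300000 * 400 * 8 bytes, roughly 0.96 GB.
    # On the 32-bit machine this raises MemoryError; on the 64-bit one it works.
    data = np.zeros((300000, 400), dtype=np.float64)
    print(data.nbytes / 1e9, "GB")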
I have looked at the answers to other similar questions, but some of them say there is nothing better than simply allocating more memory to the Python program, and others don't make it clear how to actually increase the memory available to the process.
Interestingly, when I run the same code on another machine, it easily handles an array of almost 150,000 x 400. The basic configurations are similar, except that the other machine is 64-bit and this one is 32-bit.
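For reference, this is how I am checking the bitness of the interpreter on both machines (assuming standard CPython; struct.calcsize("P") gives the pointer size in bytes):

    import struct
    import sys

    # Prints 32 on the failing machine and 64 on the other one.
    print(struct.calcsize("P") * 8, "bit interpreter")
    print(sys.maxsize)  # about 2**31 on 32-bit Python, about 2**63 on 64-bit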
Can someone give a theoretical explanation of why there is such a big difference between the two machines, and whether the 32-bit system is the only reason for my problem?