How can I increase the maximum array size and get around out-of-memory errors in Matlab 2009b?

I am using Matlab 2009b and am running into an out-of-memory error. I have read other published solutions, but they did not help me. I am fairly sure I am doing everything right, but I have to work with a very large number of arrays. I think the problem is that Matlab cannot place an array across several OS memory blocks. I am using Windows 7. Is there any way to get around this problem? For example, is it possible to increase the amount of memory Matlab can use for an array on Windows 7?

System: Windows 7
Matlab: 2009b

+4
3 answers

If the largest available block (as shown by the memory command) is much smaller than the maximum amount of memory available to Matlab, restarting Matlab (or the system) can help.
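
A minimal sketch of that check, assuming a Windows installation where the memory function is available (the field names are those returned when memory is called with an output argument):

 user = memory;                 % Windows-only; returns a struct of memory statistics
 user.MaxPossibleArrayBytes     % largest contiguous block Matlab can currently allocate
 user.MemAvailableAllArrays     % total memory available for all arrays
 % If MaxPossibleArrayBytes is much smaller than MemAvailableAllArrays,
 % the address space is fragmented and a restart may help.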

Otherwise, you need to either rewrite the code or buy more RAM (and/or use the 64-bit version of Win7).

I suggest you try rewriting the code first; memory problems can often be solved that way.

EDIT

From your comment on @Richie Cotton's post, I see that you want to classify a huge amount of data. If there is a small number of classes, none of which is very sparse, you can work around the problem by running kmeans on, say, 10 randomly selected subsets of, say, 30% of your data. This should find the cluster centers just fine. To associate the rest of your data with the clusters, all you have to do is calculate, for each data point, the distance to the cluster centers and assign it to the nearest one.
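
A minimal sketch of that approach, assuming the Statistics Toolbox kmeans function, a data matrix X (one observation per row), and a made-up number of classes k; for brevity it uses a single random 30% subset rather than ten:

 k = 5;                                     % assumed number of classes
 N = size(X, 1);
 subset = randperm(N);
 subset = subset(1:round(0.3 * N));         % random ~30% of the rows
 [junk, C] = kmeans(X(subset, :), k);       % C holds the k cluster centers
 % Assign every point to its nearest center, one row at a time,
 % so no large temporary distance matrix is created:
 labels = zeros(N, 1);
 for n = 1:N
     d = sum(bsxfun(@minus, C, X(n, :)).^2, 2);   % squared distances to the k centers
     [junk, labels(n)] = min(d);
 end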

+3

If you think your array is not big enough to warrant such an error, your previous operations may have fragmented the available memory. MATLAB requires contiguous blocks of memory, so fragmentation can lead to such errors.

So, just before the point in the code where the memory error occurs, try running the pack command. That is all I can think of beyond the usual fixes.
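
A minimal sketch of that idea (the variable names and sizes are made up for illustration; pack is intended to be run from the command line or a script rather than inside a function):

 clear tempResults           % hypothetical variable: release anything no longer needed
 pack                        % save the workspace to disk and reload it into contiguous memory
 memory                      % the largest available block should now be larger
 bigArray = zeros(5000);     % the allocation that previously failed (~200 MB of doubles)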

+3

EDIT: MathWorks give advice on this issue.


You can view memory usage with system_dependent memstats and system_dependent dumpmem (and also with plain memory, as Jonas noted).
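
For reference, these are all run directly at the command prompt (they are Windows-only commands):

 memory                        % summary of MATLAB and system memory usage
 system_dependent memstats     % detailed Windows memory statistics
 system_dependent dumpmem      % list of the largest free blocks in the address space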

The pack command (which actually defragments your workspace) can also be useful.

If you are dealing with objects containing more than 10 million or so values, then memory can easily become a problem. Throwing hardware at it (for example, buying more RAM) is an option, but there is a limit to what you can achieve that way.

Here is how I suggest you approach rewriting your code to make it more memory efficient:

See if there are any variables you don't need to allocate. A classic example of this is when a function returns a value of the same size as the input.

 function x = XPlus1(x)
     x = x + 1;
 end

is more memory efficient than

 function y = XPlus1(x)
     y = x + 1;
 end

Next, try to break the problem into small pieces. At the simplest level, this may mean performing operations on rows instead of a whole matrix, or on individual elements instead of a vector. (The cost of looping is far less than the cost of not being able to run at all for lack of memory.) You then reconstruct your answer from the pieces.

This step is essentially the philosophy behind map-reduce, so as a bonus your code will be easier to parallelise.
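
A minimal sketch of the row-at-a-time idea above, assuming a hypothetical large matrix A whose rows should each be normalised to sum to 1:

 % Whole-matrix version: creates a full-size temporary the same size as A.
 % A = A ./ repmat(sum(A, 2), 1, size(A, 2));
 % Row-at-a-time version: only one row is touched at a time.
 for r = 1:size(A, 1)
     A(r, :) = A(r, :) / sum(A(r, :));
 end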

+2

Source: https://habr.com/ru/post/1308453/

