EDIT: MathWorks gives advice on this issue.
You can view memory usage with system_dependent memstats and system_dependent dumpmem (and also with plain memory, as Jonas noted).
The pack command (which actually defragments your workspace) can also be useful.
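Those commands can be sketched together. Note that system_dependent memstats and system_dependent dumpmem are undocumented, and that they and the full output of memory are only available on Windows:

```matlab
% Print memory statistics (undocumented, Windows-only)
system_dependent memstats
system_dependent dumpmem

% Documented alternative: returns a struct with fields such as
% MaxPossibleArrayBytes and MemAvailableAllArrays
m = memory;

% Consolidate workspace memory by saving variables to disk and reloading them
% (run this from the command prompt, not inside a function)
pack
```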
If you are dealing with arrays containing more than 10 million or so values, memory can easily become a problem. Throwing hardware at it (for example, buying more RAM) is one option, but there is a limit to what that can achieve.
Instead, I suggest rewriting your code to make it more memory efficient:
See if there are any variables you don't need to allocate. A classic example of this is when a function returns a value of the same size as the input.
function x = XPlus1(x)
    x = x + 1;
end

is more memory efficient than

function y = XPlus1(x)
    y = x + 1;
end
Then try to break the problem into smaller pieces. At the simplest level, this can mean operating on rows of a matrix instead of the whole matrix, or on individual elements instead of a whole vector. (The cost of looping is less than the cost of not running at all for lack of memory.) You then reassemble your answer from the pieces.
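As a sketch of that idea (the block size and the per-block operation here are placeholder choices, not part of the original answer):

```matlab
A = rand(1e6, 10);                 % large input matrix
blockSize = 1e5;                   % tune to what fits comfortably in memory
result = zeros(size(A, 1), 1);     % preallocate the final answer

for startRow = 1:blockSize:size(A, 1)
    rows = startRow : min(startRow + blockSize - 1, size(A, 1));
    % Only one block's worth of temporaries is in memory at a time
    result(rows) = sum(A(rows, :) .^ 2, 2);
end
```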
This step essentially follows the map-reduce philosophy, so as a bonus your code will be easier to parallelize.