Performance of MATLAB instances over time

Christopher on 18 Sep 2017
Edited: Christopher on 19 Sep 2017
I've noticed for a long time that the performance of a MATLAB instance seems to decrease with use. I am currently seeing this effect in an instance that I've been using for a few hours. To show that the instance is in fact slow, I opened a second instance and ran the exact same code in both, separately. The code begins with clear all, and I also ran clc and close all before each run. The slow instance requires about 15% more time to execute the same code. The slowdown is consistent whether or not I use the profiler. No figures are produced by this code, although I have made some figures in the past using the slow instance.
I attach a screenshot showing part of the code run with the profiler.
Although the instance is slow overall, you can see that the slowness is confined to certain lines. Generally, the lines with the simplest operations are unaffected, although this doesn't seem to be a hard rule.
This has been typical behavior for as long as I've used MATLAB for CPU- and memory-intensive models (about 5 years). What really prompted me to run this test is that when I run the same code with totsteps = 3000000 (see line 299), it takes more than 12 hours and the entire code slows down by more than 75%. Even after clearing the instance (clear all, clc, close all), rerunning the code is still 75% slower than it should be.
Does anyone know how to avoid this behavior?
I am using R2015b on Windows 8.1 Pro, with an i7-4960X and 64 GB of RAM.
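For reference, each run was timed along these lines (a minimal sketch; runModel is a hypothetical wrapper name for the actual model script, and totsteps is the parameter on line 299):
% Reset the instance the same way before each run, then time the workload.
clear all; clc; close all;
totsteps = 3000000;              % value from line 299 of the model
tStart   = tic;
runModel(totsteps);              % hypothetical wrapper around the model code
fprintf('Elapsed time: %.1f s\n', toc(tStart));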
  4 Comments
Jan on 18 Sep 2017
Edited: Jan on 18 Sep 2017
"Even when variables are defined as global"? Why do you assume, that global variables have any positive impact on the performance? There is no need to copy variables before "operating" on them, as long as "operating" means a read access. For writing this can be controlled also, see https://blogs.mathworks.com/loren/2007/03/22/in-place-operations-on-data/
Walter Roberson on 18 Sep 2017
Global variables are the slowest kind of variable because of the need to search the global namespace.
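A hedged sketch of how one might measure the difference (compareGlobalAccess and its helpers are made-up names; save them together in one file):
function compareGlobalAccess()
% Crude benchmark: reading a global vs. the same data passed as an argument.
global g
g = rand(1000);
fprintf('global:   %.4f s\n', timeit(@useGlobal));
fprintf('argument: %.4f s\n', timeit(@() useLocal(g)));
end

function s = useGlobal()
global g                 % the global name must be resolved on every call
s = sum(g(:));
end

function s = useLocal(x)
s = sum(x(:));           % plain local-variable access
end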


Answers (2)

Jan on 18 Sep 2017
Edited: Jan on 18 Sep 2017
clear all wastes time by removing all loaded functions from memory. If you want to clear the variables, use clear variables, or better, use functions to keep the workspace clean.
Even the simple allocation of zeros to fill the cells gets slower. This might mean either that the memory is exhausted (check this with feature memstats and the Task Manager), or that the processor speed is being throttled due to overheating. So also check the current speed and temperature of the cores, see e.g. https://software.intel.com/en-us/articles/intel-power-gadget-20
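The headroom can also be logged from within MATLAB along these lines (a minimal sketch using the Windows-only memory function rather than feature memstats):
[usr, sys] = memory;     % query MATLAB and system memory (Windows only)
fprintf('Largest possible array: %.1f GB\n', usr.MaxPossibleArrayBytes/2^30);
fprintf('Memory used by MATLAB:  %.1f GB\n', usr.MemUsedMATLAB/2^30);
fprintf('Free physical memory:   %.1f GB\n', sys.PhysicalMemory.Available/2^30);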
[EDITED]
Try whether it helps to avoid the repeated creation of the matrices in Gphase and Gphase2: move the block "for v = 1:numTestc, Gphase{v} = zeros(..." before the "for d" loop. Perhaps this is easier:
Gphase = cell(1, numTestc);
Gphase2 = Gphase;
Then inside the loop use:
aGphase = zeros(ynum, xnum);    % working array, allocated once
for d = 1:totsteps
    for v = 1:numTestc
        aGphase(:) = 0;         % reset in place instead of calling zeros()
        ...
        Gphase{v} = aGphase;
    end
    ...
end
Well, this is a little bit more voodoo than I usually suggest. If this does not help, try to use a 3D array:
Gphase = zeros(ynum, xnum, numTestc);
and
Gphase(:, :, v)
instead of
Gphase{v}
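Putting the 3D-array version together, the loop could look like this (a sketch under the same assumed variable names ynum, xnum, numTestc, totsteps):
Gphase  = zeros(ynum, xnum, numTestc);   % one contiguous block, allocated once
Gphase2 = zeros(ynum, xnum, numTestc);
for d = 1:totsteps
    for v = 1:numTestc
        Gphase(:, :, v) = 0;             % reset the slice in place
        % ... computation writes into Gphase(:, :, v) ...
    end
    % ...
end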
  3 Comments
Stephen23 on 18 Sep 2017
"That's interesting info regarding clear variables"
There is this advice, and plenty more useful information, in the MATLAB documentation.
Jan on 18 Sep 2017
See the [EDITED] section of my answer: some guesses about what could help if the problem is caused by the memory manager (see Philip Borghesani's answer).



Philip Borghesani on 18 Sep 2017
What you are describing sounds like fragmentation of the Windows memory manager. This answer gives an example and a long discussion.
Unfortunately, no amount of clearing variables will help with the problem. Allocating and freeing a large number (10000+) of fairly large (0.1 to 0.5 MB?) arrays seems to be the trigger. The only solution is to reuse the arrays, or to use large multidimensional arrays instead of freeing and reallocating.
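A hedged illustration of the two allocation patterns (sizes are made up; the point is the pattern, not the arithmetic):
% Fragmentation-prone: each iteration frees and reallocates ~0.5 MB.
for k = 1:10000
    A = zeros(200, 300);     % fresh allocation every pass
    A(:) = k;                % stand-in for the real work
end

% Reuse: one allocation, overwritten in place on every pass.
B = zeros(200, 300);
for k = 1:10000
    B(:) = k;                % writes into the same memory block
end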
  1 Comment
Christopher on 19 Sep 2017
Edited: Christopher on 19 Sep 2017
A memory fragmentation issue would make perfect sense.
How can I be sure that an operation on an array is using the same preallocated memory?
Jan Simon suggested that I replace
Gphase{v} = zeros(ynum,xnum);
with
Gphase{v}(:) = 0;
which makes sense since ynum and xnum do not change during the run, but what about lines such as:
ind{p} = sparse(P==p);
ind{p} = find(P==p);
FeOFraci = FeOFrac(ind{p});
M1M2Excess(ind{p}) = MgOFeO(ind{p}) - EqFeOMgO(p);
Ci{1} = C{1} + adjmat(1,v);
Gphase{v} = Gphase{v} + R2.*T.*(MgOFrac.*log(MgOFrac) + FeOFrac.*log(FeOFrac));
dG_nearZero = exp(-Ci{1}./thresh).*Nmax;
MgFeEx_ind = M1M2Excess > 0;
Some of these change size because ind{p} (an index array into P) may change size each iteration, but the other arrays, like Ci{1}, Gphase{v}, and dG_nearZero, are always the same size. Maybe something like
Ci{1}(:) = C{1}(:)+adjmat(1,v);
and
Gphase{v}(:) = Gphase{v} + R2.*T.*(MgOFrac.*log(MgOFrac) + FeOFrac.*log(FeOFrac));
might be necessary to make them reuse the same memory, but this is unacceptably slow.
Also, for arrays like ind{p}, which may change size on each loop iteration, how can I make them always use the same memory allocation? This should be possible, because I know that the array will never exceed some maximum size.
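For example, something like this might work (a sketch; numPhases is a hypothetical loop bound, and numel(P) is assumed to be a valid upper bound). Note that find and the extracted subarrays still create temporaries, so this only removes the reallocation of the index storage itself:
idxBuf = zeros(numel(P), 1);     % allocated once at the maximum possible size
for p = 1:numPhases              % hypothetical loop bound
    tmp = find(P == p);          % find still returns a temporary
    n   = numel(tmp);
    idxBuf(1:n) = tmp;           % copy into the fixed buffer
    FeOFraci = FeOFrac(idxBuf(1:n));
    % ... use idxBuf(1:n) wherever ind{p} was used ...
end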

