Does order of multiplication in a for loop matter?

I am currently rewriting my code to get rid of the for loops I have, but while doing that I stumbled on something that makes me worry my code is not giving correct results. I wrote the same term in the for loop with its factors in a different order, and it gave different results. I want to know why.
I have:
l1 = zeros(Nx_max,Ny_max);
l2 = zeros(Nx_max,Ny_max);
nx = Nx_max-1;
ny = Ny_max-1;
for x = 2:nx
    for y = 2:ny
        l1(x,y) = (2*dxx(x,y) + 2*dyy(x,y))*(1/(h^2))*p(x,y,t1);
        l2(x,y) = (2*dxx(x,y) + 2*dyy(x,y))*p(x,y,t1)*(1/(h^2));
    end
end
diff = sum(sum(l1 - l2))
I would expect diff = 0 if these are the same, and they should be the same, since all I did was change the order of the factors in one term (at each iteration of the loop every term is just a number, so the order of multiplication should make no difference).
But I do not get zero, and I'm not sure why.
Note: dxx, dyy, and p are all arrays of the same size as l1 and l2; h is just a scalar.
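The effect does not depend on my actual data; here is a minimal, self-contained sketch of the same thing, with random arrays standing in for dxx+dyy and p and a made-up h (all names here are placeholders):

```matlab
% Minimal reproduction: same factors, different multiplication order.
rng(0);                         % make the run repeatable
A = rand(50);  P = rand(50);    % stand-ins for (2*dxx + 2*dyy) and p
h = 0.01;                       % stand-in for the grid spacing

l1 = A .* (1/(h^2)) .* P;       % scale first, then multiply by P
l2 = A .* P .* (1/(h^2));       % multiply by P first, then scale

% The grouping of the multiplications differs, so the rounding differs:
% the discrepancy is tiny relative to the values, but usually not zero.
max(abs(l1(:) - l2(:)))
```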

1 Comment

p does not appear to be the same size as l1 (3D vs 2D). Also, what is t1?


Answers (2)

I think this may be a case of round-off error. Is the result close to zero (e.g. 1e-16)?
Also, just a suggestion, avoid using built-in MATLAB functions as variable names (e.g. diff).
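If the goal is just to confirm the two loops agree, compare with a tolerance scaled to the data rather than testing for exact zero. A sketch (random stand-in data; the factor of 10 is an arbitrary safety margin, not a rule):

```matlab
% Compare two differently-ordered computations up to roundoff,
% instead of expecting an exact zero difference.
rng(0);
A = rand(50);  P = rand(50);  h = 0.01;
l1 = A .* (1/(h^2)) .* P;
l2 = A .* P .* (1/(h^2));

d   = max(abs(l1(:) - l2(:)));       % worst elementwise discrepancy
tol = 10 * eps(max(abs(l1(:))));     % roundoff scale of the data
if d <= tol
    disp('l1 and l2 agree to within roundoff')
else
    disp('difference exceeds roundoff -- investigate')
end
```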

1 Comment

It's about 1e-13. I'm thinking rounding error too, but I was worried because some of my terms are pretty small, so I didn't want that to affect anything.
And thanks for the hint


dpb
dpb on 13 Aug 2014
Edited: dpb on 13 Aug 2014
...I would expect ... should be the same since all I did was change the order of multiplying one term in the for loop...
That's a naive expectation -- while commutativity and associativity hold in pure mathematics, floating-point arithmetic is only an approximation (and a fairly crude one at that, albeit still quite useful). Exact equality after rearranging terms isn't guaranteed (except for integer-valued arguments that don't under- or overflow), because the roundoff/truncation to the hardware's limited precision happens differently in the two sequences of operations.
For the details in all their gory fullness, see Goldberg's paper, "What Every Computer Scientist Should Know About Floating-Point Arithmetic".
I would expect that if you look at something like
max(abs(l1(:) - l2(:)))
you'd find the actual difference isn't huge, but at the limits of what double-precision accuracy is.
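For anyone who hasn't run into this before, the classic two-line demonstration of the underlying issue:

```matlab
% 0.1, 0.2 and 0.3 are not exactly representable in binary, so the
% rounded sum of the first two is not the stored value of the third.
(0.1 + 0.2) == 0.3        % false
(0.1 + 0.2) - 0.3         % about 5.6e-17, i.e. on the order of eps
```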

6 Comments

@dpb. I wouldn't call 16-decimal-place accuracy 'crude'. It's a whole lot better than my calculator gets.
It is true that, strictly speaking, associativity of addition and multiplication no longer holds, nor does the distributive law. However, if the IEEE 754 standard is used, commutativity of these two operations is still adhered to. The standard states that the result of any of the four operations of addition, subtraction, multiplication, and division must always be the properly rounded value of the mathematically precise result, and that implies the commutative law for addition and multiplication even after rounding.
dpb
dpb on 13 Aug 2014
Edited: dpb on 13 Aug 2014
It (15/16 decimals) is a whole lot fewer than infinite... :) If one fills in the possible entries in IEEE double precision on the real line, there are far more locations left out than included.
For the two operations taken independently, yes; but once they're combined with other terms, and you add in the difference between the co-processor's extended precision and what gets stored internally, I don't think you can say exact equality is to be expected for the OP's expressions (and, as illustrated, he didn't get it).
If it were expected, then Sasha's uncovered a bug in Matlab (and I don't believe that is so)...
For the uninitiated into the full wonders of floating point, the following is kinda' interesting (and uses Matlab to demonstrate some things, too :) )
The characteristic outlined at beginning of p 1-3 and leading to illustration at Fig 1-1 is the fundamental basis I had in mind when calling floating point "crude", Roger, just for the record. I know you know all this, but it's often eye-opening when folks run across it for the first time as it is (in my experience) rarely taught outside of full-blown C-S curricula.
@dpb. I wouldn't get too excited by Figure 1-1. It is for a hypothetical machine with only two bits of mantissa, not the 53 bits for IEEE double precision. For ordinary run-of-the-mill computations matlab's 'double' is quite accurate enough. Admittedly there are certain difficult problems in which even with double precision, the effect of cumulative rounding errors becomes so large they overwhelm the values that are being computed. One good example of this latter was presented in the recent Answers thread #147192.
I was serious when I said that the four basic binary (that is, with two operands) operations are required to obtain the properly rounded value of the mathematically precise result. It is actually accomplished using what are known as "guard" bits, "round" bits, and "sticky" bits, though having to do so was an annoyance to certain logic designers. As I stated, addition and multiplication between two numbers is therefore always commutative. However this does not imply the associative law, and that makes it quite possible (and almost inevitable) that Sasha would obtain differing results in l1 and l2.
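Roger's distinction can be checked directly. A sketch on random data (IEEE 754 doubles; the "typically" comments are expectations for random inputs, not guarantees for any one triple):

```matlab
% Commutativity (swapping the two operands of one operation) is exact
% under IEEE 754; associativity (regrouping three operands) is not.
rng(0);
a = rand(1,1e5);  b = rand(1,1e5);  c = rand(1,1e5);

isequal(a.*b, b.*a)               % true: x*y == y*x always
isequal(a + b, b + a)             % true: x+y == y+x always

any((a.*b).*c ~= a.*(b.*c))       % typically true: some triples differ
any((a + b) + c ~= a + (b + c))   % typically true: some triples differ
```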
dpb
dpb on 14 Aug 2014
Edited: dpb on 14 Aug 2014
The figure is just illustrative, and exaggerated by the limited number of bits, granted. It's there to make the point patently clear and obvious. But the same effect holds regardless of how many bits there are. That's why I also agreed it (floating point in general and IEEE 754 in particular) is quite useful. :)
And again, the whole point I was making is that once you get beyond just two values, the rules of associativity/commutativity aren't exact in floating point as they are in mathematics. I don't believe I said anything more than that, other than being a little poetic about it... :)
"the rules of associativity/commutativity aren't exact in floating point as they are in mathematics." I still object to your inclusion of commutativity in this expression of inexactness. Commutativity is understood as applying only to binary operations, A op B = B op A, and as such the matlab operations of addition and multiplication are precisely commutative. No ifs, ands, or buts.
OK, so it's a semantics problem -- what do you want to call the order sequencing that I lumped into associativity?


Asked on 13 Aug 2014; last commented by dpb on 14 Aug 2014.
