(Block-) Matrix multiplication inaccuracy

I am dealing with 2N×2N matrices that are built from N×N diagonal matrices $K_x$, $K_y$ and an arbitrary N×N matrix $\varepsilon$, such that the product

$$\mathcal{K}(K_x,K_y,\varepsilon)\,K(K_x,K_y)=\begin{pmatrix}K_y\varepsilon K_y & -K_y\varepsilon K_x\\-K_x\varepsilon K_y & K_x\varepsilon K_x\end{pmatrix}\begin{pmatrix}K_xK_x & K_xK_y\\K_yK_x & K_yK_y\end{pmatrix}=0$$

vanishes because the diagonal matrices commute. When I implement this (see the code below), the result is not exactly 0, even though no inversion or complex operations are involved. I suppose this is due to numerical technicalities, but it causes significant errors in the final results. I would appreciate comments on how to avoid this issue, or warnings about where I am doing something I shouldn't be doing.
% functions for matrix arrangement (2Nx2N block matrices)
curly_K = @(x,y,e) [ y*e*y, -y*e*x;
                    -x*e*y,  x*e*x];
K = @(x,y) [x*x, x*y;
            y*x, y*y];
% create two diagonal matrices and one non-diagonal matrix of size NxN
N = 10;
Kx = diag(rand(N,1));
Ky = diag(rand(N,1));
eps = rand(N); % note: this shadows the built-in eps (machine precision)
% the matrix product should always be zero as long as Kx and Ky are
% diagonal, since in that case they commute (verified below with the
% explicit form of the block products)
product = curly_K(Kx,Ky,eps)*K(Kx,Ky);
%%
% individual blocks of the product matrix
product11 = Ky*eps*Ky*Kx*Kx-Ky*eps*Kx*Ky*Kx;
product12 = Ky*eps*Ky*Kx*Ky - Ky*eps*Kx*Ky*Ky;
product21 = -Kx*eps*Ky*Kx*Kx + Kx*eps*Kx*Ky*Kx;
product22 = -Kx*eps*Ky*Kx*Ky + Kx*eps*Kx*Ky*Ky;
%plotting
figure(6)
subplot(1,2,1)
surf(product)
subplot(1,2,2)
surf([product11,product12;product21,product22])
  1 Comment
John D'Errico on 7 Aug 2020
There are several issues here. First is the basic problem of floating point arithmetic. Floating point numbers do not store most decimal fractions exactly. Thus, we see
>> .3 - .2 - .1
ans =
-2.7756e-17
does not produce zero, even though we know it "should" be zero.
The answer is to never trust the least significant bits of a computation.
Next, we see that the same computation, performed in a different sequence, will often produce a different result, at the level of the least significant bits.
>> .3 - (.1 + .2)
ans =
-5.5511e-17
Mathematically identical, but numerically not so. Must I repeat myself? :)
The answer is to never trust the least significant bits of a computation.
When you do perform matrix multiplies, MATLAB invokes the BLAS to do the work. And the BLAS, while making things run quickly, can also choose to do those operations in any sequence it wants internally. The net result is sometimes (even often) those adds get done in a different sequence, so you can see results that are subtly different down at the level of the least significant bits.
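As a small illustration of that order dependence (my example, not part of the original comment): summing the same numbers in two different orders will usually disagree in the last bits, and that is all the BLAS reordering amounts to.
x = rand(1,1e6);
s1 = sum(x);            % forward order
s2 = sum(x(end:-1:1));  % same numbers, summed in reverse order
s1 - s2                 % typically a tiny nonzero value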
Again, don't presume those results are identical. The solution is to use methods that are tolerant of errors in the least significant bits. Always use tolerances when making comparisons between floating point numbers.
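For the matrices in the question, such a tolerance test could look like the sketch below (my sketch, not from the original comment; the factor 1e3 is an arbitrary safety margin, and 2^-52 stands in for machine epsilon because the question's code shadows the built-in eps):
A = curly_K(Kx,Ky,eps);
B = K(Kx,Ky);
tol = 1e3 * 2^-52 * norm(A,'fro') * norm(B,'fro'); % assumed safety margin
norm(A*B,'fro') <= tol  % true: the product is zero up to roundoff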


Accepted Answer

Bruno Luong on 7 Aug 2020 (edited)
Same problem: don't count on exact associativity when working with floating-point arithmetic. I can quote word for word what I wrote here a few days ago:
"If your code is sensitive to such small errors, then it's not stable and might be you have to revise the algorithm itself."
