Numerical search for the singular points of a complex matrix

43 views (last 30 days)
Hello, I have a function that takes in a complex number s and generates a square matrix Z of complex numbers. What I am trying to do is to locate the values of s such that det(Z(s)) = 0, i.e. that the matrix Z is singular.
To do this, I implemented a grid of s values and computed the condition number at each point. From what I gathered from answers on this site, using MATLAB's det function outright was not the way to go. While visually this gives the answer I am looking for, it is extremely time-intensive, and the accuracy of the answer depends on the refinement of the grid.
In a research paper I have read (admittedly quite dated, ~1972), the method they claim to use is a Newton-Raphson method to locate the singular points.
They use a first-order Taylor expansion of det(Z(s)) to iterate toward the singular point: s_{k+1} = s_k - det(Z(s_k)) / (d/ds det(Z(s_k)))
In this case, I can get as close to the singular point as I would like, and it is easy for me to procure initial guesses that work. However, implementing this requires not only the determinant but also the derivative of the determinant. On top of this, the output must be a complex number, as the final answer is complex. I'm not sure how they implemented this (and in 1972, no less) such that their computation didn't go crazy near the singular point, but their answers are certainly correct.
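For reference, the iteration described above can be sketched as follows. This is a minimal illustration only: `Zfun` stands in for the matrix-valued function, `s0` for an initial guess, and the derivative of the determinant is approximated by a central finite difference (the step size and tolerance here are assumptions, not values from the paper).

```matlab
% Newton-Raphson on f(s) = det(Z(s)) over the complex plane (sketch).
% Zfun: handle returning the square complex matrix Z(s)
% s0:   complex initial guess near the suspected singular point
function s = newton_det(Zfun, s0)
    s   = s0;
    h   = 1e-6;    % finite-difference step (assumed adequate)
    tol = 1e-10;
    for k = 1:50
        f  = det(Zfun(s));
        % Central difference for d/ds det(Z(s)); a real step h works
        % here because det(Z(s)) is an analytic function of s.
        fp = (det(Zfun(s + h)) - det(Zfun(s - h))) / (2*h);
        ds = f / fp;
        s  = s - ds;   % complex Newton update
        if abs(ds) < tol
            return
        end
    end
end
```

Note that det can overflow or underflow badly as the matrix grows, which is one reason the answers in this thread steer toward condition-number-based objectives instead.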
Therefore my question is, is there a good way to apply MATLAB's det function here to get what I need? Or is there another way for me to do this search for the singular points that doesn't involve a full sweep of the search space? And less importantly but still curious, how the heck might they have done this in 1972?
Thanks in advance
My code is too long to post, but the current setup with the grid formation is basically this (dummy code, not real, just to give an idea):
num_x = 100;
num_y = 51;
omegas = linspace(0, 1, num_x);
sigmas = linspace(0, 2, num_y);
conds = zeros(num_x, num_y);   % preallocate
for ii = 1:num_x
    for jj = 1:num_y
        s = sigmas(jj) + 1i*omegas(ii);
        Z = ...;  % some involved square matrix computation here
        conds(ii,jj) = cond(Z);
    end
end

Accepted Answer

Matt J
Matt J on 5 Dec 2024 at 20:14
Edited: Matt J on 5 Dec 2024 at 20:19
and it is easy for me to procure initial guesses that work
If so, just use fminsearch,
p = fminsearch(@(p) abs(det(Z(p))), [sigma0,omega0]); %Or 1/cond() instead of abs(det()); the objective must be real-valued
s = complex(p(1),p(2))
  1 Comment
Matthew
Matthew on 5 Dec 2024 at 21:16
Well I just gave it a go and it worked well, however I used
1./cond(Z(p))
But that's a minor difference. Much appreciated, I had no idea that fminsearch also could do complex numbers like that.
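Spelled out, the variant in this comment looks something like the following sketch, where `Zfun`, `sigma0`, and `omega0` are placeholders for the poster's own matrix function and initial guess:

```matlab
% Minimize the reciprocal condition number, which goes to 0 at a
% singular point, so fminsearch drives it toward a minimum there.
obj = @(p) 1/cond(Zfun(p(1) + 1i*p(2)));   % p = [sigma, omega]
p   = fminsearch(obj, [sigma0, omega0]);
s   = complex(p(1), p(2))                  % singular point as a complex number
```

fminsearch itself only works over real vectors; the trick is splitting s into its real and imaginary parts and recombining them afterward.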


More Answers (1)

John D'Errico
John D'Errico on 5 Dec 2024 at 21:23
Edited: John D'Errico on 5 Dec 2024 at 21:32
I'm sorry, but the use of det for anything like this is a bad idea.
How did they do this in 1972? Sigh, they used determinants too much back then. (They still do. Teachers teach what they were taught. And so determinants propagate, like coat hangers.) And anything they did solve would have been on a relatively small problem in context today. Standards were relatively low for such things in 1972.
But why are you writing code to do Newton-Raphson in the first place? One option is to use an existing solver to look for the minimum of the reciprocal condition number of your matrix. That would suggest fminunc or fmincon as an option. No need to compute the derivative at all, and it avoids the problems with det. This is where I would start, at least. The reciprocal condition number is a better choice, I think, than trying to maximize the condition number, which would introduce numerical issues because the condition number wants to go to infinity there. Heck, since this is a 2-variable problem in the real and imaginary parts of s, fminsearch would be adequate, or even a tool like GA, which has the advantage that you need not do a large presampling to find starting values.
Actually, as I think about it, I think GA is probably my first choice.
You don't give a simple example problem, and I am not being very creative.
i = sqrt(-1);
A0 = randn(3) + i*randn(3);
A = @(s1,s2) A0 + [s1 + s2*i, 0, 0; 0, 0, 0; 0, s2*i, 0];
obj = @(s1,s2) rcond(A(s1,s2));
fsurf(obj,[-10 10 -10 10])
Warning: Function behaves unexpectedly on array inputs. To improve performance, properly vectorize your function to return an output with the same size and shape as the input arguments.
xlabel 'real part'
ylabel 'imag part'
lb = [-10,-10];
ub = [10 10];
[sval,fval,exitflag] = ga(@(S) obj(S(1),S(2)),2,[],[],[],[],lb,ub)
ga stopped because the average change in the fitness value is less than options.FunctionTolerance.
sval = 1×2
0.5257 4.7061
fval = 7.0947e-08
exitflag = 1
svd(A(sval(1),sval(2)))
ans = 3×1
6.7306 3.7017 0.0000
Don't write your own solvers to do what you can find already written by someone who knew very much what they were doing.
  2 Comments
Matthew
Matthew on 5 Dec 2024 at 22:13
This seems to work as well, thanks for the information.
I am facing an issue in general: the majority of matrices I generate give a high condition number from the start. If I decrease the size of the matrix, I reach a point where cond(Z) doesn't seem to notice the singular points at all. In these cases, both methods in this thread fail.
If I increase the size, the peaks return and these methods work, but it is undesirable to do this. Is there a way for me to solve this problem without increasing size? I'm not super steeped in linear algebra, so I'm not very knowledgeable on preconditioners or if that would even help.
I will try a few things on my own, but I'll take all the insight I can get
John D'Errico
John D'Errico on 5 Dec 2024 at 22:35
Edited: John D'Errico on 5 Dec 2024 at 22:37
If the condition number is so large everywhere, then the reciprocal condition number will also fail, since effectively, rcond(A) is just 1/cond(A).
This means that everywhere you look, the range of singular values of your matrix is so large that it exceeds the dynamic range of a double, pretty much everywhere. Effectively, they will have a dynamic range larger than 16 significant digits. And that means you will not be able to effectively solve this using double precision.
You might be able to work with the matrix in a symbolic form, but that in itself will carry massive performance penalties.
Anyway, you do NOT want to be writing your own solvers here. That just adds an extra layer of stuff you don't fully understand.
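One thing that might be worth trying before going symbolic (an assumption on my part, not something tested on the poster's matrices): minimize the smallest singular value directly, on a log scale, since min(svd(Z)) goes to 0 at a singular point even when cond(Z) has already lost meaning because the ratio of singular values overflows. Here `Zfun`, `sigma0`, and `omega0` are again placeholders:

```matlab
% Objective based on the smallest singular value rather than cond/rcond.
% svd is still computed in double precision, so this fails too if the
% small singular values themselves underflow, but it avoids dividing
% by the largest singular value.
obj = @(p) log(min(svd(Zfun(p(1) + 1i*p(2)))));  % log flattens the range
p   = fminsearch(obj, [sigma0, omega0]);
s   = complex(p(1), p(2))
```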


Release: R2024a