How to transform matrix V? (A = V*F*inv(V))
Youngsoo Choi
on 21 Mar 2021
Commented: John D'Errico
on 24 Mar 2021
Hello, is there any method to find the transform matrix V using MATLAB?
A = [0 1 0 0 0 ; 0 0 1 0 0 ; 0 0 0 1 0 ; 0 0 0 0 1; -45 -111 -104 -48 -11]
F = [-1 0 0 0 0 ; 0 -2 1 0 0 ; 0 -1 -2 0 0 ; 0 0 0 -3 1 ; 0 0 0 0 -3]
Thanks
1 Comment
John D'Errico
on 21 Mar 2021
PLEASE. Read my answer. It explains the problem, why an all-zero solution is valid for the problem as the others have posed it, and why their solutions can easily converge to such a solution. It also explains why a non-trivial solution is non-unique (unless A and F are only 2x2 matrices). Finally, I give a solution that is non-iterative, requiring no starting values.
Accepted Answer
Star Strider
on 21 Mar 2021
A = [0 1 0 0 0 ; 0 0 1 0 0 ; 0 0 0 1 0 ; 0 0 0 0 1; -45 -111 -104 -48 -11];
F = [-1 0 0 0 0 ; 0 -2 1 0 0 ; 0 -1 -2 0 0 ; 0 0 0 -3 1 ; 0 0 0 0 -3];
fcn = @(V) A*V - V*F;
opts = optimoptions('fsolve', 'MaxIterations',1E+5, 'MaxFunctionEvaluations',1E+6);
[V,fval] = fsolve(fcn, rand(5), opts)
LHS = A*V % Check Result
RHS = V*F % Check Result
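One extra sanity check worth adding here (my addition, not part of the original answer): the residual alone cannot distinguish a useful V from the trivial V = 0, which also satisfies A*V == V*F, so confirm that V is well away from zero and singular.
rank(V) % should be 5 for a usable similarity transform
norm(V) % should be clearly non-zero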
4 Comments
John D'Errico
on 21 Mar 2021
Edited: John D'Errico
on 21 Mar 2021
As I said in my comment to Walter, this will also potentially fail. The trivial solution to the problem as you have posed it is V == zeros(size(A)). So fsolve can easily converge to that garbage non-solution.
Bruno Luong
on 21 Mar 2021
Agree, John's solution using linear algebra is direct, and thus more robust.
More Answers (3)
John D'Errico
on 21 Mar 2021
Edited: John D'Errico
on 21 Mar 2021
If a solution exists to the problem, finding it in a fully robust way using the transformation A*V == V*F may be problematic, at least unless you are careful with the mathematics. Posing it that way admits the trivial solution V == 0, because the transformed problem is now implicitly a homogeneous linear system of equations. As such, both the fsolve solution posed and the sylvester solution posed are subtly wrong.
That does not mean the problem is impossible to solve, however. How might we find a non-trivial solution to the transformed problem? We might try the good old kron trick. kron is a slick way of unrolling such matrix problems. Essentially, we create a new homogeneous linear system, where n is the number of rows (or columns) of the matrix A:
( kron(eye(n),A) - kron(F,eye(n)).' )*V(:) == 0
A problem when we do this is that it does not force V to be invertible. It also recognizes that V is not unique. Just for kicks, let's try an example. A simple test case will show the problem.
V = rand(3) % V is random, so with probability measure 1, it will be non-singular.
F = rand(3)
A = V*F*inv(V)
By construction, we have created a test problem where a solution must exist. Now, can we recover that solution, if we knew only A and F? Construct the matrix M as:
n = size(A,1);
M = kron(eye(n),A) - kron(F,eye(n)).';
(To understand how and why this works, you need to see what kron does with your matrix, as I used it there twice. TRY IT!) M will be a 9x9 matrix here. If a non-trivial solution exists to the problem M*V(:) == 0, then we MUST also have a solution to the problem A*V == V*F.
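One way to take that "TRY IT" literally is a small numeric check (my addition, reusing the 3x3 A, F, and M built above): M applied to any vectorized matrix W should reproduce vec(A*W - W*F).
W = rand(n);
norm(M*W(:) - reshape(A*W - W*F, [], 1)) % ~0, up to round-off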
size(M)
rank(M)
So M is, as expected, a 9x9 matrix. It has rank 6, though. So there is a 3-dimensional subspace of solutions to the problem A*V == V*F. The solution will be non-unique.
Does non-uniqueness make sense? Yes, perfect sense. For example, suppose you have found a valid solution V to the problem A == V*F*inv(V). Now consider the matrix k*V. Since linear algebra allows us to extract such a constant from matrix products, and since inv(k*V) = (1/k)*inv(V), as long as k is non-zero we should see that
(k*V) * F* inv(k*V) == (k*1/k) * V*F*inv(V) == V*F*inv(V) == A
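A quick numeric check of that scaling argument (my addition, reusing A, F, and V from the 3x3 example above; k is just an arbitrary non-zero scalar):
k = 3.7; % any non-zero scalar works
Vk = k*V;
norm(A - Vk*F*inv(Vk)) % ~0, up to floating point error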
So clearly any solution V MUST be non-unique. (But scaling is not the only issue, as we will see later.) Anyway, that means we cannot so easily recover the original matrix V. Let's try it. The way to solve the linear homogeneous problem is to employ null.
Mnull = null(M)
Any linear combination of the columns in Mnull will be a solution. For example, we might try this:
V1 = reshape(Mnull(:,1),3,3)
As you can see, it is not the same as V. But is it a valid solution? Perhaps.
A - V1*F*inv(V1)
Indeed, it is a solution. As long as V1 is invertible, it MUST be a valid solution. But just as easily, I could have chosen a completely different matrix for V, thus
V2 = reshape(Mnull*randn(3,1),3,3)
A - V2*F*inv(V2)
And again, we see that, to within floating point trash, V2 is also a valid solution. Can I find the linear combination of columns that would have yielded the original matrix V? Of course, but that is only because I know V itself.
lincomb = Mnull\V(:)
So we have
Vrestored = reshape(Mnull*lincomb,3,3)
Now compare that to the original V, and we see they are the same, as always to within floating point trash.
norm(V - Vrestored)
So the solution we wanted to find was indeed hidden inside the columns of Mnull, though there is no valid reason to distinguish any of the infinitely many linear combinations we could have chosen.
We should recognize that as long as the solution chosen represents an invertible matrix V, then it MUST be a valid solution. But also that the solution is never unique, unless A and F were 2x2 matrices. In that case, my gut tells me that M should be a 4x4 matrix of rank 3.
As such, I've given you the complete solution. It takes only these four lines of code, with no iterative methods needed. Do with it as you wish.
n = size(A,1);
M = kron(eye(n),A) - kron(F,eye(n)).';
Mnull = null(M);
V = reshape(Mnull(:,1),n,n);
I've chosen the first column of Mnull. This is completely arbitrary. In the (incredibly rare) event that the V so chosen results in a singular matrix, just choose a different column of Mnull, or choose some random linear combination of the columns. Can we ensure the solution must result in a non-singular matrix V? While the columns of Mnull must form a set of linearly independent columns, that does not force the resulting nxn matrix to also be non-singular, as far as I can see. Even if both A and F are full rank, we could still have a non-trivial solution for V that is less than full rank, when written in the form A*V - V*F == 0.
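If you want to automate that fallback, a minimal sketch (my addition, in the spirit of the random-combination idea in the comments below) is to keep drawing random linear combinations of the null-space columns until the reshaped V has full rank:
% Sketch only: redraw a random combination of the null-space basis
% until the resulting n-by-n matrix V is non-singular.
V = reshape(Mnull*randn(size(Mnull,2),1), n, n);
while rank(V) < n
    V = reshape(Mnull*randn(size(Mnull,2),1), n, n);
end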
4 Comments
John D'Errico
on 24 Mar 2021
Sorry. I took a couple of days off.
My response is in my last comment under my answer, where I point out that if Mnull has multiple columns, then choosing only one column may give a singular matrix V, which, while it satisfies the transformed problem, fails because V is singular. You found exactly this out. And Bruno has given the perfect answer: use a random linear combination of the columns of Mnull. This will generally produce a non-singular matrix.
Ivo Houtzager
on 21 Mar 2021
I believe you want to compute the Jordan canonical form of A. This can be done by
[V,F] = jordan(A)
1 Comment
Walter Roberson
on 21 Mar 2021
no, F is an input for this question. Given an input and an output, what was the transform matrix?
Walter Roberson
on 21 Mar 2021
The SciPy documentation linked below describes an algorithm for finding X in A*X + X*B = Q. If we map A -> A, X = V, B = -F, Q = zeros, then we can see that the situation applies. Perhaps it could even be simplified because Q = 0.
https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.linalg.solve_sylvester.html#scipy.linalg.solve_sylvester
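For reference, a hedged MATLAB sketch of that mapping (my addition, assuming A and F as defined in the question): MATLAB's sylvester(A,B,C) solves A*X + X*B = C, so here B = -F and C = zeros(n). Because A and F are similar and share all eigenvalues, this Sylvester system is singular, so expect an error or an unusable result, which is the caveat John raises in his answer.
% Sylvester form of the problem: A*V + V*(-F) = 0.
% Since eig(A) and eig(F) coincide, the system is singular.
n = size(A,1);
try
    V = sylvester(A, -F, zeros(n))
catch err
    disp(err.message)
end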
4 Comments
John D'Errico
on 21 Mar 2021
I initially was going to comment that your answer is the correct way to solve it. Until I thought about it, and then tried a simple case to verify my conjecture about the problem. Sylvester really seems the perfect solution.