From the series: Differential Equations and Linear Algebra
Gilbert Strang, Massachusetts Institute of Technology (MIT)
A second order equation gives two first order equations for y and dy/dt. The matrix becomes a companion matrix.
OK. A third video about stability for second order, constant coefficient equations. But we'll move on to matrices here. So this is a rather special video. So this is our familiar equation. And I took A to be 1; I just divided out A. No problem.
So that's one second order equation. But we know how to convert it to two first order equations. And here they are. So this is two equations. That's a 2 by 2 matrix there. And so let me read the top equation. It says that dy/dt is 0 times y plus 1 times dy/dt. So that equation is a triviality: dy/dt equals dy/dt.
The second equation is the real one. The derivative of y prime is y double prime. So this is the second derivative here, equals minus Cy minus By prime. And that's my equation: y double prime, when I bring the minus Cy over as plus Cy, and I bring the minus By prime over as plus By prime, I have my equation. So that equation is the same as that one. It's just written with a vector unknown. It's a system, a system of two equations.
And it's got a 2 by 2 matrix. And it's called, this particular matrix with a 0 and a 1 is called the companion matrix. Companion, so this is the companion equation to that one.
OK. So whatever we know about this equation, from the exponents s1 and s2, we're going to have the same information out of this equation. But the language changes. And that's really the point of this video, just to tell you the change in language. So here it is. The old exponents, s1 and s2, for that problem. And everybody watching this video is remembering that the s's solve s squared plus Bs plus C equals 0. So that's always what our s's are.
So that has two roots, s1 and s2, that control everything, control stability. Now if I do it in this language, I no longer call them s1 and s2. But they're the same two numbers. What I call them is eigenvalues, a cool word, half German, half English maybe, kind of a crazy word. But it's well established.
Those same numbers would be called the eigenvalues of the matrix. You see, the matrix in this problem is the same. We've got the same information as the equation here. So those are the eigenvalues. And may I just tell you what you may know already? That everybody writes lambda, a Greek lambda, for eigenvalue. So where I had two exponents, here I have two eigenvalues. And those numbers are the same as those numbers. And they satisfy the same equation.
And when we meet matrices and eigenvalues properly and soon, we'll see about eigenvalues of other matrices. And we'll see that for these particular companion matrices, the eigenvalues solve the same equation that the exponents solve, this quadratic s squared and Bs and C equals 0.
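As a quick numerical check of that claim (my own sketch, not part of the lecture), here is a short Python snippet. It finds the roots of s^2 + Bs + C = 0 by the quadratic formula, and separately finds the eigenvalues of the companion matrix [[0, 1], [-C, -B]] from its characteristic polynomial lambda^2 - trace*lambda + det = 0. Since the companion matrix has trace -B and determinant C, the two pairs of numbers agree.

```python
import cmath

def quadratic_roots(b, c):
    """Both roots of x^2 + b*x + c = 0 (possibly complex)."""
    disc = cmath.sqrt(b * b - 4 * c)
    return ((-b + disc) / 2, (-b - disc) / 2)

def eigenvalues_2x2(A):
    """Eigenvalues of a 2 by 2 matrix from its characteristic
    polynomial: lambda^2 - trace*lambda + det = 0."""
    (a, b), (c, d) = A
    trace, det = a + d, a * d - b * c
    return quadratic_roots(-trace, det)

B, C = 3.0, 2.0                     # y'' + 3y' + 2y = 0
companion = [[0.0, 1.0], [-C, -B]]  # trace = -B, det = C

print(quadratic_roots(B, C))        # the exponents s1 = -1, s2 = -2
print(eigenvalues_2x2(companion))   # the same two numbers, now called eigenvalues
```

For B = 3, C = 2 both calls return -1 and -2: the exponents of the scalar equation are exactly the eigenvalues of its companion matrix.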
OK. And stability. Remember that stability has been: real part of those roots, those exponents, less than zero, because then the exponential has that negative real part and goes to zero. So that was our old language. And our new language would be: real part of lambda less than zero.
A stable matrix is one where the real part of the eigenvalues, the lambdas, is less than zero. So we're really just exchanging the letter s and the single higher order equation for the letter lambda and two first order equations. OK. I'm doing this without-- just connecting the lambda to the s, but without telling you what the lambda is on its own.
OK. So let me remember. So, here I've taken a further step. Because basically I've said everything about a second order equation. We know the condition for stability. The condition is that the damping should be positive, B should be positive. And the frequency squared better come out positive. So C should be positive. So B positive and C positive were the case when this was our matrix.
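That equivalence is easy to test numerically. Here is a small Python sketch (an illustration of mine, not from the lecture) comparing the two stability tests for y'' + By' + Cy = 0: real parts of both roots of s^2 + Bs + C = 0 negative, versus simply B > 0 and C > 0. They agree on every test case.

```python
import cmath

def is_stable_roots(B, C):
    """Stable iff both roots of s^2 + B*s + C = 0 have negative real part."""
    disc = cmath.sqrt(B * B - 4 * C)
    s1, s2 = (-B + disc) / 2, (-B - disc) / 2
    return s1.real < 0 and s2.real < 0

def is_stable_coeffs(B, C):
    """The coefficient test: positive damping B and positive C."""
    return B > 0 and C > 0

# Damped, underdamped, negative damping, negative C, undamped oscillation:
for B, C in [(3, 2), (1, 5), (-1, 2), (2, -3), (0, 4)]:
    assert is_stable_roots(B, C) == is_stable_coeffs(B, C)
print("B > 0 and C > 0 matches Re(s) < 0 on all test cases")
```

Note the undamped case B = 0, C = 4: the roots are purely imaginary, the real part is not strictly negative, and both tests correctly report "not stable" (it is only neutrally stable).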
Now I just have a few minutes more. So why don't I allow any 2 by 2 matrix? I'm not going to give you the theory of eigenvalues here, but just make the connection. OK. So I want to make the connection. And you remember that the companion matrix had a special form: a was 0, b was 1, c was minus the big C, and d was minus the big B. That was the companion.
So what am I going to say at this early, almost too early moment about eigenvalues? Because I'll have to do those properly. Eigenvalues and eigenvectors are the key to a system of equations. And you understand what I mean by system? It means that the unknown-- that I have more than one equation.
My matrix is 2 by 2, or 3 by 3, or n by n. My unknown z has 2 or 3 or n different components. It's a vector. So z is a vector. A matrix multiplies a vector. That's what matrices do. They multiply vectors. So that's the general picture. And this was an especially important case.
So we can decide on the stability. So I'll just summarize the stability for that system. The stability will be-- well I have to tell you something about the solutions to that system. Remember z is a vector. So here are solutions. z is-- it turns out this is the key. That there is an e-- you expect exponentials. And you expect now eigenvalues instead of s there. And now we need a vector. And let me just call that vector x1. And this will be the eigenvector. And this is the eigenvalue.
And if I look for a solution of that form, and put it into my equation, out pops the key equation for eigenvectors. So again, I put this hoped-for solution into the equation. And I'll discover that A times this vector x1 should be lambda 1 times x1. Oh well, I have a lot to say about that.
But if it holds, if A times x1 is lambda 1 times x1, then when I put this in, the equation works. I've got a solution. Well, I've got one solution. And of course for second order things, I'm looking for two solutions. So the complete solution would also be-- well, it's linear, so I can always multiply by a constant. And then I would expect a second one of the same form: e to some other eigenvalue, some other exponent, times some other eigenvector.
Here's my look-ahead message that solutions look like that. So we're looking for an eigenvalue, and looking for an eigenvector. And there is the key equation they have to satisfy. And that equation comes when we put this into the differential equation and make the two sides agree. So that's what's coming. Eigenvalues and eigenvectors control the stability for systems of equations.
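To make that look-ahead concrete, here is a small Python check (my own sketch, not from the lecture) using the companion matrix of y'' + 3y' + 2y = 0. The pair lambda = -1, x = (1, -1) satisfies the key equation Ax = lambda*x, and the solution z(t) = e^(lambda t) x really does satisfy dz/dt = Az, verified by a finite difference.

```python
import math

# Companion matrix for y'' + 3y' + 2y = 0 (B = 3, C = 2).
A = [[0.0, 1.0], [-2.0, -3.0]]
lam = -1.0       # one eigenvalue (the other is -2)
x = [1.0, -1.0]  # its eigenvector

def matvec(M, v):
    """Multiply a 2 by 2 matrix by a vector."""
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

# The key equation: A x = lambda x.
assert matvec(A, x) == [lam * xi for xi in x]

# Then z(t) = e^(lambda t) x solves dz/dt = A z.  Compare a centered
# finite difference of z against A z at t = 0.5.
def z(t):
    return [math.exp(lam * t) * xi for xi in x]

t, h = 0.5, 1e-6
dz = [(p - m) / (2 * h) for p, m in zip(z(t + h), z(t - h))]
assert all(abs(d - a) < 1e-5 for d, a in zip(dz, matvec(A, z(t))))
print("A x = lambda x holds, and z(t) = e^(lambda t) x solves z' = A z")
```

The second eigenpair, lambda = -2 with eigenvector (1, -2), passes the same check, and combinations c1*e^(-t)(1, -1) + c2*e^(-2t)(1, -2) give the complete solution.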
And that's what the world is mostly looking at: a single equation once in a while, but very, very often a system. And it'll be the eigenvalues that tell us. So are the eigenvalues positive? In that case we blow up, unstable. Are the eigenvalues negative, or at least the real part is negative? That's the stable case that we live with. Good, thanks.
1.1: Overview of Differential Equations Linear equations include dy/dt = y, dy/dt = –y, dy/dt = 2ty. The equation dy/dt = y*y is nonlinear.
1.2: The Calculus You Need The sum rule, product rule, and chain rule produce new derivatives from the derivatives of x^n, sin(x), and e^x. The Fundamental Theorem of Calculus says that the integral inverts the derivative.
1.4b: Response to Exponential Input, exp(s*t) With exponential input, e^(st), from outside and exponential growth, e^(at), from inside, the solution, y(t), is a combination of two exponentials.
1.4c: Response to Oscillating Input, cos(w*t) An oscillating input cos(ωt ) produces an oscillating output with the same frequency ω (and a phase shift).
1.4d: Solution for Any Input, q(t) To solve a linear first order equation, multiply each input q(s) by its growth factor and integrate those outputs.
1.4e: Step Function and Delta Function A unit step function jumps from 0 to 1. Its slope is a delta function: zero everywhere except infinite at the jump.
1.5: Response to Complex Exponential, exp(i*w*t) = cos(w*t)+i*sin(w*t) For linear equations, the solution for f = cos(ωt) is the real part of the solution for f = e^(iωt). That complex solution has magnitude G (the gain).
1.6: Integrating Factor for a Constant Rate, a The integrating factor e^(–at) multiplies the differential equation, y' = ay + q, to give the derivative of e^(–at)y: ready for integration.
1.6b: Integrating Factor for a Varying Rate, a(t) The integral of a varying interest rate provides the exponent in the growing solution (the bank balance).
1.7: The Logistic Equation When –by^2 slows down growth and makes the equation nonlinear, the solution approaches a steady state y(∞) = a/b.
1.7c: The Stability and Instability of Steady States Steady state solutions can be stable or unstable – a simple test decides.
2.1: Second Order Equations For the oscillation equation with no damping and no forcing, all solutions share the same natural frequency.
2.1b: Forced Harmonic Motion With forcing f = cos(ωt), the particular solution is Y·cos(ωt). But if the forcing frequency equals the natural frequency there is resonance.
2.3: Unforced Damped Motion With constant coefficients in a differential equation, the basic solutions are exponentials e^(st). The exponent s solves a simple equation such as As^2 + Bs + C = 0.
2.3c: Impulse Response and Step Response The impulse response g is the solution when the force is an impulse (a delta function). This also solves a null equation (no force) with a nonzero initial condition.
2.4: Exponential Response - Possible Resonance Resonance occurs when the natural frequency matches the forcing frequency — equal exponents from inside and outside.
2.4b: Second Order Equations With Damping A damped forced equation has a particular solution y = G cos(ωt – α). The damping ratio provides insight into the null solutions.
2.5: Electrical Networks: Voltages and Currents Current flowing around an RLC loop solves a linear equation with coefficients L (inductance), R (resistance), and 1/C (C = capacitance).
2.6: Methods of Undetermined Coefficients With constant coefficients and special forcing terms (powers of t, cosines/sines, exponentials), a particular solution has this same form.
2.6b: An Example of Method of Undetermined Coefficients This method is also successful for forces and solutions such as (at^2 + bt + c)e^(st): substitute into the equation to find a, b, c.
2.6c: Variation of Parameters Combine null solutions y1 and y2 with coefficients c1(t) and c2(t) to find a particular solution for any f(t).
2.7: Laplace Transform: First Order Equation Transform each term in the linear differential equation to create an algebra problem. You can then transform the algebra solution back to the ODE solution, y(t).
2.7b: Laplace Transform: Second Order Equation The second derivative transforms to s^2·Y, and the algebra problem involves the transfer function 1/(As^2 + Bs + C).
3.1: Pictures of the Solutions The direction field for dy/dt = f(t,y) has an arrow with slope f at each point (t, y). Arrows with the same slope lie along an isocline.
3.2: Phase Plane Pictures: Source, Sink, Saddle Solutions to second order equations can approach infinity or zero. Saddle points contain a positive and also a negative exponent or eigenvalue.
3.2b: Phase Plane Pictures: Spirals and Centers Imaginary exponents with pure oscillation provide a “center” in the phase plane. The point (y, dy/dt) travels forever around an ellipse.
3.2c: Two First Order Equations: Stability A second order equation gives two first order equations for y and dy/dt . The matrix becomes a companion matrix.
3.3: Linearization at Critical Points A critical point is a constant solution Y to the differential equation y' = f(y). Near that Y, the sign of df/dy decides stability or instability.
3.3b: Linearization of y'=f(y,z) and z'=g(y,z) With two equations, a critical point has f(Y,Z) = 0 and g(Y,Z) = 0. Near those constant solutions, the two linearized equations use the 2 by 2 matrix of partial derivatives of f and g .
3.3c: Eigenvalues and Stability: 2 by 2 Matrix, A Two equations y' = Ay are stable (solutions approach zero) when the trace of A is negative and the determinant is positive.
5.1: The Column Space of a Matrix, A An m by n matrix A has n columns, each in R^m. Capturing all combinations Av of these columns gives the column space – a subspace of R^m.
5.4: Independence, Basis, and Dimension Vectors v1 to vd are a basis for a subspace if their combinations span the whole subspace and are independent: no basis vector is a combination of the others. Dimension d = number of basis vectors.
5.5: The Big Picture of Linear Algebra A matrix produces four subspaces – column space, row space (same dimension), the space of vectors perpendicular to all rows (the nullspace), and the space of vectors perpendicular to all columns.
5.6: Graphs A graph has n nodes connected by m edges (other edges can be missing). This is a useful model for the Internet, the brain, pipeline systems, and much more.
6.1: Eigenvalues and Eigenvectors The eigenvectors x remain in the same direction when multiplied by the matrix (Ax = λx). An n by n matrix has n eigenvalues.
6.2: Diagonalizing a Matrix A matrix can be diagonalized if it has n independent eigenvectors. The diagonal matrix Λ is the eigenvalue matrix.
6.3: Solving Linear Systems dy/dt = Ay contains solutions y = e^(λt)x where λ and x are an eigenvalue / eigenvector pair for A.
6.4: The Matrix Exponential, exp(A*t) The shortest form of the solution uses the matrix exponential: y = e^(At)y(0). The matrix e^(At) has eigenvalues e^(λt) and the eigenvectors of A.
6.4b: Similar Matrices, A and B=M^(-1)*A*M A and B are “similar” if B = M^(–1)AM for some matrix M. B then has the same eigenvalues as A.
6.5: Symmetric Matrices, Real Eigenvalues, Orthogonal Eigenvectors Symmetric matrices have n perpendicular eigenvectors and n real eigenvalues.
7.2: Positive Definite Matrices, S=A'*A A positive definite matrix S has positive eigenvalues, positive pivots, positive determinants, and positive energy v^T S v for every vector v. S = A^T A is always positive definite if A has independent columns.
7.2b: Singular Value Decomposition, SVD The SVD factors each matrix A into an orthogonal matrix U times a diagonal matrix Σ (the singular values) times another orthogonal matrix V^T: rotation times stretch times rotation.
7.3: Boundary Conditions Replace Initial Conditions A second order equation can change its initial conditions on y(0) and dy/dt(0) to boundary conditions on y(0) and y(1) .
8.1: Fourier Series A Fourier series separates a periodic function F(x) into a combination (infinite) of all basis functions cos(nx) and sin(nx) .
8.1b: Examples of Fourier Series Even functions use only cosines (F(–x) = F(x)) and odd functions use only sines. The coefficients a_n and b_n come from integrals of F(x)cos(nx) and F(x)sin(nx).
8.1c: Fourier Series Solution of Laplace's Equation Inside a circle, the solution u(r, θ) combines r^n·cos(nθ) and r^n·sin(nθ). The boundary solution combines all entries in a Fourier series to match the boundary conditions.
8.3: Heat Equation The heat equation ∂u/∂t = ∂²u/∂x² starts from a temperature distribution u at t = 0 and follows it for t > 0 as it quickly becomes smooth.