Preface


This section is devoted to fundamental matrices for linear differential equations.


Fundamental Matrices for Variable Coefficient Equations


 

A fundamental matrix of a system of n homogeneous linear ordinary differential equations
\[ \dot{\bf x} (t) = {\bf P}(t)\,{\bf x}(t) , \qquad {\bf x}(t) = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}, \quad {\bf P}(t) = \begin{pmatrix} p_{11} (t) & p_{12} (t) & \cdots & p_{1n} (t) \\ p_{21} (t) & p_{22} (t) & \cdots & p_{2n} (t) \\ \vdots & \vdots & \ddots & \vdots \\ p_{n1} (t) & p_{n2} (t) & \cdots & p_{nn} (t) \end{pmatrix} , \]
is any nonsingular matrix function Ψ(t) (its determinant does not vanish at any t) that satisfies the matrix differential equation:
\[ \dot{\bf \Psi} (t) = {\bf P}(t)\,{\bf \Psi}(t) \qquad \mbox{or} \qquad \frac{\text d}{{\text d}t}\, {\bf \Psi} (t) = {\bf P}(t)\,{\bf \Psi}(t) . \]
Here the dot stands for the derivative with respect to the time variable t. In other words, a fundamental matrix has n linearly independent columns, each of which is a solution of the homogeneous vector equation \( \dot{\bf x} (t) = {\bf P}(t)\,{\bf x}(t) . \) Once a fundamental matrix is determined, every solution to the system can be written as \( {\bf x} (t) = {\bf \Psi}(t)\,{\bf c} \) for some constant vector c (written as a column vector of height n). A product of a fundamental matrix and a nonsingular constant matrix is again a fundamental matrix; therefore, a fundamental matrix is not unique.
Theorem 1: If X(t) is a solution of the n × n matrix differential equation \( \dot{\bf X} (t) = {\bf P}(t)\,{\bf X}(t) , \) then for any constant column-vector c, the n-vector u = X(t) c is a solution of the vector equation \( \dot{\bf x} (t) = {\bf P}(t)\,{\bf x}(t) . \)

Theorem 2: If an n × n matrix P(t) has continuous entries on an open interval, then the vector differential equation \( \dot{\bf x} (t) = {\bf P}(t)\,{\bf x}(t) \) has an n × n fundamental matrix \( {\bf X} (t) = \left[ {\bf x}_1 (t) , {\bf x}_2 (t) , \ldots , {\bf x}_n (t) \right] \) on the same interval. Every solution x(t) to this system can be written as a linear combination of the column vectors of the fundamental matrix in a unique way:
\[ {\bf x} (t) = c_1 {\bf x}_1 (t) + c_2 {\bf x}_2 (t) + \cdots + c_n {\bf x}_n (t) \qquad\mbox{or in matrix form} \quad {\bf x} (t) = {\bf X} (t)\, {\bf c} \]
for appropriate constants c1, c2, ... , cn, where \( {\bf c} = \left\langle c_1 , c_2 , \ldots , c_n \right\rangle^{\mathrm T} \) is a column vector of these constants.

The above representation of solutions as linear combinations of linearly independent vector functions is referred to as the general solution to the homogeneous vector differential equation \( \dot{\bf x} (t) = {\bf P}(t)\,{\bf x}(t) . \)

Theorem 3: The general solution of a nonhomogeneous linear vector equation \( \dot{\bf x} (t) = {\bf P}(t)\,{\bf x}(t) + {\bf f} (t) \) is the sum of the general solution of the complementary homogeneous equation \( \dot{\bf x} (t) = {\bf P}(t)\,{\bf x}(t) \) and a particular solution of the inhomogeneous equation. That is, every solution to \( \dot{\bf x} (t) = {\bf P}(t)\,{\bf x}(t) + {\bf f} (t) \) is of the form
\[ {\bf x} (t) = c_1 {\bf x}_1 (t) + c_2 {\bf x}_2 (t) + \cdots + c_n {\bf x}_n (t) + {\bf x}_p (t) \]
for some constants c1, c2, ... , cn, where
\[ {\bf x}_h (t) = c_1 {\bf x}_1 (t) + c_2 {\bf x}_2 (t) + \cdots + c_n {\bf x}_n (t) \]
is the general solution of the homogeneous linear equation \( \dot{\bf x} (t) = {\bf P}(t)\,{\bf x}(t) \) and xp (t) is a particular solution of the nonhomogeneous equation \( \dot{\bf x} (t) = {\bf P}(t)\,{\bf x}(t) + {\bf f} (t) . \)
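A tiny symbolic check of this structure in Mathematica, using the matrix P(t) and a homogeneous solution from Example 1 below; the particular solution xp (and hence the forcing f it generates) is a hypothetical choice made only for illustration:

P[t_] = {{-2 t^2, 2 t}, {0, 1 - t^2}}/(t (1 - t^2));
xp[t_] = {t, 0};               (* a hypothetical particular solution *)
f[t_] = xp'[t] - P[t].xp[t];   (* the forcing term generated by this choice *)
xh[t_] = {1, t};               (* a homogeneous solution (see Example 1 below) *)
Simplify[D[xh[t] + xp[t], t] - P[t].(xh[t] + xp[t]) - f[t]]
{0, 0}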

Theorem 4 (Superposition Principle for inhomogeneous equations): Let P(t) be an n × n matrix function that is continuous on an interval [a,b], and let x1(t) and x2(t) be two vector solutions of the nonhomogeneous equations
\[ \dot{\bf x}_1 (t) = {\bf P}(t)\,{\bf x}_1 (t) + {\bf f}_1 (t) , \qquad \dot{\bf x}_2 (t) = {\bf P}(t)\,{\bf x}_2 (t) + {\bf f}_2 (t) , \quad t \in [a,b] , \]
respectively. Then their sum \( {\bf x} (t) = {\bf x}_1 (t) + {\bf x}_2 (t) \) is a solution of the nonhomogeneous equation \( \dot{\bf x} (t) = {\bf P}(t)\,{\bf x} (t) + {\bf f}_1 (t) + {\bf f}_2 (t) . \) ■

Corollary: The difference between any two solutions of the nonhomogeneous vector equation \( \dot{\bf x} (t) = {\bf P}(t)\,{\bf x}(t) + {\bf f} (t) \) is a solution of the complementary homogeneous equation \( \dot{\bf x} (t) = {\bf P}(t)\,{\bf x}(t) . \)
Example 1: It is not hard to verify that the vector functions
\[ {\bf x}_1 (t) = \begin{bmatrix} 1 \\ t \end{bmatrix} , \qquad {\bf x}_2 (t) = \begin{bmatrix} t^2 \\ t \end{bmatrix} \]
are two linearly independent solutions to the following homogeneous vector differential equation
\[ \dot{\bf x} (t) = {\bf P} (t)\, {\bf x} (t) , \qquad {\bf P} (t) = \frac{1}{t \left( 1 - t^2 \right)} \begin{bmatrix} -2t^2 & 2t \\ 0 & 1- t^2 \end{bmatrix} . \]
Therefore, the corresponding fundamental matrix is
\[ {\bf X} (t) = \begin{bmatrix} 1 & t^2 \\ t & t \end{bmatrix}, \qquad \det {\bf X} (t) = t - t^3 = t \left( 1 - t^2 \right) . \]
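A quick Mathematica verification (the same check reappears in Example 3 below):

X[t_] = {{1, t^2}, {t, t}};
P[t_] = {{-2 t^2, 2 t}, {0, 1 - t^2}}/(t (1 - t^2));
Simplify[D[X[t], t] - P[t].X[t]]  (* the residual X'(t) - P(t) X(t) *)
{{0, 0}, {0, 0}}
End of Example 1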
The determinant \( W(t) = \det\,{\bf X}(t) \) of a square matrix \( {\bf X}(t) = \left[ {\bf x}_1 (t) , {\bf x}_2 (t) , \ldots , {\bf x}_n (t) \right] \) formed from the set of n vector functions x1, x2, ... , xn, is called the Wronskian of these column vectors x1, x2, ... , xn.
Theorem 5: [N. Abel] Let P(t) be an n × n matrix function with entries pij(t) (i,j = 1,2, ... ,n) that are continuous functions on some interval. Let xk(t), k = 1,2, ... , n, be n solutions to the homogeneous vector differential equation \( \dot{\bf x} (t) = {\bf P}(t)\,{\bf x}(t) . \) Then the Wronskian of the set of vector solutions is
\[ W(t) = \det {\bf X}(t) = W(t_0 ) \,\exp \left\{ \int_{t_0}^t \mbox{tr}\,{\bf P}(s)\,{\text d}s \right\} , \]
with t0 being a point within an interval where the trace tr P(t) = p11 + p22 + ... + pnn is continuous. ■

Corollary 2: Let x1(t), x2(t), ... , xn(t) be column solutions of the homogeneous vector equation \( \dot{\bf x} (t) = {\bf P}(t)\,{\bf x}(t) \) on some interval |a,b|, where the n × n matrix function P(t) is continuous. Then the corresponding matrix \( {\bf X}(t) = \left[ {\bf x}_1 (t) , {\bf x}_2 (t) , \ldots , {\bf x}_n (t) \right] \) of these column vectors is either singular for all t ∈ |a,b| or else nonsingular for all t ∈ |a,b|. In other words, det X(t) either vanishes identically or never vanishes on the interval |a,b|.

Corollary 3: Let P(t) be an n × n matrix function that is continuous on an interval |a,b|. If \( \left\{ {\bf x}_1 (t) , {\bf x}_2 (t) , \ldots , {\bf x}_n (t) \right\} \) is a linearly independent set of solutions to the homogeneous differential equation \( \dot{\bf x} (t) = {\bf P}(t)\,{\bf x}(t) \) on |a,b|, then the Wronskian
\[ W(t) = \det {\bf X}(t) = \det \left[ {\bf x}_1 (t) , {\bf x}_2 (t) , \ldots , {\bf x}_n (t) \right] \]
does not vanish at any point t in |a,b|. ■

Example 2: The matrix
\[ {\bf P} (t) = \frac{1}{t \left( 1 - t^2 \right)} \begin{bmatrix} -2t^2 & 2t \\ 0 & 1- t^2 \end{bmatrix} \qquad (t \ne 0,1,-1) \]
has the trace tr \( {\bf P} = \frac{1 - 3\,t^2}{t \left( 1 - t^2 \right)} . \) Integrating the latter, we get
\[ \int \mbox{tr}\,{\bf P} (t) \,{\text d} t = \int \frac{1 - 3\,t^2}{t \left( 1 - t^2 \right)} \,{\text d} t = \ln \left( t- t^3 \right) . \]
From Abel's theorem, it follows that the Wronskian is
\[ W(t) = C\, e^{\int \mbox{tr}\,{\bf P}(t)\,{\text d}t} = C\, \left( t- t^3 \right) . \]
On the other hand, direct calculations show that the Wronskian of the given functions x1(t) and x2(t) is
\[ W(t) = \det \begin{bmatrix} 1& t^2 \\ t&t \end{bmatrix} = t - t^3 \ne 0 \quad \mbox{for} \quad t \ne 0, 1, -1 . \]
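Equivalently, in differential form, Abel's theorem states that \( \dot{W}(t) = \mbox{tr}\,{\bf P}(t)\,W(t) ; \) a one-line Mathematica check for this example:

P[t_] = {{-2 t^2, 2 t}, {0, 1 - t^2}}/(t (1 - t^2));
W[t_] = Det[{{1, t^2}, {t, t}}];
Simplify[W'[t] - Tr[P[t]]*W[t]]  (* vanishes identically *)
0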
End of Example 2

Let us consider the initial value problem

\[ \frac{{\text d}{\bf x}}{{\text d}t} = {\bf P}(t)\, {\bf x} (t) , \qquad {\bf x} (t_0 ) = {\bf x}_0 . \]
The general solution of the homogeneous equation is
\[ {\bf x} (t) = {\bf X} (t)\, {\bf c} , \]
where \( {\bf c} = \left\langle c_1 , c_2 , \ldots , c_n \right\rangle^{\mathrm T} \) is the column vector of arbitrary constants. To satisfy the initial condition, we set
\[ {\bf X} (t_0 )\, {\bf c} = {\bf x}_0 \qquad\mbox{or} \qquad {\bf c} = {\bf X}^{-1} (t_0 )\, {\bf x}_0 . \]
Therefore, the solution to the initial value problem becomes
\[ {\bf x}(t) = {\bf \Phi} (t, t_0 )\,{\bf x}_0 = {\bf X} (t)\,{\bf X}^{-1} (t_0 )\, {\bf x}_0 . \]
The square matrix \( {\bf \Phi} (t, s) = {\bf X} (t)\, {\bf X}^{-1} (s) \) is usually referred to as a propagator matrix.
Theorem 6: Let X(t) be a fundamental matrix for the homogeneous linear system \( \dot{\bf x} = {\bf P}(t)\,{\bf x} (t) , \) meaning that X(t) is a solution of the matrix equation \( \dot{\bf X} = {\bf P}(t)\,{\bf X} (t) \) and det X(t) ≠ 0. Then the unique solution of the initial value problem
\[ \dot{\bf x}(t) = {\bf P}(t)\,{\bf x} (t) , \qquad {\bf x} (t_0 ) = {\bf x}_0 \]
is given by \( {\bf x}(t) = {\bf \Phi} (t, t_0 )\,{\bf x}_0 . \)

Corollary 4: For a fundamental matrix X(t), the propagator matrix Φ(t, t0) is the unique solution of the following matrix initial value problem
\[ \frac{\text d}{{\text d}t}\,{\bf \Phi} \left( t, t_0 \right) = {\bf P}(t)\, {\bf \Phi} \left( t, t_0 \right) , \qquad {\bf \Phi} \left( t_0 , t_0 \right) = {\bf I} , \]
where I is the identity matrix. Hence, Φ(t, t0) is a fundamental matrix of the homogeneous vector differential equation \( \dot{\bf x} = {\bf P}(t)\,{\bf x} (t) . \)

Corollary 5: Let X(t) and Y(t) be two fundamental matrices of the homogeneous vector equation \( \dot{\bf x} = {\bf P}(t)\,{\bf x} (t) . \) Then there exists a nonsingular constant square matrix C such that \( {\bf X} (t) = {\bf Y} (t)\, {\bf C} , \ \det{\bf C} \ne 0 . \) In this sense, a single fundamental matrix generates all fundamental matrices of the matrix equation \( \dot{\bf X} = {\bf P}(t)\,{\bf X} (t) : \) every other one is obtained from it by right multiplication by a nonsingular constant matrix. ■
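A quick symbolic illustration with the fundamental matrix of Example 1, together with an arbitrarily chosen nonsingular constant matrix (named cc here, since the symbol C is reserved by Mathematica):

X[t_] = {{1, t^2}, {t, t}};
P[t_] = {{-2 t^2, 2 t}, {0, 1 - t^2}}/(t (1 - t^2));
cc = {{1, 2}, {0, 1}};  (* any nonsingular constant matrix *)
Simplify[D[X[t].cc, t] - P[t].(X[t].cc)]  (* X(t) C is again a fundamental matrix *)
{{0, 0}, {0, 0}}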
Example 3: Consider the initial value problem
\[ \dot{\bf x} (t) = {\bf P}(t)\,{\bf x} (t) , \quad {\bf x} (2) = {\bf x}_0 , \qquad\mbox{where} \quad {\bf P}(t) = \frac{1}{t \left( 1 - t^2 \right)} \begin{bmatrix} -2t^2 & 2t \\ 0 & 1- t^2 \end{bmatrix} , \quad {\bf x}_0 = \begin{bmatrix} 2 \\ 1 \end{bmatrix} . \]
We know from the previous example that a fundamental matrix for the corresponding homogeneous vector equation is
\[ {\bf X} (t) = \begin{bmatrix} 1 & t^2 \\ t & t \end{bmatrix} . \]
Since
\[ {\bf X}^{-1} (t) = \frac{1}{t \left( 1 - t^2 \right)} \begin{bmatrix} t & -t^2 \\ -t & 1 \end{bmatrix} \qquad \Longrightarrow \qquad {\bf X}^{-1} (2) = \frac{1}{6} \begin{bmatrix} -2&4 \\ 2& -1 \end{bmatrix} , \]
we get the propagator matrix
\[ {\bf \Phi} (t,2) = {\bf X} (t) {\bf X}^{-1} (2) = \frac{1}{6} \begin{bmatrix} 2t^2 -2 & 4- t^2 \\ 0 & 3t \end{bmatrix} . \]
Then the solution of the given initial value problem becomes
\[ {\bf x} (t) = {\bf \Phi} (t,2) \, {\bf x}_0 = \frac{1}{6} \begin{bmatrix} 2t^2 -2 & 4- t^2 \\ 0 & 3t \end{bmatrix} \begin{bmatrix} 2 \\ 1 \end{bmatrix} = \frac{1}{2} \begin{bmatrix} t^2 \\ t \end{bmatrix} . \]
X[t_] = {{1, t^2}, {t, t}};
P[t_] = {{-2 t^2, 2 t}, {0, 1 - t^2}}/(t (1 - t^2));
Simplify[D[X[t], t] - P[t].X[t]]   (* {{0, 0}, {0, 0}}: X(t) solves the matrix equation *)
Xinv2 = Inverse[X[t]] /. t -> 2    (* {{-1/3, 2/3}, {1/3, -1/6}} *)
Phi[t_] = Simplify[X[t].Xinv2];    (* propagator Φ(t, 2) *)
Phi[2]                             (* {{1, 0}, {0, 1}}, as Corollary 4 requires *)
Simplify[Phi[t].{{2}, {1}}]
{{t^2/2}, {t/2}}
End of Example 3

 

Exponential Matrices


Consider an autonomous linear vector differential equation of the form

\[ \dot{\bf y} (t) = {\bf A}\, {\bf y} (t) , \]
where A is a square n × n matrix and y(t) is an (n × 1)-column vector of n unknown functions. Here the dot represents the derivative with respect to t. A solution of the above equation is a curve in n-dimensional space; it is called an integral curve, a trajectory, a streamline, or an orbit. When the independent variable t is associated with time (which is usually the case), we call a solution y(t) the state of the system at time t. Since a constant matrix A is continuous on any interval, all solutions of the system \( \dot{\bf y} (t) = {\bf A} \, {\bf y} (t) \) are defined on ( -∞ , ∞ ). Therefore, when we speak of solutions to the vector equation \( \dot{\bf y} (t) = {\bf A} \, {\bf y} (t) , \) we mean solutions on the whole real axis.

Any fundamental matrix is a constant multiple of the exponential matrix:

\[ {\bf \Phi}(t) = e^{{\bf A}\,t} {\bf C}, \qquad \det{\bf C} \ne 0. \]
The exponential matrix function is a unique solution of the following matrix initial value problem:
\begin{equation} \label{EqExp.1} \frac{\text d}{{\text d}t} {\bf X}(t) = {\bf A}\,{\bf X}(t) , \qquad {\bf X}(0) = {\bf I} , \end{equation}
where I is the identity matrix. With this in hand, the propagator is expressed as
\[ {\bf \Phi}(t,s) = e^{{\bf A}\,(t-s)} . \]
Then the solution of the initial value problem
\[ \frac{\text d}{{\text d}t} {\bf y}(t) = {\bf A}\,{\bf y}(t) , \qquad {\bf y}(t_0) = {\bf c} , \]
is expressed through the propagator:
\[ {\bf y}(t) = {\bf \Phi}(t,t_0 ) {\bf c} = e^{{\bf A}\,(t-t_0 )} {\bf c} . \]
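A minimal Mathematica sketch of this formula; the matrix A is borrowed from Example 4 below, while the initial data t0 and c are assumed values:

A = {{2, 1}, {6, 3}};   (* matrix from Example 4 *)
c = {1, 0}; t0 = 0;     (* assumed initial data *)
y[t_] = MatrixExp[A (t - t0)].c;
Simplify[y'[t] - A.y[t]]  (* {0, 0} *)
y[t0]                     (* {1, 0} = c *)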

Mathematica has a couple of options for determining a fundamental matrix. It has a built-in command MatrixExp[A t] that returns the exponential matrix (a fundamental matrix) for any square matrix A. For a diagonalizable matrix A, another way to find a fundamental matrix is the following two-line approach:

{roots, vectors} = Eigensystem[A]          (* eigenvalues and eigenvectors of A *)
Phi[t_] = Transpose[Exp[roots t]*vectors]  (* columns are the solutions e^(λ t) v *)
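For instance, applied to the matrix of Example 4 below, these two lines reproduce the fundamental matrix obtained there:

A = {{2, 1}, {6, 3}};
{roots, vectors} = Eigensystem[A];
Phi[t_] = Transpose[Exp[roots t]*vectors]
{{E^(5 t), -1}, {3 E^(5 t), 2}}
Simplify[D[Phi[t], t] - A.Phi[t]]  (* {{0, 0}, {0, 0}} *)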
Example 4: Consider a linear system of differential equations
\[ \dot{\bf y}(t) = {\bf A}\,{\bf y}(t), \qquad\mbox{where}\quad {\bf y}(t) = \begin{bmatrix} y_1 (t) \\ y_2 (t) \end{bmatrix}, \quad {\bf A} = \begin{bmatrix} 2&1 \\ 6&3 \end{bmatrix} . \]
First, we find eigenvalues and eigenvectors:
Eigenvalues[{{2, 1}, {6, 3}}]
{5, 0}
Eigenvectors[{{2, 1}, {6, 3}}]
{{1, 3}, {-1, 2}}
We check with Mathematica that the vectors v1 = [1, 3] and v2 = [-1, 2] are eigenvectors corresponding to the eigenvalues λ1 = 5 and λ2 = 0, respectively.
A = {{2, 1}, {6, 3}};
v1 = {1, 3};
v2 = {-1, 2};
A.v1 - 5*v1
{0, 0}
A.v2
{0, 0}
Now we use Mathematica to determine the fundamental matrix. First, we define two linearly independent solutions:
lambda1=5; lambda2=0; y1[t_] = Exp[lambda1*t]*v1
Out[13]= {E^(5 t), 3 E^(5 t)}
y2[t_] = Exp[lambda2*t]*v2
Out[14]= {-1, 2}
The general solution:
y[t_] = c1*y1[t]+c2*y2[t]
Out[15]= {-c2 + c1 E^(5 t), 2 c2 + 3 c1 E^(5 t)}
(* check *)
Simplify[y'[t]-A.y[t]=={0,0}]
Out[16]= True
To find the fundamental matrix:
W[t_]=Transpose[{y1[t],y2[t]}]
Out[17]= {{E^(5 t), -1}, {3 E^(5 t), 2}}
Det[W[t]]
Out[18]= 5 E^(5 t)
Simplify[W'[t]-A.W[t]==0,Trig->False]
Out[19]= {{0, 0}, {0, 0}} == 0
The comparison with the scalar 0 is returned unevaluated; comparing against the zero matrix gives the expected confirmation:
Simplify[W'[t] - A.W[t] == {{0, 0}, {0, 0}}]
Out[20]= True
End of Example 4
Example 5: Let us consider a differential equation with a diagonalizable matrix:
\[ \frac{{\text d}{\bf y}}{{\text d}t} = {\bf A}\,{\bf y}, \qquad \mbox{with} \qquad {\bf A} = \begin{bmatrix} 3&2&4 \\ 2&0&2 \\ 4&2&3 \end{bmatrix} . \]

First, we check its eigenvalues and corresponding eigenvectors:

A = {{3, 2, 4}, {2, 0, 2}, {4, 2, 3}};
Eigenvalues[A]
Out[2]= {8, -1, -1}
Eigenvectors[A]
Out[3]= {{2, 1, 2}, {-1, 0, 1}, {-1, 2, 0}}
The given matrix A has three linearly independent eigenvectors; therefore, it is diagonalizable, and its minimal polynomial is
\[ \psi (\lambda ) = \left( \lambda -8 \right)\left( \lambda +1 \right) . \]
We can build the corresponding exponential matrix using Sylvester's formula:
\[ e^{{\bf A}\,t} = e^{8t} {\bf Z}_8 + e^{-t} {\bf Z}_{-1} , \]
where
\[ {\bf Z}_8 = \frac{1}{9} \left( {\bf A} + {\bf I} \right) = \frac{1}{9} \begin{bmatrix} 4&2&4 \\ 2&1&2 \\ 4&2&4 \end{bmatrix} , \qquad {\bf Z}_{-1} = -\frac{1}{9} \left( {\bf A} -8 {\bf I} \right) = \frac{1}{9}\begin{bmatrix} \phantom{-}5&-2&-4 \\ -2&\phantom{-}8&-2 \\ -4&-2&\phantom{-}5 \end{bmatrix} . \]
A = {{3, 2, 4}, {2, 0, 2}, {4, 2, 3}};
Z8 = (A + IdentityMatrix[3])/9
Z1 = -(A - 8* IdentityMatrix[3])/9
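As a quick sanity check (with Z8 and Z1 as just computed), these spectral projectors sum to the identity matrix, and Sylvester's formula reproduces the built-in matrix exponential:

Z8 + Z1 == IdentityMatrix[3]
True
Simplify[Exp[8 t]*Z8 + Exp[-t]*Z1 - MatrixExp[A t]]
{{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}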
We can also check our answer with a standard Mathematica command:
MatrixExp[A t]
{{1/9 E^-t (5 + 4 E^(9 t)), 2/9 E^-t (-1 + E^(9 t)), 4/9 E^-t (-1 + E^(9 t))}, {2/9 E^-t (-1 + E^(9 t)), 1/9 E^-t (8 + E^(9 t)), 2/9 E^-t (-1 + E^(9 t))}, {4/9 E^-t (-1 + E^(9 t)), 2/9 E^-t (-1 + E^(9 t)), 1/9 E^-t (5 + 4 E^(9 t))}}
Now we check that the exponential matrix is a solution of the matrix differential equation \eqref{EqExp.1}. We check it in two ways. First, Sylvester's formula leads to
\[ \frac{\text d}{{\text d}t} e^{{\bf A}\,t} = \frac{\text d}{{\text d}t} \left( e^{8t} {\bf Z}_8 + e^{-t} {\bf Z}_{-1} \right) = 8\, e^{8t} {\bf Z}_8 - e^{-t} {\bf Z}_{-1} . \]
Therefore, it is sufficient to show the validity of the following equations:
\begin{align*} 8\,{\bf Z}_8 &= {\bf A}\, {\bf Z}_8 , \\ - {\bf Z}_{-1} &= {\bf A}\, {\bf Z}_{-1} . \end{align*}
Indeed, Mathematica confirms
8*Z8 - A.Z8    (* {{0, 0, 0}, {0, 0, 0}, {0, 0, 0}} *)
A.Z1 + Z1      (* {{0, 0, 0}, {0, 0, 0}, {0, 0, 0}} *)

If you would like to use the standard Mathematica command instead, you need to differentiate the exponential matrix.

Dt[MatrixExp[A t], t];
Simplify[%]
Out[5]= {{1/9 E^-t (-5 + 32 E^(9 t)), 2/9 E^-t (1 + 8 E^(9 t)),
4/9 E^-t (1 + 8 E^(9 t))}, {2/9 E^-t (1 + 8 E^(9 t)),
8/9 E^-t (-1 + E^(9 t)),
2/9 E^-t (1 + 8 E^(9 t))}, {4/9 E^-t (1 + 8 E^(9 t)),
2/9 E^-t (1 + 8 E^(9 t)), 1/9 E^-t (-5 + 32 E^(9 t))}}
To check that the exponential matrix is the solution of the matrix differential equation \eqref{EqExp.1} directly:
Simplify[Dt[MatrixExp[A t], t] - A.MatrixExp[A t]]
Out[6]= {{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}

Finally, we need to show that the exponential matrix satisfies the initial condition

MatrixExp[A*0]
Out[7]= {{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}
Note that instead of Dt, we can use the partial derivative operator: D[function,t]

The general solution:

CC := {c1, c2, c3} (* vector of arbitrary constants *)
(* note that the upper-case symbol C is reserved by Mathematica and cannot be used *)
Simplify[MatrixExp[A t].CC]
Out[9]= {1/9 E^-t (c1 (5 + 4 E^(9 t)) + 2 c2 (-1 + E^(9 t)) + 4 c3 (-1 + E^(9 t))),
1/9 E^-t (2 c1 (-1 + E^(9 t)) + c2 (8 + E^(9 t)) + 2 c3 (-1 + E^(9 t))),
1/9 E^-t (4 c1 (-1 + E^(9 t)) + 2 c2 (-1 + E^(9 t)) + c3 (5 + 4 E^(9 t)))}
End of Example 5

 

Example 6: We consider a matrix that has pure imaginary eigenvalues:
\[ {\bf A} = \begin{bmatrix} \phantom{-}0 & 1 \\ -1&0 \end{bmatrix} . \]
A := {{0, 1}, {-1, 0}}
Eigenvalues[A]
Out[2]= {I, -I}
Simplify[ComplexExpand[MatrixExp[A t]]]
Out[3]= {{Cos[t], Sin[t]}, {-Sin[t], Cos[t]}}
The exponential of a diagonal matrix is itself diagonal, with the exponentials of the diagonal entries on its diagonal:
diag = DiagonalMatrix[{2, -1, 4}]
Out[4]= {{2, 0, 0}, {0, -1, 0}, {0, 0, 4}}
y[t_] = MatrixExp[diag t]
Out[5]= {{E^(2 t), 0, 0}, {0, E^-t, 0}, {0, 0, E^(4 t)}}
y[0]
Out[6]= {{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}
y'[t] - diag.y[t]
Out[7]= {{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}
The second argument of DiagonalMatrix places the entries on a sub- or superdiagonal:
DiagonalMatrix[{2, 3}, -1]
Out[8]= {{0, 0, 0}, {2, 0, 0}, {0, 3, 0}}
DiagonalMatrix[{2, 3}, 1] // MatrixForm
Out[9]//MatrixForm=
{{0, 2, 0},
{0, 0, 3},
{0, 0, 0}}
End of Example 6

 


 
