In this system of differential equations, the \( n^2 \) coefficients \( p_{11}(t), p_{12}(t), \ldots , p_{nn}(t) \)
and the n functions \( f_1 (t) , f_2 (t) , \ldots , f_n (t) \) are assumed to be known and continuous on some open interval (𝑎, b). Instead of \( {\text d}x /{\text d}t , \) we will use either of the shorter notations \( x' \) (Lagrange) or the more customary
\( \dot{x} \) (Newton) to denote the derivative of \( x(t) \) with respect to the variable t, which is associated with time.
If the coefficients \( p_{ij} \)
are constants, we have a constant coefficient system of equations. Otherwise, we have a linear system of differential equations with variable coefficients.
The system is said to be homogeneous or undriven if \( f_1 (t) \equiv f_2 (t) \equiv \cdots \equiv f_n (t) \equiv 0. \)
The linear system of differential equations can be written in the compact vector form
\[
\frac{{\text d}{\bf x}}{{\text d}t} = {\bf P}(t)\, {\bf x}(t) + {\bf f}(t) .
\]
Here \( {\bf x} (t) \) and \( {\bf f} (t) \) are n-dimensional vector-functions that are assumed to be columns. Again, the matrix P(t) and the column vector f(t) are assumed to be given, but the vector x(t) is unknown and has to be determined.
A system of linear differential equations in normal form \eqref{EqVariable.2} is called a vector differential equation. Its complementary equation
\[
\dot{\bf x} = {\bf P}(t)\, {\bf x}(t)
\]
is a homogeneous equation whose general solution is referred to as the complementary function; it contains n arbitrary constants. The homogeneous equation \eqref{EqVariable.3} obviously has the zero solution x(t) ≡ 0, which is called the trivial solution.
A differential equation is usually subject to the initial conditions:
\[
x_1 (t_0 ) = x_{10} , \quad x_2 (t_0 ) = x_{20} , \quad \ldots , \quad x_n (t_0 ) = x_{n0} ,
\]
where \( t_0 \) is a specified value of t ∈ (𝑎, b) and \( x_{10} , x_{20} , \ldots , x_{n0} \)
are prescribed constants. In what follows, we usually assume that \( t_0 = 0 . \) The problem of finding a solution to a system of differential equations that satisfies the given initial conditions is called an initial value problem.
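Although this tutorial is built around Mathematica, an initial value problem of this kind can be sketched in a few lines of NumPy; the test system, step count, and function names below are illustrative choices, not part of the text. The sketch integrates \( \dot{\bf x} = {\bf P}(t)\,{\bf x} + {\bf f}(t) , \ {\bf x}(t_0) = {\bf x}_0 \) with the classical fourth-order Runge--Kutta method.

```python
import numpy as np

def rk4_solve(P, f, x0, t0, t1, n_steps=1000):
    """Integrate x' = P(t) x + f(t), x(t0) = x0, by classical RK4."""
    h = (t1 - t0) / n_steps
    t = t0
    x = np.asarray(x0, dtype=float)
    rhs = lambda t, x: P(t) @ x + f(t)
    for _ in range(n_steps):
        k1 = rhs(t, x)
        k2 = rhs(t + h / 2, x + h / 2 * k1)
        k3 = rhs(t + h / 2, x + h / 2 * k2)
        k4 = rhs(t + h, x + h * k3)
        x = x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return x

# Hypothetical test system: x1' = x2, x2' = -x1 with x(0) = (1, 0),
# whose exact solution is x(t) = (cos t, -sin t).
P = lambda t: np.array([[0.0, 1.0], [-1.0, 0.0]])
f = lambda t: np.zeros(2)
x = rk4_solve(P, f, [1.0, 0.0], 0.0, 1.0)
```

For this constant-coefficient test system, the numerical value of x(1) agrees with (cos 1, −sin 1) to high accuracy.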
Theorem 1:
Suppose that the coefficient matrix P(t) and the forcing function f(t) are continuous on an interval (𝑎, b). Let the initial point t_{0} ∈ (𝑎, b) and let k be an arbitrary constant column vector. Then the initial value problem
\[
\dot{\bf x} = {\bf P}(t)\, {\bf x}(t) + {\bf f}(t) , \qquad {\bf x}(t_0 ) = {\bf k} ,
\]
has a unique solution that exists on the whole interval (𝑎, b).
A set of n vector functions x₁(t), x₂(t), …,
x_{n}(t) is said to be linearly dependent on an interval (𝑎, b) if there exists a set of numbers c₁, c₂, …, c_{n}, with at least one of them nonzero, such that
\[
c_1 {\bf x}_1 (t) + c_2 {\bf x}_2 (t) + \cdots + c_n {\bf x}_n (t) = {\bf 0} \qquad \mbox{for all } t \in (𝑎, b) .
\]
Otherwise, the set is said to be linearly independent.
For instance, two vector functions related by x(t) = e^{t} y(t), with y(t) not identically zero, are linearly independent on (−∞, ∞) because there is no constant C such that x(t) = C y(t).
■
Theorem 2:
Let x₁(t), … , x_{m}(t) be solutions of the homogeneous vector differential equation \eqref{EqVariable.3} on an interval (𝑎, b), and let t_{0} be any point in this interval. Then the set of vector functions x₁(t), … , x_{m}(t) is linearly dependent if and only if the set of vectors { x₁(t_{0}), … , x_{m}(t_{0}) } is linearly dependent.
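Theorem 2 reduces a question about functions to a finite-dimensional rank computation: it suffices to evaluate the solutions at one point t₀ and test the resulting columns. A small NumPy sketch (the sample solutions are illustrative, not taken from the text):

```python
import numpy as np

def independent_at(t0, *solutions):
    """Stack the solution values x_1(t0), ..., x_m(t0) as columns and
    report whether they are linearly independent (full column rank)."""
    M = np.column_stack([s(t0) for s in solutions])
    return np.linalg.matrix_rank(M) == M.shape[1]

# Two solutions of the test system x' = [[0, -1], [1, 0]] x:
x1 = lambda t: np.array([np.cos(t), np.sin(t)])
x2 = lambda t: np.array([-np.sin(t), np.cos(t)])
```

By Theorem 2, independence of the values at the single point t₀ already settles independence of the solutions on the whole interval.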
Theorem 3:
Let x_{k}(t), k = 1, 2, … , n, be solutions of the initial value problems
\[
\dot{\bf x}_k = {\bf P}(t)\, {\bf x}_k , \qquad {\bf x}_k (t_0 ) = {\bf k}_k , \qquad k = 1, 2, \ldots , n ,
\]
where k₁, k₂, …, k_{n} are linearly independent constant vectors.
Then x_{k}(t), k = 1, 2, … , n, are linearly independent solutions of the system \eqref{EqVariable.3}.
Theorem 4:
The dimension of the solution space of the n×n system of differential equations \eqref{EqVariable.3} is n.
Upon choosing n linearly independent constant vectors k_{i}, i = 1, 2, … , n, we can build, based on Theorem 1, n linearly independent solutions x₁(t), x₂(t), …, x_{n}(t) of the homogeneous vector equation \eqref{EqVariable.3}. Such a set of linearly independent solutions is referred to as a fundamental set of solutions of the homogeneous vector equation \eqref{EqVariable.3}. It is convenient to place these vector solutions side by side to form the matrix
\[
{\bf X}(t) = \left[ {\bf x}_1 (t) \ {\bf x}_2 (t) \ \cdots \ {\bf x}_n (t) \right] .
\]
In this n×n matrix, every column is a solution of the homogeneous vector equation \eqref{EqVariable.3}. This matrix deserves a special name given in the following definition.
A square \( n \times n \) nonsingular matrix X(t) that satisfies the matrix differential equation (comprising n² scalar equations)
\[
\dot{\bf X} (t) = {\bf P}(t)\, {\bf X}(t)
\]
is called a fundamental matrix for the vector equation \( \dot{\bf x} (t) = {\bf P}(t)\,{\bf x} .\) The column vectors of a fundamental matrix form a fundamental set of solutions for the homogeneous vector equation \eqref{EqVariable.3}.
A fundamental matrix X(t) satisfies the matrix differential equation \eqref{EqVariable.5}, which contains n² scalar differential equations, one for each entry of the n×n matrix X(t) = [ x_{i,j}(t) ]. For example, in the two-dimensional case,
\[
\frac{\text d}{{\text d}t} \begin{bmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \end{bmatrix}
= \begin{bmatrix} p_{11} & p_{12} \\ p_{21} & p_{22} \end{bmatrix}
\begin{bmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \end{bmatrix}
\qquad \Longleftrightarrow \qquad
\begin{cases}
\dot{x}_{11} = p_{11} x_{11} + p_{12} x_{21} , \\
\dot{x}_{12} = p_{11} x_{12} + p_{12} x_{22} , \\
\dot{x}_{21} = p_{21} x_{11} + p_{22} x_{21} , \\
\dot{x}_{22} = p_{21} x_{12} + p_{22} x_{22} .
\end{cases}
\]
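A fundamental matrix can also be produced numerically by integrating the matrix equation itself, starting from X(t₀) = I. The NumPy sketch below is an illustration (the constant test matrix is chosen so that the answer is the known matrix exponential); it applies RK4 directly to X' = P(t) X:

```python
import numpy as np

def fundamental_matrix(P, t0, t1, n_steps=2000):
    """Integrate the matrix ODE X' = P(t) X with X(t0) = I by RK4."""
    n = P(t0).shape[0]
    h = (t1 - t0) / n_steps
    t, X = t0, np.eye(n)
    for _ in range(n_steps):
        k1 = P(t) @ X
        k2 = P(t + h / 2) @ (X + h / 2 * k1)
        k3 = P(t + h / 2) @ (X + h / 2 * k2)
        k4 = P(t + h) @ (X + h * k3)
        X = X + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return X

# For constant P, the fundamental matrix normalized by X(0) = I is the
# matrix exponential exp(t P); for the rotation generator below the
# closed form is a plane rotation through the angle t.
P = lambda t: np.array([[0.0, -1.0], [1.0, 0.0]])
X1 = fundamental_matrix(P, 0.0, 1.0)
exact = np.array([[np.cos(1.0), -np.sin(1.0)], [np.sin(1.0), np.cos(1.0)]])
```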
The established relation between the matrix differential equation \eqref{EqVariable.5} and the vector equation \eqref{EqVariable.3} leads to the following sequence of statements.
Theorem 5:
If X(t) is a solution of the n×n matrix differential equation \eqref{EqVariable.5}, then for any constant column vector c = [ c_{1}, c_{2}, … , c_{n} ]^{T}, the n-vector u(t) = X(t) c is a solution of the vector equation \eqref{EqVariable.3}.
Theorem 6:
If an n×n matrix P(t) has continuous entries on an open interval, then the homogeneous vector differential equation \eqref{EqVariable.3} has an n×n fundamental matrix X(t) on the same interval. Every solution x(t) of this system can be written as a linear combination of the column vectors of the fundamental matrix in a unique way:
\[
{\bf x}(t) = c_1 {\bf x}_1 (t) + c_2 {\bf x}_2 (t) + \cdots + c_n {\bf x}_n (t) = {\bf X}(t)\, {\bf c}
\]
for appropriate constants c_{1}, c_{2}, … , c_{n}.
The determinant \( W(t) = \det {\bf X}(t) \) of the square matrix X(t) = [x_{1}(t) x_{2}(t) ··· x_{n}(t)] formed from the set of n column-vector functions x_{1}(t), x_{2}(t), …, x_{n}(t) is called the Wronskian of these column vectors.
Apparently, Józef Maria Hoene-Wroński (1776--1853) did not develop the Wronskians himself. The term was introduced in 1882 by the Scottish mathematician Thomas Muir (1844--1934).
Theorem Liouville--Ostrogradski:
Let P(t) be an n×n matrix with entries p_{i,j}(t), i, j = 1, 2, …, n, that are continuous functions on some interval including point t_{0}. Let x_{1}(t), x_{2}(t), …, x_{n}(t) be n solutions to the homogeneous vector differential equation
\( \dot{\bf x} = {\bf P} (t)\, {\bf x} (t) . \) Then the Wronskian of the set of vector solutions is
\[
W(t) = W(t_0 )\, \exp \left\{ \int_{t_0}^t \mbox{tr}\, {\bf P}(s)\, {\text d}s \right\} .
\]
Here t_{0} is any point in an interval on which the trace trP(t) = p_{11}(t) + p_{22}(t) + ··· + p_{nn}(t) is continuous.
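The formula is easy to check numerically: integrate the matrix equation X' = P(t) X with X(0) = I for a variable-coefficient test matrix and compare det X(t) with the exponential of the integrated trace. A NumPy sketch (the matrix P(t) below is an illustrative choice, not from the text):

```python
import numpy as np

def P(t):
    # Upper-triangular test matrix with tr P(t) = t + 2t = 3t.
    return np.array([[t, 1.0], [0.0, 2.0 * t]])

# RK4 integration of X' = P(t) X, X(0) = I, over [0, 1].
n_steps = 4000
h = 1.0 / n_steps
t, X = 0.0, np.eye(2)
for _ in range(n_steps):
    k1 = P(t) @ X
    k2 = P(t + h / 2) @ (X + h / 2 * k1)
    k3 = P(t + h / 2) @ (X + h / 2 * k2)
    k4 = P(t + h) @ (X + h * k3)
    X = X + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += h

W_numeric = np.linalg.det(X)
# Liouville--Ostrogradski: W(1) = W(0) exp(integral of 3s on [0,1]) = exp(3/2).
W_formula = np.exp(1.5)
```

Here tr P(s) = 3s, so the predicted Wronskian at t = 1 is exp(3/2), matching the determinant of the numerically computed matrix.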
The theorem was proved in 1838 by the French mathematician Joseph Liouville (1809--1882) and the Russian mathematician Michail Ostrogradski (1801--1861) independently.
Corollary 1:
Let x_{1}(t), x_{2}(t), …, x_{n}(t) be column solutions of the homogeneous vector equation \( \dot{\bf x} = {\bf P} (t)\, {\bf x} (t) \) on some interval (𝑎, b), where the n×n matrix P(t) is continuous. Then the corresponding matrix X(t) = [x_{1}(t), x_{2}(t), …, x_{n}(t)] of these column vectors is either singular for all t ∈ (𝑎, b) or nonsingular for all t ∈ (𝑎, b). In other words, detX(t) is either identically zero or it never vanishes on the interval (𝑎, b).
Corollary 2:
Let P(t) be an n×n matrix function that is continuous on an interval (𝑎, b). If { x_{1}(t), x_{2}(t), …, x_{n}(t) } is a linearly independent set of solutions to the homogeneous differential equation \( \dot{\bf x} = {\bf P} (t)\, {\bf x} (t) , \) then the Wronskian \( W(t) = \det {\bf X}(t) \) never vanishes on (𝑎, b).
Theorem 7: [Superposition Principle for Homogeneous Equations]
Let x_{1}, x_{2}, …, x_{n} be a set of solution vectors of the homogeneous system of differential equations \( \dot{\bf x} = {\bf P}(t)\,{\bf x}(t) \) on an interval (𝑎, b). Then the linear combination \eqref{EqVariable.6} is a solution of the system \eqref{EqVariable.3} for any constants c_{1}, c_{2}, … , c_{n}.
When a fundamental matrix X(t) is known, the coefficient matrix P(t) can be recovered from it:
\[
{\bf P}(t) = \dot{\bf X}(t)\, {\bf X}^{-1} (t) .
\]
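The relation P(t) = Ẋ(t) X(t)⁻¹ can be verified with a known fundamental matrix. In the NumPy sketch below, the rotation matrix X(t), a fundamental matrix for the constant test system with P = [[0, −1], [1, 0]], is differentiated by hand and multiplied by its inverse:

```python
import numpy as np

def X(t):
    """Fundamental matrix (rotation through angle t) with X(0) = I."""
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

def Xdot(t):
    """Entrywise derivative of X(t)."""
    return np.array([[-np.sin(t), -np.cos(t)], [np.cos(t), -np.sin(t)]])

# Recover the coefficient matrix as P(t) = X'(t) X(t)^{-1}; for this X
# the result is the constant matrix [[0, -1], [1, 0]] at every t.
t = 0.7
P_recovered = Xdot(t) @ np.linalg.inv(X(t))
```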
Homogeneous differential equations of arbitrary order with constant coefficients can be solved in a straightforward manner by converting them into a system of first-order ODEs.
■
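The conversion just mentioned uses the standard substitution x₁ = y, x₂ = y′, …, x_n = y^{(n−1)}, which turns the scalar equation into a first-order system whose matrix is the companion matrix of the characteristic polynomial. A small NumPy sketch (the function name and the example equation are illustrative choices):

```python
import numpy as np

def companion(coeffs):
    """Companion matrix for y^(n) + a_{n-1} y^(n-1) + ... + a_0 y = 0,
    where coeffs = [a_0, a_1, ..., a_{n-1}].  With x_1 = y, x_2 = y', ...,
    x_n = y^(n-1), the scalar ODE becomes the first-order system x' = A x."""
    n = len(coeffs)
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)        # x_k' = x_{k+1} for k < n
    A[-1, :] = -np.asarray(coeffs)    # x_n' = -a_0 x_1 - ... - a_{n-1} x_n
    return A

# y'' + y = 0 becomes x' = [[0, 1], [-1, 0]] x.
A = companion([1.0, 0.0])
```

For y'' + y = 0 the companion matrix is [[0, 1], [−1, 0]], whose eigenvalues ±i are exactly the roots of the characteristic equation r² + 1 = 0.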
Theorem 8: [Superposition Principle for Inhomogeneous Equations]
Let P(t) be an n×n matrix function that is continuous on an interval (𝑎, b). Let x₁(t) and x₂(t) be two vector solutions of the inhomogeneous vector differential equation
\[
\dot{\bf x} = {\bf P}(t)\, {\bf x}(t) + {\bf f}(t) .
\]
Then their linear combination x(t) = c₁x₁(t) + c₂x₂(t) is a solution of the vector differential equation \( \dot{\bf x} = {\bf P} (t)\, {\bf x} (t) + \left( c_1 + c_2 \right) {\bf f} (t) . \)
Corollary 3:
The difference between any two solutions of the inhomogeneous equation \eqref{EqVariable.8} is a solution of the complementary homogeneous equation \( \dot{\bf x} = {\bf P} (t)\, {\bf x} (t) . \)
Theorem 9:
Let X(t) be a fundamental matrix for the homogeneous linear system \( \dot{\bf x} = {\bf P} (t)\, {\bf x} (t) , \) meaning that X(t) is a solution of the matrix differential equation
\( \dot{\bf X} = {\bf P} (t)\, {\bf X} (t) \) and det X(t) ≠ 0. Then the unique solution of the initial value problem \eqref{EqVariable.3}, \eqref{EqVariable.4} is given by
\[
{\bf x}(t) = {\bf X}(t)\, {\bf X}^{-1} (t_0 )\, {\bf x}_0 ,
\]
where x₀ is the column vector of prescribed initial values.
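The solution formula of Theorem 9, x(t) = X(t) X⁻¹(t₀) x₀, can be exercised with a known fundamental matrix for a constant test system; everything in the NumPy sketch below (the matrix and the initial data) is an illustrative choice:

```python
import numpy as np

def X(t):
    """Fundamental matrix (rotation through angle t) for the test system
    x' = [[0, -1], [1, 0]] x; it satisfies X(0) = I."""
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

def solve_ivp_via_fundamental(t, t0, x0):
    """x(t) = X(t) X(t0)^{-1} x0 solves x' = P(t) x with x(t0) = x0."""
    return X(t) @ np.linalg.solve(X(t0), np.asarray(x0, dtype=float))

x0 = np.array([1.0, 2.0])
t0 = 0.3
# The formula reproduces the initial data at t = t0 ...
x_at_t0 = solve_ivp_via_fundamental(t0, t0, x0)
# ... and for this constant system X(t) X(t0)^{-1} is the rotation
# through the angle t - t0.
x_at_t = solve_ivp_via_fundamental(1.0, t0, x0)
exact = X(1.0 - t0) @ x0
```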
Corollary 4:
For a fundamental matrix X(t) of the vector differential equation \eqref{EqVariable.3}, the propagator matrix function \eqref{EqVariable.9} is the unique solution of the initial value problem
\[
\frac{\text d}{{\text d}t}\, {\bf \Phi}(t, t_0 ) = {\bf P}(t)\, {\bf \Phi}(t, t_0 ) , \qquad {\bf \Phi}(t_0 , t_0 ) = {\bf I} ,
\]
where I is the identity matrix. Hence,
Φ(t, t_{0}) is a fundamental matrix of Eq.\eqref{EqVariable.3}.
Corollary 5:
Let X(t) and Y(t) be two fundamental matrices for the homogeneous linear system \( \dot{\bf x} = {\bf P} (t)\, {\bf x} (t) . \) Then there exists an invertible constant matrix C such that X(t) = Y(t)C, detC ≠ 0. In other words, any two fundamental matrices of the matrix differential equation \( \dot{\bf X} = {\bf P} (t)\, {\bf X} (t) \) differ only by a constant nonsingular factor.
Example 4:
Consider an inhomogeneous differential equation
Hallam, T.G., Classification of bounded solutions of a linear nonhomogeneous differential equation, Proceedings of the American Mathematical Society, 1973, Vol. 40, No. 2, pp. 507--512. doi: 10.2307/2039402