Preface


This section presents basic information about general linear systems of differential equations with variable coefficients.


Variable Coefficient Linear Systems of Differential Equations


We consider a system of n linear differential equations in normal form (when the derivatives are isolated) with respect to n unknown functions:

\begin{equation} \label{EqVariable.1} \begin{cases} \dot{x}_1 &= p_{11} (t)\, x_1 + p_{12} (t)\, x_2 + \cdots + p_{1n} (t)\, x_n + f_1 (t), \\ \dot{x}_2 &= p_{21} (t)\, x_1 + p_{22} (t)\, x_2 + \cdots + p_{2n} (t)\, x_n + f_2 (t), \\ \vdots & \qquad \vdots \\ \dot{x}_n &= p_{n1} (t)\, x_1 + p_{n2} (t)\, x_2 + \cdots + p_{nn} (t)\, x_n + f_n (t). \end{cases} \end{equation}
In this system of differential equations, the \( n^2 \) coefficients \( p_{11}(t), p_{12}(t), \ldots , p_{nn}(t) \) and the n functions \( f_1 (t) , f_2 (t) , \ldots , f_n (t) \) are assumed to be known and continuous on some open interval (𝑎, b). Instead of \( {\text d}x /{\text d}t \) we will use either of the shorter notations \( x' \) (Lagrange) or the more customary \( \dot{x} \) (Newton) to denote the derivative of \( x(t) \) with respect to the variable t, associated with time. If the coefficients \( p_{ij} \) are constants, we have a constant coefficient system of equations; otherwise, we have a linear system of differential equations with variable coefficients. The system is said to be homogeneous or undriven if \( f_1 (t) \equiv f_2 (t) \equiv \cdots \equiv f_n (t) \equiv 0. \)

The linear system of differential equations can be written in compact vector form:

\begin{equation} \label{EqVariable.2} \dot{\bf x}(t) = {\bf P} (t)\, {\bf x} + {\bf f}(t) , \end{equation}
where \( {\bf P} (t) \) denotes the following square matrix:
\[ {\bf P} (t) = \begin{bmatrix} p_{11} (t) & p_{12} (t) & \cdots & p_{1n} (t) \\ p_{21} (t) & p_{22} (t) & \cdots & p_{2n} (t) \\ \vdots & \vdots & \ddots & \vdots \\ p_{n1} (t) & p_{n2} (t) & \cdots & p_{nn} (t) \end{bmatrix} \qquad\mbox{and} \qquad {\bf x}(t) = \begin{bmatrix} x_1 (t) \\ x_2 (t) \\ \vdots \\ x_n (t) \end{bmatrix} , \quad {\bf f}(t) = \begin{bmatrix} f_1 (t) \\ f_2 (t) \\ \vdots \\ f_n (t) \end{bmatrix} . \]
Here \( {\bf x} (t) \) and \( {\bf f} (t) \) are n-dimensional vector-functions that are assumed to be columns. Again, the matrix P(t) and the column vector f(t) are assumed to be given, but the vector x(t) is unknown and has to be determined. A system of linear differential equations in normal form \eqref{EqVariable.2} is called a vector differential equation. Its complementary equation

\begin{equation} \label{EqVariable.3} \frac{{\text d}{\bf x}}{{\text d}t} = {\bf P} (t)\, {\bf x} \qquad \mbox{or} \qquad \dot{\bf x}(t) = {\bf P} (t)\, {\bf x} \end{equation}
is a homogeneous equation whose general solution is referred to as the complementary function, containing n arbitrary constants. The homogeneous equation \eqref{EqVariable.3} obviously has the zero solution x(t) ≡ 0, which is called the trivial solution. A differential equation is usually subject to the initial conditions:
\begin{equation} \label{EqVariable.4} x_1 (t_0 ) = x_{10} , \quad x_2 (t_0 ) = x_{20} , \quad \ldots , \quad x_n (t_0 ) = x_{n0} \qquad\mbox{or} \qquad {\bf x}(t_0 ) = {\bf x}_0 , \end{equation}

where \( t_0 \) is a specified value of t∈(𝑎, b) and \( x_{10} , x_{20} , \ldots , x_{n0} \) are prescribed constants. In what follows, we usually assume that t0 = 0. The problem of finding a solution to a system of differential equations that satisfies the given initial conditions is called an initial value problem.
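For instance, a lower-triangular system with variable coefficients (our own illustrative choice, not taken from the text) is solved together with its initial conditions by a single DSolve call:

DSolve[{x1'[t] == t*x1[t], x2'[t] == x1[t] + t*x2[t], x1[0] == 1, x2[0] == 0}, {x1[t], x2[t]}, t]
(* x1 = E^(t^2/2), x2 = t E^(t^2/2) *)

Since the coefficients are continuous on the whole real line, this is the unique solution on (−∞, ∞), in accordance with the following theorem.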

Theorem 1: Suppose that the coefficient matrix P(t) and the forcing function f(t) are continuous on an interval (𝑎, b). Let the initial point t0 ∈ (𝑎, b) and let k be an arbitrary constant column vector. Then the initial value problem
\[ \dot{\bf x}(t) = {\bf P}(t)\,{\bf x} (t) + {\bf f}(t), \qquad {\bf x}(t_0 ) = {\bf k} \]
has a unique solution on (𝑎, b).
A set of n vector functions x₁(t), x₂(t), …, xn(t) is said to be linearly dependent on an interval |𝑎, b| if there exists a set of numbers c₁, c₂, …, cn with at least one nonzero, such that
\[ c_1 {\bf x}_1 (t) + c_2 {\bf x}_2 (t) + \cdots + c_n {\bf x}_n (t) \equiv 0 \qquad\mbox{for all } \ t\in |a,b|. \]
Otherwise, these vector functions are called linearly independent.
Example 1: Two vectors are linearly dependent if and only if one of them is a constant multiple of the other.

The vector functions

\[ {\bf x}(t) = \begin{bmatrix} e^t \\ t\, e^t \end{bmatrix} = e^t \begin{bmatrix} 1 \\ t \end{bmatrix} \qquad \mbox{and} \qquad {\bf y}(t) = \begin{bmatrix} 1 \\ t \end{bmatrix} \]
are linearly independent on (−∞, ∞): although x(t) = \( e^t \) y(t), the factor \( e^t \) is not constant, so there is no constant C such that x(t) = Cy(t).    ■
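Note, however, that at every fixed t the two vectors are proportional, so the determinant of the matrix [x(t), y(t)] vanishes identically even though x and y are linearly independent as vector functions. A quick Mathematica check (our own illustration) confirms this; by Theorem 2 below, such a pair can therefore never consist of two solutions of one homogeneous system \eqref{EqVariable.3}.

Simplify[Det[{{Exp[t], 1}, {t*Exp[t], t}}]]
0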
Theorem 2: Let x₁(t), … , xm(t) be solutions of the homogeneous vector differential equation \eqref{EqVariable.3} on an interval |𝑎, b|, and let t0 be any point in this interval. Then the set of vector functions x₁(t), … , xm(t) is linearly dependent if and only if the set of vectors { x₁(t0), … , xm(t0) } is linearly dependent.
Theorem 3: Let xk(t), k = 1, 2, … , n, be solutions of the initial value problems
\[ \dot{\bf x}_k (t) = {\bf P}(t)\,{\bf x}_k (t), \qquad {\bf x}_k (t_0 ) = {\bf e}_k , \]
where P(t) is an n×n matrix function and
\[ {\bf e}_1 = \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix} , \quad {\bf e}_2 = \begin{pmatrix} 0 \\ 1 \\ \vdots \\ 0 \end{pmatrix} , \quad \ldots , \ {\bf e}_n = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{pmatrix} . \]
Then xk(t), k = 1, 2, … , n, are linearly independent solutions of the system \eqref{EqVariable.3}.
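As a hedged illustration of Theorem 3 (the system below is our own choice), take the system obtained from the Euler equation \( t^2 y'' - 2t\,y' + 2y = 0 \) upon setting x1 = y and x2 = y′, so that p11 = 0, p12 = 1, p21 = −2/t², p22 = 2/t on t > 0, and choose t0 = 1:

sol[e_] := DSolveValue[{x1'[t] == x2[t], x2'[t] == -2*x1[t]/t^2 + 2*x2[t]/t, x1[1] == e[[1]], x2[1] == e[[2]]}, {x1[t], x2[t]}, t]
X1 = sol[{1, 0}]   (* solution with x(1) = e1; comes out as {2 t - t^2, 2 - 2 t} *)
X2 = sol[{0, 1}]   (* solution with x(1) = e2; comes out as {t^2 - t, 2 t - 1} *)
Simplify[Det[Transpose[{X1, X2}]]]   (* t^2, nonzero for t > 0 *)

The nonvanishing determinant confirms that the two solutions are linearly independent, as Theorem 3 asserts.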
Theorem 4: The dimension of the solution space of the n×n system of differential equations \eqref{EqVariable.3} is n.
Upon choosing n linearly independent constant vectors ki, i = 1, 2, … , n, as initial values, we can build, based on Theorem 1, n linearly independent solutions to the homogeneous vector equation \eqref{EqVariable.3}:
\[ {\bf x}_1 (t) = \begin{pmatrix} x_{11} (t) \\ x_{12} (t) \\ \vdots \\ x_{1n} (t) \end{pmatrix} , \quad {\bf x}_2 (t) = \begin{pmatrix} x_{21} (t) \\ x_{22} (t) \\ \vdots \\ x_{2n} (t) \end{pmatrix} , \quad \ldots , \quad {\bf x}_n (t) = \begin{pmatrix} x_{n1} (t) \\ x_{n2} (t) \\ \vdots \\ x_{nn} (t) \end{pmatrix} . \]
This set of linearly independent solutions is referred to as a fundamental set of solutions to the homogeneous vector equation \eqref{EqVariable.3}. It is convenient to place these vector solutions side by side to form the matrix
\[ \left[ {\bf x}_1 (t) , \ {\bf x}_2 (t), \ldots , {\bf x}_n (t) \right] = \left[ \begin{pmatrix} x_{11} (t) \\ x_{12} (t) \\ \vdots \\ x_{1n} (t) \end{pmatrix} , \ \begin{pmatrix} x_{21} (t) \\ x_{22} (t) \\ \vdots \\ x_{2n} (t) \end{pmatrix} , \ \ldots , \ \begin{pmatrix} x_{n1} (t) \\ x_{n2} (t) \\ \vdots \\ x_{nn} (t) \end{pmatrix} \right] = \begin{bmatrix} x_{11}(t) & x_{21} (t) & \cdots & x_{n1} (t) \\ x_{12}(t) & x_{22} (t) & \cdots & x_{n2} (t) \\ \vdots & \vdots & \ddots & \vdots \\ x_{1n}(t) & x_{2n}(t) & \cdots & x_{nn}(t) \end{bmatrix} . \]
In this n×n matrix, every column is a solution of the homogeneous vector equation \eqref{EqVariable.3}. This matrix deserves a special name given in the following definition.
A square \( n \times n \) non-singular matrix X(t) that satisfies the matrix differential equation (that contains n×n equations)
\begin{equation} \label{EqVariable.5} \dot{\bf X} (t) = {\bf P}(t)\,{\bf X} \qquad \mbox{or} \qquad \frac{{\text d}\,{\bf X} (t)}{{\text d}\,t} = {\bf P}(t)\,{\bf X} \end{equation}
is called a fundamental matrix for the vector equation \( \dot{\bf x} (t) = {\bf P}(t)\,{\bf x} . \) The column vectors of a fundamental matrix are said to form a fundamental set of solutions for the homogeneous vector equation \eqref{EqVariable.3}.
A fundamental matrix X(t) satisfies the matrix differential equation \eqref{EqVariable.5}, which comprises n² differential equations, one for each entry of the n×n matrix X(t) = [ xi,j(t) ]. For example, consider the two-dimensional case:
\[ {\bf X}(t) = \begin{bmatrix} x_{11} (t) & x_{12} (t) \\ x_{21}(t) & x_{22} (t) \end{bmatrix} \qquad\mbox{and} \qquad {\bf P}(t) = \begin{bmatrix} p_{11} (t) & p_{12} (t) \\ p_{21}(t) & p_{22} (t) \end{bmatrix} . \]
Then the matrix differential equation \eqref{EqVariable.5} can be written as
\begin{align*} \begin{bmatrix} \dot{x}_{11} (t) & \dot{x}_{12} (t) \\ \dot{x}_{21}(t) & \dot{x}_{22} (t) \end{bmatrix} &= \begin{bmatrix} p_{11} (t) & p_{12} (t) \\ p_{21}(t) & p_{22} (t) \end{bmatrix} \, \begin{bmatrix} x_{11} (t) & x_{12} (t) \\ x_{21}(t) & x_{22} (t) \end{bmatrix} = \begin{bmatrix} p_{11} x_{11} (t) + p_{12} x_{21} (t) & p_{11} x_{12} (t) + p_{12} x_{22} (t) \\ p_{21} x_{11} (t) + p_{22} x_{21}(t) & p_{21} x_{12} (t) + p_{22} x_{22} (t) \end{bmatrix} . \end{align*}
This matrix differential equation is equivalent to two separate vector equations, one for each column:
\[ \begin{cases} \dot{x}_{11} &= p_{11} x_{11}(t) + p_{12} x_{21} (t) , \\ \dot{x}_{21} &= p_{21} x_{11}(t) + p_{22} x_{21} (t) , \end{cases} \qquad\mbox{and} \qquad \begin{cases} \dot{x}_{12} &= p_{11} x_{12}(t) + p_{12} x_{22} (t) , \\ \dot{x}_{22} &= p_{21} x_{12}(t) + p_{22} x_{22} (t) . \end{cases} \]
The established relation between the matrix differential equation \eqref{EqVariable.5} and the vector equation \eqref{EqVariable.3} leads to the following sequence of statements.
Theorem 5: If X(t) is a solution of the n×n matrix differential equation \eqref{EqVariable.5}, then for any constant column vector c = [ c1, c2, … , cn ]T, the n-vector u(t) = X(t) c is a solution of the vector equation \eqref{EqVariable.3}.
Theorem 6: If an n×n matrix P(t) has continuous entries on an open interval, then the homogeneous vector differential equation \eqref{EqVariable.3} has an n×n fundamental matrix X(t) on the same interval. Every solution x(t) of this system can be written as a linear combination of the column vectors of the fundamental matrix in a unique way:
\begin{equation} \label{EqVariable.6} {\bf x}(t) = c_1 {\bf x}_1 (t) + c_2 {\bf x}_2 (t) + \cdots + c_n {\bf x}_n (t) \end{equation}
for appropriate constants c1, c2, … , cn.

The determinant W(t) = detX(t) of the square matrix X(t) = [x1(t)  x2(t)  ···  xn(t)] formed from a set of n column-vector functions x1(t), x2(t), …, xn(t) is called the Wronskian of these column vectors.
Apparently, Józef Maria Hoene-Wroński (1776--1853) did not develop Wronskians himself. The term was introduced in 1882 by the Scottish mathematician Thomas Muir (1844--1934).

Theorem (Liouville--Ostrogradski): Let P(t) be an n×n matrix with entries pi,j(t), i, j = 1, 2, …, n, that are continuous functions on some interval containing the point t0. Let x1(t), x2(t), …, xn(t) be n solutions of the homogeneous vector differential equation \( \dot{\bf x} = {\bf P} (t)\, {\bf x} (t) . \) Then the Wronskian of this set of vector solutions is
\begin{equation} \label{EqVariable.7} W(t) = W( t_0 ) \,\exp \left\{ \int_{t_0}^t \mbox{tr}\,{\bf P}(s)\,{\text d}s \right\} = \mbox{constant}\,\exp \left\{ \int \mbox{tr}\,{\bf P}(t)\,{\text d}t \right\} , \end{equation}
where ∫ denotes any antiderivative (primitive). Here t0 is any point within an interval where the trace trP(t) = p11(t) + p22(t) + ··· + pnn(t) is continuous.

The theorem was proved in 1838 by the French mathematician Joseph Liouville (1809--1882) and, independently, by the Russian mathematician Mikhail Ostrogradsky (1801--1861).

Corollary 1: Let x1(t), x2(t), …, xn(t) be column solutions of the homogeneous vector equation \( \dot{\bf x} = {\bf P} (t)\, {\bf x} (t) \) on some interval |𝑎, b|, where the n×n matrix P(t) is continuous. Then the corresponding matrix X(t) = [x1(t), x2(t), …, xn(t)] of these column vectors is either singular for all t ∈ |𝑎, b| or nonsingular for all t ∈ |𝑎, b|. In other words, detX(t) either vanishes identically or never vanishes on the interval |𝑎, b|.
Corollary 2: Let P(t) be an n×n matrix function that is continuous on an interval |𝑎, b|. If { x1(t), x2(t), …, xn(t) } is a linearly independent set of solutions to the homogeneous differential equation \( \dot{\bf x} = {\bf P} (t)\, {\bf x} (t) , \) then the Wronskian
\[ W(t) = \det \left[ {\bf x}_1 (t), {\bf x}_2 (t), \ldots , {\bf x}_n (t) \right] \]
is nonzero at every point t in |𝑎, b|.

Example 2: Consider the homogeneous linear equation

\[ \dot{\bf x} = {\bf P}(t) {\bf x} , \qquad {\bf P}(t) = \begin{bmatrix} \coth (t) & 1 - \coth^2 (t) \\ 0 & \coth (t) \end{bmatrix} . \tag{2.1} \]
This system has a fundamental matrix
\[ {\bf X}(t) = \begin{bmatrix} \cosh (t) & \sinh (t) \\ \sinh (t) & 0 \end{bmatrix} \qquad \Longrightarrow \qquad W(t) = \det {\bf X}(t) = - \sinh^2 t . \tag{2.2} \]
Since the trace of the matrix P(t) is 2 coth(t), integration yields
\[ \int \mbox{tr}\,{\bf P}(t)\,{\text d}t = \int 2\,\coth (t)\,{\text d}t = 2\,\ln\sinh t . \]
Integrate[2*Coth[t], t]
2 Log[Sinh[t]]
Using the Liouville--Ostrogradski theorem, we find the Wronskian
\[ W(t) = c\times \exp \left\{ \int \mbox{tr}\,{\bf P}(t)\,{\text d}t \right\} = c\times e^{2\,\ln\sinh t} = c\times e^{\ln\sinh^2 t} = c\times \sinh^2 t . \]
Comparing with Eq.(2.2), we identify the constant as c = −1.
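A short Mathematica verification (a sketch reusing the matrices from Eqs.(2.1) and (2.2)) confirms both the fundamental-matrix property and the value of the Wronskian:

P[t_] = {{Coth[t], 1 - Coth[t]^2}, {0, Coth[t]}};
X[t_] = {{Cosh[t], Sinh[t]}, {Sinh[t], 0}};
Simplify[D[X[t], t] - P[t].X[t]]   (* {{0, 0}, {0, 0}} *)
Det[X[t]]   (* -Sinh[t]^2 *)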
    ■
Theorem 7: [Superposition Principle for Homogeneous Equations] Let x1, x2, …, xn be a set of solution vectors of the homogeneous system of differential equations \( \dot{\bf x} = {\bf P}(t)\,{\bf x}(t) \) on an interval |𝑎, b|. Then the linear combination \eqref{EqVariable.6} is a solution of the system \eqref{EqVariable.3} for any constants c1, c2, … , cn.

When a fundamental matrix X(t) is known, the coefficient matrix P(t) is obtained from it:

\[ {\bf P}(t) = \dot{\bf X} (t)\,{\bf X}^{-1} (t) . \]

Example 3: Consider the two-dimensional system of equations

\[ \dot{\bf x}(t) = {\bf P}(t)\,{\bf x} (t) , \qquad \mbox{with} \quad {\bf P}(t) = \begin{bmatrix} 3 + \frac{2}{t-2} + \frac{1}{t-1} & \frac{4-3t}{t^2 -3t+2} \\ 3 + \frac{1}{t-2} & \frac{1}{2-t} \end{bmatrix} . \tag{3.1} \]
A fundamental matrix is
\[ {\bf X}(t) = \begin{bmatrix} t-1 & e^{3t} \\ (t-1)^2 & e^{3t} \end{bmatrix} . \]
Since the fundamental matrix X(t) satisfies the matrix differential equation \eqref{EqVariable.5} with this coefficient matrix, we recover P(t) from the relation
\[ \dot{\bf X} (t)\,{\bf X}^{-1} (t) = {\bf P} (t) . \]
X[t_] = {{t - 1, Exp[3*t]}, {(t - 1)^2, Exp[3*t]}}
Out[1]= {{-1 + t, E^(3 t)}, {(-1 + t)^2, E^(3 t)}}
P[t_] = FullSimplify[D[X[t], t].Inverse[X[t]]]
Out[2]= {{3 + 2/(-2 + t) + 1/(-1 + t), (4 - 3 t)/( 2 - 3 t + t^2)}, {3 + 1/(-2 + t), 1/(2 - t)}}
To check the answer, we type:
FullSimplify[D[X[t], t] - P[t].X[t]]
Out[3]= {{0, 0}, {0, 0}}

Homogeneous differential equations of arbitrary order with constant coefficients can be solved in a straightforward manner by converting them into a system of first-order ODEs; a short sketch follows this example.

    ■
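As promised above, here is a minimal sketch of such a conversion (the equation \( y'' - 3y' + 2y = 0 \) is our own illustrative choice). Setting x = (y, y′) turns the scalar equation into ẋ = A x with the companion matrix A; for a constant matrix, MatrixExp[A*t] supplies a fundamental matrix with X(0) = I:

A = {{0, 1}, {-2, 3}};   (* companion matrix of y'' - 3 y' + 2 y = 0 *)
X[t_] = MatrixExp[A*t];   (* fundamental matrix normalized by X(0) = I *)
Simplify[D[X[t], t] - A.X[t]]   (* {{0, 0}, {0, 0}} *)

The first component of X(t).{c1, c2} is then a combination of \( e^t \) and \( e^{2t} \), the general solution of the scalar equation.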
Theorem 8: [Superposition Principle for Inhomogeneous Equations] Let P(t) be an n×n matrix function that is continuous on an interval |𝑎, b|. Let x₁(t) and x₂(t) be two vector solutions of the inhomogeneous differential vector equation
\begin{equation} \label{EqVariable.8} \dot{\bf x}(t) = {\bf P} (t)\, {\bf x} (t) + {\bf f} (t) \qquad\mbox{or} \qquad \frac{{\text d}{\bf x}}{{\text d}t} = {\bf P} (t)\, {\bf x} (t) + {\bf f} (t) . \end{equation}
Then their linear combination x(t) = c₁x₁(t) + c₂x₂(t) is a solution of the vector differential equation \( \dot{\bf x} = {\bf P} (t)\, {\bf x} (t) + \left( c_1 + c_2 \right) {\bf f} (t) . \)
Corollary 3: The difference between any two solutions of the inhomogeneous equation \eqref{EqVariable.8} is a solution of the complementary homogeneous equation \( \dot{\bf x} = {\bf P} (t)\, {\bf x} (t) . \)
Theorem 9: Let X(t) be a fundamental matrix for the homogeneous linear system \( \dot{\bf x} = {\bf P} (t)\, {\bf x} (t) , \) meaning that X(t) is a solution of the matrix differential equation \( \dot{\bf X} = {\bf P} (t)\, {\bf X} (t) \) and detX(t) ≠ 0. Then the unique solution of the initial value problem \eqref{EqVariable.3}, \eqref{EqVariable.4} is given by
\[ {\bf x} (t) = {\bf X}(t)\,{\bf X}^{-1}(t_0 ) {\bf x}_0 . \]
Let X(t) be a fundamental matrix for the homogeneous linear system \( \dot{\bf x} = {\bf P} (t)\, {\bf x} (t) . \) The matrix-function
\begin{equation} \label{EqVariable.9} {\bf \Phi}\left( t, t_0 \right) = {\bf X} (t) {\bf X}^{-1} (t_0 ) \end{equation}
is referred to as a propagator matrix.

Corollary 4: For a fundamental matrix X(t) of the vector differential equation \eqref{EqVariable.3}, the propagator matrix-function \eqref{EqVariable.9} is the unique solution of the initial value problem
\[ \frac{{\text d} {\bf \Phi}}{{\text d}t} = {\bf P}(t) {\bf \Phi} \left( t, t_0 \right) , \qquad {\bf \Phi} \left( t_0 , t_0 \right) = {\bf I}, \]
where I is the identity matrix. Hence, Φ(t, t0) is a fundamental matrix of Eq.\eqref{EqVariable.3}.
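To make the propagator concrete, here is a hedged Mathematica sketch that reuses the fundamental matrix and coefficient matrix from Example 2:

X[t_] = {{Cosh[t], Sinh[t]}, {Sinh[t], 0}};
P[t_] = {{Coth[t], 1 - Coth[t]^2}, {0, Coth[t]}};
Phi[t_, s_] = Simplify[X[t].Inverse[X[s]]];
Simplify[Phi[s, s]]   (* {{1, 0}, {0, 1}} *)
Simplify[D[Phi[t, s], t] - P[t].Phi[t, s]]   (* {{0, 0}, {0, 0}} *)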

Corollary 5: Let X(t) and Y(t) be two fundamental matrices for the homogeneous linear system \( \dot{\bf x} = {\bf P} (t)\, {\bf x} (t) . \) Then there exists an invertible constant matrix C such that X(t) = Y(t)C, detC ≠ 0. In other words, a fundamental matrix is unique up to multiplication on the right by a nonsingular constant matrix.
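A quick check of Corollary 5 with the same X(t) and P(t) as in the previous snippet (the constant matrix C0 below is an arbitrary choice of ours):

C0 = {{1, 2}, {0, 1}};   (* any constant matrix with nonzero determinant *)
Y[t_] = X[t].C0;   (* another fundamental matrix for the same system *)
Simplify[D[Y[t], t] - P[t].Y[t]]   (* {{0, 0}, {0, 0}} *)
Simplify[Inverse[Y[t]].X[t]]   (* a constant matrix, namely Inverse[C0] *)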

Example 4: Consider an inhomogeneous differential equation

\[ \frac{{\text d} {\bf x}}{{\text d}t} = {\bf P} (t)\,{\bf x} + {\bf f}(t) , \tag{4.1} \]
given that
\[ {\bf X}(t) = \begin{bmatrix} e^{4t} & -1 \\ e^{6t} & e^{2t} \end{bmatrix} \]
is a fundamental matrix for the complementary system \eqref{EqVariable.3} with
\[ {\bf P} (t) = \begin{bmatrix} 2 & 2\,e^{-2t} \\ 2\, e^{2t} & 4 \end{bmatrix} \qquad \mbox{and} \qquad {\bf f} (t) = \begin{bmatrix} 2\,e^t \\ 2\,e^{-t} \end{bmatrix} . \]
We seek a solution of Eq.(4.1) in the form
\[ {\bf x} (t) = {\bf X}(t)\,{\bf u}(t) \tag{4.2} \]
for some u(t) to be determined. Substituting (4.2) into the differential equation (4.1), we get
\[ \dot{\bf X} (t)\,{\bf u}(t) + {\bf X}(t)\,\dot{\bf u}(t) = {\bf P}(t)\,{\bf X}(t)\,{\bf u}(t) + {\bf f}(t) . \]
Since
\[ \dot{\bf X} (t) = {\bf P}(t)\,{\bf X}(t) , \]
we have
\[ {\bf X}(t)\,\dot{\bf u}(t) = {\bf f}(t) \qquad \Longrightarrow \qquad \dot{\bf u}(t) = {\bf X}^{-1}(t)\,{\bf f}(t) . \]
Integration yields
\[ {\bf u}(t) = \int {\bf X}^{-1}(t)\,{\bf f}(t) \,{\text d}t + {\bf c} , \]
where c is an arbitrary constant column vector. Substituting this integral into Eq.(4.2) gives the general solution
\[ {\bf x} (t) = {\bf X}(t) \int {\bf X}^{-1}(t)\,{\bf f}(t) \,{\text d}t + {\bf X}(t) \,{\bf c} . \]
To determine explicit expressions, we ask Mathematica for help:
X[t_] = {{Exp[4*t], -1}, {Exp[6*t], Exp[2*t]}}
P[t_] = {{2, 2/Exp[2*t]}, {2*Exp[2*t], 4}}
f[t_] = 2*{{Exp[t]}, {Exp[-t]}}
u[t_] = Integrate[Inverse[X[t]].f[t], t]
{{-(1/7) E^(-7 t) - E^(-3 t)/3}, {-(1/3) E^(-3 t) - E^t}}
Finally, we find a particular solution
Simplify[X[t].u[t]]
{{(4 E^(-3 t))/21 + (2 E^t)/3}, {-(2/21) E^-t (5 + 14 E^(4 t))}}
\[ {\bf x} (t) = \frac{1}{21} \begin{bmatrix} 4\, e^{-3t} + 14\,e^t \\ -28\,e^{3t} - 10\, e^{-t} \end{bmatrix} . \]
    ■

 

