# Preface

The Adomian decomposition method (ADM for short) provides a practical technique for solving a large class of linear or nonlinear, ordinary or partial differential equations, and it has been applied in many fields of science and engineering. This section demonstrates an application of the Adomian decomposition method to a system of ordinary differential equations.


The Adomian decomposition method (ADM) gives a series solution, or a rapidly convergent sequence of analytic approximants, generated by recursion. The ADM is applicable to a wide range of problems whose mathematical models yield nonlinear equations or systems of equations involving algebraic, differential, integral, integro-differential, or differential-delay terms. In this section we describe the main algorithm of Adomian's decomposition method as it applies to an initial value problem for a general nonlinear system of the form

$\begin{cases} \dot{y}_1 &= f_1 \left( t, y_1 , y_2 , \ldots , y_n \right) , \\ \dot{y}_2 &= f_2 \left( t, y_1 , y_2 , \ldots , y_n \right) , \\ \vdots& \qquad \vdots \\ \dot{y}_n &= f_n \left( t, y_1 , y_2 , \ldots , y_n \right) , \end{cases}$
In each equation, the first derivative of one of the unknown functions is equated to a known function fi that depends on the independent variable t (which we associate with time) and the n unknown functions y1, … , yn. The equations are written in Newton's notation, $$\dot{y} ≝ {\text d}y/{\text d}t .$$ Of course, we can represent the above system through its i-th equation as
$L\,y_i = f_i \left( t, y_1 , y_2 , \ldots , y_n \right) ,$
where $$L = \texttt{D} ≝ {\text d}/{\text d}t$$ is the derivative operator, but we prefer a vector notation by introducing n column vectors
${\bf y} (t) = \begin{bmatrix} y_1 (t) \\ \vdots \\ y_n (t) \end{bmatrix} , \qquad {\bf f} = \begin{bmatrix} f_1 \left( t, y_1 , \ldots , y_n \right) \\ \vdots \\ f_n \left( t, y_1 , \ldots , y_n \right) \end{bmatrix} = {\bf g}(t) + {\bf N} \left( t, y_1 , \ldots , y_n \right) ,$
where we split the input functions f into a sum of two components: N is a vector-valued nonlinear algebraic operator (not containing derivatives), and g is a known driving function. This allows us to rewrite the given system of ordinary differential equations, subject to the initial conditions, in compact vector form:
$L\,{\bf y} = {\bf N}({\bf y}) + {\bf g}(t) , \qquad {\bf y}(0) = {\bf y}_0 .$
The initial condition is specified at the point t = 0 for simplicity, and y0 is a given vector in n-dimensional Euclidean space. When the derivative operator $$L \,y =\dot{y} ≝ {\text d}y/{\text d}t$$ is considered on the set of functions satisfying the condition y(0) = y0, its inverse is known to be

$L^{-1}\left[ f(t) \right] = y_0 + \int_0^t f(s)\,{\text d} s .$
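This inverse can be checked symbolically. The following sympy sketch (assuming sympy is available; the sample input f(t) = cos t and the value y0 = 1 are our illustrative choices) verifies that differentiating L⁻¹f recovers f and that the initial condition is met:

```python
import sympy as sp

t, s = sp.symbols('t s')
y0 = 1
f = sp.cos(s)                            # sample right-hand side f, here cos(t)

# L^{-1}[f](t) = y0 + integral_0^t f(s) ds
Linv = y0 + sp.integrate(f, (s, 0, t))   # gives 1 + sin(t)

assert sp.simplify(sp.diff(Linv, t) - sp.cos(t)) == 0   # L L^{-1} f = f
assert Linv.subs(t, 0) == y0                            # initial condition holds
print(Linv)
```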

It is assumed that the given initial value problem has a unique solution that can be represented by a convergent series

${\bf y} = {\bf u}_0 + \sum_{k\ge 1} {\bf u}_k (t) ,$
and the nonlinear operator N is decomposed as
${\bf N}({\bf y}) = \sum_{k\ge 0} {\bf A}_k \left( {\bf u}_0 , {\bf u}_1 , \ldots , {\bf u}_k \right) \qquad\mbox{or}\qquad {\bf N}({\bf y}) = \sum_{k\ge 0} {\bf J}_k \left( {\bf u}_0 , {\bf u}_1 , \ldots , {\bf u}_k \right) .$
Here Ak are Adomian's classical polynomials (first derived by G. Adomian and R. Rach in 1983)
${\bf A}_k = \frac{1}{k!}\,\frac{{\text d}^k}{{\text d}\lambda^k} \left[ {\bf N} \left( \sum_{i\ge 0} \lambda^i {\bf u}_i \right) \right]_{\lambda =0} , \qquad k=0,1,2,\ldots ;$
and Jk are the accelerated Adomian polynomials (first introduced by George Adomian in his 1989 book, and detailed and intensively used in Randolph Rach's 2008 article)
${\bf J}_k = {\bf N} \left( {\bf u}_0 + {\bf u}_1 + \cdots + {\bf u}_k \right) - {\bf N} \left( {\bf u}_0 + {\bf u}_1 + \cdots + {\bf u}_{k-1} \right) , \qquad k=1,2,\ldots ,$
with J0 = N(u0). Upon substituting the series for unknown solution y and the nonlinear operator N into the given initial value problem, we obtain
$L \left[ {\bf u}_0 + \sum_{k\ge 1} {\bf u}_k (t) \right] = {\bf g}(t) + \sum_{k\ge 0} {\bf A}_k \left( {\bf u}_0 , {\bf u}_1 , \ldots , {\bf u}_k \right) .$
A similar relation holds for the accelerated Adomian polynomials. Matching terms on both sides yields
\begin{align*} L \left[ {\bf u}_0 \right] &= {\bf g}(t) , \qquad {\bf u}_0 (0) = {\bf y}_0 ,\qquad \Longrightarrow \qquad {\bf u}_0 (t) = {\bf y}_0 + \int_0^t {\bf g} (s)\,{\text d}s , \\ L \left[ {\bf u}_1 \right] &= {\bf N}\left( {\bf u}_0 \right) = {\bf A}_0 = {\bf J}_0 , \qquad {\bf u}_1 (0) = {\bf 0} . \end{align*}
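This recursion is easy to run end to end on a scalar toy problem. The sympy sketch below (our illustrative example: ẏ = y², y(0) = 1, whose exact solution is 1/(1 − t), so N(y) = y² and g = 0) builds each Adomian polynomial from the λ-derivative formula and integrates term by term:

```python
import sympy as sp

t, s, lam = sp.symbols('t s lambda')
N = lambda y: y**2            # nonlinearity N(y) = y^2 in  y' = y^2, y(0) = 1

M = 5
u = [sp.Integer(1)]           # u0 = y0 + int_0^t g ds = 1  (here g = 0)
for m in range(M):
    # classical Adomian polynomial A_m from the lambda-derivative formula
    series = sum(u[i]*lam**i for i in range(len(u)))
    A_m = sp.diff(N(series), lam, m).subs(lam, 0)/sp.factorial(m)
    # u_{m+1} = L^{-1}[A_m] with zero initial condition
    u.append(sp.integrate(A_m.subs(t, s), (s, 0, t)))

partial = sp.expand(sum(u))
print(partial)   # the partial sum 1 + t + ... + t^5 of the geometric series 1/(1 - t)
```

Each component comes out as uₘ = tᵐ, so the partial sums converge to the exact solution 1/(1 − t) for |t| < 1.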
One can generate the multivariable Adomian polynomials using the following Mathematica script (Jun-Sheng Duan, Randolph Rach, Dumitru Băleanu, Abdul-Majid Wazwaz, A review of the Adomian decomposition method and its applications to fractional differential equations, Communications in Fractional Calculus, 2012, Vol. 3, No. 2, pp. 73--99.):
APmulti[f_, m_, M_] := Module[{i, j, k, r},
  Subscript[u, 0] = Table[Subscript[u, i, 0], {i, 1, m}];
  A[0] = f @@ Subscript[u, 0];
  For[r = 1, r <= M, r++,
   T[r, 1] = Table[Subscript[u, i, r]*D[f @@ Subscript[u, 0],
       Subscript[u, i, 0]], {i, 1, m}];
   For[k = 2, k <= r, k++,
    T[r, k] = Union[Flatten[Table[D[Map[#*Subscript[u, i, 1]/
          (Exponent[#, Subscript[u, i, 1]] + 1) &,
         T[r - 1, k - 1]], Subscript[u, i, 0]], {i, 1, m}]]]];
   For[k = 2, k <= Floor[r/2], k++,
    T[r, k] = T[r, k]~Union~(T[r - k, k] /.
       Flatten[Table[Subscript[u, i, j] -> Subscript[u, i, j + 1],
         {i, 1, m}, {j, 1, r - 2*k + 1}]])];
   A[r] = Sum[Total[T[r, k]], {k, 1, r}];
   If[EvenQ[r], Do[T[r/2, k] =., {k, 1, r/2}]]];
  Table[A[r], {r, 0, M}]]
Thus, one can recurrently determine every term of the series $${\bf y}(t) = \sum_{k\ge 0} {\bf u}_k (t) .$$ Consider the initial value problem for a system of ordinary differential equations, written in vector form:
$\dot{\bf y} \equiv \frac{{\text d} {\bf y}(t)}{{\text d}t} = {\bf f}(t, {\bf y}) , \qquad {\bf y}(0) = {\bf y}_0 .$
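For cross-checking, the classical multivariable Adomian polynomials can also be generated in Python directly from the definitional λ-derivative formula. This sympy sketch (the helper name adomian_polynomials is ours; the direct approach is slower than the recursive routine but easy to verify) prints A0 and A1 for f(y1, y2) = y1 y2²:

```python
import sympy as sp

def adomian_polynomials(f, m, M):
    """Classical multivariable Adomian polynomials A_0..A_M for f(y1,..,ym),
    computed directly from the lambda-derivative definition."""
    lam = sp.Symbol('lambda')
    # u[i][k] is the k-th decomposition component of the unknown y_{i+1}
    u = [[sp.Symbol(f'u{i}_{k}') for k in range(M + 1)] for i in range(1, m + 1)]
    args = [sum(u[i][k]*lam**k for k in range(M + 1)) for i in range(m)]
    F = f(*args)
    return [sp.expand(sp.diff(F, lam, k).subs(lam, 0)/sp.factorial(k))
            for k in range(M + 1)]

A = adomian_polynomials(lambda y1, y2: y1*y2**2, m=2, M=2)
print(A[0])   # equals u1_0*u2_0**2
print(A[1])   # equals u1_1*u2_0**2 + 2*u1_0*u2_0*u2_1
```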


Example: Consider the matrix

${\bf A} = \frac{1}{2} \begin{bmatrix} 227&-273&45 \\ 229 & -301& 71 \\ 303&-337&33 \end{bmatrix}$
that has one real eigenvalue λ = -½ and a pair of complex conjugate eigenvalues λ = -10 ± 20j.    ■

Example: Consider the following system of Duffing oscillators:

$\begin{cases} \ddot{x} + 2\,x - y &= \varepsilon \,x\,y^2 , \\ \ddot{y} + 3\,y -2\,x &= \varepsilon \,y\, x^2 , \end{cases}$
where 0 < ε ≪ 1 is a small parameter. Setting ẍ = ÿ = 0, the critical points satisfy 2x - y = εxy² and 3y - 2x = εyx². Besides the origin, this yields the symmetric pair
$\left( \frac{1}{\sqrt{\varepsilon}}, \frac{1}{\sqrt{\varepsilon}} \right) , \qquad \left( -\frac{1}{\sqrt{\varepsilon}}, -\frac{1}{\sqrt{\varepsilon}} \right) ,$
together with the asymmetric pair $$\left( \pm\sqrt{6/\varepsilon}, \ \mp 4/\sqrt{6\varepsilon} \right) ,$$ as can be checked with Mathematica:
Solve[{2*x - y == e*x*y^2 , 3*y - 2*x == e*y*x^2}, {x, y}]
We rewrite the given system of equations in vector form and attach the initial conditions:
$\frac{{\text d}^2}{{\text d} t^2} \left( {\bf z}\right) \overset{\mathrm{def}}{=} \ddot{\bf z} = {\bf A}\,{\bf z} + \varepsilon {\bf f}({\bf z}) , \qquad {\bf z}(0) = {\bf z}_0 , \quad \dot{\bf z}(0) = {\bf v} ,$
where dots denote derivatives with respect to the time variable t ($$\ddot{\bf z} ≝ {\text d}^2 {\bf z}/{\text d}t^2$$) and
${\bf z} (t) = \begin{bmatrix} x(t) \\ y(t) \end{bmatrix} , \quad {\bf A} = \begin{bmatrix} -2&1 \\ 2&-3 \end{bmatrix} , \quad {\bf f}({\bf z}) = \begin{bmatrix} x\,y^2 \\ y\,x^2 \end{bmatrix} , \quad {\bf z}_0 = \begin{bmatrix} x(0) = a \\ y(0) = b \end{bmatrix} , \quad {\bf v} = \begin{bmatrix} \dot{x}(0) = A \\ \dot{y}(0) = B\end{bmatrix} .$
Eigenvalues[{{-2, 1}, {2, -3}}]
{-4, -1}
Since matrix A has two negative eigenvalues, λ = -1 and λ = -4, the linearized system $$\ddot{\bf z} = {\bf A}\,{\bf z}$$ oscillates with frequencies 1 and 2, so we expect the origin to be a center.
s = DSolve[{x''[t] + 2*x[t] - y[t] == 0, y''[t] + 3*y[t] - 2*x[t] == 0, x[0] == a, x'[0] == A, y[0] == b, y'[0] == B}, {x, y}, t]
{{x -> Function[{t}, 1/6 (4 a Cos[t] + 2 b Cos[t] + 2 a Cos[2 t] - 2 b Cos[2 t] + 4 A Sin[t] + 2 B Sin[t] + A Sin[2 t] - B Sin[2 t])], y -> Function[{t}, 1/3 (2 a Cos[t] + b Cos[t] - 2 a Cos[2 t] + 2 b Cos[2 t] + 2 A Sin[t] + B Sin[t] - A Sin[2 t] + B Sin[2 t])]}}
Manipulate[ ParametricPlot[{1/ 6 (4 a Cos[t] + 2 b Cos[t] + 2 a Cos[2 t] - 2 b Cos[2 t] + 4 A Sin[t] + 2 B Sin[t] + A Sin[2 t] - B Sin[2 t]), 1/3 (2 a Cos[t] + b Cos[t] - 2 a Cos[2 t] + 2 b Cos[2 t] + 2 A Sin[t] + B Sin[t] - A Sin[2 t] + B Sin[2 t])}, {t, 0, 20}], {a, -2, 2}, {A, -2, 2}, {b, -2, 2}, {B, -2, 2}]
We denote by L the second-order derivative operator $$\texttt{D}^2 ≝ {\text d}^2/{\text d}t^2 ,$$ acting on the set of functions with prescribed initial conditions. Of course, we could incorporate the linear term generated by matrix A into L, but then its inverse would contain the matrix exponential $$e^{{\bf A}t} ,$$ which would complicate our job.

The inverse of L is known to be

$L^{-1} {\bf u} (t) = {\bf z}_0 + {\bf v}\,t + \int_0^t \left( t-s \right) {\bf u} (s)\, {\text d}s .$
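As in the first-order case, this formula can be verified symbolically. A sympy sketch with the sample scalar input u(t) = eᵗ and symbolic initial data (our illustrative choices):

```python
import sympy as sp

t, s, a, b = sp.symbols('t s a b')
u = sp.exp(s)                                    # a sample scalar input u(t)

# L^{-1}u = z0 + v t + int_0^t (t - s) u(s) ds, here with z0 = a, v = b
Linv = a + b*t + sp.integrate((t - s)*u, (s, 0, t))

assert sp.simplify(sp.diff(Linv, t, 2) - sp.exp(t)) == 0   # D^2 L^{-1}u = u
assert Linv.subs(t, 0) == a                                # z(0)  = z0
assert sp.diff(Linv, t).subs(t, 0) == b                    # z'(0) = v
```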
This allows us to rewrite the given initial value problem in equivalent integral form:
${\bf z} = {\bf z}_0 + {\bf v}\,t + \int_0^t \left( t-s \right) \left[ {\bf A}\,{\bf z}(s) + \varepsilon\, {\bf f}\left( {\bf z}(s) \right) \right] {\text d}s ,$
which resembles a fixed point problem.

According to the Adomian decomposition method, we assume that the solution is represented by an infinite series

${\bf z} = {\bf u}_0 + \sum_{n\ge 1} {\bf u}_n ,$
and the nonlinear term is also expressed as a series over the so-called Adomian polynomials:
${\bf f} ({\bf z}) = \sum_{k\ge 0} A_k \left( {\bf u}_0 , {\bf u}_1 , \ldots , {\bf u}_k \right) \qquad\mbox{or} \qquad {\bf f} ({\bf z}) = \sum_{k\ge 0} J_k \left( {\bf u}_0 , {\bf u}_1 , \ldots , {\bf u}_k \right) .$
The initial term incorporates all nonhomogeneous inputs, so it is the solution of the following initial value problem:
$\texttt{D}^2 {\bf u}_0 (t) =0, \qquad {\bf u}_0 (0) = {\bf z}_0 , \quad \dot{\bf u}_0 (0) = {\bf v} .$
So we have
${\bf u}_0 (t) = {\bf z}_0 + {\bf v}\,t .$
All other terms for the required solution are obtained recursively by solving the initial value problems:
$\texttt{D}^2 {\bf u}_{m+1} (t) = {\bf A} \,{\bf u}_{m} (t) + \varepsilon\,{\bf A}_m , \qquad {\bf u}_{m+1} (0) = \dot{\bf u}_{m+1} (0) = {\bf 0} , \qquad m=0,1,2,\ldots .$
Since we know the inverse operator L-1, we determine um+1(t) explicitly:
${\bf u}_{m+1} (t) = {\bf A} \int_0^t \left( t-s \right) {\bf u}_{m} (s) \,{\text d}s + \varepsilon \int_0^t \left( t-s \right) {\bf A}_{m} (s) \,{\text d}s , \qquad m=0,1,2,\ldots .$
The first Adomian polynomial is always
$A_0 \left( {\bf u}_0 \right) = {\bf f}\left( {\bf u}_0 \right) = \begin{bmatrix} \left( x(0) + \dot{x}(0)\,t \right) \left( y(0) + \dot{y}(0)\,t \right)^2 \\ \left( y(0) + \dot{y}(0)\,t \right) \left( x(0) + \dot{x}(0)\,t \right)^2 \end{bmatrix} = \begin{bmatrix} a b^2 + A b^2 t + 2 a b B t + 2 A b B t^2 + a B^2 t^2 + A B^2 t^3 \\ a^2 b + 2 a A b t + a^2 B t + A^2 b t^2 + 2 a A B t^2 + A^2 B t^3 \end{bmatrix} .$
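The polynomial expansion displayed above can be reproduced with sympy (assuming it is available):

```python
import sympy as sp

t, a, b, A, B = sp.symbols('t a b A B')
x0, y0 = a + A*t, b + B*t          # components of u0(t) = z0 + v t

# f(u0) componentwise: (x y^2, y x^2) evaluated on the zeroth approximation
f1 = sp.expand(x0*y0**2)
f2 = sp.expand(y0*x0**2)

print(f1)   # equals a*b^2 + A*b^2 t + 2ab B t + 2AbB t^2 + a B^2 t^2 + A B^2 t^3
print(f2)
```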
The classical Adomian polynomials are defined through the formula
$A_m \left({\bf u}_0 , {\bf u}_1 , \ldots , {\bf u}_m \right) = \frac{1}{m!}\,\frac{{\text d}^m}{{\text d} \lambda^m} \left[ {\bf f} \left( \sum_{k\ge 0} {\bf u}_k \lambda^k \right) \right]_{\lambda = 0} , \qquad m=0,1,2,\ldots .$
Instead of the above polynomials, one may want to use the accelerated Adomian polynomials:
$J_m \left({\bf u}_0 , {\bf u}_1 , \ldots , {\bf u}_m \right) = {\bf f} \left( {\bf u}_0 + {\bf u}_1 + \cdots + {\bf u}_m \right) - {\bf f} \left( {\bf u}_0 + {\bf u}_1 + \cdots + {\bf u}_{m-1} \right) , \qquad m=1,2,\ldots , \quad J_0 = {\bf f}\left( {\bf u}_0 \right) .$
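A quick symbolic check (sympy assumed; the nonlinearity f = sin is our arbitrary stand-in) confirms the defining telescoping property: the partial sums of the Jm collapse back to f evaluated at the partial solution sum:

```python
import sympy as sp

u = sp.symbols('u0 u1 u2 u3')
f = lambda y: sp.sin(y)            # any smooth nonlinearity serves for the check

# accelerated polynomials: J_0 = f(u0), J_m = f(u0+..+um) - f(u0+..+u_{m-1})
J = [f(u[0])] + [f(sum(u[:m + 1])) - f(sum(u[:m])) for m in range(1, 4)]

# their sum telescopes to f evaluated at u0 + u1 + u2 + u3
assert sp.simplify(sum(J) - f(sum(u))) == 0
```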
■

Example: In 1966, Robertson investigated a chemical reaction system containing fast and slow motions at the same time. Modelling this system, he obtained the following mathematical model:

$\begin{cases} \dot{y}_1 &= -0.04\, y_1 + 10^4 y_2 y_3 , \\ \dot{y}_2 &= 0.04\,y_1 -10^4 y_2 y_3 -3\cdot 10^7 y_2^2 , \\ \dot{y}_3 &= 3\cdot 10^7 y_2^2 , \end{cases} \qquad \begin{bmatrix} y_1 (0) \\ y_2 (0) \\ y_3 (0) \end{bmatrix} = \begin{bmatrix}1 \\ 0 \\ 0 \end{bmatrix} .$
The system has been investigated in several papers and is often taken as a benchmark for numerical methods. A detailed analysis can be found in the textbook by E. Hairer and G. Wanner, Solving Ordinary Differential Equations II: Stiff and Differential-Algebraic Problems, Springer-Verlag, 1991. It was shown that explicit methods give oscillating solutions, whereas stability is guaranteed with implicit methods. Qualitative analysis of Robertson's problem indicates that the component y2 rapidly reaches its maximal value, at which its derivative is zero.    ■
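The good behavior of implicit methods on this system is easy to observe numerically. The following pure-Python sketch (not part of the ADM; backward Euler with a Newton solve, and the step size h = 10⁻³ is an illustrative choice) integrates Robertson's system to t = 0.4 and checks the conservation law y1 + y2 + y3 = 1:

```python
def robertson(y):
    # right-hand side of Robertson's kinetics; note the components sum to zero
    y1, y2, y3 = y
    return [-0.04*y1 + 1e4*y2*y3,
            0.04*y1 - 1e4*y2*y3 - 3e7*y2**2,
            3e7*y2**2]

def jac(y):
    # analytic Jacobian of the right-hand side
    y1, y2, y3 = y
    return [[-0.04,            1e4*y3,  1e4*y2],
            [ 0.04, -1e4*y3 - 6e7*y2, -1e4*y2],
            [  0.0,            6e7*y2,     0.0]]

def solve3(M, r):
    # Gaussian elimination with partial pivoting for a 3x3 system M x = r
    M = [row[:] for row in M]; r = r[:]
    for c in range(3):
        p = max(range(c, 3), key=lambda i: abs(M[i][c]))
        M[c], M[p] = M[p], M[c]; r[c], r[p] = r[p], r[c]
        for i in range(c + 1, 3):
            f = M[i][c]/M[c][c]
            for j in range(c, 3):
                M[i][j] -= f*M[c][j]
            r[i] -= f*r[c]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        x[i] = (r[i] - sum(M[i][j]*x[j] for j in range(i + 1, 3)))/M[i][i]
    return x

def implicit_euler(y, h, steps):
    for _ in range(steps):
        z = y[:]                       # Newton iteration for z = y + h f(z)
        for _ in range(8):
            G = [z[i] - y[i] - h*robertson(z)[i] for i in range(3)]
            J = jac(z)
            M = [[(1.0 if i == j else 0.0) - h*J[i][j] for j in range(3)]
                 for i in range(3)]
            d = solve3(M, G)
            z = [z[i] - d[i] for i in range(3)]
        y = z
    return y

y = implicit_euler([1.0, 0.0, 0.0], h=1e-3, steps=400)   # integrate to t = 0.4
print(y, sum(y))
```

Despite the extreme spread of rate constants, the implicit stepper stays stable and preserves the total mass to machine accuracy.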

Example: Consider the initial value problem for the nonlinear system of equations

$\begin{cases} \dot{x} &= 2\,y^2 , \\ \dot{y} &= e^{-t}\,x(t) , \\ \dot{z} &= y(t) + z(t) , \end{cases}\qquad \begin{bmatrix} x(0) \\ y(0) \\ z(0) \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} .$
The true solution is
\begin{align*} x(t) &= e^{2t} , \\ y(t) &= e^t , \\ z(t) &= t\,e^t , \end{align*}
which is verified by direct substitution.
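The claim is quickly confirmed with sympy (assuming it is available):

```python
import sympy as sp

t = sp.Symbol('t')
x, y, z = sp.exp(2*t), sp.exp(t), t*sp.exp(t)   # the proposed true solution

assert sp.simplify(sp.diff(x, t) - 2*y**2) == 0          # x' = 2 y^2
assert sp.simplify(sp.diff(y, t) - sp.exp(-t)*x) == 0    # y' = e^{-t} x
assert sp.simplify(sp.diff(z, t) - (y + z)) == 0         # z' = y + z
assert (x.subs(t, 0), y.subs(t, 0), z.subs(t, 0)) == (1, 1, 0)
```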
■

Example: Consider the initial value problem for the stiff linear system
$\dot{x} = 520\,x - 2605\,y , \qquad \dot{y} = 304\,x - 1521\,y , \qquad x(0) = 1 , \quad y(0) = -1 ,$
which Mathematica solves in closed form:
DSolve[{x'[t] == 520*x[t] - 2605*y[t], y'[t] == 304*x[t] - 1521*y[t], x[0] == 1, y[0] == -1}, {x, y}, t]
{{x -> Function[{t}, 1/333 E^(-1000 t) (-1042 + 1375 E^(999 t))], y -> Function[{t}, 1/333 E^(-1000 t) (-608 + 275 E^(999 t))]}}
The above system of differential equations can be reduced to a single second-order equation:
$x'' (t) + 1001\, x' (t) + 1000\,x(t) = 0, \qquad x(0) =1, \quad x' (0) = 3125.$
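One can verify with sympy that Mathematica's solution component x(t) satisfies this reduced equation together with both initial conditions:

```python
import sympy as sp

t = sp.Symbol('t')
# x(t) as returned by DSolve above
x = sp.exp(-1000*t)*(-1042 + 1375*sp.exp(999*t))/333

assert sp.simplify(sp.diff(x, t, 2) + 1001*sp.diff(x, t) + 1000*x) == 0
assert x.subs(t, 0) == 1
assert sp.diff(x, t).subs(t, 0) == 3125
```

The two modes e^{-1000t} and e^{-t} correspond to the characteristic roots -1000 and -1 of r² + 1001r + 1000 = 0, which is exactly what makes the system stiff.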
■

Example: Consider the initial value problem for singular differential equation of third order

$y''' = \frac{1}{x}\,y + y'' , \qquad y(1) = e, \quad y' (1) = 2\,e, \quad y'' (1) = 3\,e .$
Its true solution is y(x) = xex. Considering three functions
$y_1 (x) = y(x), \quad y_2 (x) = y' (x) , \quad y_3 (x) = y''(x) ,$
we convert the given differential equation into the following linear system of three first-order equations:
\begin{align*} y'_1 &= y_2 (x) , \\ y'_2 &= y_3 (x) , \\ y'_3 &= y_3 (x) + \frac{1}{x}\, y_1 (x) . \end{align*}
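Since the exact solution is known, the reduction is easy to sanity-check numerically. A pure-Python classical Runge–Kutta (RK4) sketch (the step count is an illustrative choice) integrates the first-order system from x = 1 to x = 2 and compares with y(2) = 2e²:

```python
import math

def f(x, y):
    # first-order system equivalent to y''' = y/x + y''
    y1, y2, y3 = y
    return [y2, y3, y3 + y1/x]

def rk4(f, x, y, x_end, n):
    # classical fourth-order Runge-Kutta with n equal steps
    h = (x_end - x)/n
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h/2, [y[i] + h/2*k1[i] for i in range(3)])
        k3 = f(x + h/2, [y[i] + h/2*k2[i] for i in range(3)])
        k4 = f(x + h,   [y[i] + h*k3[i]   for i in range(3)])
        y = [y[i] + h/6*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(3)]
        x += h
    return y

e = math.e
y = rk4(f, 1.0, [e, 2*e, 3*e], 2.0, 1000)   # start from the data at x = 1
print(y[0], 2*math.exp(2))                  # numerical vs exact y(2) = 2 e^2
```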
■


1. Batiha, K. and Batiha, B., A new algorithm for solving linear ordinary differential equations, World Applied Sciences Journal, 2011, 15(12), pp. 1777--1779.
2. Rehman, M.S., Yaseen, M., and Kamran, T., New iterative method for solution of system of linear differential equations, International Journal of Science and Research, 2016, Vol. 5, Issue 2, pp. 1287--1288.
3. Robertson, H.H., The solution of a set of reaction rate equations, in Walsh, J.E. (ed.), Numerical Analysis: An Introduction (based on a symposium organized by the Institute of Mathematics and Its Applications, Birmingham, England, 1965), Thompson Book Co., Washington, 1967, pp. 178--182.