# Preface

Generally speaking, second-order differential equations with variable coefficients cannot be solved in terms of known functions. However, there is a fairly large class of differential equations whose solutions can be expressed either in terms of power series or as simple combinations of power series and elementary functions. It is this class of differential equations that we study in this chapter. For this, it is convenient to denote the independent variable by x. This section provides a stream of examples demonstrating applications of the power series method to initial value problems for second-order differential equations.

# Series Solutions for Second-Order Equations

This section is devoted to series solutions of second-order differential equations. We start with linear differential equations, but first we recall an important definition.

Definition: A complex, single-valued function f of a complex variable z is said to be holomorphic at a point 𝑎 if it is differentiable at every point within some open disk centered at 𝑎; equivalently, f can be expanded as a convergent power series with a positive radius of convergence:
$f(z) = \sum_{n\ge 0} c_n \left( z - a \right)^n .$
Holomorphic functions are also sometimes referred to as regular functions.

A complex-valued function is said to be an analytic function if it is obtained from some holomorphic function by analytic continuation. An analytic function has derivatives of all orders at every point of its domain (so it is holomorphic in some disk), but it can be multi-valued, consisting of at most countably many branches (each a holomorphic function).

The word "holomorphic" was introduced by two of Cauchy's students, Briot (1817–1882) and Bouquet (1819–1895). Its name is derived from the Greek ὅλος (holos) meaning "entire", and μορφή (morphē) meaning "form" or "appearance".

You studied some holomorphic functions in calculus---functions that are represented by convergent power series. For example, the familiar cosine function has the Maclaurin series representation

$\cos (x) = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots = \sum_{n\ge 0} (-1)^n \frac{x^{2n}}{(2n)!} .$
Mathematica has a dedicated command to determine a series approximation to solutions of linear equations. Since the cosine function is a solution to the initial value problem (IVP for short)
$y'' + y =0 , \qquad y(0) =1, \quad y'(0) =0,$
we apply the AsymptoticDSolveValue command:
sol1 = AsymptoticDSolveValue[{y''[x] + y[x] == 0, y[0] == 1, y'[0] == 0}, y[x], {x, 0, 8}]
1 - x^2/2 + x^4/24 - x^6/720 + x^8/40320
In order to visualize how polynomial approximations fit the cosine function, we ask Mathematica for help.
sol[n_] := AsymptoticDSolveValue[{y''[x] + y[x] == 0, y[0] == 1, y'[0] == 0}, y[x], {x, 0, n}];
Plot[{sol[4], sol[7], sol[12], sol[18], sol[20], Cos[x]} // Evaluate, {x, 0, 3*Pi + 1}, PlotRange -> {-4.5, 4.5}, PlotStyle -> Thick]

We repeat the same plot using a different Mathematica approach:

AsymptoticDSolveValue[{y''[x] + y[x] == 0, y[0] == 1, y'[0] == 0}, y[x], {x, 0, 8}]
Plot[Evaluate[Accumulate[List @@ %]], {x, 0, 10}, PlotTheme -> "Web", PlotLegends -> "Expressions"]

## Linear Equations

There is an elegant theorem, due to the German mathematician Lazarus Fuchs (1833--1902), that guarantees the existence of power series solutions to a second-order linear differential equation; the radius of convergence of the series solution is at least as large as the minimum of the radii of convergence of the series for its coefficients.

Fuchs's Theorem: Consider the initial value problem for a linear differential equation of second order written in a normalized form
$$y'' + p(x)\, y' + q(x) \,y =0, \qquad y\left( x_0 \right) = y_0 , \quad y' \left( x_0 \right) = v_0 , \label{EqSer2.1}$$
where the prime stands for the derivative with respect to the independent variable x: $$y' = {\text d}y/{\text d}x .$$ Let r > 0. If both p(x) and q(x) have Taylor series that converge on the interval $$\left\vert x - x_0 \right\vert < r,$$ then the differential equation has a unique power series solution y(x) that also converges on the same interval.
In particular, if both p(x) and q(x) are polynomials, then y(x) solves the differential equation for all x ∈ ℝ.
Definition: If in the linear differential equation $$y'' + p(x)\, y' + q(x) \,y =0$$ both coefficients p(x) and q(x) are holomorphic at a point x = x0, then x0 is called an ordinary point of the differential equation.
Suppose that the coefficients p(x) and q(x) can be developed into power series
$$p(x) = \sum_{n\ge 0} p_n \left( x - x_0 \right)^n , \qquad q(x) = \sum_{k\ge 0} q_k \left( x - x_0 \right)^k . \label{EqSer2.2}$$
Now, if there is a solution that is analytic in some neighborhood of x = x0, we can write
$$y(x) = \sum_{n\ge 0} c_n \left( x - x_0 \right)^n , \qquad y'(x) = \sum_{n\ge 1} n\,c_n \left( x - x_0 \right)^{n-1} , \qquad y'' (x) = \sum_{n\ge 2} n\left( n-1 \right) c_n \left( x - x_0 \right)^{n-2} . \label{EqSer2.3}$$
Substituting \eqref{EqSer2.2} and \eqref{EqSer2.3} into the differential equation \eqref{EqSer2.1}, we get
\begin{align*} L\left[ x, \texttt{D} \right] y &= \left( \texttt{D}^2 + p(x)\,\texttt{D} + q(x) \right) y = y'' + p(x)\, y' + q(x)\, y \\ &= \sum_{n\ge 2} n\left( n-1 \right) c_n \left( x - x_0 \right)^{n-2} + \left[ \sum_{n\ge 0} p_n \left( x - x_0 \right)^n \right] \sum_{n\ge 1} n\,c_n \left( x - x_0 \right)^{n-1} + \left[ \sum_{k\ge 0} q_k \left( x - x_0 \right)^k \right] \sum_{n\ge 0} c_n \left( x - x_0 \right)^n = 0. \end{align*}
Shifting indices, we obtain
$y'(x) = \sum_{k\ge 0} (k+1)\,c_{k+1} \left( x - x_0 \right)^{k} , \qquad y'' (x) = \sum_{k\ge 0} \left( k+2 \right) \left( k+1 \right) c_{k+2} \left( x - x_0 \right)^{k} .$
This yields
$L\left[ x, \texttt{D} \right] y = \sum_{k\ge 0} \left( k+2 \right) \left( k+1 \right) c_{k+2} \left( x - x_0 \right)^{k} + \left[ \sum_{k\ge 0} p_k \left( x - x_0 \right)^k \right] \sum_{k\ge 0} (k+1)\,c_{k+1} \left( x - x_0 \right)^{k} + \left[ \sum_{k\ge 0} q_k \left( x - x_0 \right)^k \right] \sum_{n\ge 0} c_n \left( x - x_0 \right)^n = 0 .$
Using the convolution rule for multiplication of series, we get
$\sum_{k\ge 0} \left\{ \left( k+2 \right) \left( k+1 \right) c_{k+2} + \sum_{n=0}^k (n+1)\,c_{n+1}\,p_{k-n} + \sum_{n=0}^k c_n q_{k-n} \right\} \left( x - x_0 \right)^{k} = 0 .$
Equating coefficients, we have
$c_{n+2} \left( n+2 \right) \left( n+1 \right) + \sum_{k=0}^n \left( k+1 \right) c_{k+1} p_{n-k} + \sum_{k=0}^n c_k q_{n-k} = 0 .$
Then we can express cn+2 in terms of all preceding coefficients to obtain the full-history recurrence
$$c_{n+2} \left( n+2 \right) \left( n+1 \right) = - \sum_{k=0}^n \left[ c_k q_{n-k} + \left( k+1 \right) c_{k+1} p_{n-k} \right] , \qquad n=0,1,2,\ldots . \label{EqSer2.4}$$
Note that if p(x) and q(x) are polynomials of degree at most m, then the recurrence relation for the coefficients involves only finitely many preceding terms and becomes a difference equation of finite order.
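The full-history recurrence \eqref{EqSer2.4} is easy to implement directly. The following Python sketch (the function name and interface are our own) computes the Taylor coefficients of the solution from the coefficient lists of p(x) and q(x) about x0, using exact rational arithmetic; applied to y'' + y = 0 with y(0) = 1, y'(0) = 0, it reproduces the Maclaurin coefficients of cos x.

```python
from fractions import Fraction

def series_solution(p, q, c0, c1, N):
    """Taylor coefficients c_0..c_N of y'' + p(x) y' + q(x) y = 0 about x0,
    given the coefficient lists p, q (shorter lists are zero-padded) and
    the initial data c0 = y(x0), c1 = y'(x0), via the recurrence
    (n+2)(n+1) c_{n+2} = -sum_{k=0}^{n} [ c_k q_{n-k} + (k+1) c_{k+1} p_{n-k} ]."""
    p = p + [Fraction(0)] * (N + 1 - len(p))
    q = q + [Fraction(0)] * (N + 1 - len(q))
    c = [Fraction(c0), Fraction(c1)]
    for n in range(N - 1):
        s = sum(c[k] * q[n - k] + (k + 1) * c[k + 1] * p[n - k]
                for k in range(n + 1))
        c.append(-s / ((n + 2) * (n + 1)))
    return c

# y'' + y = 0, y(0) = 1, y'(0) = 0  ->  Maclaurin coefficients of cos x
coeffs = series_solution([Fraction(0)], [Fraction(1)], 1, 0, 8)
print(*coeffs)  # 1 0 -1/2 0 1/24 0 -1/720 0 1/40320
```

The truncated polynomial built from these coefficients is exactly the output of AsymptoticDSolveValue shown above.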

Example 1: Let us consider an initial value problem (IVP) for a homogeneous linear differential equation:

$y'' - x\,y' + \cos (x)\,y =0, \qquad y(0) =0, \quad y' (0) = 1.$
First, we create a differential operator corresponding to the linear differential equation and set up the initial conditions:
ode = y''[x] - x*y'[x] + Cos[x]*y[x] == 0; initconds = {y[0] == 0, y'[0] == 1}; odeOperator = D[#, {x, 2}] - x*D[#, x] + Cos[x]*# &;
We set up our Taylor series as a symbolic expansion using derivatives of y evaluated at the origin. We use order 15 here, but that is something one would probably make an argument to a function when automating all of this.
zz = Series[y[x], {x, 0, 15}];
Next, we apply the differential operator and add the initial conditions, then find a solution that makes all powers of x vanish.
soln = SolveAlways[Join[{odeOperator[zz] == 0}, initconds], x];
Let's look at the Taylor polynomial.
truncatedSol = Normal[zz /. soln[[1]]]
x + x^5/120 + x^7/1260 + (29 x^9)/362880 + ( 13 x^11)/1995840 + (2861 x^13)/6227020800 + (4649 x^15)/163459296000
To assess how accurate it might be, we compare with NDSolve.
approxSol = NDSolve[Join[{ode}, initconds], y[x], {x, 0, 4}]; Plot[{truncatedSol, y[x] /. approxSol[[1]]}, {x, 0, 4}]
This is a good start. Power series solutions, though, are frequently used to obtain recursion equations for the coefficients (of any solution that might be analytic within a neighborhood of the point of expansion). It would be nice, then, to have a function that outputs these equations (given a differential operator as input), rather than just obtaining an approximate solution with a limited radius of accuracy. In order to analyze singular points, it would also be useful to consider slightly more general series of the Frobenius form
$x^{\alpha} \left[ a_0 + a_1 x + a_2 x^2 + \cdots \right]$
for non-integer α.

Mathematica has a special command that can provide a series solution even for non-linear differential equations.

AsymptoticDSolveValue[{y'[x] == 1 + y[x]^2, y[0] == 0}, y[x], {x, 0, 10}]
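For this nonlinear IVP the exact solution is y = tan x, and the same Cauchy-product bookkeeping can be carried out by hand: substituting y = Σ cₙxⁿ into y' = 1 + y² gives (n+1)c_{n+1} = [1 + y²]ₙ. As a quick cross-check (a Python sketch, separate from the Mathematica session):

```python
from fractions import Fraction

# Maclaurin coefficients of the solution of y' = 1 + y^2, y(0) = 0
# (which is tan x): (n+1) c_{n+1} = [1 + y^2]_n, where the Cauchy
# product gives [y^2]_n = sum_j c_j c_{n-j}.
N = 10
c = [Fraction(0)]
for n in range(N):
    rhs = (1 if n == 0 else 0) + sum(c[j] * c[n - j] for j in range(n + 1))
    c.append(rhs / (n + 1))
print(c[1], c[3], c[5], c[7])  # 1 1/3 2/15 17/315
```

These are exactly the Maclaurin coefficients of tan x, matching the polynomial returned by AsymptoticDSolveValue.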
■

Example 2: Consider the differential equation that we met in a previous section:

$y'' (t) - 2t\,y' (t) + 3\,y(t) = \sin t , \qquad y(0) =0, \quad y' (0) = 1.$
Using the ansatz for the solution
$y (t) = t + \sum_{n\ge 2} c_n t^n ,$
we substitute this series into the given differential equation. This yields
$\sum_{n\ge 0} \left( n+2 \right) \left( n+1 \right) c_{n+2} t^n - 2\sum_{n\ge 1} n\,c_n t^n + 3\sum_{n\ge 0} c_n t^n = \sum_{k\ge 0} \frac{(-1)^k}{(2k+1)!}\, t^{2k+1}$
We check our answer with Mathematica:
AsymptoticDSolveValue[{y'' [t] - 2*t*y'[t] + 3*y[t] == Sin[t], y[0] == 0, y'[0] == 1}, y[t], {t, 0, 15}]
t - t^5/120 - t^7/840 - (67 t^9)/362880 - (251 t^11)/9979200 - ( 6359 t^13)/2075673600 - (43877 t^15)/130767436800
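Equating coefficients of tⁿ above gives the recurrence $(n+2)(n+1)\,c_{n+2} = (2n-3)\,c_n + s_n$, where sₙ is the Maclaurin coefficient of sin t. Iterating it with exact rationals (a Python sketch) reproduces the Mathematica output:

```python
from fractions import Fraction
from math import factorial

# Series coefficients for y'' - 2t y' + 3y = sin t, y(0)=0, y'(0)=1,
# from (n+2)(n+1) c_{n+2} = (2n - 3) c_n + s_n, where s_n is the
# coefficient of t^n in sin t = sum_k (-1)^k t^(2k+1)/(2k+1)!.
N = 15
c = [Fraction(0), Fraction(1)]
for n in range(N - 1):
    s_n = Fraction((-1) ** ((n - 1) // 2), factorial(n)) if n % 2 == 1 else Fraction(0)
    c.append(((2 * n - 3) * c[n] + s_n) / ((n + 2) * (n + 1)))
print(c[5], c[7], c[15])  # -1/120 -1/840 -43877/130767436800
```

The coefficients agree term by term with the AsymptoticDSolveValue result above.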
■

Example 3: Consider the differential equation

$\left( 4 - x^2 \right) y'' - x\, y' + 3\,y = 0.$
The corresponding coefficients
$p(x) = - \frac{x}{4-x^2} \qquad \mbox{and} \qquad q(x) = \frac{3}{4-x^2}$
are holomorphic functions in the disk |x| < 2. Therefore, we expect that our differential equation has a solution that can be represented by a series convergent in the same disk:
$y(x) = \sum_{n\ge 0} c_n x^n , \qquad \left\vert x \right\vert < 2.$
Substituting this series into the given differential equation, we obtain
$\left( 4 - x^2 \right) \sum_{n\ge 0} \left( n+2 \right) \left( n+1 \right) c_{n+2} x^n - \sum_{n\ge 1} n\, c_{n} x^n + 3\,\sum_{n\ge 0} c_{n} x^n = 0 .$
Equating coefficients of like powers, we get
$4 \left( n+2 \right) \left( n+1 \right) c_{n+2} - n \left( n-1 \right) c_n - n\, c_{n} + 3\, c_n = 0 , \qquad n=0,1,2, \ldots .$
For n = 0 and n = 1, this gives
$3\,c_0 + 8\,c_2 = 0 , \qquad 2\,c_1 + 24\,c_3 = 0 .$
So the first two computed values are $$c_2 = -3\,c_0 /8 , \quad c_3 = -c_1 /12 .$$ For the remaining coefficients, we get
$c_{n+2} = \frac{n^2 -3}{4\,(n+1)(n+2)}\, c_n , \qquad n=0,1,2,\ldots .$
We check our answer with Mathematica:
AsymptoticDSolveValue[{(4 - x^2)*y''[x] - x*y'[x] + 3*y[x] == 0, y[0] == 1, y'[0] == 0}, y[x], {x, 0, 8}]
1 - (3 x^2)/8 - x^4/128 - (13 x^6)/15360 - (143 x^8)/1146880
AsymptoticDSolveValue[{(4 - x^2)*y''[x] - x*y'[x] + 3*y[x] == 0, y[0] == 0, y'[0] == 1}, y[x], {x, 0, 8}]
x - x^3/12 - x^5/160 - (11 x^7)/13440
Thus, the general solution becomes
$y(x) = c_0 \left[ 1 - \frac{3}{8}\, x^2 - \frac{x^4}{128} - \frac{13}{15360}\, x^6 - \frac{143}{1146880}\, x^8 - \cdots \right] + c_1 \left[ x - \frac{x^3}{12} - \frac{x^5}{160} - \frac{11}{13440}\, x^7 - \frac{253}{1935360}\, x^9 - \cdots \right] , \qquad |x| < 2.$
This general solution is composed of two linearly independent solutions, where c0 and c1 are arbitrary constants.
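Both series can be generated in a few lines (a Python verification sketch using exact rationals and the recurrence $c_{n+2} = (n^2 - 3)\,c_n / [4(n+1)(n+2)]$):

```python
from fractions import Fraction

# Coefficients for (4 - x^2) y'' - x y' + 3 y = 0 from the recurrence
# c_{n+2} = (n^2 - 3) c_n / (4 (n+1)(n+2)).
def coeffs(c0, c1, N):
    c = [Fraction(c0), Fraction(c1)]
    for n in range(N - 1):
        c.append(Fraction(n * n - 3, 4 * (n + 1) * (n + 2)) * c[n])
    return c

even = coeffs(1, 0, 8)   # solution with y(0) = 1, y'(0) = 0
odd = coeffs(0, 1, 8)    # solution with y(0) = 0, y'(0) = 1
print(even[2], even[4], even[8])  # -3/8 -1/128 -143/1146880
print(odd[3], odd[5], odd[7])     # -1/12 -1/160 -11/13440
```

The two coefficient lists match the two AsymptoticDSolveValue outputs above term by term.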

Knowing the nearest singular point is of some worth, since having a lower bound is better than knowing nothing about the radius of convergence. However, for particular types of differential equations, there is a method that gives the exact value.    ■

Example 4: Let us consider a second order differential equation

$y'' + x^2 y' - x^3 y = 0. \tag{4.1}$
We are going to determine a lower bound on the radius of convergence for the series solution about x0 = 0 and x0 = 3.

The point x0 = 0 is an ordinary point since the coefficients of the given differential equation $$\displaystyle y'' + p(x)\, y' + q(x)\, y = 0$$ are holomorphic functions:

$p(x) = x^2 \qquad \mbox{and} \qquad q(x) = -x^3 .$
The radius of convergence of the series solution will be at least as large as the minimum of the radii of convergence of the series for p(x) = x² and q(x) = −x³ about x0 = 0.

Since p(x) and q(x) are polynomials, their power series terminate, and the radius of convergence of each is ρ = ∞. Therefore, the series solution about x0 = 0 has a radius of convergence at least as large as ρ = ∞, which of course means it must be ρ = ∞.

A similar argument holds for x0 = 3.    ■

Example 5: Let us consider a second order differential equation

$\left( x^2 + 2x -3 \right) y'' + 3x^2 y' - x\, y =0 .$
We are going to determine a lower bound on the radius of convergence for the series solution about x0 = 0 and x0 = 5.

The point x0 = 0 is an ordinary point because the coefficients

$p(x) = \frac{3 x^2}{x^2 + 2x -3} = \frac{3 x^2}{\left( x-1 \right) \left( x+3 \right)} \qquad\mbox{and} \qquad q(x) = \frac{- x}{x^2 + 2x -3} = \frac{- x}{\left( x-1 \right) \left( x+3 \right)}$
are both analytic about x0 = 0.

Let’s determine the radius of convergence of the series for p and q without working out their Taylor expansions. The complex poles of p and q occur where x² + 2x − 3 = 0, that is, at x = 1 and x = −3. These poles are at distances 1 and 3 from the origin. Since the nearest pole is at distance 1, the radius of convergence of the series for p(x) and q(x) is ρ = 1. Therefore, the minimum radius of convergence for the series solution about x0 = 0 to the differential equation is ρ = 1.

The distance from x0 = 5 to the nearest complex pole (x = 1) is ρ = 4. The minimum radius of convergence for the series solution about x0 = 5 to the differential equation is ρ = 4.    ■
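The distance computation is elementary; a small Python sketch makes the bound explicit:

```python
# Lower bound on the radius of convergence: distance from the expansion
# point x0 to the nearest singularity of the coefficients, i.e. to the
# nearest root of x^2 + 2x - 3 = (x - 1)(x + 3).
poles = [1, -3]

def radius_bound(x0):
    return min(abs(x0 - s) for s in poles)

print(radius_bound(0), radius_bound(5))  # 1 4
```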

## Nonlinear Equations

Example 6: The conservative form of Burgers' equation can be reduced to the following form

$v\,v' = \nu\,v'' ,$
where ν is a positive constant. Substituting the proposed series solution
$v(x) = \sum_{n\ge 0} a_n x^n$
into Burgers' equation, we obtain
$\sum_{n\ge 0} a_n x^n \,\sum_{k\ge 1}k\,a_k x^{k-1} = \nu\sum_{n\ge 2} n\left( n-1 \right) a_n x^{n-2} .$
Matching coefficients of equal exponents, we derive a sequence of recursive relations:
1. n = 0:      $$a_0 a_1 = 2\nu\,a_2$$
2. n = 1:      $$a_1^2 + 2\,a_0 a_2 = 6\nu\,a_3$$
3. n = 2:      $$a_1 a_2 + a_0 a_3 = 4\nu\,a_4$$
4. n = 3:      $$2\,a_0 a_4 + 2\,a_1 a_3 + a_2^2 = 10\nu\,a_5$$
Solving these algebraic equations recursively for the unknowns a2, a3, ..., a9 as functions of the two known initial values a0 = v(0) and a1 = v'(0) and the physical parameter ν, we express the coefficients explicitly:
\begin{align*} a_2 &= \frac{1}{2\nu}\, a_0 a_1 , \\ a_3 &= \frac{1}{6\nu^2} \left( a_0^2 a_1 + \nu\,a_1^2 \right) , \\ a_4 &= \frac{1}{4\nu} \left( a_0 a_3 + a_1 a_2 \right) = \frac{1}{24\nu^3} \left( a_0^3 a_1 + 4\nu\,a_0 a_1^2 \right) . \end{align*}
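These closed-form expressions can be checked numerically against the recurrence $\nu\,(n+2)(n+1)\,a_{n+2} = \sum_{j=0}^{n} a_j (n-j+1)\,a_{n-j+1}$ obtained from the Cauchy product of v and v'. The sample values of a0, a1, ν in the Python sketch below are arbitrary choices for the check:

```python
from fractions import Fraction

# Check the Burgers recurrence nu (n+2)(n+1) a_{n+2}
#   = sum_{j=0}^{n} a_j (n-j+1) a_{n-j+1}
# against the closed-form expressions for a2 and a3, using assumed
# sample initial data a0 = v(0), a1 = v'(0) and viscosity nu.
a0, a1, nu = Fraction(1), Fraction(2), Fraction(3)
a = [a0, a1]
for n in range(4):
    s = sum(a[j] * (n - j + 1) * a[n - j + 1] for j in range(n + 1))
    a.append(s / (nu * (n + 2) * (n + 1)))

assert a[2] == a0 * a1 / (2 * nu)
assert a[3] == (a0**2 * a1 + nu * a1**2) / (6 * nu**2)
print(a[2], a[3], a[4])  # 1/3 7/27 25/324
```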
■

Example 7: Consider the initial value problem for the undamped Duffing equation:

$y'' + k\, y - \varepsilon \,y^3 =0, \qquad y(0) = 1 , \quad y' (0) =0.$
We can rewrite the Duffing equation in the operator form:
$L\left[ y \right] \equiv \texttt{D}^2 y = N \left[ y \right] \equiv \varepsilon \,y^3 - k\, y.$
Physically, the restoring force of the oscillator is proportional to N[y]. When k ≥ 0 and ε < 0, there is a unique equilibrium point y = 0, where N[y] vanishes. However, when k and ε have the same sign, there are two additional equilibrium points, $$y = \pm\sqrt{k/\varepsilon} .$$ From the physical point of view, the sum of the kinetic and potential energies of the oscillator remains constant; therefore the oscillatory motion is periodic, whether k is positive or negative. Thus, on physical grounds alone, we know that y(t) is periodic even without solving the Duffing equation directly.    ■
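The power series machinery extends to this nonlinear equation: writing y = Σ cₙtⁿ and using the Cauchy product for y³ gives $(n+2)(n+1)\,c_{n+2} = \varepsilon\,[y^3]_n - k\,c_n$. A Python sketch (the helper function is our own, using exact rationals):

```python
from fractions import Fraction

def duffing_series(k, eps, N):
    """Taylor coefficients of y'' = eps*y^3 - k*y, y(0) = 1, y'(0) = 0,
    from (n+2)(n+1) c_{n+2} = eps*[y^3]_n - k*c_n, where the double
    Cauchy product gives [y^3]_n = sum_{i+j+l=n} c_i c_j c_l."""
    c = [Fraction(1), Fraction(0)]
    for n in range(N - 1):
        cube_n = sum(c[i] * c[j] * c[n - i - j]
                     for i in range(n + 1) for j in range(n + 1 - i))
        c.append((eps * cube_n - k * c[n]) / ((n + 2) * (n + 1)))
    return c

# Sanity check: eps = 0, k = 1 reduces to y'' + y = 0, i.e. cos t.
print(*duffing_series(Fraction(1), Fraction(0), 4))  # 1 0 -1/2 0 1/24
```

Note that the series only represents the periodic solution locally; it says nothing by itself about global periodicity, which is the point of the energy argument above.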

Example 8: Consider the second-order nonlinear differential equation with the product of the derivative and a sinusoidal nonlinearity of the solution

$\frac{{\text d}^2 y}{{\text d}x^2} - \cos \left( x\,y \right) \frac{{\text d}y}{{\text d}x} = x, \qquad y(0) =1, \quad y' (0) =-1.$
■

Example 9: Consider the pendulum-like equation

$\frac{{\text d}^2 y}{{\text d}t^2}= \sin y, \qquad y(0) = \pi, \quad \dot{y}(0) = -2.$
This equation arises from the general sine-Gordon equation
$u_{tt} - u_{xx} + \sin u =0 ,$
which is a model for nonlinear meson fields with periodic properties, used for a unified description of mesons and their particle sources. A simple solution of the sine-Gordon equation may be found by representing u as a function of $$\xi = \left( x - vt \right) /\sqrt{1-v^2} .$$ In this case, the sine-Gordon equation reduces to the pendulum-like equation $${\text d}^2 u/{\text d} \xi^2 = \sin u .$$ For a real physical velocity v < 1, there are implicit solutions given by
$\sin \left( \frac{u}{2} \right) = \pm \operatorname{sech} \left( \xi - \xi_0 \right) ,$
with total energy $$\displaystyle E = \frac{2}{\pi} \,\frac{1}{\sqrt{1-v^2}} .$$ These solutions may be interpreted as the fields associated with a particle of mass 2/π centered at ξ = ξ0 and moving with velocity v.
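The implicit solution can be checked directly: with ξ0 = 0 and the plus sign, sin(u/2) = sech ξ corresponds to the kink u(ξ) = 4 arctan(eᵠ) (a standard explicit form we assume here), which satisfies u'' = sin u. A quick finite-difference check in Python:

```python
import math

# Finite-difference check that the kink u(xi) = 4 arctan(exp(xi))
# (an assumed explicit form; it satisfies sin(u/2) = sech(xi))
# solves the pendulum-like equation u'' = sin u.
def u(xi):
    return 4.0 * math.atan(math.exp(xi))

h = 1e-4
for xi in (-1.0, 0.0, 0.7, 2.0):
    u_second = (u(xi + h) - 2.0 * u(xi) + u(xi - h)) / h**2
    assert abs(u_second - math.sin(u(xi))) < 1e-5
    assert abs(math.sin(u(xi) / 2.0) - 1.0 / math.cosh(xi)) < 1e-12
print("u(xi) = 4 arctan(exp(xi)) satisfies u'' = sin u")
```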

■

Example 10: The partial differential equation known as the KdV equation can be reduced by an appropriate substitution to the following ordinary differential equation

$v' = k\,v''' - v\,v' .$
■

In all problems, y' denotes the derivative of the function y(x) with respect to the independent variable x, and $$\dot{y} = {\text d} y/{\text d}t$$ denotes the derivative with respect to the time variable.
1. Using ADM, solve the initial value problem: $$e^x \,y'' + x\,y = 0 , \quad y(0) =A, \quad y' (0) = B .$$
