Preface


This section (including the following subsections) is devoted to the analysis of linear differential equations with singular points. Its theory is due mainly to the German mathematicians Carl Gauss (1777--1855), Bernhard Riemann (1826--1866), Lazarus Fuchs (1833--1902), and Georg Frobenius (1849--1917). Gauss and Riemann initiated the investigation through their profound studies of the second order hypergeometric equation (1812, 1857). The general equation of order n with regular singular points was treated by Fuchs (1866), and his method was later simplified by Frobenius (1873). It was Fuchs who wrote down the equation later called the indicial equation, whose roots determine the behavior of solutions near a regular singular point. Fuchs also pointed out that the singular points of the solutions must lie among the singular points of the coefficients, and so, in particular, they are fixed. It was Frobenius who emphasized the convenience of introducing the factors x² and x into the coefficients of second order equations that are now sometimes said to be in Frobenius normal form. The history of the discovery of the power series method and its applications to differential equations with regular singular points is discussed in Gray's article.

Return to computing page for the first course APMA0330
Return to computing page for the second course APMA0340
Return to Mathematica tutorial for the first course APMA0330
Return to Mathematica tutorial for the second course APMA0340
Return to the main page for the course APMA0330
Return to the main page for the course APMA0340
Return to Part V of the course APMA0330

Singular points


First, we extend the definition of singular points for differential equations of the second order.
Let R(x, y, p) be a continuous function in some neighborhood of the point P = (x0, y0, y1), except possibly at P itself. This point is called a singular point for the differential equation d²y/dx² = R(x, y, y') if the initial value problem
\[ y'' = R(x,y,y' ), \qquad y(x_0 ) = y_0 , \quad y' (x_0 ) = y_1 , \]
does not have a unique solution, or its solution does not exist, or the solution or its derivatives y' = dy/dx or y'' = d²y/dx² are discontinuous. If the function R(x, y, p) is continuous within a neighborhood of the point P and the initial value problem has a unique solution that is continuous and bounded together with y' and y'', we call this point an ordinary point.

For a linear differential equation, the definition above can be simplified because linear differential equations with smooth coefficients do not possess singular solutions, so the corresponding initial value problem always has a unique solution. Therefore, for linear equations a singular point can be identified by its abscissa alone.
A point x0 is called a singular point of the linear homogeneous differential equation of order n
\begin{equation} \label{Eqsingular.1} y^{(n)} + a_{n-1} (x)\, y^{(n-1)} + \cdots + a_0 (x) \, y(x) =0 , \end{equation}
if at least one of the coefficients a_{n-1}, …, a_0 fails to be holomorphic at that point. If all the coefficients of Eq.\eqref{Eqsingular.1} are analytic at a point x0, then the point is said to be an ordinary point.
In other words, a singular point is a point where at least one coefficient is undefined (discontinuous) or multivalued. Initial conditions are usually not specified at singular points because the corresponding initial value problems either have no solution or, if they do, the initial conditions cannot be chosen arbitrarily. In other words, no existence and uniqueness theorems are available for problems with initial conditions specified at a singular point.

By changing the independent variable, we can always place a singular point at the origin (without loss of generality). Now we need another definition.

A point x = 0 is called a regular singular point of the linear homogeneous differential equation of order n, if the equation can be written in the following Frobenius normal form:
\begin{equation} \label{Eqsingular.2} x^n a_n (x)\, y^{(n)} + x^{n-1} a_{n-1} (x)\, y^{(n-1)} + \cdots + a_0 (x) \, y(x) =0 , \end{equation}
where the functions \( a_n (x), a_{n-1} (x) , \ldots , a_0 (x) \) are regular (analytic) in a neighborhood of the origin. Otherwise, the singular point is referred to as an irregular singular point.
F.G. Frobenius (1873) was the first to suggest using this form in the analysis of differential equations with regular singular points. The above definition tells us that a differential equation with regular singular points resembles an Euler equation. A second order differential equation in normalized form is
\begin{equation} \label{Eqsingular.3} y'' + p(x)\, y' + q(x)\,y =0 . \end{equation}
If either or both of the coefficients p(x) and q(x) fail to be holomorphic (analytic) at a point x = x0, but both (x - x0) p(x) and (x - x0)² q(x) are holomorphic in a neighborhood of x0, then x0 is a regular singular point for the given differential equation.
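This test is easy to automate with a computer algebra system. Below is a small illustrative sketch in Python using the sympy library; the helper name classify_point is mine, and the finiteness of the limits is the practical check suggested by holomorphy of (x − x0) p(x) and (x − x0)² q(x):

```python
import sympy as sp

x = sp.symbols('x')

def classify_point(p, q, x0):
    """Classify x0 for y'' + p(x) y' + q(x) y = 0 (illustrative sketch).

    Returns 'ordinary' when p and q have finite limits at x0,
    'regular singular' when (x - x0)*p and (x - x0)**2*q do,
    and 'irregular singular' otherwise."""
    p, q = sp.sympify(p), sp.sympify(q)
    if sp.limit(p, x, x0).is_finite and sp.limit(q, x, x0).is_finite:
        return 'ordinary'
    lim_p = sp.limit((x - x0)*p, x, x0)
    lim_q = sp.limit((x - x0)**2*q, x, x0)
    if lim_p.is_finite and lim_q.is_finite:
        return 'regular singular'
    return 'irregular singular'

print(classify_point(x, x**2, 0))       # ordinary
print(classify_point(1/x, 1, 0))        # regular singular (Bessel-type)
print(classify_point(0, 1/x**3, 0))     # irregular singular
```

The limit test is one-sided in sympy (from the right by default), which suffices for the examples treated in this section.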

Singular points come in two different forms: regular and irregular. Regular singular points are well behaved, and we analyze this case in detail. Irregular singular points are much more difficult to analyze, and we do not treat them here. If a differential equation has a singular point at x = 0, initial conditions are not specified at this point because the solutions may not be defined there; and even when they are defined, they may fail to be unique or to admit a power series expansion.

Example: We present several examples of singular differential equations that cause difficulties in defining the solution at the singular point. For instance, consider the first order differential equation with a regular singular point

\[ \frac{{\text d}y}{{\text d}t} + \frac{p}{t}\,y =0 \qquad \Longrightarrow \qquad \frac{{\text d}y}{y} = - \frac{p}{t} \,{\text d}t . \]
Obviously, the ratio y/t is undefined at the origin and becomes infinite as t → 0 with y ≠ 0. Integration yields the general solution
\[ y(t) = C\, t^{-p} , \]
with a constant of integration C. So for p > 0, this function blows up at the origin. Moreover, it is impossible to find a finite value of C that would enable us to assign a value to y other than 0 when t = 0.
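This computation is easy to verify with a computer algebra system; a sympy sketch with the sample exponent p = 3/2 (any p > 0 behaves the same way):

```python
import sympy as sp

t = sp.symbols('t', positive=True)
y = sp.Function('y')
p = sp.Rational(3, 2)   # sample positive exponent (illustrative choice)

# y' + (p/t) y = 0 has the general solution C * t**(-p)
ode = sp.Eq(y(t).diff(t) + (p/t)*y(t), 0)
sol = sp.dsolve(ode, y(t))
print(sol)                              # general solution proportional to t**(-3/2)
print(sp.limit(t**(-p), t, 0, '+'))     # oo: for p > 0 the solution blows up at 0
```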

On the other hand, the differential equation with an irregular singular point

\[ \frac{{\text d}y}{{\text d}x} + \frac{p}{x^2}\,y =0 \qquad \Longrightarrow \qquad \frac{{\text d}y}{y} = - \frac{p}{x^2} \,{\text d}x , \]
has the general solution
\[ y(x) = C\, e^{p/x} . \]
Obviously, this function has no limit as x → 0.

The differential equation \( \frac{{\text d}y}{{\text d}x} = \frac{x+y}{x} \) has a regular singular point at the origin, and its right-hand side becomes infinite when x = 0, y ≠ 0. Its general solution \( y = x\,\ln x + C\,x \) takes the value 0 when x = 0. However, it is impossible to express the solution as a Maclaurin series in powers of x. In this case, we have infinitely many solutions for the single initial value y(0) = 0, and no solution for any other initial value of y at x = 0.

The second order differential equation

\[ \frac{{\text d}^2 y}{{\text d}x^2} = \frac{2\,y}{x^2} \]
has a regular singular point at x = 0. Its general solution is
\[ y(x) = C_1 x^2 + \frac{C_2}{x} . \]
Requiring the solution to be bounded at x = 0 eliminates the second term, so the bounded solution becomes \( y(x) = C_1 x^2 , \) with an unspecified constant C1. Therefore, the two initial conditions \( y(0) = 0 , \quad y' (0) = 0 \) are unable to determine the constant C1, and such a problem has infinitely many solutions. On the other hand, any nonzero initial condition at the origin cannot be satisfied, and the corresponding initial value problem has no solution.    ■
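Both members of the claimed fundamental system can be checked by direct substitution; for instance, with sympy:

```python
import sympy as sp

x = sp.symbols('x')

# verify the fundamental system {x**2, 1/x} of y'' = 2*y/x**2
for u in (x**2, 1/x):
    residual = sp.simplify(u.diff(x, 2) - 2*u/x**2)
    print(u, '->', residual)   # 0 in both cases
```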

We do not consider differential equations with irregular singularities, for two reasons. First, they rarely occur in practical applications. Second, for these equations we do not have general existence and uniqueness theorems, mostly because the behavior of solutions near such singular points is not well understood.

Example: We present some examples of differential equations having regular singular points.

  1. The differential equation \( \displaystyle x^2 y'' + \left( \cos x -1 \right) y' + e^x\,y =0 \) has a singular point at the origin. Calculating the limits
    \begin{align*} \lim_{x \to 0} x\,p(x) &= \lim_{x \to 0} x\,\frac{\cos x -1}{x^2} = 0 \quad \mbox{by l'Hopital's rule}, \\ \lim_{x \to 0} x^2\,q(x) &= \lim_{x \to 0} x^2\,\frac{e^x}{x^2} = 1 < \infty . \end{align*}
    Since both limits are finite, the point x = 0 is a regular singular point.
  2. The differential equation \( \left( x-2 \right) y'' + x^{-1} y' + \left( x+1 \right) y = 0 \) has two regular singular points at x = 0 and x = 2 because it can be written as
    \[ y'' + \frac{1}{x\left( x-2 \right)} \, y' + \frac{x+1}{x-2}\, y =0. \]
    In order to determine whether the finite singular points are regular or irregular, we find the limits:
    \begin{align*} \lim_{x \to 0} \left( x \times \frac{1}{x\left( x-2 \right)} \right) &= \lim_{x \to 0} \, \frac{1}{x-2} = -\frac{1}{2} ; \\ \lim_{x \to 0} \left( x^2 \times \frac{x+1}{x-2} \right) &= \lim_{x \to 0} \, \frac{x^2 \left( x+1 \right)}{x-2} = 0. \end{align*}
    Since both limits are finite, the singular point x = 0 is regular. Next, we consider the other limits
    \begin{align*} \lim_{x \to 2} \left( (x-2) \times \frac{1}{x\left( x-2 \right)} \right) &= \lim_{x \to 2} \, \frac{1}{x} = \frac{1}{2} ; \\ \lim_{x \to 2} \left( (x-2)^2 \times \frac{x+1}{x-2} \right) &= \lim_{x \to 2} \, \left( x+1 \right) (x-2) = 0. \end{align*}
    So the finite singular point x = 2 is regular.

    That is not the end: we should also check the singularity at infinity, since its type is one of the key features distinguishing ODE classes. Therefore, we make the transformation t = 1/x. Then the derivatives take the form

    \begin{align*} \frac{{\text d}y}{{\text d}x} &= \frac{{\text d}y}{{\text d}t} \, \frac{{\text d}t}{{\text d}x} = - \frac{1}{x^2} \, \frac{{\text d}y}{{\text d}t} = - t^2 \frac{{\text d}y}{{\text d}t} , \\ \frac{{\text d}^2 y}{{\text d}x^2} &= \frac{\text d}{{\text d}x} \left( -t^2 \frac{{\text d}y}{{\text d}t} \right) = \frac{\text d}{{\text d}t} \left( -t^2 \frac{{\text d}y}{{\text d}t} \right) \frac{{\text d}t}{{\text d}x} = t^4 \frac{{\text d}^2 y}{{\text d}t^2} + 2t^3 \frac{{\text d}y}{{\text d}t} \end{align*}
    Putting these quantities into the given equation, we obtain
    \[ t^4 \frac{{\text d}^2 y}{{\text d}t^2} + 2t^3 \frac{{\text d}y}{{\text d}t} - \frac{t^2}{1-2t} \, t^2 \frac{{\text d}y}{{\text d}t} + \frac{t+1}{1-2t}\, y = 0. \]
    Upon division by t4 and some simplification, we get
    \[ \frac{{\text d}^2 y}{{\text d}t^2} + \frac{2-5t}{t(1-2t)} \, \frac{{\text d}y}{{\text d}t} + \frac{1}{t^4}\,\frac{t+1}{1-2t}\, y = 0. \]
    Finally, we check the singularity at t = 0 by calculating the limits:
    \begin{align*} \lim_{t \to 0} \left( t \times \frac{2-5t}{t(1-2t)} \right) = \lim_{t \to 0} \, \frac{2-5t}{1-2t} = 2 , \\ \lim_{t \to 0} \left( t^2 \times \frac{1}{t^4}\,\frac{t+1}{1-2t} \right) = \lim_{t \to 0} \, \frac{1}{t^2}\,\frac{t+1}{1-2t} = \infty . \end{align*}
    Since the second limit is infinite, the singularity at infinity is irregular. Hence, the given differential equation is of confluent Heun type.
   ■
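The classification limits in this example, including the behavior at infinity, can be verified with sympy. The transformation formulas P(t) = 2/t − p(1/t)/t² and Q(t) = q(1/t)/t⁴ follow from the chain-rule computation carried out above:

```python
import sympy as sp

x, t = sp.symbols('x t', positive=True)
p = 1/(x*(x - 2))        # coefficient of y'
q = (x + 1)/(x - 2)      # coefficient of y

# finite singular points
print(sp.limit(x*p, x, 0), sp.limit(x**2*q, x, 0))              # -1/2, 0
print(sp.limit((x - 2)*p, x, 2), sp.limit((x - 2)**2*q, x, 2))  # 1/2, 0

# point at infinity: x = 1/t turns y'' + p y' + q y = 0 into
# y'' + P(t) y' + Q(t) y = 0 with P = 2/t - p(1/t)/t**2, Q = q(1/t)/t**4
P = sp.simplify(2/t - p.subs(x, 1/t)/t**2)
Q = sp.simplify(q.subs(x, 1/t)/t**4)
print(sp.limit(t*P, t, 0), sp.limit(t**2*Q, t, 0))   # 2, oo -> irregular at infinity
```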

Example: We present some examples of differential equations having irregular singular points.

  1. The differential equation \( \displaystyle x^3 y'' + y =0 \) has a singular point at the origin. We rewrite the equation with the second derivative isolated: \( \displaystyle y'' + \frac{y}{x^3} = 0 . \) Calculating the limit
    \begin{align*} \lim_{x \to 0} x^2\,q(x) &= \lim_{x \to 0} x^2\,\frac{1}{x^3} = \lim_{x \to 0} \frac{1}{x} = \infty. \end{align*}
    Since this limit is not finite, the point x = 0 is an irregular singular point.

    We also check the point at infinity for a singularity, making the substitution t = 1/x. Then the derivatives become

    \begin{align*} \frac{{\text d}y}{{\text d}x} &= \frac{{\text d}y}{{\text d}t} \, \frac{{\text d}t}{{\text d}x} = - \frac{1}{x^2} \, \frac{{\text d}y}{{\text d}t} = - t^2 \frac{{\text d}y}{{\text d}t} , \\ \frac{{\text d}^2 y}{{\text d}x^2} &= \frac{\text d}{{\text d}x} \left( -t^2 \frac{{\text d}y}{{\text d}t} \right) = \frac{\text d}{{\text d}t} \left( -t^2 \frac{{\text d}y}{{\text d}t} \right) \frac{{\text d}t}{{\text d}x} = t^4 \frac{{\text d}^2 y}{{\text d}t^2} + 2t^3 \frac{{\text d}y}{{\text d}t} . \end{align*}
    For the new independent variable t, the differential equation becomes
    \[ \frac{1}{t^3} \left( t^4 \frac{{\text d}^2 y}{{\text d}t^2} + 2t^3 \frac{{\text d}y}{{\text d}t} \right) + y = 0, \]
    which is simplified to
    \[ t\,\frac{{\text d}^2 y}{{\text d}t^2} + 2\, \frac{{\text d}y}{{\text d}t} + y = 0 , \]
    for which the origin t = 0 is a regular singular point. Therefore, we conclude that the given differential equation \( \displaystyle x^3 y'' + y =0 \) has a regular singular point at infinity.
  2. Consider the differential equation
    \[ \frac{{\text d} y}{{\text d}x} + 2\, \frac{\sec x}{\tan x}\,y = 3\,\sec^2 x , \qquad y(1) =0 . \]
    Since \( \sec x/\tan x = \csc x , \) singularities of this differential equation are caused by two functions: the poles of the cosecant, which occur at the zeros of the sine, \( \displaystyle x = k\,\pi , \) and the poles of the secant, \( \displaystyle x = \frac{\pi}{2} + n\,\pi, \) where k, n = 0, ±1, ±2, ... . Initial conditions cannot be specified at singular points because we do not know whether the solution of such an initial value problem exists (and is unique). Since our initial condition y(1) = 0 is specified at x = 1, we expect the corresponding solution to exist within the interval 0 < x < π/2 ≈ 1.5708.

    The singular point x = 0 is a regular singular point because tan x ∼ x as x → 0, so x · 2 csc x → 2, a finite limit. On the other hand, the point x = π/2 is an irregular singular point for the given differential equation because the forcing term 3 sec²x has a pole of second order there.

   ■
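For the first order equation in the last item, the coefficient simplification and the classification of the origin can be confirmed with sympy:

```python
import sympy as sp

x = sp.symbols('x')
p = 2*sp.sec(x)/sp.tan(x)    # coefficient of y in the first order equation

print(sp.simplify(p - 2*sp.csc(x)))   # 0: the coefficient is just 2*csc(x)
print(sp.limit(x*p, x, 0))            # 2: finite, so x = 0 is a regular singular point
```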

 

Second order differential equations


     
       Lazarus Immanuel Fuchs (1833--1902)            Ferdinand Georg Frobenius (1849--1917)

We reformulate the definition of a regular singular point for second order linear differential equations.
A differential equation
\[ \frac{{\text d}^2 y}{{\text d} x^2} + p(x)\, \frac{{\text d} y}{{\text d} x} + q(x) y(x) = 0 \tag{3} \]
is said to have a singular point at x = x0 if at least one of the coefficients, p(x) or q(x), is not holomorphic at this point. This singular point is called a regular singular point if and only if both of the products (x − x0) p(x) and (x − x0)² q(x) extend to holomorphic functions; in particular, the following limits exist and are finite:
\begin{equation} \label{Eqsingular.4} \begin{split} \lim_{x \to x_0} \left( \left( x-x_0 \right) \times p(x) \right) <\infty , \\ \lim_{x \to x_0} \left( \left( x-x_0 \right)^2 \times q(x) \right) <\infty . \end{split} \end{equation}
Otherwise, the singular point is called irregular.    ▣
The above definition can be stated more precisely. The point x0 is a regular singular point for the differential equation \( y'' + p(x)\,y' + q(x)\,y =0 \) if and only if the functions \( \left( x- x_0 \right) p(x) \) and \( \left( x- x_0 \right)^2 q(x) \) are holomorphic in a neighborhood of the point x = x0. In other words, the functions \( \left( x- x_0 \right) p(x) \) and \( \left( x- x_0 \right)^2 q(x) \) admit power series expansions with positive radius of convergence:
\begin{align*} \left( x- x_0 \right) p(x) &= p_0 + p_1 \left( x- x_0 \right) + p_2 \left( x- x_0 \right)^2 + \cdots , \\ \left( x- x_0 \right)^2 q(x) &= q_0 + q_1 \left( x- x_0 \right) + q_2 \left( x- x_0 \right)^2 + \cdots . \end{align*}
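For a concrete illustration, take the equation x²y'' + (cos x − 1)y' + eˣy = 0 from an earlier example; sympy produces the first few coefficients of these expansions directly:

```python
import sympy as sp

x = sp.symbols('x')
# normalized coefficients of x**2*y'' + (cos x - 1)*y' + exp(x)*y = 0
p = (sp.cos(x) - 1)/x**2
q = sp.exp(x)/x**2

print(sp.series(x*p, x, 0, 4))      # -x/2 + x**3/24 + O(x**4): p0 = 0, p1 = -1/2
print(sp.series(x**2*q, x, 0, 3))   # 1 + x + x**2/2 + O(x**3): q0 = 1, q1 = 1
```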

Example: Consider the differential equation

\[ \left( x+1 \right) \left( 3x-1 \right) y'' + \cos x \, y' -3x\,y = 0. \]
This equation has two regular singular points (where the coefficient of the second derivative vanishes): x = -1 and x = 1/3. Therefore, the initial conditions cannot be specified at these two points. If the initial conditions are specified at the origin, the radius of convergence of the power series solution is the distance to the nearest singular point. So we conclude that the corresponding power series solution converges within the circle |x| < 1/3.    ■
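A quick way to locate the singular points and the guaranteed radius of convergence is to find the roots of the leading coefficient; a sympy sketch:

```python
import sympy as sp

x = sp.symbols('x')
lead = (x + 1)*(3*x - 1)             # coefficient of y''
sing = sp.solve(sp.Eq(lead, 0), x)
print(sing)                          # the two singular points, -1 and 1/3
radius = min(abs(s) for s in sing)   # distance from the origin to the nearest one
print(radius)                        # 1/3
```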

Example: Consider the Picard–Fuchs equation:

\[ x^2 \left( 1-x \right)^2 \frac{{\text d}^2 y}{{\text d} x^2} + x \left( 1-x \right)^2 \frac{{\text d} y}{{\text d} x} + \frac{31x-4}{144} \, y = 0 . \tag{A} \]
There is another differential equation that is also referred to as the Picard–Fuchs equation: \[ x \left( x-1 \right) y'' + \left( 2x -1 \right) y' + \frac{1}{4}\, y = 0. \tag{B} \] Both Picard–Fuchs equations have two regular singular points, at x = 0 and x = 1.    ■

Example: The differential equation \[ x \left( 1 - x^2 \right) y'' + \left( 1 - 3x^2 \right) y' -x\,y =0 \] was studied by both Legendre (1825) and Kummer (1836) because it describes the periods of the complete elliptic integrals as functions of the modulus x. Kummer recognized this equation as reducible to the hypergeometric equation \[ x \left( 1 - x \right) y'' + \left[ \gamma - \left( \alpha + \beta + 1 \right) x \right] y' - \alpha\beta\, y =0. \] The Legendre--Kummer differential equation has three regular singular points: x = 0 and x = ±1.    ■

 

Initial conditions at a singular point


Initial conditions are usually not specified at a singular point because the corresponding initial value problem may have either multiple solutions or no solution at all. In the case of an irregular singular point, the initial conditions may be meaningless. For example, the differential equation
\[ x^4 \, \frac{{\text d}^2 y}{{\text d} x^2} + 2\,x^3 \, \frac{{\text d} y}{{\text d} x} + y = 0 \qquad \Longleftrightarrow \qquad x^2 \frac{{\text d} }{{\text d} x} \left( x^2 \frac{{\text d} y}{{\text d} x} \right) + y = 0 , \quad x\ne 0, \]
has two linearly independent solutions
\[ u(x) = \sin \left( \frac{1}{x} \right) \qquad\mbox{and} \qquad v(x) = \cos \left( \frac{1}{x} \right) . \]
Neither of them has a limit at x = 0.
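Direct substitution confirms that both functions solve the equation; with sympy:

```python
import sympy as sp

x = sp.symbols('x')
# verify that sin(1/x) and cos(1/x) solve x**4*y'' + 2*x**3*y' + y = 0
for u in (sp.sin(1/x), sp.cos(1/x)):
    residual = sp.simplify(x**4*u.diff(x, 2) + 2*x**3*u.diff(x) + u)
    print(u, '->', residual)   # 0 for both
```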

The Swiss astrophysicist and meteorologist Robert Emden (1862--1940) posed the following problem in the early 1900s:

Determine the first point on the positive x-axis where the solution to the initial value problem
\[ x\,y'' + 2\,y' + x\,y(x) = 0 , \qquad y(0) = 1, \quad y' (0) = 0 , \]
is zero.
His problem shows that a physical situation sometimes requires us to consider initial value problems with conditions imposed at the singular point. We will show later that the second initial condition, for the derivative \( y' (0) = 0 \), cannot be chosen arbitrarily and is actually not needed because it is satisfied automatically.
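The answer to Emden's problem is derived later in the text; meanwhile, one can verify with sympy that the bounded solution is y(x) = sin(x)/x (a well-known closed form), that both conditions hold at the singular point, and that the first positive zero is x = π:

```python
import sympy as sp

x = sp.symbols('x')
u = sp.sin(x)/x    # the bounded solution of Emden's problem

residual = sp.simplify(x*u.diff(x, 2) + 2*u.diff(x) + x*u)
print(residual)                     # 0: u solves x*y'' + 2*y' + x*y = 0
print(sp.limit(u, x, 0))            # 1: the condition y(0) = 1 holds
print(sp.limit(u.diff(x), x, 0))    # 0: y'(0) = 0 holds automatically
print(u.subs(x, sp.pi))             # 0: the first positive zero is x = pi
```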

Example: Let us consider the Euler equation \[ x^2 y'' + x\,y' - y =0 . \] It has the general solution \[ y(x) = C_1 x + C_2 x^{-1} , \] with arbitrary constants C1 and C2. Suppose we are given the initial condition y(0) = 1. To satisfy it, we must set C2 = 0 because x-1 is undefined at the origin. However, this does not help: with any choice of C1 the initial condition cannot be met, since C1x vanishes at the origin. So our initial value problem has no solution.

Now suppose that the initial condition is homogeneous, y(0) = 0. Then every member of the family \[ y(x) = C_1 x , \] satisfies the equation together with this condition, for any choice of the constant C1; so this single condition admits infinitely many solutions. Adding another condition, we get the initial value problem \[ x^2 y'' + x\,y' - y =0 , \qquad y(0) = 0 , \quad y' (0) = a. \] Here the derivative condition merely selects C1 = a from the family, yielding y(x) = 𝑎x; the second condition is not independent data but only a label for a member of the one-parameter family of solutions passing through the singular point.

Let us consider another Euler equation subject to one initial condition \[ x^2 y'' + 2x\,y' = 0 \qquad \Longleftrightarrow \qquad \frac{{\text d}^2 y}{{\text d} x^2} + \frac{2}{x}\,\frac{{\text d} y}{{\text d} x} = 0 , \qquad y(0) = 1. \] The general solution of the given Euler equation is \[ y(x) = C_1 + C_2 x^{-1} , \] with arbitrary constants C1 and C2. To satisfy the single initial condition, we must set C1 = 1 and C2 = 0. This yields the constant solution \[ y(x) = 1. \] Any second initial condition is then either redundant (the condition y'(0) = 0 is satisfied automatically) or leads to no solution.    ■
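The two Euler equations in this example can be checked with sympy's dsolve; a quick sketch (note that it is x²y'' + xy' − y = 0, with a minus sign on the zeroth-order term, whose fundamental system is {x, x⁻¹}):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.Function('y')

# Euler equation with fundamental system {x, 1/x}
ode1 = sp.Eq(x**2*y(x).diff(x, 2) + x*y(x).diff(x) - y(x), 0)
sol1 = sp.dsolve(ode1, y(x))
print(sol1)

# Euler equation with fundamental system {1, 1/x}
ode2 = sp.Eq(x**2*y(x).diff(x, 2) + 2*x*y(x).diff(x), 0)
sol2 = sp.dsolve(ode2, y(x))
print(sol2)
```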

 

Summary


Suppose we are given a homogeneous linear second order ordinary differential equation in the form \[ y''+p(x)\,y'+ q(x)\,y=0, \tag{3} \] where \( y' = \texttt{D} y = {\text d}y/{\text d}x \) and \( y'' = \texttt{D}^2 y = {\text d}^2 y/{\text d}x^2 \) are derivatives of y with respect to the independent variable x. We seek a series solution around the point x = 0. (An expansion about any other point x0 can be obtained by changing the independent variable to t = x - x0 and then analyzing the resulting equation near t = 0.) There are several different cases to consider.
  1. If x = 0 is an ordinary point of equation \eqref{Eqsingular.3}, then we may assume that p(x) and q(x) have the known Maclaurin expansions \begin{equation} \label{Eqsingular.6} p(x) = \sum_{n\ge 0} p_n x^n , \qquad q(x) = \sum_{n\ge 0} q_n x^n , \end{equation} in the region |x| < ρ, where ρ represents the minimum of the radii of convergence of the two series in formulas \eqref{Eqsingular.6}. In this case, equation \eqref{Eqsingular.3} will have two linearly independent solutions of the form \begin{equation} \label{Eqsingular.7} y(x) = \sum_{n=0}^{\infty} c_n x^n . \end{equation}
  2. Alternatively, if x = 0 is a regular singular point of equation \eqref{Eqsingular.3}, then we may assume that p(x) and q(x) have the known expansions \begin{equation} p(x) = \sum_{n=-1}^{\infty} p_n x^n , \qquad q(x) = \sum_{n=-2}^{\infty} q_n x^n , \label{Eqsingular.8} \end{equation} in the region |x| < ρ. A solution to the homogeneous equation \eqref{Eqsingular.3} with a regular singular point at the origin is assumed to have a generalized power series expansion \begin{equation} \label{Eqsingular.9} y(x) = x^{\alpha} \sum_{n\ge 0} c_n x^n . \end{equation} Upon substituting the solution series \eqref{Eqsingular.9} and the coefficient series \eqref{Eqsingular.8} into the given differential equation \eqref{Eqsingular.3}, we need to determine the roots of the indicial equation \begin{equation} \label{Eqsingular.10} \alpha^2 + \alpha \left( p_{-1}-1\right) + q_{-2} = 0. \end{equation} This equation is obtained by using \( y=x^{\alpha} \) in equation \eqref{Eqsingular.3}, along with the expansions from \eqref{Eqsingular.9} and \eqref{Eqsingular.8}, and then collecting the coefficient of the lowest order term. The two roots of this equation, α1 and α2, are called the exponents of the singularity. There are several sub-cases, depending on the values of the exponents of the singularity. Assuming that they are real:
    1. If α1 ≠ α2 and α1 - α2 is not equal to an integer, then equation \eqref{Eqsingular.3} will have two linearly independent solutions in the forms \begin{equation} \label{Eqsingular.11} \begin{split} y_1(x)&= |x|^{\alpha_1}\left( 1+\sum_{n=1}^{\infty}b_n x^n \right) , \\ y_2(x)&= |x|^{\alpha_2}\left( 1+\sum_{n=1}^{\infty}c_n x^n \right) . \end{split} \end{equation}
    2. If α1 = α2, then (calling α = α1 = α2) equation \eqref{Eqsingular.3} will have two linearly independent solutions of the form \begin{equation} \label{Eqsingular.12} \begin{split} y_1(x)&= |x|^{\alpha}\left( 1+\sum_{n=1}^{\infty}d_nx^n \right) , \\ y_2(x)&= y_1(x)\ln |x| + |x|^{\alpha}\sum_{n=0}^{\infty}e_nx^n. \end{split} \end{equation}
    3. If α1 = α2 + M, where M is an integer greater than 0, then equation \eqref{Eqsingular.3} will have two linearly independent solutions of the form \begin{equation} \label{Eqsingular.13} \begin{split} y_1(x)&= |x|^{\alpha_1}\left( 1+\sum_{n=1}^{\infty}f_nx^n \right) , \\ y_2(x)&= hy_1(x)\ln | x| + |x|^{\alpha_2}\sum_{n=0}^{\infty} g_nx^n, \end{split} \end{equation} where the parameter h may be equal to zero.
    4. If the exponents α1 and α2 are distinct complex conjugate numbers, then a solution can be found in the form \begin{equation} \label{Eqsingular.14} y(x) = x^{\alpha_1} \sum_{n=0}^{\infty} h_n x^n , \end{equation} where the coefficients { hn } are complex. In this case, two linearly independent solutions are the real and imaginary parts of y(x); that is, y1(x) = Re y(x) and y2(x) = Im y(x).
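The indicial equation \eqref{Eqsingular.10} is easy to form mechanically. As an illustration, here is a sympy sketch for Bessel's equation y'' + (1/x)y' + (1 − ν²/x²)y = 0, a standard example chosen for this sketch:

```python
import sympy as sp

x, alpha, nu = sp.symbols('x alpha nu')
# Bessel's equation in normalized form
p = 1/x
q = 1 - nu**2/x**2

p_m1 = sp.limit(x*p, x, 0)        # Laurent coefficient p_{-1}
q_m2 = sp.limit(x**2*q, x, 0)     # Laurent coefficient q_{-2}
indicial = sp.Eq(alpha**2 + alpha*(p_m1 - 1) + q_m2, 0)
print(indicial)                    # alpha**2 - nu**2 = 0
print(sp.solve(indicial, alpha))   # the exponents of the singularity: [-nu, nu]
```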

 

  1. Birkhoff, G.D., Singular points of ordinary linear differential equations, Transactions of the American Mathematical Society, 1909, Vol. 10, No. 4, pp. 436--470.
  2. Dettman, J.W., Power Series Solutions of Ordinary Differential Equations, The American Mathematical Monthly, 1967, Vol. 74, No. 3, pp. 428--430.
  3. Frobenius, F.G., Ueber die Integration der linearen Differentialgleichungen durch Reihen (in German; its English translation: On the integration of linear differential equations by means of series), Journal für die reine und angewandte Mathematik, 1873, 76, pp. 214--235.
  4. Fuchs, L., Zur Theorie der linearen Differentialgleichungen mit veränderlichen Coefficienten (in German; its English translation: On the theory of linear differential equations with variable coefficients), Journal für die reine und angewandte Mathematik, 1866, 66, pp. 121--160.
  5. Gray, J.J., Fuchs and the theory of differential equations, Bulletin of the American Mathematical Society, 1984, Volume 10, Number 1, pp. 1--26.
  6. Grigorieva, E., Methods of Solving Sequence and Series Problems, Birkhäuser; 1st ed. 2016.
  7. Kreshchuk, M., Gulden, T., The Picard–Fuchs equation in classical and quantum physics: application to higher-order WKB method, Journal of Physics A: Mathematical and Theoretical, 2019, Volume 52, Number 15, doi: 10.1088/1751-8121/aaf272
  8. Motsa, S.S., and Sibanda, P., A new algorithm for solving singular IVPs of Lane-Emden type, Latest Trends on Applied Mathematics, Simulation, Modelling, 2010.
  9. Yiğider, M., Tabatabaei, K., and Çelik, E., The numerical method for solving differential equations of Lane-Emden type by Padé approximation, Discrete Dynamics in Nature and Society, 2011, Article ID 479396, 9 pages. doi: 10.1155/2011/479396

 

Return to Mathematica page
Return to the main page (APMA0330)
Return to the Part 1 (Plotting)
Return to the Part 2 (First Order ODEs)
Return to the Part 3 (Numerical Methods)
Return to the Part 4 (Second and Higher Order ODEs)
Return to the Part 5 (Series and Recurrences)
Return to the Part 6 (Laplace Transform)
Return to the Part 7 (Boundary Value Problems)