Preface


This section discusses the convergence of sequences and series, an essential prerequisite for working with power series.


Series Convergence


This section reminds the reader of basic facts about convergence of sequences and series. These two concepts are essentially equivalent: every series is uniquely associated with its sequence of partial sums and, conversely, every sequence of numbers or functions can be viewed as the sequence of partial sums of some series. Recall that a sequence is a function of a discrete variable, for which we prefer the set of all nonnegative integers ℕ = {0,1,2,…}. So a sequence is nothing more than a list of entries written in a specific order. In our applications, the entries of a series or sequence are either numbers (real ℝ or complex ℂ) or real-valued functions; occasionally we will also use complex-valued functions. It is customary to denote the elements of a series or sequence with subscripts, such as cn rather than c(n). Mathematica supports this list notation by representing sequences in curly brackets: {𝑎,b,c,…}.

One of the first people to realize the importance of divergent series and to develop techniques for summing them was Leonhard Euler. In fact, Euler was of the opinion that any series is summable and one should just find the right method of summing it! Over the centuries, mathematicians developed several definitions of convergence. We will use only two of them, as the most practical.

A sequence {𝑎n}n≥0 is said to be convergent if it approaches some limit A:
\[ \lim_{n\to \infty} \, a_n = A . \]
This means that for any ε > 0, there exists an N such that \( \left\vert a_n - A \right\vert < \varepsilon \) for n > N.
This definition is naturally extended for series. We say that the infinite sum of real or complex numbers \( \sum_{n\ge 0} c_n \) converges to S, if the sequence of partial sums \( S_n = \sum_{k= 0}^n c_k \) converges to S.
Let \( \sum_{n\ge 0} a_n \) be a series of real or complex numbers, and let { sn }n≥0 denote the corresponding sequence of partial sums, where for each n ∈ ℕ, \( \displaystyle s_n = \sum_{k=0}^n a_k = a_0 + a_1 + \cdots + a_n . \)

A series \( \sum_{n\ge 0} a_n \) is called Cesàro summable, with Cesàro sum S, if the arithmetic mean of its partial sums s0, s1, s2, … tends to S as N tends to infinity:

\[ \lim_{N\to \infty} \,\frac{1}{N+1}\,\sum_{n=0}^N s_n = \lim_{N\to \infty} \,\sum_{k=0}^N \left( 1- \frac{k}{N+1} \right) a_k = S, \qquad\mbox{where} \quad s_n = \sum_{k=0}^n a_k . \]
When a series \( \sum_{n\ge 0} a_n \) is Cesàro summable to S, we abbreviate this as (C,1)\( \sum_{n\ge 0} a_n = S . \)
Note that if the series \( \sum_{n\ge 0} a_n \) converges to S, then its Cesàro sum always exists and equals the same value S. However, some divergent series have a limit in the Cesàro sense. For instance, the series \( \sum_{n\ge 0} (-1)^n \) does not converge because its general term does not tend to zero, but its Cesàro sum equals ½. If a series \( \sum_{n\ge 1} a_n \) starts with index 1, its Cesàro sum is
\[ (C,1)\sum_{n\ge 1} a_n = \lim_{N\to \infty} \,\frac{1}{N}\,\sum_{n=1}^N s_n = \lim_{N\to \infty} \,\sum_{k=1}^N \left( 1- \frac{k-1}{N} \right) a_k = S, \qquad\mbox{where} \quad s_n = \sum_{k=1}^n a_k . \]
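To make the definition concrete, here is a small Mathematica experiment (an ad hoc check; the helper names a, s, and cesaro are ours) computing the Cesàro means of the divergent series \( \sum_{n\ge 0} (-1)^n \) mentioned above:
a[n_] := (-1)^n;
s[n_] := Sum[a[k], {k, 0, n}]; (* n-th partial sum: 1 for even n, 0 for odd n *)
cesaro[m_] := Sum[s[n], {n, 0, m}]/(m + 1); (* average of s[0], ..., s[m] *)
N[cesaro /@ {10, 100, 1000}]
{0.545455, 0.50495, 0.5005}
The averages approach ½, as claimed.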

If a sequence does not converge, it is said to diverge. However, two specific kinds of divergent sequences are in common use. We say that a sequence { 𝑎n } tends to infinity if we can make 𝑎n as large as we want for all sufficiently large n. We abbreviate this as \( \lim_{n\to \infty} a_n = \infty \) and say that the sequence diverges to ∞. Similarly, a sequence of real numbers { 𝑎n } tends to minus infinity if for every X < 0 there exists an N such that 𝑎n < X for all n > N.

Theorem 1: If the terms un > 0 decrease monotonically to zero, that is, un > un+1 for all n, then ∑nun converges to S if, and only if, sn - nun converges to S; here sn = u0 + u1 + ··· + un is the n-th partial sum.

Example: A geometric series is a series with a constant ratio between successive terms:

\[ G(q) = \sum_{n\ge 0} q^n = 1 + q + q^2 + q^3 + \cdots . \]
This series converges for |q| < 1 and diverges for |q| ≥ 1. However, for q = -1, it is Cesàro summable to ½. For |q| < 1, its sum can be found with a simple algebraic manipulation:
\[ G(q) - q\,G(q) = 1 \qquad \Longrightarrow \qquad G(q) = \frac{1}{1-q} . \]
Its partial sums are
\[ G_n (q) = \sum_{k=0}^n q^k = 1 + q + \cdots + q^n = \frac{1 - q^{n+1}}{1-q} , \]
which can be found in a similar way as G(q).    ■
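As a quick check, Mathematica confirms both the closed form of the partial sums and the limit (assuming |q| < 1 for the limit):
Simplify[Sum[q^k, {k, 0, n}]]
(-1 + q^(1 + n))/(-1 + q)
Limit[(1 - q^(n + 1))/(1 - q), n -> Infinity, Assumptions -> -1 < q < 1]
1/(1 - q)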

Divergent sequences and series play a prominent role in applications. For example, the series

\[ S(x) = \frac{1}{x} - \frac{1!}{x^2} + \frac{2!}{x^3} - \frac{3!}{x^4} + \frac{4!}{x^5} - \cdots , \qquad x > 0, \]
converges for no finite value of x. Nevertheless, if the series is truncated after a finite number of terms, it will give a good approximation to the value of the integral
\[ f(x) = \int_0^{\infty} \frac{e^{-t}}{x+t}\,{\text d}t , \qquad x > 0, \]
provided x is sufficiently large. It transpires that very many problems have solutions that are more usefully expressed in terms of divergent sequences or series rather than convergent counterparts.
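The following sketch illustrates this behavior for the arbitrary choices x = 10 and truncation after six terms (the helper truncated is ad hoc):
truncated[x_, m_] := Sum[(-1)^k k!/x^(k + 1), {k, 0, m}]; (* partial sum of the divergent series *)
N[truncated[10, 5]]
0.09152
NIntegrate[Exp[-t]/(10 + t), {t, 0, Infinity}]
0.0915633
Although the full series diverges, the truncation already agrees with the integral to four digits.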

Another important example of a divergent series is the harmonic series,

\[ \sum_{n\ge 1} \frac{1}{n} = \lim_{n\to\infty} H_n = \infty , \qquad\mbox{where} \quad H_n = \sum_{k=1}^n \frac{1}{k} = 1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{n} \]
is the n-th harmonic number. Its name derives from the concept of overtones, or harmonics in music.
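Mathematica's built-in HarmonicNumber shows the slow logarithmic growth behind this divergence:
N[HarmonicNumber[{10, 100, 1000}]]
{2.92897, 5.18738, 7.48547}
N[Log[1000] + EulerGamma]
7.48497
So Hn grows like ln n + γ and eventually exceeds any bound.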

 

The phrase "of the order of"


It is frequently necessary to compare the magnitudes of two sequences, particularly when making approximations. Therefore, we need a specific notation allowing such comparisons. For example, suppose we have two functions f(n) = n³ - 100 and g(n) = 10¹⁰n + 100; which one grows faster? Although g(n) is larger for moderate n, it is clear that f(n) eventually dominates.

Let f and g be two real- or complex-valued functions defined on some unbounded interval of ℝ. If the absolute value of f(x) is at most a positive constant multiple M of the absolute value of g(x) for all sufficiently large values of x,
\[ \left\vert f(x) \right\vert \le M\left\vert g(x) \right\vert \qquad \mbox{for all} \quad x \ge x_0 , \]
then we say that f is of order g and abbreviate it as
\[ f(x) = O\left( g(x) \right) \qquad\mbox{or} \qquad f(x) \in O\left( g(x) \right) \qquad \mbox{as} \quad x\to\infty . \]
If 𝑎n and bn are two sequences and a number N exists such that
\[ \left\vert a_n \right\vert \le M\left\vert b_n \right\vert \qquad\mbox{whenever} \quad n > N, \]
where positive constant M is independent of n, we say that 𝑎n is of order of bn and write
\[ a_n = O\left( b_n \right) \qquad\mbox{or} \qquad a_n \in O\left( b_n \right) \qquad \mbox{as} \quad n\to\infty . \]
Frequently, the phrase "as n → ∞", or equivalent, is omitted as it is clear from the context.

Example: Consider a sequence \( \displaystyle a_n = \frac{6\,n^2 -3\,n +7}{3\,n^2 + 5\,n -2} . \) Then upon some arithmetic manipulations, we get

\[ \frac{6\,n^2 -3\,n +7}{3\,n^2 + 5\,n -2} = 2\, \frac{1 - 1/(2\,n) + 7/(6\,n^2 )}{1 + 5/(3\,n) - 2/(3\,n^2 )} < 2 \]
for all n > 1. Thus, we write
\[ \frac{6\,n^2 -3\,n +7}{3\,n^2 + 5\,n -2} = O \left( 1 \right) . \]

Consider two sequences 𝑎n = n³ - 100 and bn = 10¹⁰n + 100. Their ratio is

\[ \frac{b_n}{a_n} = \frac{10^{10} n + 100}{n^3 - 100} = 10^{10}\,\frac{1}{n^2} \,\frac{1+ 10^{-8}/n}{1- 100/n^3} \qquad \longrightarrow \qquad \frac{b_n}{a_n} \in O\left( \frac{1}{n^2} \right) . \]
It would be perfectly correct to write
\[ b_n = O\left( a_n \right) \qquad \mbox{or} \qquad 10^{10} n + 100 = O \left( n^3 - 100 \right) = O \left( n^3 \right) \]
as n → ∞.    ■
In many applications, we need to know the order of a function f(x) as x tends to some value, not necessarily infinity. For example, when we need to find the derivative of a power function \( f(x) = x^m , \) we need to estimate the difference
\[ \left( x+h \right)^m - x^m = m\,x^{m-1}h + O\left( h^2 \right) \qquad \Longrightarrow \qquad \left( x+h \right)^m - x^m = O\left( h \right) \]
as h → 0. Here we used the binomial formula
\[ \left( x+ y \right)^m = x^m + m\,x^{m-1} y + \binom{m}{2} x^{m-2} y^2 + \cdots , \qquad\mbox{where} \quad \binom{m}{k} = \frac{m^{\underline{k}}}{k!} . \]
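A symbolic check in Mathematica, for the concrete (and arbitrary) choice m = 5, reproduces this estimate:
Series[(x + h)^5 - x^5, {h, 0, 2}]
5 x^4 h + 10 x^3 h^2 + O[h]^3
in agreement with m x^{m-1} h + O(h²).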

 

Absolute and conditional convergence


We would like to manipulate infinite series in the same way as we do polynomials. In particular, we would like to rearrange the entries in an infinite sum without modifying its value. The situation resembles matrix multiplication: the order in which the multiplications are performed changes the number of arithmetic operations required, though not the result. It turns out that not every convergent series admits rearrangement of its entries, but only those that satisfy a special property: absolute convergence.
A real or complex series \( \displaystyle \sum_{n\ge 0} a_n \) is called absolutely convergent if the series of its absolute values \( \displaystyle \sum_{n\ge 0} \left\vert a_n \right\vert \) converges; if the series itself converges while the series of absolute values diverges, the series is called conditionally convergent.

Theorem 2: If a series of real or complex numbers \( \displaystyle \sum_{n\ge 0} a_n \) is absolutely convergent, then it is also convergent.
First notice that for a real series, \( \displaystyle \left\vert a_n \right\vert \) is either the element 𝑎n or its negative, \( \displaystyle - a_n , \) depending on its sign. This means that we can estimate
\[ 0 \le a_n + \left\vert a_n \right\vert \le 2 \left\vert a_n \right\vert . \]
If the given series is absolutely convergent, then \( \displaystyle \sum_{n\ge 0} 2\left\vert a_n \right\vert \) is also convergent. Using the Comparison Test, we conclude that the series \( \displaystyle \sum_{n\ge 0} \left( a_n + \left\vert a_n \right\vert \right) \) is also convergent. Hence, we get
\[ \sum_{n\ge 0} a_n = \sum_{n\ge 0} \left( a_n + \left\vert a_n \right\vert \right) - \sum_{n\ge 0} \left\vert a_n \right\vert , \]
which shows that the given series is the difference of two convergent series and so is also convergent. (For a complex series, apply this argument separately to the real and imaginary parts.)    ▣
The above statement reflects the fact that absolute convergence is a “stronger” type of convergence. Series that are absolutely convergent are guaranteed to be convergent. However, series that are convergent may or may not be absolutely convergent.
Theorem 3: If a series \( \displaystyle \sum_{n\ge 0} a_n \) is absolutely convergent and its value is S, then any rearrangement of elements in the series does not change the sum S.

This is the Riemann rearrangement theorem: if a series of real numbers \( \displaystyle \sum_{n\ge 0} a_n \) is conditionally convergent and r is any real number, then there is a rearrangement of \( \sum_{n\ge 0} a_n \) whose value is r.

Example: Consider the series

\[ S = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots = \sum_{n\ge 1} \frac{(-1)^{n+1}}{n} = \ln 2 \approx 0.693147 \]
Sum[(-1)^n /n, {n, 1, Infinity}]
-Log[2]
and
\[ W = 1 + \left( \frac{1}{3} - \frac{1}{2} + \frac{1}{5}\right) + \left( \frac{1}{7} - \frac{1}{4} + \frac{1}{9}\right) + \left( \frac{1}{11} - \frac{1}{6} + \frac{1}{13}\right) + \cdots , \]
formed from the same terms but in a different order. Let Sn and Wn denote the sums of the first n terms of these series. Let Hn be the n-th harmonic number:
\[ H_n = 1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{n} = \sum_{k=1}^n \frac{1}{k} . \]
Then
\begin{align*} H_{2n} - H_n &= \left( 1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{2n} \right) - \left( 1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{n} \right) \\ &= 1 + \left( \frac{1}{2} - 1 \right) + \frac{1}{3} + \left( \frac{1}{4} - \frac{1}{2} \right) + \cdots + \frac{1}{2k-1} + \left( \frac{1}{2k} - \frac{1}{k} \right) + \cdots + \left( \frac{1}{2n} - \frac{1}{n} \right) \\ &= 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots + \frac{1}{2k-1} - \frac{1}{2k} + \cdots - \frac{1}{2n} . \end{align*}
Hence, S2n = H2n - Hn. Similarly,
\begin{align*} W_{3n} &= 1 + \frac{1}{3} + \cdots + \frac{1}{4n-1} - \left( \frac{1}{2} + \frac{1}{4} + \cdots + \frac{1}{2n} \right) \\ &= H_{4n} - \frac{1}{2}\, H_{2n} - \frac{1}{2}\, H_n = H_{4n} - H_{2n} + \frac{1}{2} \left( H_{2n} - H_n \right) = S_{4n} + \frac{1}{2}\, S_{2n} . \end{align*}
Upon letting n → ∞, we obtain \( W = \frac{3}{2}\, S . \)    ■
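A numeric verification of W = (3/2) ln 2 in Mathematica, summing the rearranged series in its natural blocks of three terms (the helper block is ad hoc):
block[k_] := 1/(4 k - 1.) - 1/(2 k) + 1/(4 k + 1); (* k-th group: 1/(4k-1) - 1/(2k) + 1/(4k+1) *)
1 + Sum[block[k], {k, 1, 100000}]
1.03972
N[3 Log[2]/2]
1.03972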

 

Uniform convergence


Our next topic of discussion is an infinite series whose components are functions of a real variable x ∈ ℝ:
\[ S(x) = u_0 (x) + u_1 (x) + u_2 (x) + \cdots = \sum_{n\ge 0} u_n (x) , \]
where each un(x) is a smooth function of x on some interval |𝑎,b| (the notation |𝑎,b| means any interval, open, closed, or semi-closed, with endpoints 𝑎 and b). Since the convergence of such a series reduces to finding the limit of its partial sums \( S_n (x) = \sum_{k=0}^n u_k (x) , \) it is sufficient to define convergence of a sequence of functions Sn(x). Several mathematicians (Seidel, Stokes, and Weierstrass) independently developed the notion of uniform convergence.
Suppose that { fn } is a sequence of real-valued functions fn : |𝑎,b| → ℝ, defined on some interval |𝑎,b| (open, closed, or semi-closed, does not matter) of the real axis ℝ. Then this sequence is said to converge pointwise to f : |𝑎,b| → ℝ on |𝑎,b| if fn(x) → f(x) as n → ∞ for every x ∈ |𝑎,b|.
Pointwise convergence is, perhaps, the most natural way to define the convergence of functions, and it is one of the most important. Nevertheless, as the following examples illustrate, it is not as well-behaved as one might initially expect.

Example: Consider the series \[ S(x) = \sum_{n\ge 1} a_n (x) = \sum_{n\ge 1} \frac{x}{\left[ \left( n-1 \right) x +1 \right] \left( nx +1 \right)} . \] Its partial sum is \[ S_n (x) = \sum_{k= 1}^n a_k (x) = \sum_{k= 1}^n \frac{x}{\left[ \left( k-1 \right) x +1 \right] \left( kx +1 \right)} = \frac{nx}{n\,x+1} , \] which we prove by mathematical induction. It may be verified that this expression for Sn(x) holds for the first few values of n = 1, 2. We assume it holds for n terms and then prove it holds for n + 1 terms:

\begin{align*} S_{n+1} &= S_n + \frac{x}{\left[ n\, x +1 \right] \left[ \left( n+1 \right) x +1 \right]} = \frac{nx}{n\,x+1} + \frac{x}{\left[ n\, x +1 \right] \left[ \left( n+1 \right) x +1 \right]} \\ &= \frac{\left( n+1 \right) x}{\left( n+1 \right) x +1} . \end{align*}
Letting n approach infinity, we obtain
\[ \lim_{n\to \infty} S_n (x) = \lim_{n\to \infty} \,\frac{nx}{n\,x+1} = \begin{cases} 0, & \ \mbox{ if } \ x=0, \\ 1, & \ \mbox{ if } \ x \ne 0 . \end{cases} \]
We have a discontinuity in our series limit at x = 0. However, every partial sum Sn(x) is a continuous function of x, for all x ≠ -1/n.    ■

Example: Suppose that fn : (0,1) → ℝ is defined by

\[ f_n (x) = \frac{n}{nx+1} , \]
For each x ∈ (0,1), we have
\[ \lim_{n\to\infty} f_n (x) = \lim_{n\to\infty} \,\frac{1}{x+1/n} = \frac{1}{x} . \]
Therefore, the given sequence of functions converges pointwise to
\[ f(x) = \frac{1}{x} . \]
So the pointwise limit function is unbounded on (0,1), although each individual term is bounded: \( | f_n (x) | < n \) for all x ∈ (0,1).
    Elements of the sequence fn for n = 10, 50, and 500.
 
f[x_, n_] = n/(1 + x*n);
Plot[{f[x, 10], f[x, 50], f[x, 500]}, {x, 0, 0.6}, PlotStyle -> Thick, PlotLabels -> Automatic]
   ■

Example: Consider the series

\[ f(x) = x^2 + \frac{x^2}{1+ x^2} + \frac{x^2}{\left( 1+ x^2 \right)^2} + \cdots + \frac{x^2}{\left( 1+ x^2 \right)^n} + \cdots . \]
This series converges absolutely for every real x: for x ≠ 0 it is a geometric series with ratio q = 1/(1 + x²) < 1, while at x = 0 every term vanishes. With the general term \( u_k (x) = x^2 q^k , \) where \( q = 1/(1+x^2) , \) its partial sum is
\[ f_n (x) = \sum_{k=0}^n \frac{x^2}{\left( 1 + x^2 \right)^k} = x^2 \sum_{k=0}^{n} q^k = x^2\, \frac{1 - q^{n+1}}{1-q} = \left( 1 + x^2 \right) \left( 1 - \frac{1}{\left( 1 + x^2 \right)^{n+1}} \right) = 1 + x^2 - \frac{1}{\left( 1 + x^2 \right)^{n}} , \]
giving the limit
\[ f(x) = \lim_{n\to\infty} f_n (x) = 1 + x^2 , \qquad x\ne 0. \]
q = 1/(1 + x^2); Simplify[x^2 *Sum[q^k, {k, 0, n}]]
1 + x^2 - (1/(1 + x^2))^n
But we have established that f(0) = 0, so the limit function is not continuous at x = 0, even though each term in the sum and each partial sum is continuous.
    Partial sums with n = 50, 100, and 500 terms.
 
S[x_, n_] = 1 + x^2 - 1/(1 + x^2)^n
Plot[{S[x, 50], S[x, 100], S[x, 500]}, {x, 0, 0.6}, PlotStyle -> Thick, PlotLabels -> Automatic]
   ■
Suppose that { fn } is a sequence of functions fn : [𝑎,b] → ℝ and f : [𝑎,b] → ℝ. Then fnf uniformly on [𝑎,b] if, for every positive ε, there exists N ∈ ℕ such that
\[ \left\vert f_n (x) - f(x) \right\vert < \varepsilon \qquad\mbox{for all}\quad x\in [a,b] \mbox{ and } n> N. \]
The crucial point in this definition is that N depends only on ε and not on x ∈ [𝑎,b], whereas for a pointwise convergent sequence N may depend on both ε and x. A uniformly convergent sequence is always pointwise convergent (to the same limit), but the converse is not true.

Example: The sequence \( f_n (x) = x^n \) converges pointwise on [0,1] to the discontinuous function

\[ f_n (x) \to f(x) = \begin{cases} 0 , & \ \mbox{ when } 0 \le x < 1 , \\ 1, & \ \mbox{ when } x = 1. \end{cases} \]
However, such convergence is not uniform. For 0 ≤ x < 1 and 0 < ε < 1, we have
\[ \left\vert f_n (x) - f(x) \right\vert = \left\vert x^n \right\vert < \varepsilon \]
if and only if 0 ≤ x < ε1/n. Since ε1/n < 1 for all integers n ∈ ℕ, no integer N works for all x sufficiently close to 1 (although there is no difficulty at x = 1). The sequence does, however, converge uniformly on [0,b] for every 0 < b < 1 because for 0 < ε < 1, we can take \( N = \ln\varepsilon /\ln b . \)    ■
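The formula N = ln ε/ln b can be explored numerically; the sketch below (the helper nNeeded is ad hoc) shows how N blows up as b approaches 1, which is precisely the failure of uniformity on [0, 1):
nNeeded[eps_, b_] := Ceiling[Log[eps]/Log[b]]; (* smallest n with b^n < eps *)
nNeeded[0.01, 0.9]
44
nNeeded[0.01, 0.99]
459
No single N works for every b < 1.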
All definitions of convergence have their own advantages and disadvantages; pointwise convergence is an easier condition to check and study, while uniform convergence preserves continuity and permits termwise integration.

Example: The series

\[ \sum_{k\ge 0} \frac{x^2}{\left( 1 + x^2 \right)^k} \]
is absolutely, but not uniformly, convergent near x = 0. On the other hand, the series
\[ \sum_{k\ge 0} \frac{(-1)^{k}}{k+x^2} \]
is only conditionally convergent but, nevertheless, converges uniformly.    ■
Unlike pointwise convergence, uniform convergence preserves boundedness and continuity. Uniform convergence does not, however, preserve differentiability any better than pointwise convergence. Nevertheless, we can differentiate a convergent sequence term by term, provided that the derivatives converge uniformly.
Theorem 4: Suppose that fn : [𝑎,b] → ℝ is bounded on [𝑎,b] for every integer n ∈ ℕ and fnf uniformly on [𝑎,b]. Then f : [𝑎,b] → ℝ is bounded on [𝑎,b].
Taking ε = 1 in the definition of the uniform convergence, we find that there exists N ∈ ℕ such that
\[ \left\vert f_n (x) - f(x) \right\vert < 1 \qquad \mbox{for all} \quad x\in [a,b] \quad \mbox{and} \quad n > N. \]
Choose some n > N. Then since fn is bounded, there is a constant Mn ≥ 0 such that
\[ \left\vert f_n (x) \right\vert \le M_n \qquad\mbox{for all} \quad x \in [a,b] . \]
It follows that
\[ \left\vert f (x) \right\vert \le \left\vert f_n (x) - f(x) \right\vert + \left\vert f_n (x) \right\vert < 1 + M_n \qquad\mbox{for all} \quad x \in [a,b] , \]
meaning that f is bounded on [𝑎,b] by 1 + Mn.

We do not assume here that all the functions in the sequence are bounded by the same constant. (If they were, the pointwise limit would also be bounded by that constant.) In particular, it follows that if a sequence of bounded functions converges pointwise to an unbounded function, then the convergence is not uniform.    ▣

Theorem 5: If a sequence { fn } of continuous functions fn : [𝑎,b] → ℝ converges uniformly on [𝑎,b] ⊂ ℝ to f : [𝑎,b] → ℝ, then f is continuous on [𝑎,b].
For every integer n ∈ ℕ, we have the inequality
\[ \left\vert f(x) - f(c) \right\vert \le \left\vert f(x) - f_n (x) \right\vert + \left\vert f_n (x) - f_n (c) \right\vert + \left\vert f_n (c) - f(c) \right\vert , \]
where c is an arbitrary point from the interval [𝑎,b]. By the uniform convergence of the sequence { fn }, for any positive ε, we can choose n ∈ ℕ such that
\[ \left\vert f(x) - f_n (x) \right\vert < \frac{\varepsilon}{3} \qquad\mbox{for all} \quad x\in [a,b]. \]
It follows that for such n,
\[ \left\vert f(x) - f (c) \right\vert < \left\vert f_n (x) - f_n (c) \right\vert + \frac{2\varepsilon}{3} . \]
Here we use the fact that fn is close to f at both x and c, where x is an arbitrary point in a neighborhood of c; this is where we use the uniform convergence in a crucial way.

Since fn is continuous on [𝑎,b], there exists δ > 0 such that

\[ \left\vert f_n (x) - f_n (c) \right\vert < \frac{\varepsilon}{3} \]
if |x - c| < δ and x ∈ [𝑎,b], which implies that
\[ \left\vert f (x) - f (c) \right\vert < \varepsilon \]
when |x - c| < δ and x ∈ [𝑎,b]. This proves that f is continuous.    ▣
Corollary 6: If a sequence { fn } of continuous functions fn : [𝑎,b] → ℝ converges uniformly on [𝑎,b] ⊂ ℝ to f : [𝑎,b] → ℝ, then
\[ \lim_{n\to\infty} \,\lim_{x\to c} f_n (x) = \lim_{x\to c} \,\lim_{n\to\infty} \, f_n (x) \]
for any c ∈ [𝑎,b].
The pointwise limit of a sequence of continuous functions may be continuous even if the convergence is not uniform, as the following example shows.

Example: Define a sequence fn : [0,1] → ℝ of continuous spike functions by

\[ f_n (x) = \begin{cases} 3\,n^3 x, & \ \mbox{if } 0 \le x \le 1/(3n^2 ) , \\ 3\,n^3 \left( \frac{2}{3n^2} - x \right) , & \ \mbox{if } 1/(3n^2 ) \le x \le 2/(3n^2 ) , \\ 0 , & \ \mbox{if } 2/(3n^2 ) \le x \le 1. \end{cases} \]
Since fn(x) = 0 whenever \( n^2 \ge 2/(3x) , \) we have fn(x) → 0 as n → ∞ for every x ∈ (0,1]; also fn(0) = 0 for all n. Thus the sequence converges pointwise to zero, although its maxima fn(1/(3n²)) = n grow without bound, so the convergence is not uniform.    ■
The uniform convergence of differentiable functions does not, in general, imply anything about the convergence of their derivatives or the differentiability of their limit. Two functions may be close together while their derivatives are far apart (if, for example, one function varies slowly while the other oscillates rapidly). Thus, we have to impose strong conditions on a sequence of functions and their derivatives to ensure differentiability of the limit function.
Theorem 7: Suppose that { fn } is a sequence of differentiable functions mapping an open interval (𝑎,b) into ℝ such that fn(x) → f(x) pointwise and the derivatives f'n(x) → g(x) uniformly for some f,g : (𝑎,b) → ℝ. Then f is differentiable on (𝑎,b) and f' = g.
Let c ∈ (𝑎,b) and let ϵ > 0. To prove that f'(c) = g(c), we estimate the difference quotient of f in terms of the difference quotients of the fn:
\[ \left\vert \frac{f(x) - f(c)}{x-c} - g(c) \right\vert \le \left\vert \frac{f(x) - f(c)}{x-c} - \frac{f_n (x) - f_n (c)}{x-c} \right\vert + \left\vert \frac{f_n (x) - f_n (c)}{x-c} - f'_n (c) \right\vert + \left\vert f'_n (c) - g(c) \right\vert , \]
where x ∈ (𝑎,b) and x ≠ c. We want to make each of the terms on the right-hand side of the inequality less than ϵ/3. This is straightforward for the second term (since fn is differentiable) and the third term (since f'n → g). To estimate the first term, we approximate f by fm, use the mean value theorem, and let m → ∞.

Since fm - fn is differentiable, the mean value theorem implies that there exists ξ between c and x such that

\[ \frac{f_m (x) - f_m (c)}{x-c} - \frac{f_n (x) - f_n (c)}{x-c} = \frac{\left( f_m - f_n \right) (x) - \left( f_m - f_n \right) (c)}{x-c} = f'_m (\xi ) - f'_n (\xi ) . \]
Since the sequence of derivatives { f'n } converges uniformly, it is a uniformly Cauchy sequence. Therefore, there exists M ∈ ℕ such that
\[ \left\vert f'_m (\xi ) - f'_n (\xi ) \right\vert < \frac{\epsilon}{3} \qquad\mbox{for all} \quad \xi \in (a,b) \quad\mbox{ if }\quad m,n > M , \]
which implies that
\[ \left\vert \frac{f_m (x) - f_m (c)}{x-c} - \frac{f_n (x) - f_n (c)}{x-c} \right\vert < \frac{\epsilon}{3} \qquad\mbox{for all} \quad m, n > M . \]
Taking the limit of this inequality as m → ∞, and using the pointwise convergence of { fm } to f, we get that
\[ \left\vert \frac{f (x) - f (c )}{x-c} - \frac{f_n (x) - f_n (c )}{x-c} \right\vert \le \frac{\epsilon}{3} \qquad\mbox{for all} \quad n > M . \]
Next, since the sequence of derivatives { f'n } converges uniformly to g, there exists P ∈ ℕ such that
\[ \left\vert f'_n (c) - g(c) \right\vert < \frac{\epsilon}{3} \qquad\mbox{for all} \quad n> P. \]
Choose some n > max{ M, P }. Then the differentiability of fn implies that there exists δ > 0 such that
\[ \left\vert \frac{f_n (x) - f_n (c)}{x-c} - f'_n (c) \right\vert < \frac{\epsilon}{3} \qquad\mbox{if} \quad 0 < |x-c| < \delta . \]
Putting these inequalities together, we get that
\[ \left\vert \frac{f (x) - f (c)}{x-c} - g(c) \right\vert < \epsilon \qquad \mbox{if} \quad 0 < |x-c| < \delta , \]
which proves that f is differentiable at c and f'(c) = g(c).    ▣

Example: Consider the series

\[ f(x) = \sum_{n\ge 1} \frac{1}{2^n}\,\sin \left( 3^n x \right) , \]
which converges uniformly on ℝ. Therefore, f(x) is a continuous function. Taking the formal term-by-term derivative of the series for f, we get a series whose coefficients grow with n,
\[ \sum_{n\ge 1} \left( \frac{3}{2}\right)^n \cos \left( 3^n x \right) , \]
so termwise differentiation is not justified. Plotting the Weierstrass continuous function, we see that it does not appear to be smooth. Karl Weierstrass (1872) proved that f is not differentiable at any point of ℝ. Bernard Bolzano (around 1830) had also constructed a continuous, nowhere differentiable function, but his results were not published until 1922. Subsequently, Teiji Takagi (1903) constructed a function similar to the Weierstrass function whose nowhere-differentiability is easier to prove. Such functions were considered highly counter-intuitive and pathological at the time Weierstrass discovered them, and they were not well received by many prominent mathematicians.
    The Weierstrass function.
 
phi[x_] = Sum[Sin[3^k*x]/2^k, {k, 1, 30}]; (* 30 terms suffice at plotting resolution: the neglected tail is smaller than 2^-30 *)
Plot[phi[x], {x, 0, Pi}, PlotStyle -> Thick]
   ■
An equivalent, and often clearer, way to describe uniform convergence is in terms of the uniform, or sup, norm.
Suppose that f : [𝑎,b] → ℝ. The uniform, or sup, norm \( \| f \| \) of f on the interval [𝑎,b] is
\[ \| f \| = \| f \|_{\infty} = \sup_{x\in [a,b]} \left\vert f(x) \right\vert . \]

The uniform norm measures the "distance" between two functions as the supremum of their pointwise distances. Recall that the supremum of a set is its least upper bound: a number M such that no element of the set exceeds M, but for any positive ε, some member of the set exceeds M - ε.
Theorem 8: A sequence { fn } of continuous functions fn : [𝑎,b] → ℝ converges uniformly on [𝑎,b] ⊂ ℝ to f : [𝑎,b] → ℝ if and only if \[ \lim_{n\to \infty} \| f_n - f \|_{\infty} = 0. \]
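As a hedged numeric illustration of Theorem 8, take fn(x) = xⁿ on [0, ½] (our own choice of sequence and interval), where fn → 0 uniformly; the sup norms indeed tend to zero:
supNorm[n_] := NMaxValue[{Abs[x^n], 0 <= x <= 1/2}, x]; (* uniform norm of fn - 0 on [0, 1/2] *)
supNorm /@ {5, 10, 20}
{0.03125, 0.000976563, 9.53674*10^-7}
These values are 2⁻⁵, 2⁻¹⁰, and 2⁻²⁰, since the supremum is attained at x = ½.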
The concept of uniform convergence is related to a rate of convergence of a series of functions on an interval [𝑎, b] that is independent of the location of x in that interval. Given uniform convergence, a series of continuous functions converges to a continuous function, limiting values of the sum exist at any point x of the interval, and even termwise differentiation of a series is valid under suitable conditions. Note that for uniform convergence, the interval must be closed, that is, include both end points. When we speak about uniform convergence on an open interval (𝑎, b), we mean that the sequence converges uniformly on every closed subinterval of (𝑎, b).

 

Tests of Convergence


Comparison test

Let bn ≥ 0 for n > N ∈ ℕ and suppose the series \( \sum_n b_n \) converges. If 0 ≤ |𝑎n| ≤ bn, then the series \( \sum_n a_n \) absolutely converges.

If the series \( \sum_n b_n \) of nonnegative terms diverges and 𝑎n ≥ bn ≥ 0, then the series \( \sum_n a_n \) also diverges.

The Weierstrass test (Majorant or M test)

Let { fn } be a sequence of functions fn : [𝑎,b] → ℝ, and suppose that for every integer n ∈ ℕ there exists a constant Mn ≥ 0 such that
\[ \left\vert f_n (x) \right\vert \le M_n \qquad \mbox{for all} \quad x\in [a,b] \qquad\mbox{and} \quad \sum_{n\ge 0} M_n < \infty . \]
Then the series \( \sum_n f_n (x) \) converges uniformly on [𝑎,b].
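For example (a standard illustration, not tied to this text), the series \( \sum_{n\ge 1} \sin (nx)/n^2 \) is majorized on all of ℝ by Mn = 1/n², whose sum is finite, so the series converges uniformly:
Sum[1/n^2, {n, 1, Infinity}]
Pi^2/6
Plot[Evaluate[Table[Sum[Sin[k x]/k^2, {k, 1, m}], {m, {3, 10, 50}}]], {x, 0, 2 Pi}, PlotStyle -> Thick]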

Abel's uniform convergence test

Let { un(x) } be a sequence of functions. If the following conditions hold
  1. un(x) can be written as un(x) = 𝑎nfn(x);
  2. the series \( \sum_{n\ge 0} a_n \) converges;
  3. fn(x) is a monotonic decreasing sequence (i.e. fn+1(x) ≤ fn(x)) for all n;
  4. fn(x) is bounded in some region i.e., \( 0 \le f_n (x) < M \) for all x ∈ [𝑎,b];
then, for all x ∈ [𝑎,b], the series \( \sum_{n\ge 0} u_n (x) \) converges uniformly.

Quotient test for series of non-negative terms

For two sequences { 𝑎n }n≥0 and { bn }n≥0 with 𝑎n ≥ 0 and bn > 0, if
\[ \lim_{n\to\infty} \frac{a_n}{b_n} = L \ne 0 \quad\mbox{or}\quad \infty , \]
then \( \sum_n a_n \) and \( \sum_n b_n \) either both converge or both diverge.

If L = 0 and if \( \sum_n b_n \) converges, then \( \sum_n a_n \) converges.

Dirichlet test

If 𝑎n, n = 1,2,3,…, is a sequence whose partial sums are bounded, \( \displaystyle \left\vert \sum_{n=1}^p a_n \right\vert < N \) for some number N independent of p, and if { fn } is a sequence satisfying fn ≥ fn+1 > 0 and \( \displaystyle \lim_{n\to\infty} f_n = 0 , \) then the sum
\[ \sum_{n\ge 1} a_n f_n \]
converges.

Integral test

Suppose that f(x) is a continuous, positive, decreasing function on the interval [1,∞) and that bn = f(n). Then
\[ \int_1^{\infty} f(x)\,{\text d}x \le \sum_{n\ge 1} b_n \le \int_1^{\infty} f(x)\,{\text d}x + b_1 . \]
g[x_] = Piecewise[{{1, 0 <= x < 0.2}, {Exp[-0.2], 0.2 < x < 0.4}, {Exp[-0.4], 0.4 < x < 0.6}, {Exp[-0.6], 0.6 < x < 0.8}, {Exp[-0.8], 0.8 < x < 1}}];
plot = Plot[{g[x], Exp[-x]}, {x, 0, 1}, Filling -> {1 -> {2}}, FillingStyle -> Purple, PlotRange -> {{-0.2, 1.2}, {-0.3, 1.2}}, Axes -> False];
ar1 = Graphics[{Arrowheads[0.08], Arrow[{{-0.1, 0}, {1.2, 0}}]}];
ar2 = Graphics[{Arrowheads[0.08], Arrow[{{-0.07, -0.1}, {-0.07, 1.2}}]}];
exp = Plot[Exp[-x], {x, 0, 1.2}, PlotStyle -> {Red, Thickness[0.014]}];
l1 = Graphics[{Dashed, Line[{{0, 0}, {0, 1}}]}];
l2 = Graphics[{Dashed, Line[{{0.2, 0}, {0.2, 0.81}}]}];
l3 = Graphics[{Dashed, Line[{{0.4, 0}, {0.4, 0.67}}]}];
l4 = Graphics[{Dashed, Line[{{0.6, 0}, {0.6, 0.54}}]}];
l5 = Graphics[{Dashed, Line[{{0.8, 0}, {0.8, 0.45}}]}];
l6 = Graphics[{Dashed, Line[{{1.0, 0}, {1.0, 0.36}}]}];
Show[ar1, ar2, l1, l2, l3, l4, l5, l6, exp, plot, Epilog -> Style[{Text["f(1) = ", {0.29, 1}], Text[Subscript[b, 1], {0.39, 1}], Text["f(2) = ", {0.49, 0.81}], Text[Subscript[b, 2], {0.59, 0.81}], Text["f(3) = ", {0.69, 0.67}], Text[Subscript[b, 3], {0.79, 0.67}], Text["f(x)", {1.1, 0.25}]}, 14]];
h[x_] = Piecewise[{{Exp[-0.2], 0 <= x < 0.2}, {Exp[-0.4], 0.2 < x < 0.4}, {Exp[-0.6], 0.4 < x < 0.6}, {Exp[-0.8], 0.6 < x < 0.8}, {Exp[-1.0], 0.8 < x < 1}}];
ploth = Plot[{h[x], Exp[-x]}, {x, 0, 1}, Filling -> {1 -> {2}}, FillingStyle -> Pink, PlotRange -> {{-0.2, 1.2}, {-0.3, 1.2}}, Axes -> False];
lh1 = Graphics[{Dashed, Line[{{0, 0}, {0, 0.81}}]}];
lh2 = Graphics[{Dashed, Line[{{0.2, 0}, {0.2, 0.67}}]}];
lh3 = Graphics[{Dashed, Line[{{0.4, 0}, {0.4, 0.54}}]}];
lh4 = Graphics[{Dashed, Line[{{0.6, 0}, {0.6, 0.45}}]}];
lh5 = Graphics[{Dashed, Line[{{0.8, 0}, {0.8, 0.367}}]}];
lh6 = Graphics[{Dashed, Line[{{1.0, 0}, {1.0, 0.36}}]}];
Show[ar1, ar2, lh1, lh2, lh3, lh4, lh5, lh6, exp, ploth, Epilog -> Style[{Text["f(1) = ", {0.29, 1}], Text[Subscript[b, 1], {0.39, 1}], Text["f(2) = ", {0.49, 0.81}], Text[Subscript[b, 2], {0.59, 0.81}], Text["f(3) = ", {0.69, 0.67}], Text[Subscript[b, 3], {0.79, 0.67}], Text["f(x)", {1.1, 0.25}]}, 14]]

          
       Comparison of integral and sum: blocks leading (left) and blocks lagging (right).
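A quick check of the two-sided bound for the concrete choice f(x) = 1/x², bn = 1/n² (ours), where everything is known in closed form:
Integrate[1/x^2, {x, 1, Infinity}] (* lower bound *)
1
Sum[1/n^2, {n, 1, Infinity}]
Pi^2/6
Indeed 1 ≤ π²/6 ≈ 1.64493 ≤ 1 + b1 = 2, as the integral test predicts.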

The Euler summation formula

For a smooth function f, we have
\[ \sum_{k=1}^{n} f(k) = \int_1^{n} f(t)\,{\text d}t + \frac{1}{2} \left[ f(n) + f(1) \right] + \int_1^{n} \left( x - \lfloor x \rfloor - \frac{1}{2} \right) f'(x)\,{\text d} x, \]
where ⌊ x ⌋ is the floor of a real number x. This formula can be generalized:
\[ \int_a^{a+x} f(t)\,{\text d}t = \frac{x}{2} \left[ f(a) + f(a+x) \right] - \sum_{k=1}^N \frac{B_{2k} x^{2k}}{(2k)!} \left[ f^{(2k-1)} (a+x) - f^{(2k-1)} (a) \right] + O\left( x^{2N+2} \right) , \]
where Bk are the Bernoulli numbers.

Cauchy root test

If
\[ \lim_{n\to\infty} \left\vert a_n \right\vert^{1/n} < 1 , \]
then the series \( \displaystyle \sum_{n\ge 0} a_n \) converges absolutely. If \( \displaystyle \lim_{n\to\infty} \left\vert a_n \right\vert^{1/n} > 1 , \) the series diverges because the general term 𝑎n does not tend to zero.

D'Alembert ratio test

Let
\[ \lim_{n\to\infty} \left\vert \frac{a_{n+1}}{a_n} \right\vert = L. \]
If L < 1, then the series \( \displaystyle \sum_n a_n \) absolutely converges, and if L > 1, then the series diverges.
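For instance (our own example), applying the ratio test to \( \sum_{n\ge 1} n!/n^n \):
a[n_] := n!/n^n;
Limit[a[n + 1]/a[n], n -> Infinity]
1/E
Since 1/e < 1, the series converges absolutely.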

Alternating Series test (Leibniz Criterion)

Suppose that the terms of the infinite series \( \displaystyle \sum_{n\ge 0} a_n \) alternate in sign, so 𝑎n = ±(-1)nbn, where bn ≥ 0 for all n ∈ ℕ. If
\[ \lim_{n\to\infty} b_n = 0 \quad\mbox{and} \quad \{ b_n \} \quad\mbox{is a decreasing sequence}, \]
then the series \( \displaystyle \sum_{n\ge 0} a_n \) is convergent.

Raabe's test

Let
\[ \lim_{n\to\infty} n \left( 1 - \left\vert \frac{a_{n+1}}{a_n} \right\vert \right) = L. \]
Then the series \( \displaystyle \sum_n a_n \) absolutely converges if L > 1 and diverges or converges conditionally if L < 1.

Gauss's test

If
\[ \left\vert \frac{a_{n+1}}{a_n} \right\vert = 1 - \frac{L}{n} + \frac{c_n}{n^2} , \]
where \( \displaystyle \left\vert c_n \right\vert < P \) for all n > N, then the series \( \displaystyle \sum_n a_n \) converges absolutely if L > 1 and diverges or converges conditionally if L ≤ 1.

 

Convergence of Power Series


As we saw in the previous examples, the set 𝓗(𝑎,b) of holomorphic (that is, real analytic) functions on the interval (𝑎,b) is a proper subset of the set C∞(𝑎,b) of infinitely differentiable functions. Now we revisit the convergence of power series.

A power series \( \sum_{n\ge 0} c_n \left( x - x_0 \right)^n \) is said to converge at x if
\[ \lim_{N\to \infty} \,\sum_{n=0}^N c_n \left( x - x_0 \right)^n \]
exists for that x. The series is said to converge absolutely at x if the series obtained by taking the absolute value of each term
\[ \lim_{N\to \infty} \,\sum_{n=0}^N \left\vert c_n \right\vert \left\vert x - x_0 \right\vert^n \]
converges.
Every power series converges for x = x0, called the center or expansion point. It can be shown that if a series (not necessarily a power series) converges absolutely, then the series also converges; so absolute convergence implies pointwise convergence. However, the converse is not necessarily true. The sum of an absolutely convergent series is unaffected by the order of its terms.

A power series converges absolutely in a symmetric interval about its expansion point and diverges outside that symmetric interval. The distance from the expansion point to an endpoint is called the radius of convergence. We assign R = 0 when the set of convergence is {x0}, and R = ∞ when the set of convergence is ℝ.

A power series \( \sum_{n\ge 0} c_n \left( x - x_0 \right)^n \) is said to converge uniformly to S(x) on some closed interval [𝑎,b] if for every ε > 0, there exists a positive integer N such that whenever n > N, we have
\[ \left\vert \sum_{k=0}^n c_k \left( x - x_0 \right)^k - S(x) \right\vert < \varepsilon \]
for every x ∈ [𝑎,b].
Every power series converges uniformly, absolutely, and pointwise on any closed symmetric interval [x0 - 𝑎, x0 + 𝑎] with 0 < 𝑎 < R, that is, on any closed symmetric subinterval of the interval of convergence. However, on the boundary |x - x0| = R, these three types of convergence may differ.

Many tests are known for determining the radius of convergence; we present the two most important.

The Ratio Test: If \( \displaystyle \lim_{n\to\infty} \,\left\vert \frac{c_n}{c_{n+1}} \right\vert = R , \) then the series \( \sum_{n\ge 0} c_n \left( x - x_0 \right)^n \) converges within the interval \( \left\vert x - x_0 \right\vert < R \) and diverges outside the closed interval [x0 - R, x0 + R].
The radius of convergence can be estimated as
\[ \liminf_{n \to \infty} \, \frac{|c_n |}{|c_{n+1}|} \le R \le \limsup_{n \to \infty} \, \frac{|c_n |}{|c_{n+1}|} . \]
The Root Test: If
\[ \liminf_{n\to \infty} \left\vert c_n \right\vert^{-1/n} = R, \]
then R is the radius of convergence of the series \( \sum_{n\ge 0} c_n \left( x - x_0 \right)^n . \) Here the limit inferior of a sequence can be thought of as its limiting (i.e., eventual and extreme) lower bound.
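Both tests are easy to run in Mathematica. For the coefficients cn = 2⁻ⁿ (our own example), each gives R = 2:
c[n_] := 2^-n; (* positive coefficients, so absolute values may be dropped *)
Limit[c[n]/c[n + 1], n -> Infinity] (* ratio test *)
2
Limit[c[n]^(-1/n), n -> Infinity, Assumptions -> n > 0] (* root test *)
2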
Power series work just as well for complex numbers as for real numbers, and are in fact best viewed from that perspective, but we restrict our attention here to real-valued power series. If we consider a complex-valued power series by taking the argument to be a complex variable, \( S(z) = \sum_{n\ge 0} c_n \left( z- x_0 \right)^n , \) its radius of convergence remains the same whether we treat the variable z as complex or real. Therefore, the radius of convergence does not depend on whether the argument is complex or real.
The radius of convergence of a power series \( S(z) = \sum_{n\ge 0} c_n \left( z- x_0 \right)^n , \) centered at a point x0, is equal to the distance from x0 to the nearest point where S(z) cannot be defined in a way that makes it holomorphic.
The nearest point means the nearest point in the complex plane ℂ, not necessarily on the real line ℝ, even if the center and all coefficients are real. For example, the function
\[ f(z) = \frac{1}{1+z^2} = \sum_{n\ge 0} \left( -1 \right)^n z^{2n} \]
has no singularities on the real line because the equation 1 + z² = 0 has no real roots; its roots are ±i ∈ ℂ. Hence the radius of convergence of the series about the origin equals the distance from 0 to ±i, namely R = 1.
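Mathematica confirms the complex obstruction: the real Taylor series converges only for |x| < 1, even though the function is smooth on all of ℝ (SumConvergence reports the convergence condition):
Series[1/(1 + x^2), {x, 0, 8}]
1 - x^2 + x^4 - x^6 + x^8 + O[x]^9
SumConvergence[(-1)^n x^(2 n), n]
The reported condition is equivalent to |x| < 1, confirming R = 1.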

 
