MATLAB TUTORIAL for the Second Course. Part 6: Cesàro summation

Cesàro summation


Fourier series allow us to represent a possibly complicated function defined on a finite interval as a linear combination (usually infinite) of its projections onto a basis of trigonometric functions. Such a compact representation has proven exceedingly useful in the analysis of many real-world systems involving periodic phenomena, such as waves propagating on a string, electrical circuits with oscillating current sources, and heat diffusion on a metal ring---an application we will later examine in detail. More generally, Fourier series arise in the ubiquitous context of boundary value problems, making them a fundamental tool for mathematicians, scientists, and engineers.

However, there is a caveat. Except in degenerate cases, a Fourier series (more precisely, its truncated partial sum,

\[ F_N (x) = \frac{a_0}{2} + \sum_{k=1}^N \left( a_k \cos \frac{k\pi x}{\ell} + b_k \sin \frac{k\pi x}{\ell} \right) = \sum_{k=-N}^N \alpha_k e^{k{\bf j} \pi x/\ell} , \]
with which we deal in applications) is usually not an exact replica of the original function. Thus, a natural question is: exactly how does the truncated sum \( F_N (x) \) approximate the function? If we say that the Fourier series converges to the function, then precisely in what sense does the series converge? And under what conditions?

One notion of convergence between functions is L2-convergence, or convergence in the mean square sense. We consider only functions defined on a finite interval of length \( 2\ell . \) Once we represent such a function by its Fourier series, we obtain its periodic extension with period \( T=2\ell . \) Therefore, it is convenient to assume that the original function is periodic; then its convergent Fourier series gives exact equality with the given function. For a \( 2\ell \)-periodic function f, we say that the Fourier series FN L2-converges to f if

\[ \lim_{N\to \infty} \,\int_{-\ell}^{\ell} \left\vert f(x) - F_N (x) \right\vert^2 {\text d} x = \lim_{N\to \infty} \,\int_{-\ell}^{\ell} \left\vert f(x) - \sum_{k=-N}^N \alpha_k e^{k{\bf j} \pi x/\ell} \right\vert^2 {\text d} x = 0 . \]

One of the first results regarding Fourier series convergence is that if f is square-integrable (that is, if \( \int_{-\ell}^{\ell} \left\vert f(x) \right\vert^2 {\text d} x < \infty \) ), then its Fourier series L2-converges to f. This is a nice result, but it leaves something to be desired: L2-convergence only says that over an interval of length \( 2\ell , \) the average squared deviation between f and its Fourier approximation FN tends to zero. However, for a fixed x in \( (-\ell , \ell ) , \) there is no guarantee on the difference between f(x) and the series approximation at x.

A stronger---and quite natural---sense of convergence is pointwise convergence, in which we demand that at each point \( x \in (-\ell ,\ell ), \) the series approximation converges to f(x). The Pointwise Convergence Theorem states that if f is sectionally continuous and the one-sided derivatives \( f' (x_0 +0) = \lim_{\epsilon \to 0, \epsilon >0} \, \frac{f(x_0 +\epsilon ) - f(x_0 +0)}{\epsilon} \) and \( f' (x_0 -0) = \lim_{\epsilon \to 0, \epsilon >0} \, \frac{f(x_0 -0) - f(x_0 -\epsilon )}{\epsilon} \) both exist at a point x0, then the Fourier series converges at x0 to the average of the one-sided limits, \( \frac{1}{2}\left[ f(x_0 +0) + f(x_0 -0) \right] , \) which equals \( f (x_0 ) \) at points of continuity. However, when the function has a finite jump discontinuity, pointwise convergence is an inadequate notion of convergence, as the Gibbs phenomenon shows.

To avoid such problems, we desire the even stronger notion of uniform convergence, such that the rate at which the series converges is identical for all points in \( [-\ell , \ell ] , \) and consequently, everywhere (due to periodicity). By adopting the metric

\[ d(f,g) = \sup \left\{ |f(x)-g(x)| \, : \, x\in [-\ell , \ell ] \right\} \]
over the space of continuous functions from \( [-\ell , \ell ] \) to \( \mathbb{R} , \) convergence in this metric is, by definition, uniform convergence. This metric space is denoted by \( C([-\ell , \ell ], \mathbb{R}) . \) It can also be proven that \( C([-\ell , \ell ], \mathbb{R}) \) is a vector space, and thus the concept of series is well-defined.
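As a small illustration of this metric, here is a Python sketch (the helper `sup_metric` and the grid-sampling approach are our own assumptions; a finite grid only approximates the true supremum):

```python
import math

# Approximate the sup metric d(f, g) = sup{ |f(x) - g(x)| : x in [-l, l] }
# by sampling on a fine grid. A grid maximum only approximates the true
# supremum, but it illustrates how the metric compares two functions.
def sup_metric(f, g, ell=1.0, samples=100001):
    pts = (-ell + 2 * ell * i / (samples - 1) for i in range(samples))
    return max(abs(f(x) - g(x)) for x in pts)

# Distance between cos(x) and its quadratic Taylor polynomial on [-1, 1];
# the largest gap occurs at the endpoints x = +/-1.
d = sup_metric(math.cos, lambda x: 1 - x * x / 2)
print(d)  # ≈ |cos(1) - 1/2| ≈ 0.0403
```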

To discuss uniform convergence of Fourier series, we need a more general notion of convergence for infinite sums, known as Cesàro summation, named after the Italian analyst Ernesto Cesàro (1859--1906). It is therefore a good time to recall the definition of Cesàro summability of an infinite series. Let \( \{ a_k \} \) be a sequence, and let

\[ s_n = a_1 + a_2 + \cdots + a_n =\sum_{k=1}^n a_k \qquad \mbox{or} \qquad S_n = a_0 + a_1 + \cdots + a_n =\sum_{k=0}^n a_k \]
be the nth partial sum of the series
\[ \sum_{k=1}^{\infty} a_k \qquad \mbox{or} \qquad \sum_{k=0}^{\infty} a_k \]
depending on whether summation starts with k = 1 or k = 0. In both cases, we call \( s_n \) or \( S_n \) the nth partial sum of the given infinite series. The series \( \sum_{k\ge 1} a_k \) or \( \sum_{k\ge 0} a_k \) is called Cesàro summable, with Cesàro sum \( A \in \mathbb{R} \) or \( A \in \mathbb{C} , \) if the average value of its partial sums tends to A:
\[ \lim_{n \to \infty} \,\frac{1}{n} \,\sum_{k=1}^{n} s_k =A \qquad \mbox{or} \qquad \lim_{n \to \infty} \,\frac{1}{n+1} \,\sum_{k=0}^{n} S_k =A , \]
depending on the index at which summation starts. In other words, the Cesàro sum of an infinite series is the limit of the arithmetic mean (average) of its first n partial sums as n goes to infinity. If a series converges in the usual sense, then it is Cesàro summable and its Cesàro sum agrees with the ordinary sum.

Example: Consider the infinite series \( 1-1+1-1+1- \cdots , \) also written

\[ \sum_{k\ge 0} (-1)^k , \]
which is sometimes called Grandi's series, after the Italian mathematician, philosopher, and priest Guido Grandi, who gave a memorable treatment of it in 1703. It is a divergent series, meaning that it lacks a sum in the usual sense. On the other hand, its Cesàro sum is 1/2. ■
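To see the definition in action numerically, here is a short Python sketch (the helper `cesaro_means` is our own; the tutorial itself uses Mathematica):

```python
# Cesàro summation in action on Grandi's series sum_{k>=0} (-1)^k:
# the partial sums oscillate between 1 and 0, but the running averages
# of the partial sums settle down to 1/2.
def cesaro_means(terms):
    """Return the averages (S_0 + ... + S_{n-1}) / n of the partial sums."""
    means, partial, running = [], 0.0, 0.0
    for n, a in enumerate(terms, start=1):
        partial += a          # partial sum S_{n-1}
        running += partial    # S_0 + ... + S_{n-1}
        means.append(running / n)
    return means

grandi = [(-1) ** k for k in range(10000)]
means = cesaro_means(grandi)
print(means[0], means[-1])  # 1.0 0.5
```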

In 1890, Ernesto Cesàro introduced a broader family of summation methods, since called (C, α) methods for non-negative integers α. The (C, 0) method is ordinary summation, and (C, 1) is the Cesàro summation described above. For our purposes, we will use only (C, 1) summation.

We are now primed to appreciate Fejér's remarkable theorem, proved in 1899 by the Hungarian mathematician Lipót Fejér (1880--1959). Born into a Jewish family as Leopold Weiss (weiss means "white" in German), he changed his name around 1900 to Fejér, which resembles the Hungarian word for "white" (fehér). He held the chair of mathematics at the University of Budapest from 1911 and led a highly successful Hungarian school of analysis. He was the thesis advisor of mathematicians such as John von Neumann, Paul Erdős, George Pólya, Marcel Riesz, Gábor Szegő, and Pál Turán.

Fejér's Theorem: Let \( f\,: \,[-\ell , \ell ]\, \to \,\mathbb{R} \) be a continuous function with \( f(-\ell )= f(\ell ) . \) Then the Fourier series of f (C,1)-converges to f in \( C([-\ell , \ell ], \mathbb{R}) , \) where \( C([-\ell , \ell ], \mathbb{R}) \) is the metric space of continuous functions from \( [-\ell , \ell ] \) to the set of real numbers. ■

Without imposing any additional conditions on f aside from continuity and periodicity, Fejér's theorem shows that Fourier series can still achieve uniform convergence, provided that we consider the arithmetic means of the partial Fourier sums instead.

A more general form of the theorem applies to functions that are not necessarily continuous. Suppose that f is absolutely integrable on the finite interval \( [-\ell , \ell ] . \) If the one-sided limits \( f(x_0 \pm 0) \mbox{ of } f(x) \) exist at x0, or if both limits are infinite with the same sign, then

\[ \frac{1}{n}\,\sum_{k= 0}^{n-1} S_k = \sum_{k= 0}^{n-1} \left( 1- \frac{k}{n} \right) a_k \,\to \, \frac{1}{2} \left[ f(x_0 +0) + f(x_0 -0) \right] \quad\mbox{as}\quad n \,\to \infty . \]
Here \( S_k \) denotes the kth partial sum and \( a_k \) the kth term of the Fourier series of f evaluated at x0; existence of the limit of the Cesàro means (or their divergence to infinity) is part of the conclusion. ■
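The identity between the average of the partial sums and the weighted sum above is purely algebraic; a quick Python check on a random sequence (our own sketch, independent of the Fourier setting) confirms it:

```python
import random

# Verify the algebraic identity
#   (1/n) * (S_0 + S_1 + ... + S_{n-1}) = sum_{k=0}^{n-1} (1 - k/n) * a_k,
# where S_k = a_0 + ... + a_k, on a random test sequence.
random.seed(1)
a = [random.uniform(-1, 1) for _ in range(50)]
n = len(a)

partial_sums = []
s = 0.0
for term in a:
    s += term
    partial_sums.append(s)  # S_0, S_1, ..., S_{n-1}

lhs = sum(partial_sums) / n
rhs = sum((1 - k / n) * a[k] for k in range(n))
print(abs(lhs - rhs) < 1e-12)  # True
```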

Example: Let f be the 2-periodic function defined on the interval [-1,1] by

\[ f(x) = \begin{cases} 0 , & \quad \mbox{if $x\in (-1, -1/2)$, } \\ 1 , & \quad \mbox{if $x\in (-1/2, 1/2)$, } \\ 0 , & \quad \mbox{if $x\in (1/2, 1)$. } \end{cases} \]
Expanding this even function in a Fourier series, we get
\[ f(x) = \frac{1}{2} + \frac{2}{\pi} \, \sum_{k\ge 1} \frac{1}{k} \, \sin \left( \frac{k\pi}{2} \right) \cos \left( k\pi x \right) = \frac{1}{2} + \frac{2}{\pi} \, \sum_{k\ge 0} \frac{(-1)^k}{2k+1} \, \cos \left( (2k+1)\pi x \right) . \]
Its Cesàro partial sums are
\[ C_m (x) = \frac{1}{2} + \frac{2}{\pi} \, \sum_{k= 1}^m \frac{1}{k} \left( 1- \frac{k}{m+1} \right) \sin \left( \frac{k\pi}{2} \right) \cos \left( k\pi x \right) , \quad m=1,2,\ldots . \]
Plotting partial sums with n = 20 terms for both the Fourier series and its Cesàro counterpart, we do not see the Gibbs phenomenon in the Cesàro partial sums:

f[x_] := Piecewise[{{0, -1 < x < -1/2}, {1, -1/2 < x < 1/2}, {0, 1/2 < x < 1}}]
(* cosine coefficient a_k of the even function f *)
ak = Integrate[Cos[k*Pi*x], {x, -1/2, 1/2}]
(* partial Fourier sum and its Cesàro (Fejér) mean *)
cos[m_] = 1/2 + (2/Pi)*Sum[(1/k)*Sin[k*Pi/2]*Cos[k*Pi*x], {k, 1, m}]
cecos[m_] = 1/2 + (2/Pi)*Sum[(1/k)*(1 - k/(m + 1))*Sin[k*Pi/2]*Cos[k*Pi*x], {k, 1, m}]
Plot[{cos[20], f[x]}, {x, -2, 2}, PlotStyle -> {{Thick, Blue}, {Thick, Orange}}]
Plot[{cecos[20], f[x]}, {x, -2, 2}, PlotStyle -> {{Thick, Blue}, {Thick, Orange}}]
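The absence of overshoot in the Cesàro means can also be verified numerically. The following Python sketch (our own cross-check, independent of the Mathematica session above) compares the maximum of the partial Fourier sum with that of the Fejér mean on a fine grid; the former overshoots 1 near the jumps, while the latter stays within the range of f:

```python
import math

def fourier_partial(x, n):
    # Partial Fourier sum 1/2 + (2/pi) sum_{k=1}^n sin(k pi/2)/k cos(k pi x)
    return 0.5 + (2 / math.pi) * sum(
        math.sin(k * math.pi / 2) / k * math.cos(k * math.pi * x)
        for k in range(1, n + 1))

def fejer_mean(x, m):
    # Cesàro (Fejér) mean C_m(x) with weights 1 - k/(m + 1)
    return 0.5 + (2 / math.pi) * sum(
        (1 - k / (m + 1)) * math.sin(k * math.pi / 2) / k * math.cos(k * math.pi * x)
        for k in range(1, m + 1))

xs = [i / 1000 for i in range(-1000, 1001)]
fmax = max(fourier_partial(x, 20) for x in xs)
cmax = max(fejer_mean(x, 20) for x in xs)
print(fmax)  # overshoots 1 (Gibbs phenomenon)
print(cmax)  # stays below 1 (no overshoot)
```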
Now we use complex form of Fourier series:
\[ f(x) = \lim_{N\,\to \,\infty}\sum_{k=-N}^N \alpha_k \,e^{k{\bf j}\pi x} , \qquad \alpha_k = \frac{1}{2} \,\int_{-1}^1 f(x) \, e^{-k{\bf j}\pi x} \,{\text d}x = \frac{1}{k\pi} \,\sin \frac{k\pi}{2} , \quad k=\pm 1, \pm 2, \ldots , \qquad \alpha_0 = \frac{1}{2} . \]
Its Cesàro partial sums are
\[ C_m (x) = \sum_{k=-m}^m \left( 1- \frac{|k|}{m+1} \right) \alpha_k \,e^{k{\bf j}\pi x} , \qquad m=1, 2, \ldots . \]
When we plot these partial sums, we get exactly the same graphs:
(* complex Fourier coefficient; the formula is valid for k != 0, with alpha_0 = 1/2 *)
alphak = Integrate[f[x]*Exp[-k*Pi*I*x], {x, -1, 1}]/2
(* the k = 0 term (= 1/2) is written separately to avoid the 0/0 at k = 0 *)
complex[m_] = 1/2 + Sum[alphak*Exp[k*Pi*I*x], {k, 1, m}] + Sum[alphak*Exp[k*Pi*I*x], {k, -m, -1}]
cesaro[m_] = 1/2 + Sum[alphak*Exp[k*Pi*I*x]*(1 - Abs[k]/(m + 1)), {k, 1, m}] + Sum[alphak*Exp[k*Pi*I*x]*(1 - Abs[k]/(m + 1)), {k, -m, -1}]
Plot[{cesaro[10]}, {x, -3, 3}, PlotStyle -> {Thick, Red}]
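To double-check that the complex-exponential form reproduces the real (cosine) form, the following Python sketch (our own; the function names are illustrative) evaluates both Fejér means at a sample point:

```python
import cmath
import math

def fejer_complex(x, m):
    # Fejér mean built from the complex coefficients alpha_0 = 1/2 and
    # alpha_k = sin(k pi/2)/(k pi) for k != 0 (alpha_{-k} = alpha_k here).
    total = 0.5  # k = 0 term
    for k in range(1, m + 1):
        alpha = math.sin(k * math.pi / 2) / (k * math.pi)
        weight = 1 - k / (m + 1)
        pair = cmath.exp(1j * k * math.pi * x) + cmath.exp(-1j * k * math.pi * x)
        total += weight * alpha * pair.real  # pair is 2*cos(k pi x)
    return total

def fejer_cosine(x, m):
    # The same Fejér mean from the cosine form of the series.
    return 0.5 + (2 / math.pi) * sum(
        (1 - k / (m + 1)) * math.sin(k * math.pi / 2) / k * math.cos(k * math.pi * x)
        for k in range(1, m + 1))

print(abs(fejer_complex(0.3, 10) - fejer_cosine(0.3, 10)) < 1e-12)  # True
```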

Example: Let \( f(x) = \mbox{sign}(x) -x \) on the interval [-1,1]. Expanding this odd function in a sine Fourier series, we get

\[ f(x) = \frac{2}{\pi} \, \sum_{k\ge 1} \frac{1}{k} \, \sin \left( k\pi x \right) . \]
Upon plotting the partial Fourier sum and the Cesàro sum
\[ C_n (x) = \frac{2}{\pi} \, \sum_{k= 1}^n \frac{1}{k} \left( 1- \frac{k}{n} \right) \sin \left( k\pi x \right) , \]
with n = 20 terms, we do not observe the Gibbs phenomenon in the latter.

f[x_] := Sign[x] - x
(* sine coefficient of the odd function f; evaluates to 2/(k*Pi) *)
bn = Integrate[f[x]*Sin[k*Pi*x], {x, -1, 1}]
sin[m_] = (2/Pi)*Sum[(1/k)*Sin[k*Pi*x], {k, 1, m}]
(* Cesàro (Fejér) mean with weights 1 - k/m *)
cesin[m_] = (2/Pi)*Sum[(1/k)*(1 - k/m)*Sin[k*Pi*x], {k, 1, m}]
Plot[{sin[20], f[x]}, {x, -2, 2}, PlotStyle -> {{Thick, Blue}, {Thick, Orange}}]
Plot[{cesin[20], f[x]}, {x, -2, 2}, PlotStyle -> {{Thick, Blue}, {Thick, Orange}}]
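At a point of continuity both approximations converge to the function value, though the Cesàro means typically converge more slowly. A short Python check (our own sketch) at x = 1/2, where f(1/2) = 1/2:

```python
import math

def partial_sum(x, n):
    # Partial Fourier sum (2/pi) sum_{k=1}^n sin(k pi x)/k
    return (2 / math.pi) * sum(math.sin(k * math.pi * x) / k
                               for k in range(1, n + 1))

def fejer_mean(x, n):
    # Cesàro sum with weights 1 - k/n, as in the text
    return (2 / math.pi) * sum((1 - k / n) * math.sin(k * math.pi * x) / k
                               for k in range(1, n + 1))

# f(x) = sign(x) - x equals 1/2 at x = 1/2
print(partial_sum(0.5, 2000))  # ≈ 0.5
print(fejer_mean(0.5, 2000))   # ≈ 0.5
```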

Example: Let \( f(x) = 4x^3 -3x^2 -6x \) on the interval [-2,2]. Expanding this function in a Fourier series, we get

\[ f(x) = -4 - \frac{8}{\pi^3} \, \sum_{k\ge 1} \left[ 6\pi k \,\cos \frac{k\pi x}{2} + \left( 5 k^2 \pi^2 -48 \right) \sin \frac{k\pi x}{2} \right] \frac{(-1)^k}{k^3} \]
f[x_] = 4*x^3 - 3*x^2 - 6*x
(* Fourier coefficients on [-2, 2], so the half-period is 2 *)
a0 = Integrate[f[x], {x, -2, 2}]/2
ak = Integrate[f[x]*Cos[k*Pi*x/2]/2, {x, -2, 2}]
bk = Integrate[f[x]*Sin[k*Pi*x/2]/2, {x, -2, 2}]
(* a0/2 = -4 is the constant term *)
fourier[m_] := -4 + Sum[ak*Cos[k*Pi*x/2] + bk*Sin[k*Pi*x/2], {k, 1, m}]
We build Fourier and Cesàro approximations and then plot partial sums with n = 20 terms:

cesaro[m_] := -4 + Sum[(ak*Cos[k*Pi*x/2] + bk*Sin[k*Pi*x/2])*(1 - k/(m + 1)), {k, 1, m}]
pp = Plot[{f[x], fourier[20]}, {x, -3, 3}, PlotStyle -> {{Thick, Blue}, {Thick, Red}}, Epilog -> {Text[11.58, {-.66, 11.579}], Text[-35.58, {-.66, -35.58}]}]
p = Graphics[{Green, PointSize[Large], {Point[{0, 11.57959}], Point[{0, -35.58}]}}]
Show[pp, p]
The Fourier graph clearly shows the Gibbs phenomenon at the points of discontinuity x = 2 and x = -2. Since \( f(2-0) = 8 \quad\mbox{and}\quad f(2+0) = -32 , \) the periodic extension has a finite jump of 40 at both points of discontinuity. Therefore, we expect an overshoot/undershoot of \( (1.1789797444721675 \times 40 - 40)/2 \approx 3.579594889 . \) As a result, we expect the partial sums to overshoot to about 11.5796 and undershoot to about -35.5796 near the points of discontinuity.
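The numerical constant used above is the Wilbraham--Gibbs constant \( \frac{2}{\pi}\int_0^{\pi} \frac{\sin u}{u}\,{\text d}u . \) A short Python sketch (our own; Simpson's rule is just one of many ways to evaluate the integral) reproduces it and the predicted overshoot/undershoot values:

```python
import math

# Sine integral Si(t) = integral_0^t sin(u)/u du via composite Simpson's rule.
def si(t, steps=10000):
    h = t / steps
    total = 1.0 + math.sin(t) / t  # endpoint values; sin(u)/u -> 1 as u -> 0
    for i in range(1, steps):
        u = i * h
        total += (4 if i % 2 else 2) * math.sin(u) / u
    return total * h / 3

G = (2 / math.pi) * si(math.pi)  # Wilbraham-Gibbs constant
jump = 40                        # f(2-0) = 8, f(2+0) = -32
excess = (G * jump - jump) / 2   # overshoot beyond the one-sided limit
print(G)             # ≈ 1.1789797
print(8 + excess)    # ≈ 11.5796 (overshoot)
print(-32 - excess)  # ≈ -35.5796 (undershoot)
```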
