Preface


This tutorial was made solely for educational purposes and is designed for students taking Applied Math 0340. It is primarily for students who have some experience using Mathematica. If you have never used Mathematica before and would like to learn more of the basics of this computer algebra system, we strongly recommend looking at the APMA 0330 tutorial. As a friendly reminder, don't forget to clear variables in use and/or the kernel. The Mathematica commands in this tutorial are all written in bold black font, while Mathematica output is in regular fonts.

Finally, you can copy and paste all commands into your Mathematica notebook, change the parameters, and run them, because the tutorial is distributed under the terms of the GNU General Public License (GPL). You, as the user, are free to use the scripts for your needs to learn the Mathematica program, and you have the right to distribute and refer to this tutorial, as long as it is credited appropriately. The tutorial accompanies the textbook Applied Differential Equations. The Primary Course by Vladimir Dobrushkin, CRC Press, 2015; http://www.crcpress.com/product/isbn/9781439851043


Orthogonal Expansions


When the separation of variables method (see the next chapter) is applied to partial differential equations, it leads to a corresponding Sturm--Liouville problem. However, this is only one part of constructing the solution to the (initial) boundary value problem. The second important part is the representation of a function as a series over eigenfunctions. In many problems for second order partial differential equations, the eigenfunctions appearing in such a series are orthogonal and the coefficients can be calculated explicitly. For most equations of order higher than two, the eigenfunctions are not orthogonal and, therefore, the separation of variables approach is not successful. Nevertheless, we are going to treat a wide class of second order partial differential equations for which separation of variables reduces the problems under consideration to Sturm--Liouville problems whose eigenfunctions are orthogonal. We begin by recalling some definitions.

Two functions f(x) and g(x) defined on some interval [a,b] are called orthogonal, if the integral of the product of functions over the interval is zero:
\[ \left\langle f, g \right\rangle = \int_a^b \overline{f(x)}\,g(x)\,{\text d}x =0 , \]
where the overline indicates the complex conjugate operation. Such an integral, or more generally one with a weight w,
\[ \left\langle f, g \right\rangle = \int_a^b \overline{f(x)}\,g(x)\,w(x) \,{\text d}x \]
is called the inner product of functions f and g. It is assumed that all integrals exist.
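For readers who wish to experiment, here is a minimal Mathematica sketch of such a weighted inner product; the helper name innerProduct and the test functions are our own choices, and the complex conjugation is omitted because the test functions are real-valued.

innerProduct[f_, g_, w_, {a_, b_}] := Integrate[f[x] g[x] w[x], {x, a, b}]
innerProduct[Sin, Cos, 1 &, {-Pi, Pi}]   (* sine and cosine are orthogonal on [-Pi, Pi]: 0 *)
innerProduct[Sin, Sin, 1 &, {-Pi, Pi}]   (* squared norm of sine: Pi *)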

Recall that the integral of a function over an interval [a,b] does not depend on the values of the function at a discrete set of points, because changing a function at finitely many points does not affect its integral. Therefore, when one integrates a function over [a,b], it does not matter what the values of the function are at the end points, or even whether the function is defined at these points. When we don't care about the end points, we use a special and convenient notation |a,b|, which represents one of the following cases:

\[ |a,b| \quad\mbox{ means } \quad [a,b] \quad\mbox{or}\quad [a,b) \quad\mbox{or}\quad (a,b] \quad\mbox{or}\quad (a,b) . \]
If f(x) is a function defined on an interval (finite or not) |a,b|, we define the norm of f to be
\[ \| f \| = \| f \|_2 = \left\langle f , f \right\rangle^{1/2} = \left( \int_a^b \left\vert f(x) \right\vert^2 w(x) \,{\text d}x \right)^{1/2} . \]
The set of all functions on the interval |a,b| having finite norm is denoted by \( L^2 [a,b] \) or simply L².
A collection of functions { f0(x), f1(x), … , fn(x), … } defined on |a,b| is called orthogonal on |a,b| if
\[ \left\langle f_i , f_j \right\rangle = \int_a^b \overline{f_i (x)}\,f_j (x) \,{\text d}x = 0 \qquad \mbox{for } i \ne j. \]
A collection of functions { f0(x), f1(x), … , fn(x), … } defined on |a,b| is called orthonormal on |a,b| if
\[ \left\langle f_i , f_j \right\rangle = \int_a^b \overline{f_i (x)}\,f_j (x) \,{\text d}x = \begin{cases} 0, & \ \mbox{ if } i \ne j , \\ 1 , & \ \mbox{ when } i=j . \end{cases} \]

There are significant differences between the behavior of Fourier- and power-series expansions. A power series is essentially an expansion about a point, using only information about the function to be expanded at that point (including, of course, the values of its derivatives). We already know that such expansions only converge within a radius of convergence defined by the position of the nearest singularity. A Fourier series (or any expansion in orthogonal functions), on the other hand, uses information from the entire expansion interval, and therefore can describe functions that have “nonpathological” singularities within that interval. However, we also know that the representation of a function by an orthogonal expansion is only guaranteed to converge in the mean. This feature comes into play for the expansion of functions with discontinuities, where there is no unique value to which the expansion must converge. For Fourier series, it can be shown that if a function f(x) satisfying the Dirichlet conditions is discontinuous at a point x0, its Fourier series evaluated at that point converges to the arithmetic average of the left-hand and right-hand limits of f at that point.

Consider the collection of cosine functions { 1, cos(x), cos(2x), cos(3x), … } on an interval of length 2π, which we know from the previous sections is orthogonal. We have

\[ \| 1 \|^2 = \int_{-\pi}^{\pi} 1^2 {\text d}x = 2\pi \qquad\Longrightarrow \qquad \| 1 \| = \sqrt{2\pi} . \]
For k ≥ 1, we have
\[ \| \cos (kx) \|^2 = \int_{-\pi}^{\pi} \cos^2 (kx)\,{\text d}x = \int_{-\pi}^{\pi} \frac{1 + \cos (2kx)}{2}\,{\text d}x = \left[ \frac{x}{2} + \frac{\sin (2kx)}{4k} \right]_{x=-\pi}^{x=\pi} = \pi . \]
Hence, \( \| \cos (kx) \| = \sqrt{\pi} . \) It follows that the collection of functions
\[ \left\{ \frac{1}{\sqrt{2\pi}} , \frac{\cos x}{\sqrt{\pi}} , \frac{\cos 2x}{\sqrt{\pi}} , \frac{\cos 3x}{\sqrt{\pi}} , \ldots \right\} \]
is orthonormal on any interval of length 2π. ■
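The orthonormality of this system can be verified directly in Mathematica; the following check is our own illustration and samples only the first four functions.

basis = Join[{1/Sqrt[2 Pi]}, Table[Cos[k x]/Sqrt[Pi], {k, 1, 3}]];
Table[Integrate[basis[[i]]*basis[[j]], {x, -Pi, Pi}], {i, 4}, {j, 4}] // MatrixForm
The expected output is the 4×4 identity matrix.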
An orthogonal system of functions \( \displaystyle \left\{ f_k (x) \right\}_{k\ge 0} \) defined on an interval (finite or not) |a,b| is said to be complete relative to a class of functions if there is no nontrivial function in that class that is orthogonal to each member of the collection. In other words, if f(x) on |a,b| satisfies \( \langle f, f_k \rangle =0 \) for all k ≥ 0, then f(x) ≡ 0 on |a,b|, or, more precisely, \( \displaystyle \| f \|^2 = \int_a^b \left\vert f (x) \right\vert^2 {\text d}x =0 . \)
The term "complete" was introduced in 1910 by the Russian mathematician Vladimir Andreevich Steklov (1864--1926), a student of Alexander Lyapunov.

The collection of cosine functions { 1, cos(x), cos(2x), cos(3x), … } on an interval of length 2π considered previously is orthogonal, but it is not complete. Indeed, if f is any odd function (for instance, x or sin x), then \( \displaystyle f(-x) = -f(x) . \) Calculations show that \( \langle f, \cos (kx) \rangle =0 \) because the product of an odd function and an even function is an odd function, and its integral over a symmetric interval vanishes. ■
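A quick symbolic confirmation of this claim (our own snippet): the odd function x is orthogonal to every cosine, so it cannot be represented by the cosine system alone.

Integrate[x*Cos[k*x], {x, -Pi, Pi}, Assumptions -> Element[k, Integers]]
The output is 0 for every integer k.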

Theorem: If \( \displaystyle \left\{ f_k (x) \right\}_{k\ge 0} \) is a complete orthogonal system of functions on the interval |a,b|, then every (piecewise continuous or, more generally, square integrable) function f(x) on |a,b| has the expansion
\[ f(x) = \sum_{n\ge 0} \frac{\langle f, f_n \rangle}{\| f_n \|^2} \, f_n (x) . \]
The series on the right-hand side converges to f(x) in the L² sense, which means
\[ \lim_{N \to \infty} \int_a^b \left\vert f(x) - \sum_{k=0}^N \frac{\langle f, f_k \rangle}{\| f_k \|^2}\,f_k (x) \right\vert^2 {\text d}x =0 . \]
The system of trigonometric functions
\[ \left\{ 1, \cos x, \sin x , \cos 2x, \sin 2x, \cos 3x, \sin 3x , \ldots \right\} \]
is a complete orthogonal system on [-π, π]. ■
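As an illustration of the theorem (our own example, not taken from the text), the coefficients ⟨f, f_n⟩/‖f_n‖² can be evaluated directly in Mathematica, say for f(x) = x on [-π, π], and a partial sum compared with f:

f[x_] := x
a0 = Integrate[f[x], {x, -Pi, Pi}]/(2 Pi);                (* <f, 1>/||1||^2 *)
a[n_Integer] := Integrate[f[x] Cos[n x], {x, -Pi, Pi}]/Pi  (* <f, cos(nx)>/||cos(nx)||^2 *)
b[n_Integer] := Integrate[f[x] Sin[n x], {x, -Pi, Pi}]/Pi  (* <f, sin(nx)>/||sin(nx)||^2 *)
partialSum[nmax_, x_] := a0 + Sum[a[n] Cos[n x] + b[n] Sin[n x], {n, 1, nmax}]
Plot[{f[x], partialSum[10, x]}, {x, -Pi, Pi}, PlotStyle -> {Dashed, Thick}]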
An important remark:

The orthogonal expansion \( \displaystyle f(x) \sim \sum_{n\ge 0} c_n f_n (x) \) for a complete orthogonal system on [a,b] holds in the L² sense, but not necessarily pointwise; that is, for a fixed x∈[a,b] the series on the right-hand side might not converge and, even if it does, it might not converge to f(x).

Theorem: Let f1, f2, …, fn, … be the normalized eigenfunctions of the self-adjoint Sturm--Liouville problem
\[ \begin{split} \frac{\text d}{{\text d}x} \left[ p(x)\,\frac{{\text d}y}{{\text d}x} \right] - q(x)\, y + \lambda \,w(x)\,y(x) =0 , \qquad 0 < x < \ell , \\ \alpha_1 y(0) - \alpha_2 y'(0) =0 , \qquad \beta_1 y(\ell ) + \beta_2 y' (\ell ) =0 , \qquad |\alpha_1 | + |\alpha_2 | \ne 0 \quad\mbox{and} \quad |\beta_1 | + |\beta_2 | \ne 0. \end{split} \]
Let f and its derivative \( f' \) be piecewise continuous on the closed interval \( 0 \le x \le \ell . \) Then the series
\[ f(x) = \sum_{n\ge 1} c_n f_n (x) \]
whose coefficients cn are given by
\[ c_n = \int_0^{\ell} w(x)\,f(x)\, f_n (x)\,{\text d}x = \left\langle f, w\,f_n \right\rangle , \qquad n=1,2,3,\ldots , \]
converges to \( \displaystyle \frac{1}{2} \left[ f(x+0) + f(x-0) \right] \) at each point in the open interval (0,ℓ).

 

Other orthogonal and complete sets of functions are known that are used in areas other than differential equations. In particular, the Li--Torney system of step functions is very useful in computer science. The collection of Walsh functions forms a complete orthogonal set of functions that can be used to represent any discrete function---just as trigonometric functions can be used to represent any continuous function in Fourier analysis. These functions, as well as the Walsh--Hadamard code, are named after the American mathematician Joseph L. Walsh (1895--1973). Applications of the Walsh functions can be found wherever digital representations are used, including speech recognition, medical and biological image processing, and digital holography. The Haar wavelet is a sequence of rescaled "square-shaped" functions which together form a wavelet family or basis. Wavelet analysis is similar to Fourier analysis. The Haar sequence was proposed in 1909 by the Hungarian mathematician Alfréd Haar (1885--1933).
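To visualize the "square-shaped" functions mentioned above, the following Mathematica sketch (our own construction, not part of the original text) plots the Haar mother wavelet together with two rescaled and translated members of the family.

haar[x_] := Piecewise[{{1, 0 <= x < 1/2}, {-1, 1/2 <= x < 1}}, 0]
h[j_, k_][x_] := 2^(j/2) haar[2^j x - k]    (* h_{j,k}(x) = 2^(j/2) haar(2^j x - k) *)
Plot[{haar[x], h[1, 0][x], h[1, 1][x]}, {x, 0, 1}, Exclusions -> None, PlotStyle -> Thick]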

The Faber–Schauder system is a Schauder basis for the space C([0, 1]) of continuous functions on [0, 1]. The Franklin system is obtained from the Faber--Schauder system by the Gram–Schmidt orthonormalization procedure.
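Although we do not reproduce the Faber--Schauder functions here, the Gram--Schmidt orthonormalization procedure itself is easy to demonstrate with Mathematica's built-in Orthogonalize command; the snippet below (our own illustration) orthonormalizes the first few monomials with respect to the L² inner product on [-1, 1], which produces normalized Legendre polynomials.

Orthogonalize[{1, x, x^2, x^3}, Integrate[#1 #2, {x, -1, 1}] &]
The first two entries of the output should be 1/Sqrt[2] and Sqrt[3/2] x.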

Bessel inequality: Let H be an inner product space or a Hilbert space and let { e1, e2, … , en, … } be an orthonormal sequence. Then for every element f ∈ H, we have
\[ \sum_{k\ge 1} \left\vert \langle e_k , f \rangle \right\vert^2 \le \| f \|^2 = \langle f, f \rangle , \]
where \( \langle \cdot , \cdot \rangle \) denotes the inner product in the Hilbert space.
Corollary (Bessel inequality): Consider the Fourier series associated with a function f that is square integrable on a symmetric interval [-ℓ,ℓ], namely, \( \displaystyle f(x) \sim \frac{a_0}{2} + \sum_{n\ge 1} \left[ a_n \cos \left( \frac{n\pi x}{\ell} \right) + b_n \sin \left( \frac{n\pi x}{\ell} \right) \right] . \) Then
\[ \frac{a_0^2}{2} + \sum_{n\ge 1} \left( a_n^2 + b_n^2 \right) \le \frac{1}{\ell}\,\| f \|^2 = \frac{1}{\ell}\,\langle f, f \rangle = \frac{1}{\ell}\,\int_{-\ell}^{\ell} \left\vert f(x) \right\vert^2 {\text d}x = E[f] , \]
where E[f] is known as the energy of the 2ℓ-periodic function f ∈ L²[-ℓ,ℓ].
Theorem (Parseval): If a square integrable function has a Fourier series on an interval of length 2π given by \( \displaystyle f(x) \sim \frac{a_0}{2} + \sum_{n\ge 1} \left[ a_n \cos (nx) + b_n \sin (nx) \right] \) or a complex Fourier series, \( \displaystyle f(x) \sim \sum_{n=-\infty}^{\infty} c_n e^{{\bf j} nx} , \) then the following identity holds:
\[ \frac{1}{\pi} \,\int_{-\pi}^{\pi} \left\vert f(x) \right\vert^2 {\text d}x = \frac{1}{2}\, a_0^2 + \sum_{n\ge 1} \left( a_n^2 + b_n^2 \right) , \]
or
\[ \frac{1}{2\pi} \,\int_{-\pi}^{\pi} \left\vert f(x) \right\vert^2 {\text d}x = \sum_{n=-\infty}^{\infty} \left\vert c_n \right\vert^2 . \]
This theorem originated in a 1799 work of the French mathematician Marc-Antoine Parseval (1755--1836). It is also known as Rayleigh's energy theorem, or Rayleigh's identity, because the integral \( \displaystyle E[f] = \frac{1}{\ell} \int_{-\ell}^{\ell} \left\vert f (x) \right\vert^2 {\text d}x = \| f \|^2 / \ell \) usually represents the energy in applications. Therefore, the Parseval theorem can be interpreted as saying that the energy in f equals the energy in its Fourier coefficients. The function space formed by all functions with finite energy (that is, square integrable functions, usually denoted by L²) forms a so-called Hilbert space.
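As a concrete check of Parseval's identity (our own example), take f(x) = x on [-π, π]; its Fourier coefficients are a_n = 0 and b_n = 2(-1)^{n+1}/n, so both sides of the identity should equal 2π²/3.

Integrate[x^2, {x, -Pi, Pi}]/Pi      (* left-hand side: 2 Pi^2/3 *)
Sum[(2/n)^2, {n, 1, Infinity}]       (* right-hand side, since b_n^2 = 4/n^2: 2 Pi^2/3 *)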

Example. Expand the function

\[ f(x) =x, \qquad 0\le x \le 1, \]
in terms of the normalized eigenfunctions { φn } of the Sturm--Liouville problem
\[ y'' + \lambda\,y =0, \qquad y(0) -y'(0) =0, \qquad y'(1) =0. \]
This problem occurs, for instance, in the heat conduction problem in a bar or rod of unit length. The boundary condition at x = 0 corresponds to a rate of heat flow that is proportional to the temperature, and units are chosen so that the constant of proportionality is 1. The boundary condition at x = 1 corresponds to perfect insulation there.

The solution of the differential equation \( y'' + \lambda\,y =0 \) may have one of three forms, depending on λ, so it is necessary to consider these cases separately. Let us start with λ = 0; then the general solution is a linear function

\[ y = a+ b\,x , \]
with some constants a and b. The boundary conditions require that
\[ a-b =0 \qquad\mbox{and}\qquad b=0, \]
respectively. The only solution is a = b = 0; hence, the boundary value problem has only trivial solution. So λ = 0 is not an eigenvalue.

If λ is negative, we can let λ = -μ² so that μ > 0. Then the differential equation for y becomes

\[ y'' - \mu^2 y =0 \]
and its general solution is
\[ y(x) = a\,\cosh (\mu x) + b\,\sinh (\mu x) , \]
with some constants a and b. From the boundary conditions
\[ \begin{split} y(0) - y'(0) =0 \qquad \Longrightarrow \qquad a - \mu\,b =0 , \\ y'(1) =0 \qquad \Longrightarrow \qquad a\,\sinh (\mu ) + b\,\cosh (\mu ) =0 . \end{split} \]
We express a through b from the first equation, a = μ b, and substitute into the last equation:
\[ b\,\mu\,\sinh (\mu ) + b\,\cosh (\mu ) =0 \qquad \Longrightarrow \qquad \mu = - \coth (\mu ) . \]
From the following figure
Plot[{x, -Coth[x]}, {x, 0, 3}, PlotStyle -> Thick]
it is clear that these two graphs do not intersect for μ > 0; hence there are no negative eigenvalues.

Next consider λ > 0, then the general solution is

\[ y(x) = a\,\cos \left( \sqrt{\lambda}x \right) + b\, \sin \left( \sqrt{\lambda}x \right) , \qquad \lambda > 0. \]
The boundary condition at x = 0 requires
\[ a -b\,\sqrt{\lambda} =0 \qquad \Longrightarrow \qquad a = b\,\sqrt{\lambda} . \]
From the boundary condition at x = 1 we obtain
\[ -a\,\sin \left( \sqrt{\lambda} \right) +b\, \cos \left( \sqrt{\lambda} \right) =0 \qquad \Longrightarrow \qquad \sqrt{\lambda} \,\sin \left( \sqrt{\lambda} \right) = \cos \left( \sqrt{\lambda} \right) . \]
For nontrivial solution y we must have b ≠ 0 and λ must satisfy
\[ \sqrt{\lambda} = \cot \sqrt{\lambda} . \]
Solutions of this transcendental equation can be determined numerically.
FindRoot[Cot[Sqrt[x]] == Sqrt[x], {x, 1}]
{x -> 0.740174}
They can also be found approximately by sketching the graphs of \( f(\mu ) = \cot (\mu ) \) and \( g(\mu ) = \mu , \) where \( \mu = \sqrt{\lambda} > 0 . \)
Plot[{x, Cot[x]}, {x, 0, 10}, PlotStyle -> Thick]
Then, squaring the abscissas of the points of intersection, we determine the eigenvalues. Using Mathematica, we find the first several roots:
\[ \lambda_1 \approx 0.740174, \quad \lambda_2 \approx 11.7349, \quad \lambda_3 \approx 41.4388 , \quad \lambda_4 \approx 90.8082, \ldots . \]
The other roots are given with reasonable accuracy by
\[ \lambda_n \approx \left[ (n-1)\pi \right]^2 , \quad n= 5,6,\ldots . \]
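These numerical values can be reproduced with a short loop; in this sketch (our own code) the starting guesses for the roots of cot μ = μ are read off the graph above.

mus = Table[m /. FindRoot[Cot[m] == m, {m, m0}], {m0, {0.9, 3.4, 6.4, 9.5}}];
lambdas = mus^2     (* should reproduce {0.740174, 11.7349, 41.4388, 90.8082} *)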
Finally, the eigenfunction corresponding to the eigenvalue λn is
\[ \phi_n (x) = \sqrt{\lambda_n} \,\cos \left( \sqrt{\lambda_n} \,x \right) + \sin \left( \sqrt{\lambda_n} \,x \right) , \qquad n=1,2,\ldots . \]

 

To find the expansion for f in terms of the eigenfunctions, we write

\[ f(x) = \sum_{n\ge 1} c_n \phi_n (x) = \sum_{n\ge 1} c_n \left[ \sqrt{\lambda_n} \,\cos \left( \sqrt{\lambda_n} \,x \right) + \sin \left( \sqrt{\lambda_n} \,x \right) \right] , \]
where the coefficients are given by
\[ \| \phi_n \|^2 c_n = \int_0^1 f(x)\,\phi_n (x) \,{\text d}x = \int_0^1 x \left[ \sqrt{\lambda_n} \,\cos \left( \sqrt{\lambda_n} \,x \right) + \sin \left( \sqrt{\lambda_n} \,x \right) \right] {\text d}x . \]
Mathematica will do this tedious job for you:
Integrate[x*(Sqrt[r]*Cos[Sqrt[r]*x] + Sin[Sqrt[r]*x]), {x, 0, 1}]
(-Sqrt[r] + (1 + r) Sin[Sqrt[r]])/r
Integrate[(Sqrt[r]*Cos[Sqrt[r]*x] + Sin[Sqrt[r]*x])^2, {x, 0, 1}]
1/4 (2 + 2 (1 + r) - 2 Cos[2 Sqrt[r]] + ((-1 + r) Sin[2 Sqrt[r]])/ Sqrt[r])
Therefore,
\[ c_n = 4\,\frac{\left( \frac{1 + \lambda_n}{\lambda_n} \right) \cos \sqrt{\lambda_n} -1}{\left( \lambda_n -1 \right) \sin \left( 2 \sqrt{\lambda_n} \right) + 2 \sqrt{\lambda_n} + 2 \sqrt{\lambda_n} \left( 1 + \lambda_n \right) - 2\sqrt{\lambda_n} \,\cos \left( 2 \sqrt{\lambda_n} \right)} , \quad n=1,2,\ldots . \]
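Finally, the expansion can be checked numerically; the following sketch (our own code, with helper names phi, c, and partial chosen for this illustration) evaluates the first four coefficients and plots the partial sum against f(x) = x.

mus = Table[m /. FindRoot[Cot[m] == m, {m, m0}], {m0, {0.9, 3.4, 6.4, 9.5}}];
lams = mus^2;
phi[r_, x_] := Sqrt[r] Cos[Sqrt[r] x] + Sin[Sqrt[r] x]     (* eigenfunctions with r = lambda_n *)
c[r_] := NIntegrate[x phi[r, x], {x, 0, 1}]/NIntegrate[phi[r, x]^2, {x, 0, 1}]
cs = Table[c[lams[[k]]], {k, Length[lams]}];
partial[x_] := Sum[cs[[k]] phi[lams[[k]], x], {k, Length[lams]}]
Plot[{x, partial[x]}, {x, 0, 1}, PlotStyle -> {Dashed, Thick}]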

 

  1. Li, H. and Torney, D.C., A complete system of orthogonal step functions, Proceedings of the American Mathematical Society, 132, No. 12, 2004, 3491--3502.
  2. Walsh, J.L., A closed set of normal orthogonal functions, American Journal of Mathematics, 45, 1923, 5--24.

 
