# Preface

This tutorial was made solely for the purpose of education, and it was designed
for students taking Applied Math 0330. It is primarily for students who
have very little experience with, or have never used,
*Mathematica* and would like to learn the basics of this computer algebra system. As a friendly reminder, don't forget to clear variables in use and/or the kernel.

Finally, the commands in this tutorial are all written in bold black font,
while *Mathematica* output is in regular font. This means that you can
copy and paste all commands into *Mathematica*, change the parameters, and
run them. You, as the user, are free to use the scripts
for learning how to use the *Mathematica* program, and you have
the right to distribute and refer to this tutorial as long as
it is credited appropriately.


**Theorem (Mean Value Theorem):** Let
\( f:\, [a,b] \,\mapsto \,\mathbb{R} \) be a continuous function on the closed interval
[a,b], and differentiable on the open interval (a,b), where
*a* < *b*. Then there exists some ξ in (a,b) such that

\[ f(b) - f(a) = f' (\xi ) \left( b-a \right) . \]

**Theorem (Cauchy Mean Value Theorem):**
If functions *f* and *g* are both continuous on the closed interval [a,b], and differentiable on the open interval (a,b), then there exists some \( \xi \in (a,b) \) such that

\[ \left[ f(b) - f(a) \right] g' (\xi ) = \left[ g(b) - g(a) \right] f' (\xi ) . \]

**Theorem (Mean Value Theorem for definite integrals):**
If \( f:\, [a,b] \,\mapsto \,\mathbb{R} \) is continuous and
\( g:\, [a,b] \,\mapsto \,\mathbb{R} \) is an integrable function that does not change sign
on [a,b], then there exists some \( \xi \in (a,b) \) such that

\[ \int_a^b f(x)\, g(x)\, {\text d}x = f(\xi ) \int_a^b g(x)\, {\text d}x . \]

# Error Estimates

We have discussed so far different numerical algorithms to solve the initial value problem for a first order differential equation:

\[ y' = f(x,y) , \qquad y(x_0 ) = y_0 , \]

where the slope function *f* is given and defined on some compact 2D domain to which the initial point \( \left( x_0 , y_0 \right) \) belongs. We associate with this problem a uniform grid (only for simplicity; in practice it is almost always variable) of points \( x_n = x_0 + n\,h , \quad n=0,1,2,\ldots , \) where *h* is the step size. We always assume that the given initial value problem has a unique solution, which we denote by \( y= \phi (x) . \) A numerical approximation leads to a sequence of ordinates \( y_n , \quad n=0,1,2,\ldots , \) that we wish to approximate the true values \( \phi (x_n ) . \) Their difference \( E_n = \phi (x_n ) - y_n \) is known as the global truncation error.

The methods we introduce for approximating the solution of an initial value problem are called **difference methods** or **discrete variable methods**. The solution is approximated at a set of discrete points called a grid (or mesh) of points. An elementary single-step method has the form \( y_{n+1} = y_n + h\, \Phi (x_n, y_n ) \) for some function Φ called an **increment function**.
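The single-step structure above is easy to see in code. Although this tutorial works in *Mathematica*, the following short Python sketch (all names illustrative) treats the increment function Φ as a plain function argument; Euler's method is recovered with the choice Φ = f:

```python
import math

def single_step_solve(phi_inc, x0, y0, h, steps):
    """Advance a generic one-step method y_{n+1} = y_n + h*Phi(x_n, y_n)."""
    xs, ys = [x0], [y0]
    for _ in range(steps):
        ys.append(ys[-1] + h * phi_inc(xs[-1], ys[-1]))
        xs.append(xs[-1] + h)
    return xs, ys

# Model problem y' = y, y(0) = 1, with exact solution exp(x).
f = lambda x, y: y
xs, ys = single_step_solve(f, 0.0, 1.0, 0.1, 10)   # Euler: Phi = f
```

Here `ys[-1]` approximates \( e \approx 2.71828 \) by \( (1.1)^{10} \approx 2.59374 \); the gap is the final global error.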

The errors incurred in a single integration step are
of two types:
1. The local truncation error or discretization
error: the error introduced by approximating the
differential equation by a difference equation.
2. Errors due to the deviation of the numerical solution
from the exact theoretical solution of the difference
equation. This class includes round-off errors, due
to the inability to evaluate real numbers with infinite
precision on a computer (computers usually
operate on a fixed word length), and errors that are incurred
when the difference equation is implicit and is not solved
exactly at each step.

If a multistep method is used, an additional source of
error results from the use of an auxiliary method (usually
a single-step method, e.g., a Runge--Kutta method) to generate
the needed starting values for the multistep method.

In the numerical solution of an ODE, a sequence of
approximations \( y_n \) to the actual solution \( y = \phi (x ) \) is generated. Roughly speaking, the stability of a numerical method refers
to the behavior of the difference, or error, \( y_n - \phi (x_n ) , \) as *n*
becomes large. To make our exposition simple, we consider the propagation of error in a one-step method by studying the particular representative one-step
method of Euler:

\[ y_{n+1} = y_n + h\, f(x_n , y_n ) . \]

We derive a formula relating the error at step *n*+1 to the error at step
*n*. To do this, let \( \phi_n = \phi (x_n ) \) denote the true value at
\( x = x_n \) of the actual solution \( y = \phi (x) \) to the initial value problem \( y' = f(x,y) , \quad y(x_0 )= y_0 . \) For simplicity, we consider only the uniform grid of points \( x_n = x_0 + n\,h , \quad n=0,1,2,\ldots , \) with a fixed step size
*h*. Then the total accumulated solution error \( \varepsilon_n \) at step
*n* is defined by

\[ \varepsilon_n = \phi (x_n ) - y_n = \phi_n - y_n . \]

In the presence of round-off, the computed values actually satisfy

\[ y_{n+1} = y_n + h\, f(x_n , y_n ) + R_{n+1} , \]

where \( R_{n+1} \) denotes the round-off error resulting from evaluating Euler's rule. Similarly, the true values of the actual solution satisfy the relation

\[ \phi_{n+1} = \phi_n + h\, f(x_n , \phi_n ) + T_{n+1} , \]

where \( T_{n+1} \) denotes the local truncation error.

Subtracting the former from the latter yields

\[ \varepsilon_{n+1} = \phi_{n+1} - y_{n+1} = \left[ 1 + h\, f_y (x_n , \eta_n ) \right] \varepsilon_n + T_{n+1} - R_{n+1} , \]

where \( f_y \) denotes \( \frac{\partial f}{\partial y} \) and \( \eta_n \)
lies between \( y_n \) and \( \phi_n \) (by the Mean Value Theorem applied to \( f(x_n , \cdot ) \)). If \( |f_y (x , y )| \le K \) and \( |T_{n+1} - R_{n+1} | \le E , \) where
*K* and *E* are both positive constants, the error expression above can be replaced by the related first-order difference equation

\[ e_{n+1} = \left( 1 + h\,K \right) e_n + E , \qquad e_0 = \left\vert \varepsilon_0 \right\vert , \]

whose solution dominates \( | \varepsilon_n | . \) The value of \( e_n \) can be found by successive substitution, since the result is actually a geometric series:

\[ e_n = \left( 1 + h\,K \right)^n e_0 + \left[ \left( 1 + h\,K \right)^n - 1 \right] \frac{E}{h\,K} . \]
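This bound can be checked numerically. The following Python sketch (illustrative; the tutorial itself uses *Mathematica*) takes the model problem \( y' = y \) on [0,1], for which \( f_y = 1 \) so \( K = 1 \), bounds the local truncation error by \( E = (h^2 /2)\, e \), and ignores round-off, then verifies that the actual Euler error never exceeds \( e_n \) with \( e_0 = 0 \):

```python
import math

# Model problem y' = y, y(0) = 1 on [0, 1]; here f_y = 1, so K = 1.
# On [0, 1], |phi''(x)| = exp(x) <= e, so |T_{n+1}| <= (h**2 / 2) * e =: E.
h, n_steps = 0.01, 100
K = 1.0
E = (h**2 / 2) * math.e

y = 1.0
bound_ok = True
for n in range(n_steps):
    y += h * y                                        # Euler step
    actual = abs(math.exp((n + 1) * h) - y)           # |eps_{n+1}|
    bound = ((1 + h*K)**(n + 1) - 1) * E / (h*K)      # e_{n+1} with e_0 = 0
    bound_ok = bound_ok and actual <= bound
```

At the endpoint the actual error is about 0.013 while the bound is about 0.023, so the estimate holds but is not sharp.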

The global discretization error \( e_k \) is defined by

\[ e_k = \phi (x_k ) - y_k , \qquad k = 0,1,2,\ldots , \]

and it is the difference between the unique solution and the computed sequence at the grid point \( x_k . \)

The local discretization error \( \varepsilon_{k+1} \) is defined by

\[ \varepsilon_{k+1} = \phi (x_{k+1} ) - y_k - h\, \Phi (x_k , y_k ) , \]

and it is the error committed in the single step from \( x_k \) to \( x_{k+1} , \) assuming that the value \( y_k \) at \( x_k \) is exact.

**Theorem (Precision of Euler's Method):** Assume that \( y= \phi (x) \) is the solution to the initial value problem \( y' = f(x,y) \) over [a,b] with \( y(a) = y_0 . \) If \( \phi (x) \) has a continuous second derivative on [a,b] and \( x_k = a + k\,h , \quad k=0,1,\ldots , m , \) is a uniform grid, then the global error at the right endpoint *b* = \( x_m \) of the interval [a,b], called the final global error, satisfies

\[ E \left( \phi (b), h \right) = \left\vert \phi (b) - y_m \right\vert = O (h) . \]
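A quick way to observe the O(h) behavior is to halve the step size and watch the final global error roughly halve. A minimal Python check (model problem \( y' = y \) on [0,1]; names illustrative):

```python
import math

def euler_final_error(h):
    """Final global error |phi(b) - y_m| for y' = y, y(0) = 1 on [0, 1]."""
    steps = round(1.0 / h)
    y = 1.0
    for _ in range(steps):
        y += h * y          # Euler step
    return abs(math.e - y)

e1, e2 = euler_final_error(0.01), euler_final_error(0.005)
ratio = e1 / e2             # O(h): halving h should roughly halve the error
```

The observed ratio is close to 2, as the theorem predicts for a first-order method.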

**Theorem (Precision of Heun's Method):** Assume that \( y= \phi (x) \) is the solution to the initial value problem \( y' = f(x,y) \) over [a,b] with \( y(a) = y_0 . \) If \( \phi (x) \) has a continuous third derivative on [a,b] and \( x_k = a + k\,h , \quad k=0,1,\ldots , m , \) is a uniform grid, then the final global error satisfies

\[ E \left( \phi (b), h \right) = \left\vert \phi (b) - y_m \right\vert = O \left( h^2 \right) . \]
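The same experiment for Heun's method shows the final global error dropping by roughly a factor of four when the step size is halved, consistent with second-order accuracy. A minimal Python sketch (model problem \( y' = y \) on [0,1]; names illustrative):

```python
import math

def heun_final_error(h):
    """Final global error for y' = y, y(0) = 1 on [0, 1] with Heun's method."""
    f = lambda x, y: y
    steps = round(1.0 / h)
    x, y = 0.0, 1.0
    for _ in range(steps):
        k1 = f(x, y)
        k2 = f(x + h, y + h * k1)
        y += (h / 2) * (k1 + k2)    # average of the two slope samples
        x += h
    return abs(math.e - y)

ratio = heun_final_error(0.02) / heun_final_error(0.01)   # expect about 4
```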

# 1.3.6. Accuracy

Sometimes you may be interested in finding out what methods are being used in **NDSolve**.

Here you can see the coefficients of the default 2(1) embedded pair.

{{{1}, {1/2, 1/2}}, {1/2, 1/2, 0}, {1, 1}, {-(1/2), 2/3, -(1/6)}}
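The printed list can be sanity-checked against the Runge--Kutta order conditions. The Python sketch below assumes (an assumption about the layout, which matches the numbers shown) that the list holds the lower-triangular rows of *A*, the weights *b*, the abscissas *c* (with \( c_1 = 0 \) implied), and the error-estimate weights \( b - \hat{b} \). Exact rational arithmetic confirms that the main weights satisfy the order-2 conditions while the embedded weights satisfy only the order-1 condition:

```python
from fractions import Fraction as F

# Butcher data of the 2(1) pair printed above (coefficients copied verbatim).
A   = [[F(1)], [F(1, 2), F(1, 2)]]
b   = [F(1, 2), F(1, 2), F(0)]
c   = [F(0), F(1), F(1)]
err = [F(-1, 2), F(2, 3), F(-1, 6)]          # assumed to be b - bhat

# Order-2 conditions for the main weights:
assert sum(b) == 1                            # consistency: sum b_i = 1
assert sum(bi * ci for bi, ci in zip(b, c)) == F(1, 2)

# The embedded weights bhat = b - err satisfy only the order-1 condition:
bhat = [bi - ei for bi, ei in zip(b, err)]
assert sum(bhat) == 1
assert sum(bi * ci for bi, ci in zip(bhat, c)) != F(1, 2)
```

This lower-order embedded solution is what **NDSolve** uses to estimate the local error for step-size control.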

You also may want to compare some of the different methods to see how they perform for a specific problem.

Implicit Runge--Kutta methods have a number of desirable properties.

The Gauss--Legendre methods, for example, are self-adjoint, meaning that they provide the same solution when integrating forward or backward in time.
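Self-adjointness is easy to observe for the one-stage Gauss--Legendre scheme, which is the implicit midpoint rule. For the linear test equation \( y' = \lambda y \) the implicit stage equation can be solved in closed form, giving the growth factor \( (1 + h\lambda /2)/(1 - h\lambda /2) \); integrating forward and then backward with step \( -h \) returns the initial value up to round-off. A minimal Python sketch (names illustrative):

```python
def midpoint_step(y, lam, h):
    """One implicit midpoint step for y' = lam*y, solved in closed form."""
    return y * (1 + h * lam / 2) / (1 - h * lam / 2)

lam, h, y0 = -1.3, 0.1, 1.0
y = y0
for _ in range(50):                 # integrate forward 50 steps...
    y = midpoint_step(y, lam, h)
for _ in range(50):                 # ...then backward with step -h
    y = midpoint_step(y, lam, -h)
# Self-adjointness: y returns to y0 (exactly in real arithmetic).
```

The backward growth factor is exactly the reciprocal of the forward one, which is the algebraic content of the self-adjointness property.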

http://reference.wolfram.com/language/tutorial/NDSolveImplicitRungeKutta.html

A generic framework for implicit Runge--Kutta methods has been implemented. The focus so far is on methods with interesting geometric properties and currently covers the following schemes:

"ImplicitRungeKuttaGaussCoefficients"

"ImplicitRungeKuttaLobattoIIIACoefficients"

"ImplicitRungeKuttaLobattoIIIBCoefficients"

"ImplicitRungeKuttaLobattoIIICCoefficients"

"ImplicitRungeKuttaRadauIACoefficients"

"ImplicitRungeKuttaRadauIIACoefficients"

The derivation of the method coefficients can be carried out to arbitrary order and arbitrary precision.

