Glossary
To find a term in the glossary, click on the letter that the term you are searching for begins with, or enter a search term.
Abel, Niels  Niels Henrik Abel (1802–1829) was a Norwegian mathematician who made pioneering contributions in a variety of fields. 
Abel equation  The Abel equation of the first kind is the first-order differential equation \( y' = f_3 (x)\, y^3 + f_2 (x)\, y^2 + f_1 (x)\,y + f_0 (x) . \) 
Abel's formula  Abel's formula or Abel's identity is an equation that expresses the Wronskian of two solutions of a homogeneous second-order linear ordinary differential equation in terms of a coefficient of the original differential equation. 
Abscissa  Abscissa is the first coordinate (usually horizontal) of a point in a coordinate system. 
Abscissa of convergence 
Abscissa of convergence for the Laplace transform is a real number σ such that the integral \( \int_0^{\infty} f(t)\,e^{-\lambda t}\,{\text d}t \) converges for Re λ > σ and diverges for Re λ < σ. 
Acceleration of convergence 
Acceleration of series convergence is a technique involving transformations for improving the rate of convergence of a series. 
Adams, John  John Couch Adams (1819–1892) was a British mathematician and astronomer. His most famous achievement was predicting the existence and position of Neptune using only mathematics. 
Adams–Bashforth formula 
Adams–Bashforth formula is an example of a multistep method that uses information from the previous steps to calculate the next value. For example, the two-step Adams–Bashforth formula \( y_{n+2} = y_{n+1} + \frac{3}{2}\,h\,f \left( t_{n+1}, y_{n+1} \right) - \frac{1}{2}\,h\,f \left( t_{n}, y_{n} \right) \) can be used to solve numerically the initial value problem \( y' = f(t,y) , \quad y(t_0 ) = y_0 . \) 
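As a sketch (the function name and the choice of an Euler step to seed the required second starting value are my own), the two-step formula can be implemented in a few lines:

```python
def adams_bashforth2(f, t0, y0, h, n):
    """Two-step Adams-Bashforth method for y' = f(t, y), y(t0) = y0.
    The extra starting value y1 is seeded with one explicit Euler step."""
    t = [t0, t0 + h]
    y = [y0, y0 + h * f(t0, y0)]          # Euler seed for y1
    for k in range(1, n):
        y.append(y[k] + h * (1.5 * f(t[k], y[k]) - 0.5 * f(t[k - 1], y[k - 1])))
        t.append(t[k] + h)
    return t, y

# y' = y, y(0) = 1 has the exact solution e^t, so y(1) should be near e
ts, ys = adams_bashforth2(lambda t, y: y, 0.0, 1.0, 0.001, 1000)
```

Since the method is second order, the error at t = 1 with h = 0.001 is on the order of 10⁻⁶.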
Adams–Moulton formula 
Adams–Moulton formula is an example of an implicit multistep method that uses information from the previous steps, together with the yet-unknown next value, to calculate that next value. For example, the one-step Adams–Moulton formula (the trapezoidal rule) \( y_{n+1} = y_{n} + \frac{1}{2}\,h\,f \left( t_{n+1}, y_{n+1} \right) + \frac{1}{2}\,h\,f \left( t_{n}, y_{n} \right) \) can be used to solve numerically the initial value problem \( y' = f(t,y) , \quad y(t_0 ) = y_0 . \) 
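Since y_{n+1} appears on both sides, the implicit relation must be resolved at each step. A minimal sketch (the function name, the Euler predictor, and the fixed-point corrector sweeps are my own choices, not the only way to solve the implicit equation):

```python
def adams_moulton_trapezoid(f, t0, y0, h, n, sweeps=3):
    """Trapezoidal (one-step Adams-Moulton) method for y' = f(t, y).
    The implicit relation is resolved by a few fixed-point corrections
    started from an explicit Euler predictor."""
    t, y = t0, y0
    for _ in range(n):
        guess = y + h * f(t, y)                    # Euler predictor
        for _ in range(sweeps):                    # corrector sweeps
            guess = y + 0.5 * h * (f(t, y) + f(t + h, guess))
        t, y = t + h, guess
    return t, y

# y' = y, y(0) = 1: the value at t = 1 should be close to e
t1, y1 = adams_moulton_trapezoid(lambda t, y: y, 0.0, 1.0, 0.01, 100)
```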
Adaptive numerical method  Some methods for the numerical solution of ordinary differential equations use an adaptive step size in order to control the errors of the method and to ensure stability properties such as A-stability. Romberg's method is an example of a numerical integration method which uses an adaptive step size. 
Adjoint differential operator  The adjoint of the differential operator T acting on a function u according to the formula \( T\,u = \sum_{k=0}^n a_k (x)\, \frac{{\text d}^k u}{{\text d} x^k} \) is defined as the operator T^{*} such that \( \langle T\,u , v \rangle = \langle u, T^{\ast} v \rangle , \) where the notation ⟨ ⋅ , ⋅ ⟩ is used for the scalar product or inner product. 
ADM  The Adomian Decomposition Method (ADM) is a semi-analytical method for solving nonlinear equations with analytical nonlinearities, including ordinary differential equations, partial differential equations, and integro-differential nonlinear equations. The crucial aspect of the method is the employment of the "Adomian polynomials," which allow one to represent the nonlinear portion of the equation as a convergent series. These polynomials mathematically employ Faà di Bruno's formula. 
Adomian, George  Dr. George Adomian (1922–1996) was an American mathematician, theoretical physicist, and electrical engineer of Armenian descent. He received his Ph.D. degree from UCLA. He first proposed and considerably developed the Adomian Decomposition Method (ADM) for solving nonlinear differential equations, both ordinary, and partial, deterministic and stochastic, also integral equations, algebraic and transcendental (functional), and matrix equations. He was a Distinguished Professor (academic rank), the David C. Barrow Professor of Mathematics (Chair), and the Director of the Center for Applied Mathematics at the University of Georgia, the founder and Chief Scientist of General Analytics Corporation, a winner of the 1989 Richard Bellman Prize for outstanding contributions to nonlinear stochastic analysis, and a 1988 National Academy of Sciences Scholar. 
Airy, George  Sir George Biddell Airy (1801–1892) was an English mathematician and astronomer, Astronomer Royal from 1835 to 1881. His many achievements include work on planetary orbits, measuring the mean density of the Earth, a method of solution of two-dimensional problems in solid mechanics and, in his role as Astronomer Royal, establishing Greenwich as the location of the prime meridian. 
Airy equation  The differential equation \( \frac{{\text d}^2 y}{{\text d} x^2} - x\,y(x) =0 , \) or, more generally, \( \frac{{\text d}^2 y}{{\text d} x^2} - k\,x\,y(x) =0 , \) is known as the Airy equation or the Stokes equation. 
Amplitude  The amplitude is simply the maximum displacement of the object from the equilibrium position. 
Amplitude modulation  Amplitude modulation (AM) is a modulation technique used in electronic communication, most commonly for transmitting information via a radio carrier wave. In amplitude modulation, the amplitude (signal strength) of the carrier wave is varied with time in proportion to that of the message signal being transmitted. 
Analytic function  A function is analytic at a point if the function has a power series expansion valid in some neighborhood of that point. This power series is usually referred to as a (holomorphic) branch of the analytic function. 
Angular momentum  The angular momentum is the rotational equivalent of linear momentum. In three dimensions, the angular momentum for a point particle is a pseudovector r × p, the cross product of the particle's position vector r (relative to some origin) and its momentum vector p = mv. 
Annihilator  The annihilator method is a procedure used to find a particular solution to certain types of inhomogeneous ordinary differential equations (ODEs). It is similar to the method of undetermined coefficients, but instead of guessing the particular solution, the particular solution is determined systematically in this technique. 
Archimedes  Archimedes (c. 287–212 BC) of Syracuse was a Greek mathematician, physicist, engineer, inventor, and astronomer. Although few details of his life are known, he is regarded as one of the leading scientists in classical antiquity. 
Asymptotic equivalence  Two functions, f(x) and g(x), are said to be asymptotically equivalent as \( x\to x_0 \) if \( f(x)/ g(x) \to 1 \) as \( x\to x_0 ,\) that is, \( f(x)=g(x) \{ 1+o(1)\} \) as \( x\to x_0 .\) 
Asymptotic expansion  Given a function f(x) and an asymptotic series \( \{ g_k(x)\} \) at x_{0}, the formal series \( \sum_{k=0}^{\infty} a_k g_k(x) , \) where the \(\{ a_k\} \) are given constants, is said to be an asymptotic expansion of f(x) if \( f(x) - \sum_{k=0}^{n} a_k g_k (x) =o(g_n(x)) \) as \( x\to x_0 \) for every n; this is expressed as \( f(x) \sim \sum_{k=0}^{\infty} a_k g_k(x) .\) Partial sums of this formal series are called asymptotic approximations to f(x). Note that the formal series need not converge. 
Asymptotic series  A sequence of functions, \( \{g_k(x)\} ,\) forms an asymptotic series at x_{0} if \( g_{k+1}(x)=o(g_k(x)) \) as \( x \to x_0 .\) 
Asymptotic stability  An equilibrium solution v of an autonomous system of first-order ordinary differential equations \( {\bf y}' = {\bf f}({\bf y}) \) is said to be asymptotically stable if, in addition to being stable, every solution u(x) starting sufficiently close to v satisfies \( {\bf u}(x) - {\bf v}(x) \to 0\) as \( x\to\infty . \) 
Autonomous equation  An ordinary differential equation is autonomous if the independent variable does not appear explicitly in the equation. For example, \( y_{x x x }+(y_x)^2=y \) is autonomous while \( y_x =x \) is not. 
Backward Euler formula  The backward Euler formula is the implicit one-step method \( y_{n+1} = y_n + h\,f \left( t_{n+1}, y_{n+1} \right) \) for solving numerically the initial value problem \( y' = f(t,y) , \quad y(t_0 ) = y_0 . \) 
Bashforth, Francis  Francis Bashforth (1819–1912) was an English mathematician and ballistician, known for his work with John Couch Adams on multistep methods for the numerical solution of differential equations. 
Beat  A beat is a periodic variation in amplitude that arises when two oscillations of slightly different frequencies are superimposed. 
Bernoulli, Daniel  Daniel Bernoulli (1700–1782) was a Swiss mathematician and physicist, known in particular for his contributions to fluid mechanics and to probability and statistics. 
Bernoulli, Jacob  Jacob Bernoulli (1655–1705) was a Swiss mathematician, one of the many prominent mathematicians in the Bernoulli family. 
Bernoulli, Johann  Johann Bernoulli (also known as Jean or John; 1667–1748) was a Swiss mathematician and one of the many prominent mathematicians in the Bernoulli family. 
Bernoulli equation  The Bernoulli equation is the first-order differential equation \( y' + p(x)\,y = q(x)\, y^n , \) which the substitution \( v = y^{1-n} \) reduces to a linear equation. 
Bertalanffy, Ludwig  Karl Ludwig von Bertalanffy (1901–1972) was an Austrian biologist known as one of the founders of general systems theory. 
Bessel, Friedrich  Friedrich Wilhelm Bessel (1784–1846) was a German astronomer, mathematician, physicist and geodesist. 
Bessel equation  A second-order linear ordinary differential equation \( x^2 y'' + x\,y' + \left( x^2 - \nu^2 \right) y(x) =0 , \) or, in self-adjoint form, \( \left( x\,y' \right)' + \left( x - \frac{\nu^2}{x} \right) y(x) = 0 , \) is called the Bessel equation of order ν. 
Bessel functions 
There are actually three kinds of Bessel functions. The Bessel function of the first kind of order ν:
\[
J_{\nu} (x) = \sum_{k\ge 0} \frac{(-1)^k}{k!\,\Gamma (k+\nu +1)} \left( \frac{x}{2} \right)^{2k+\nu} ,
\]
where \( \Gamma (z) = \int_0^{\infty} x^{z-1} e^{-x} {\text d} x \) is the gamma function.
There are two Bessel functions of the second kind of order ν: one is called the Weber function:
\[
Y_{\nu} (x) = \frac{\cos \nu \pi \,J_{\nu} (x) - J_{-\nu} (x)}{\sin \nu\pi} .
\]
Its product with π is called the Neumann function: N_{ν}(x) = πY_{ν}(x).
The Bessel functions of the third kind of order ν, or Hankel functions:
\[
\begin{split}
H_{\nu}^{(1)} (x) &= J_{\nu} (x) + {\bf j}\,Y_{\nu} (x) , \\
H_{\nu}^{(2)} (x) &= J_{\nu} (x) - {\bf j}\,Y_{\nu} (x) ,
\end{split}
\]
where j is the imaginary unit (the unit vector in the positive vertical direction on the complex plane ℂ).
Because of the linear independence of the Bessel function of the first and second
kind, the Hankel functions provide an alternative pair of solutions to the Bessel
differential equation.
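For integer ν ≥ 0 the gamma factor reduces to a factorial, so a truncated partial sum of the series is easy to evaluate. A hedged sketch (the function name and the 30-term truncation are my own choices):

```python
from math import factorial

def bessel_j(nu, x, terms=30):
    """Partial sum of the power series for J_nu(x), integer nu >= 0,
    using Gamma(k + nu + 1) = (k + nu)!."""
    total = 0.0
    for k in range(terms):
        total += (-1) ** k / (factorial(k) * factorial(k + nu)) * (x / 2) ** (2 * k + nu)
    return total
```

For example, the series gives J₀(0) = 1 and J₀(1) ≈ 0.7651976866.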

Bifurcation diagram  A bifurcation diagram shows the equilibrium points and periodic orbits of a system, together with their stability, plotted against a bifurcation parameter. 
Bifurcation point  A bifurcation point is a point where the integral curves of the solutions of a differential equation qualitatively change their behavior. Most commonly applied to the mathematical study of dynamical systems, a bifurcation occurs when a small smooth change made to the parameter values of a system causes a sudden qualitative change in its behavior. 
Binomial formula  The binomial theorem states \( \displaystyle (1+x)^m = \sum_{k\ge 0} \binom{m}{k} x^k , \) where \( \displaystyle \binom{m}{k} = \frac{m^{\underline{k}}}{k!} \) is the binomial coefficient. Note that \( m^{\underline{k}} = m\left( m-1 \right) \left( m-2 \right) \cdots \left( m-k+1 \right) \) is the kth falling factorial. 
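A quick sketch of the falling-factorial identity (the function names here are illustrative, not standard library calls):

```python
from math import factorial

def falling_factorial(m, k):
    """m^(k falling) = m (m - 1) (m - 2) ... (m - k + 1)."""
    result = 1
    for j in range(k):
        result *= m - j
    return result

def binom(m, k):
    """Binomial coefficient as the falling factorial divided by k!."""
    return falling_factorial(m, k) // factorial(k)
```

For instance, 5 · 4 = 20 is the 2nd falling factorial of 5, and 7 · 6 · 5 / 3! = 35 recovers C(7, 3).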
Bisection method  The bisection method is a root-finding method that applies to any continuous function for which one knows two values with opposite signs. The method consists of repeatedly bisecting the interval defined by these values and then selecting the subinterval in which the function changes sign, and therefore must contain a root. It is a very simple and robust method, but it is also relatively slow. 
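The repeated bisection can be sketched as follows (function name and tolerance are my own choices):

```python
def bisect(f, a, b, tol=1e-12):
    """Bisection for a continuous f with f(a) and f(b) of opposite sign:
    repeatedly halve [a, b], keeping the half where the sign changes."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        fm = f(m)
        if fa * fm <= 0:
            b = m                 # sign change lies in [a, m]
        else:
            a, fa = m, fm         # sign change lies in [m, b]
    return 0.5 * (a + b)

# root of x^2 - 2 on [1, 2] is sqrt(2)
root = bisect(lambda x: x * x - 2, 1.0, 2.0)
```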
Boltzmann, Ludwig  Ludwig Eduard Boltzmann (18441906) was an Austrian physicist and philosopher whose greatest achievement was in the development of statistical mechanics, which explains and predicts how the properties of atoms (such as mass, charge, and structure) determine the physical properties of matter (such as viscosity, thermal conductivity, and diffusion). 
Boundary conditions  A condition that is required to be satisfied at all or part of the boundary of a region in which a set of differential equations is to be solved. 
Brent's method  Brent's method is a root-finding algorithm combining the bisection method, the secant method and inverse quadratic interpolation. It has the reliability of bisection but it can be as quick as some of the less-reliable methods. The algorithm tries to use the potentially fast-converging secant method or inverse quadratic interpolation if possible, but it falls back to the more robust bisection method if necessary. Brent's method is due to Richard Brent and builds on an earlier algorithm by Theodorus Dekker (born 1927). Consequently, the method is also known as the Brent–Dekker method. 
Boundary value problem  A boundary value problem is a differential equation together with a set of additional constraints, called the boundary conditions. 
BVP  BVP is the abbreviation for Boundary Value Problem. 
Brachistochrone  A brachistochrone curve (from Ancient Greek βράχιστος χρόνος (brákhistos khrónos), meaning 'shortest time'), or curve of fastest descent, is the one lying on the plane between a point A and a lower point B, where B is not directly below A, on which a bead slides frictionlessly under the influence of a uniform gravitational field to a given end point in the shortest time. The problem was posed by Johann Bernoulli in 1696. 
Carrying capacity  The carrying capacity of an environment is the maximum population size of a biological species that can be sustained indefinitely. 
Catenary  The curve described by a uniform chain hanging from two supports in a uniform gravitational field is called a catenary, a name apparently coined by Thomas Jefferson. Leonhard Euler proved in 1744 that the catenary is the curve which, when rotated about the xaxis, gives the surface of minimum surface area (the catenoid) for the given bounding circle. 
Characteristic polynomial 
To every linear differential operator with constant coefficients
\( L \left[ \texttt{D} \right] = a_n \,\texttt{D}^n + a_{n-1} \, \texttt{D}^{n-1} + \cdots + a_1 \, \texttt{D} + a_0 \,\texttt{I} ,
\) there corresponds a polynomial
\[
L\left( \lambda \right) = a_n \,\lambda^n + a_{n-1} \,\lambda^{n-1} + \cdots + a_1 \,\lambda + a_0 ,
\]
which
is called the characteristic polynomial for the linear operator L[D] or differential equation \( L\left[ \texttt{D} \right] y = 0 . \) Equating the characteristic polynomial to zero, we obtain the characteristic equation: L(λ) = 0.

Chebyshev method  The Chebyshev iteration is an iterative method for determining the roots of an equation f(x) = 0. The method is named after the Russian mathematician Pafnuty Chebyshev, who discovered it in 1838 as a student project. \[ x_{k+1} = x_k - \frac{f \left( x_k \right)}{f' \left( x_k \right)} - \frac{f^2 \left( x_k \right) f'' \left( x_k \right)}{2\left( f' \left( x_k \right) \right)^3} , \qquad k=0,1,2,\ldots . \] 
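A minimal sketch of one step of the iteration (the helper name and the test problem x² − 2 = 0 are my own choices):

```python
def chebyshev_step(f, df, d2f, x):
    """One step of the cubically convergent Chebyshev iteration."""
    fx, dfx = f(x), df(x)
    return x - fx / dfx - fx ** 2 * d2f(x) / (2 * dfx ** 3)

# solve x^2 - 2 = 0 starting from x = 1.5
x = 1.5
for _ in range(5):
    x = chebyshev_step(lambda t: t * t - 2, lambda t: 2 * t, lambda t: 2.0, x)
```

Because the convergence is cubic, a handful of steps already reaches machine precision here.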
Chebyshev P.  Pafnuty Lvovich Chebyshev (1821–1894) was a Russian mathematician. His name has seven different spellings, including two Russian ones. He was a long-time friend of the British mathematician James Sylvester. Chebyshev is known for his work in the fields of probability, statistics, mechanics, and number theory. He enjoyed solving mechanical problems; in particular, he solved the famous problem, arising from the steam engine, of converting circular motion into linear motion. Chebyshev discovered orthogonal polynomials, some of which are named after other mathematicians such as Hermite and Laguerre. 
Confluent hypergeometric equation  The second-order ordinary linear differential equation \( x\,y'' + \left( \gamma - x \right) y' - \alpha\, y(x) =0 \) or, in self-adjoint form, \( \left( e^{-x} x^{\gamma}\, y' \right)' - \alpha\, e^{-x} x^{\gamma -1}\, y (x) =0 , \) is called the confluent hypergeometric equation or the degenerate hypergeometric equation. 
Clairaut, Alexis  Alexis Claude Clairaut (17131765) was a French mathematician, astronomer, and geophysicist. 
Clairaut's equation  Clairaut's differential equation \( y = x\, y' + f \left( y' \right) \) has the general solution \( y = c\,x + f(c) . \) The singular solution (the envelope of this family) is given parametrically by \( x = - f' (c) \) and \( y = f(c) - c\,f' (c) . \) 
Cooling model  Newton's Law of Cooling states that the rate of heat loss of a body is directly proportional to the difference in the temperatures between the body and its surroundings. The law is frequently qualified to include the condition that the temperature difference is small and the nature of heat transfer mechanism remains the same. 
Degenerate hypergeometric equation  The second-order ordinary linear differential equation \( x\,y'' + \left( \gamma - x \right) y' - \alpha\, y(x) =0 \) or, in self-adjoint form, \( \left( e^{-x} x^{\gamma}\, y' \right)' - \alpha\, e^{-x} x^{\gamma -1}\, y (x) =0 , \) is called the confluent hypergeometric equation or the degenerate hypergeometric equation. 
Derivative 
Let f(x) be a function of a real variable, and let x_{0} be some point from the domain of f. The function f(x) is said to be differentiable at x = x_{0} if for a small ε such that ε ≪ 1 and for all h we have
\[
f\left( x_0 + \varepsilon h \right) = f\left( x_0 \right) + A\left( x_0 \right) \varepsilon h + O \left( \varepsilon^2 \right) .
\]
Then A(x_{0}) is called the derivative of f at
x = x_{0} and is denoted either by f' (Lagrange notation) or \( {\text d}f/{\text d}x \) (Leibniz notation), or by a dot
\( \dot{f} \) (Newton's notation) if the independent variable is time. L. Euler suggested denoting the differential operator by \( \texttt{D}\,f = f' . \)

Differential equation  A differential equation is an equation that relates one or more functions and their derivatives. 
Direction fields  A direction field or a slope field for a first-order differential equation \( {\text d}y / {\text d}x = f(x,y) \) is a field of short line segments or arrows of slope f(x,y) drawn through each point (x,y) in some chosen grid of points in the (x,y)-plane. 
Driven equation  The differential equation \( a_n \, y^{(n)} + a_{n-1} \, y^{(n-1)} + \cdots + a_1 \, y' + a_0 \, y= f(x) , \) where f(x) is not zero, is called the driven or nonhomogeneous equation. 
Driving function  The nonhomogeneous term f(x) in the differential equation \( a_n \, y^{(n)} + a_{n-1} \, y^{(n-1)} + \cdots + a_1 \, y' + a_0 \, y= f(x) \) is called the driving function, the nonhomogeneous term, or the forcing term. 
Elzaki transform  The Elzaki transform of a function f(t) is \[ E\left[ f(t) \right] (\nu ) = \nu \int_0^{\infty} f(t)\, e^{-t/\nu}\,{\text d}t = \nu^2 \int_0^{\infty} f(t\nu )\,e^{-t}\,{\text d}t . \] The Elzaki and Laplace transforms exhibit a duality \[ {\cal L} \left[ f (t) \right] (\lambda ) = \lambda \,E \left[ f (t) \right] \left( \frac{1}{\lambda} \right) , \qquad E \left[ f (t) \right] (\nu ) = \nu\,{\cal L} \left[ f (t) \right] \left( \frac{1}{\nu} \right) , \] where \( {\cal L} \left[ f (t) \right] (\lambda ) = f^L (\lambda ) = \int_0^{\infty} f(t)\,e^{-\lambda\,t} \,{\text d}t \) is the Laplace transform of the function f(t). The inverse Elzaki transform for a meromorphic function T(s): \[ E^{-1} \left[ T(s) \right] (t) = \frac{1}{2\pi{\bf j}} \,\int_{a-{\bf j}\infty}^{a+{\bf j}\infty} T \left( \frac{1}{s} \right) s\,e^{st}\,{\text d}s = \sum\,\mbox{residues of } \left[ T \left( \frac{1}{s} \right) s\,e^{st} \right] . \] 
Error function 
The error function
\[
\mbox{erf} (z) = \frac{2}{\sqrt{\pi}}\,\int_0^z e^{-t^2}\,{\text d}t .
\]

Euler, Leonhard  Leonhard Euler (1707–1783) was a Swiss mathematician, physicist, astronomer, geographer, logician and engineer who made important and influential discoveries in many branches of mathematics, such as infinitesimal calculus and graph theory, while also making pioneering contributions to several branches such as topology and analytic number theory. He also introduced much of the modern mathematical terminology and notation, particularly for mathematical analysis, such as the notion of a mathematical function. He is also known for his work in mechanics, fluid dynamics, optics, astronomy and music theory. 
Explicit solution  If a differential equation \( F(x,y,y' ) =0 \) has a solution that is represented via a known smooth function y = ϕ(x) on some interval (𝑎,b), then this function ϕ is known as an explicit solution. 
Forcing term  The nonhomogeneous term f(x) in the differential equation \( a_n \, y^{(n)} + a_{n-1} \, y^{(n-1)} + \cdots + a_1 \, y' + a_0 \, y= f(x) \) is called the forcing term, the driving function, or the nonhomogeneous term. 
Fresnel integral 
The Fresnel integral:
\[
S(x) = \int_0^x \sin \left( t^2 \right) {\text d}t .
\]

Friedmann equations  The Friedmann equations are a set of equations in physical cosmology that govern the expansion of space in homogeneous and isotropic models of the universe within the context of general relativity. They were first derived by the Russian and Soviet physicist Alexander Friedmann (1888–1925) in 1922 from Einstein's field equations. 
Full-wave rectifier 
The full-wave rectifier of a function f(t), defined on a finite interval 0≤t≤T, is a periodic function with period T that is equal to f(t) on the interval [0,T]. 
General solution  A general solution of an nth order ordinary differential equation (ODE) is a family of solutions depending on n arbitrary constants from some domain. 
Geometric series  A geometric series is the Maclaurin series \( \displaystyle \frac{1}{1-x} = \sum_{k\ge 0} x^k . \) Its finite sum version reads \( \displaystyle \sum_{k= 0}^n x^k = \frac{1- x^{n+1}}{1-x} . \) 
Gross–Pitaevskii equation  The Gross–Pitaevskii equation (GPE, named after Eugene P. Gross and Lev Petrovich Pitaevskii) describes the ground state of a quantum system of identical bosons using the Hartree–Fock approximation and the pseudopotential interaction model. 
Half-life  The half-life of a decaying quantity is the time needed for half of it to decompose. 
Half-wave rectifier 
The half-wave rectifier of a function f(t), defined on a finite interval 0≤t≤T, is a periodic function with period 2T that coincides with f(t) on the interval [0,T] and is identically zero on the interval [T,2T]. 
Halley method  Halley's method is a root-finding algorithm for solving nonlinear equations f(x) = 0, where f is a function of one real variable with a continuous second derivative. It is named after its inventor Edmond Halley (1656–1742). \[ x_{k+1} = x_k - \frac{2\,f \left( x_k \right) f' \left( x_k \right)}{2\left( f' \left( x_k \right)\right)^2 - f \left( x_k \right) f'' \left( x_k \right) } , \qquad k=0,1,2,\ldots . \] 
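The iteration above can be sketched directly (the function name, step count, and the test problem x² − 2 = 0 are my own choices):

```python
def halley(f, df, d2f, x, steps=6):
    """Halley's iteration: cubic convergence to a simple root of f(x) = 0."""
    for _ in range(steps):
        fx, dfx = f(x), df(x)
        x = x - 2 * fx * dfx / (2 * dfx ** 2 - fx * d2f(x))
    return x

root = halley(lambda t: t * t - 2, lambda t: 2 * t, lambda t: 2.0, 1.5)
```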
Heaviside function  The Heaviside step function, or the unit step function, usually denoted by H(t): \[ H(t) = \begin{cases} 1, & \ \mbox{ for } 0 < t < \infty , \\ 1/2 , & \ \mbox{ for } t =0, \\ 0, & \ \mbox{ for } -\infty < t < 0. \end{cases} \] The derivative of the Heaviside function in weak sense is the Dirac delta function. 
Heaviside, Oliver  Oliver Heaviside (1850–1925) was an English self-taught electrical engineer, mathematician, and physicist who adapted complex numbers to the study of electrical circuits, invented an operational method for solving differential equations (related to the Laplace transform), reformulated Maxwell's field equations in terms of electric and magnetic forces and energy flux, and independently co-formulated vector analysis. 
Hénon map 
The Hénon map, sometimes called the Hénon–Pomeau attractor/map, is a discrete-time dynamical system. It is one of the most studied examples of dynamical systems that exhibit chaotic behavior. The Hénon map takes a point (x_{n}, y_{n}) in the plane and maps it to a new point
\[
\begin{cases}
x_{n+1} &= 1 - a\,x_n^2 + y_n \\
y_{n+1} &= b\,x_n .
\end{cases}
\]
The map depends on two parameters, 𝑎 and b, which for the classical Hénon map have values of 𝑎 = 1.4 and b = 0.3.
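Iterating the map is straightforward; this sketch (function name mine) uses the classical parameter values as defaults:

```python
def henon_orbit(x, y, n, a=1.4, b=0.3):
    """Iterate the Henon map n times from (x, y); returns every point."""
    points = [(x, y)]
    for _ in range(n):
        # tuple assignment: both right-hand sides use the old x
        x, y = 1 - a * x * x + y, b * x
        points.append((x, y))
    return points

orbit = henon_orbit(0.0, 0.0, 1000)
```

Starting from the origin, the first iterate is (1, 0) and the second is (−0.4, 0.3).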

Implicit function theorem  Consider a continuously differentiable function F: ℝ² → ℝ and a point (x_{0},y_{0}) ∈ ℝ² so that F(x_{0},y_{0}) = c, a constant. If \( \frac{\partial F}{\partial y} \left( x_0 , y_0 \right) \ne 0, \) then there is a neighborhood of (x_{0},y_{0}) so that whenever x is sufficiently close to x_{0}, there is a unique y so that F(x,y) = c. Moreover, this assignment makes y a continuous function of x. 
Implicit solution  A relation φ(x,y) = 0 is known as an implicit solution of the given differential equation if it defines at least one real function of the variable x on an interval (𝑎,b) such that this function is an explicit solution of the differential equation on this interval. 
Initial value problem 
A differential equation
\[
F(x,y, y' ) =0
\]
together with the initial condition \( y\left( x_0 \right) = y_0 \) is called the initial value problem.

Intermediate Value Theorem  The Intermediate value theorem states that if f is a continuous function whose domain contains the interval [𝑎, b], then it takes on any given value between f(𝑎) and f(b) at some point within the interval. 
Intermittent function 
A function f is said to be piecewise continuous or intermittent on a finite closed interval [𝑎,b] if
the interval can be divided into finitely many subintervals so that f(t) is continuous on each subinterval and
approaches a finite limit at the end points of each subinterval from the interior.
A function is said to be piecewise continuous or intermittent on the infinite interval if it is piecewise continuous on every finite compact subinterval. See section 1.6i. 
Intrinsic rate  The intrinsic rate of increase, usually denoted by r, is the rate at which a population increases in size if there are no density-dependent forces regulating the population: \( {\text d}P/{\text d}t = r\,P(t) . \) 
Jerk  Jerk or jolt is the rate at which an object's acceleration changes with respect to time. 
Lagrange remainder 
For a smooth function f(x) on an interval (𝑎,b), the Lagrange remainder is the error of approximation of the function by its Taylor polynomial \( T_N (x) = f\left( x_0 \right) + f'\left( x_0 \right) \left( x-x_0 \right) + \cdots + \frac{f^{(N)} (x_0 )}{N!} \left( x - x_0 \right)^{N} \) about a point x_{0} ∈ (𝑎,b) of degree N:
\[
R_{N+1} = f(x) - T_N (x) = \frac{f^{(N+1)} (\xi )}{(N+1)!} \left( x - x_0 \right)^{N+1}
\]
for some (unknown) ξ in the interval 𝑎 ≤ ξ ≤ b.

Laplace transform  The Laplace transform of a function f(t) defined for non-negative t is denoted as f^{L} or \( {\cal L} \left[ f (t) \right] (\lambda ) \) and is defined by the integral (if it converges for Reλ > α): \[ f^{L} (\lambda ) = {\cal L} \left[ f (t) \right] (\lambda ) = \int_0^{\infty} f(t)\,e^{-\lambda\,t}\,{\text d}t . \] The Laplace transform was invented by the English electrical engineer Oliver Heaviside (1850–1925) at the end of the 19th century. The inverse Laplace transform is defined by the Cauchy principal value integral: \[ f(t) = {\cal L}^{-1} \left[ F(\lambda ) \right] (t) = \frac{1}{2\pi{\bf j}} \,\int_{a-{\bf j}\infty}^{a+{\bf j}\infty} F(\lambda )\,e^{\lambda\,t}\,{\text d}\lambda = \lim_{T\to\infty} \, \frac{1}{2\pi{\bf j}} \,\int_{a-{\bf j}T}^{a+{\bf j}T} F(\lambda )\,e^{\lambda\,t}\,{\text d}\lambda . \] 
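As a hedged numerical sanity check of the defining integral (the truncation point T and the composite-trapezoid scheme are my own choices, adequate only for rapidly decaying integrands), one can compare with the known closed form \( {\cal L}[e^{-t}](\lambda) = 1/(\lambda + 1) \):

```python
from math import exp

def laplace_num(f, lam, T=30.0, n=300000):
    """Composite-trapezoid approximation of the Laplace transform
    integral, truncated at t = T (assumes the integrand decays fast)."""
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * exp(-lam * T))
    for k in range(1, n):
        t = k * h
        total += f(t) * exp(-lam * t)
    return total * h

value = laplace_num(lambda t: exp(-t), 2.0)   # should be near 1/3
```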
Locus  Locus is a curve or other figure formed by all the points satisfying a particular equation of the relation between coordinates, or by a point, line, or surface moving according to mathematically defined conditions. 
Maclaurin series  A Maclaurin series is a Taylor series expansion of a function about 0, \[ f(x) = \sum_{n\ge 0} \frac{f^{(n)} \left( 0 \right)}{n!}\, x^n = f(0) + f' (0) \,x + \frac{f'' (0)}{2!}\,x^2 + \cdots . \] Maclaurin series are named after the Scottish mathematician Colin Maclaurin (1698–1746). 
Mean value theorem  If f(x) is a differentiable function on the open interval (𝑎,b) and continuous on the closed interval [𝑎,b], then there is at least one point ξ in (𝑎,b) such that \( \left( b-a \right) f'(\xi ) = f(b) - f(a) . \) 
Michaelis–Menten model  In biochemistry, Michaelis–Menten model is one of the bestknown models of enzyme kinetics. It is named after German biochemist Leonor Michaelis and Canadian physician Maud Menten. The model takes the form of an equation describing the rate of enzymatic reactions. 
Neumann function  Neumann function N_{ν}(x) or the Bessel function of the second kind is a solution of the Bessel equation, which is defined by \( N_{\nu} (x) = \pi\,\frac{\cos \nu \pi \,J_{\nu} (x) - J_{-\nu} (x)}{\sin \nu\pi} . \) 
Newton's method  Newton's method, also known as the Newton–Raphson method, named after Isaac Newton and Joseph Raphson, is a root-finding algorithm which produces successively better approximations to the roots (or zeroes) of a real-valued function. The most basic version starts with a single-variable function f(x) defined for a real variable x, the function's derivative f', and an initial guess x_{0} for a root of f. If the function satisfies sufficient assumptions and the initial guess is close, then \[ x_{k+1} = x_k - \frac{f(x_k )}{f' (x_k )} , \qquad k=0,1,2,\ldots , \] until a sufficiently precise value is reached. 
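The iteration can be sketched in a few lines (the fixed step count and the test problem x² − 2 = 0 are my own choices; a production version would test a convergence criterion instead):

```python
def newton(f, df, x, steps=8):
    """Newton-Raphson iteration x_{k+1} = x_k - f(x_k) / f'(x_k)."""
    for _ in range(steps):
        x = x - f(x) / df(x)
    return x

root = newton(lambda t: t * t - 2, lambda t: 2 * t, 1.5)
```

Quadratic convergence means the number of correct digits roughly doubles per step.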
Newton's second law  Newton's second law states that the mass of the object times acceleration is equal to the net force on the object. 
Normal form 
An ordinary differential equation of first order is said to be in the normal form when its highest derivative is isolated. For example, a first order differential equation is in normal form if it can be written as
\( y' (x) = f(x,y) . \) A second order linear differential equation is said to be in normal form when it does not contain the first derivative: \( y'' + q(x)\, y = f . \) 
Nullcline  A nullcline of the first order differential equation \( y' = f(x,y) \) is a set of points in the xy plane so that f(x,y) = 0. Geometrically, these are the points where the solutions go horizontally. 
One-step numerical method 
An explicit one-step method for computation of an approximation y_{n+1} of the solution to the initial value problem y' = f(x,y), y(x_{0}) = y_{0}, on a grid of points x_{0} < x_{1} < ··· with step size h has the form
\[
y_{n+1} = y_n + h\,\Phi (x,y_n ,h), \qquad n=0,1,2,\ldots , \quad y_0 = y(x_0 ).
\]
Here Φ(·, ·, ·) is called the increment function of the one-step method. If this function also depends on y_{n+1}, then the method is called an implicit one-step method.
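A generic explicit driver makes the role of Φ concrete; choosing Φ(x, y, h) = f(x, y) recovers Euler's method (the function names here are illustrative):

```python
def one_step_solve(phi, x0, y0, h, n):
    """Generic explicit one-step method y_{n+1} = y_n + h * Phi(x_n, y_n, h)."""
    x, y = x0, y0
    for _ in range(n):
        y = y + h * phi(x, y, h)
        x = x + h
    return x, y

# Euler's method for y' = y, y(0) = 1: Phi(x, y, h) = f(x, y) = y
x1, y1 = one_step_solve(lambda x, y, h: y, 0.0, 1.0, 0.0001, 10000)
```

With h = 10⁻⁴, the first-order Euler scheme reproduces y(1) = e to about four digits.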

Open method  Open methods are iterative methods that do not necessarily bracket a root. Open methods may diverge as the computation progresses, but when they do converge, they usually do so much faster than bracketing methods. 
Order of an ODE  The order of a differential equation is the order of the highest derivative of the unknown function that appears in the differential equation. 
Ordinate  Ordinate is the second coordinate (usually vertical) of a point in a coordinate system. 
Phase line  A phase line for an autonomous differential equation \( y' = f(y) \) is a line segment with labels sink, source, or node, one for each root of f(y) = 0, i.e., each equilibrium. 
Phase portrait  A phase portrait for a differential equation \( y' = f(x,y) \) is a collection of typical trajectories that identify the behavior of solutions, including in the long run. 
Picard, Émile  Charles Émile Picard (1856–1941) was a French mathematician who made great contributions in analysis, algebraic geometry, and mechanics. 
Picard–Fuchs equation  The Picard–Fuchs equation is \( x^2 \left( 1-x \right)^2 \frac{{\text d}^2 y}{{\text d} x^2} + x \left( 1-x \right)^2 \frac{{\text d} y}{{\text d} x} + \frac{31x-4}{144} \, y = 0 . \) 
Picard iteration  The solution to the initial value problem for the first order differential equation \( y' = f(x,y) , \quad y(x_0 ) = y_0 \) can be obtained by the iterative procedure: \[ y_{n+1} (x) = y_0 + \int_{x_0}^x f(s, y_n (s))\,{\text d} s , \qquad n=0,1,2, \ldots . \] 
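For y' = y, y(0) = 1, each Picard iterate is a polynomial, so the scheme can be carried out exactly on coefficient lists; this toy representation (integrating term by term and adding back the initial value) is my own illustration:

```python
def picard_iterates(n):
    """Picard iteration for y' = y, y(0) = 1, on coefficient lists
    [c0, c1, ...]: each step integrates the previous polynomial term
    by term and prepends the initial value y(0) = 1."""
    coeffs = [1.0]                                        # y_0(x) = 1
    for _ in range(n):
        coeffs = [1.0] + [c / (k + 1) for k, c in enumerate(coeffs)]
    return coeffs

# after n steps: the degree-n Taylor polynomial of e^x
c = picard_iterates(5)
```

The iterates reproduce 1 + x + x²/2! + ··· + xⁿ/n!, partial sums of the exponential series.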
Piecewise continuous function 
A function f is said to be piecewise continuous or intermittent on a finite closed interval [𝑎,b] if
the interval can be divided into finitely many subintervals so that f(t) is continuous on each subinterval and
approaches a finite limit at the end points of each subinterval from the interior.
A function is said to be piecewise continuous or intermittent on the infinite interval if it is piecewise continuous on every finite compact subinterval. See section 1.6i. 
Raphson J.  Joseph Raphson (1648–1715) was an English mathematician known best for the Newton–Raphson method. 
Secant method  The secant method is defined by the recurrence relation \[ x_{k+1} = \frac{x_{k-1} f \left( x_{k} \right) - x_{k} f \left( x_{k-1} \right)}{f \left( x_k \right) - f \left( x_{k-1} \right)} , \qquad k=1,2,\ldots , \] starting from two initial guesses x_{0} and x_{1}. It is a root-finding algorithm used for solving nonlinear equations f(x) = 0, where f is a function of one real variable. 
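The recurrence can be sketched as follows (the step count and the guard against a vanishing secant slope once the iterates coincide are my own choices):

```python
def secant(f, x0, x1, steps=12):
    """Secant iteration for f(x) = 0 from two initial guesses; stops
    early if the secant slope vanishes (both iterates at the root)."""
    for _ in range(steps):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:
            break
        x0, x1 = x1, (x0 * f1 - x1 * f0) / (f1 - f0)
    return x1

root = secant(lambda t: t * t - 2, 1.0, 2.0)
```

Convergence is superlinear (order about 1.618), without needing any derivative of f.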
Shanks transformation  The Shanks transformation is a nonlinear series acceleration method to increase the rate of convergence of a sequence. This method is named after the American mathematician Daniel Shanks, who rediscovered this sequence transformation in 1955. It was first derived and published by R. Schmidt in 1941. 
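A short sketch of the transformation (the series chosen and the number of terms are illustrative), using the standard formula \( S(A_n) = (A_{n+1} A_{n-1} - A_n^2)/(A_{n+1} + A_{n-1} - 2 A_n) \) applied to partial sums of the alternating harmonic series:

```python
import math

def shanks(seq):
    """One Shanks transform of a sequence of partial sums:
    S(A_n) = (A_{n+1} A_{n-1} - A_n^2) / (A_{n+1} + A_{n-1} - 2 A_n)."""
    out = []
    for i in range(1, len(seq) - 1):
        num = seq[i + 1] * seq[i - 1] - seq[i] ** 2
        den = seq[i + 1] + seq[i - 1] - 2 * seq[i]
        out.append(num / den)
    return out

# Partial sums of the slowly convergent series ln 2 = 1 - 1/2 + 1/3 - ...
partial, s = [], 0.0
for n in range(1, 12):
    s += (-1) ** (n + 1) / n
    partial.append(s)

once = shanks(partial)    # one application of the transform
twice = shanks(once)      # iterating accelerates convergence further
print(abs(partial[-1] - math.log(2)), abs(twice[-1] - math.log(2)))
```

Eleven raw partial sums are still off by a few percent, while the twice-transformed sequence is accurate to several more digits.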
Singular point 
For the differential equation \( y' = f(x,y), \) a point of
the (x,y)-plane is called a singular point if one or other of the
conditions necessary for the establishment of the existence theorem ceases to
hold. In particular, if for the initial value pair \( (x_0 , y_0) \) the corresponding
solution either fails to exist or is not unique, then \( (x_0 , y_0) \) is a singular point. 

Snap  Snap (also known as jounce) is the second derivative of an object's acceleration with respect to time, that is, the fourth derivative of its position. 
Solution  A solution of an ordinary differential equation (ODE) is a relation between the variables (independent and dependent), which is free of derivatives of any order, and which satisfies the differential equation identically. 
Sturm comparison  Sturm's comparison Theorem: Let p(x) > q(x) > 0. Then between any two zeroes of a nontrivial solution of the equation \( \displaystyle \frac{{\text d}^2 y}{{\text d} x^2} + p(x)\,y(x) = 0 , \) there will be at least one zero of every nontrivial solution of \( \displaystyle \frac{{\text d}^2 y}{{\text d} x^2} + q(x)\,y(x) = 0 . \) 
Sturm separation  Sturm's separation Theorem: Let u(x) and v(x) be linearly independent solutions of the selfadjoint differential equation \( \displaystyle \frac{{\text d}}{{\text d} x} \left( p(x)\,\frac{{\text d}y}{{\text d} x} \right) + q(x)\,y(x) = 0 , \) where p(x) > 0 and p(x) and q(x) are continuous. If α and β are successive zeroes of u(x), then v(x) has exactly one zero in the interval (α, β). 
Sumudu transform  The Sumudu transform of a function f(t) was proposed by Watugala. It is an integral depending on a parameter ν: \[ S\left[ f(t) \right] (\nu ) = \frac{1}{\nu}\, \int_0^{\infty} f(t)\, e^{-t/\nu}\,{\text d}t = \int_0^{\infty} f(\nu t)\,e^{-t}\,{\text d}t . \] The Elzaki and Sumudu transforms differ by a multiple, so they share similar properties. 
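The two integral forms are related by the substitution t = νs, which can be sanity-checked numerically (a sketch: the helper names, truncation of the infinite range, and the test function f(t) = t, whose Sumudu transform is ν, are illustrative assumptions):

```python
import math

def trapz(g, a, b, n=100000):
    """Composite trapezoidal rule for the integral of g over [a, b]."""
    h = (b - a) / n
    s = 0.5 * (g(a) + g(b))
    for i in range(1, n):
        s += g(a + i * h)
    return s * h

def sumudu_form1(f, nu):
    # (1/nu) * \int_0^inf f(t) e^{-t/nu} dt, truncated where the tail is negligible
    return trapz(lambda t: f(t) * math.exp(-t / nu), 0.0, 40.0 * nu) / nu

def sumudu_form2(f, nu):
    # \int_0^inf f(nu t) e^{-t} dt
    return trapz(lambda t: f(nu * t) * math.exp(-t), 0.0, 40.0)

# For f(t) = t, both forms should return nu itself.
print(sumudu_form1(lambda t: t, 0.5), sumudu_form2(lambda t: t, 0.5))
```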
Tautochrone  The problem of finding the curve down which a bead placed anywhere will fall to the bottom in the same amount of time is called the tautochrone problem. It was first solved by Christiaan Huygens in 1659, who proved that such a curve must be a cycloid. 
Tumor model 
Karl Ludwig von Bertalanffy
proposed a model of
growth of tumors, based on the relationship between body size of the organism and the metabolic rate. His model is
\[
\frac{{\text d}V}{{\text d}t} = a\,V^{2/3} - b\,V , \qquad V(0) = V_0 ,
\]
where 𝑎 and b are positive constants.
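The substitution \( u = V^{1/3} \) turns this equation into the linear ODE \( u' = a/3 - (b/3)\,u \), giving a closed-form solution that can be checked against a direct numerical integration (a sketch; the parameter values and step count below are illustrative):

```python
import math

def bertalanffy_exact(t, a, b, v0):
    """Closed-form solution of dV/dt = a V^(2/3) - b V, obtained via
    u = V^(1/3), which satisfies the linear ODE u' = a/3 - (b/3) u."""
    u0 = v0 ** (1.0 / 3.0)
    u = a / b + (u0 - a / b) * math.exp(-b * t / 3.0)
    return u ** 3

def bertalanffy_euler(t_end, a, b, v0, n=50000):
    """Explicit-Euler integration of the same equation, for comparison."""
    h = t_end / n
    v = v0
    for _ in range(n):
        v += h * (a * v ** (2.0 / 3.0) - b * v)
    return v

a, b, v0 = 1.0, 0.5, 0.1
exact = bertalanffy_exact(10.0, a, b, v0)
approx = bertalanffy_euler(10.0, a, b, v0)
print(exact, approx)
```

Both approaches show the tumor volume approaching the limiting value \( (a/b)^3 \) as t grows.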

Taylor polynomials  For an N-times differentiable function f(x) on some open interval (𝑎,b), the Nth degree polynomial \( \displaystyle T_N (x) = \sum_{n= 0}^N c_n \left( x - x_0 \right)^n , \) where \( \displaystyle c_n = \frac{f^{(n)} \left( x_0 \right)}{n!} \) and x_{0} ∈ (𝑎,b), is called the Taylor polynomial for the function f(x) centered at x_{0}. If f is (N+1)-times differentiable, the error of approximating the function by its Taylor polynomial is \[ f(x) - T_N (x) = \frac{f^{(N+1)} (\xi )}{(N+1)!} \left( x - x_0 \right)^{N+1} = \frac{1}{N!} \int_{x_0}^x \left( x-t \right)^N f^{(N+1)} (t)\,{\text d}t \] for some (unknown) ξ between x_{0} and x. 
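As a small sketch of the definition and the remainder bound (function names and the choice f = exp are illustrative; every derivative of exp at x₀ equals e^{x₀}):

```python
import math

def taylor_poly_exp(x, x0, n_terms):
    """Taylor polynomial T_N(x) = sum c_n (x - x0)^n with
    c_n = f^(n)(x0)/n!, specialized to f = exp."""
    c = math.exp(x0)                 # every derivative of exp at x0
    total, power, fact = 0.0, 1.0, 1.0
    for n in range(n_terms + 1):
        if n > 0:
            power *= (x - x0)        # (x - x0)^n built incrementally
            fact *= n                # n! built incrementally
        total += c * power / fact
    return total

# T_5 for e^x centered at 0, evaluated at x = 1
t5 = taylor_poly_exp(1.0, 0.0, 5)
print(abs(t5 - math.e))
```

The observed error is consistent with the Lagrange remainder: it stays below \( e/6! \approx 0.0038 \), the bound given by \( f^{(6)}(\xi)/6! \) with \( 0 \le \xi \le 1 \).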
Taylor series  The Taylor series of a function is an infinite sum of terms that are expressed in terms of the function's derivatives at a single point: \( \displaystyle f(x) = \sum_{n\ge 0} c_n \left( x - x_0 \right)^n , \) where \( \displaystyle c_n = \frac{f^{(n)} \left( x_0 \right)}{n!} . \) For most common functions, the function and the sum of its Taylor series are equal near this point. This series was formulated by the Scottish mathematician and astronomer James Gregory in his book Geometriae Pars Universalis (1668) and formally introduced by the English mathematician Brook Taylor in 1715. However, Johann Bernoulli claimed his priority in derivation of Taylor's series. 
Volterra equation  The Volterra integral equations are a special type of integral equation in which the upper limit of integration is a variable. 
Volterra, Vito  Vito Volterra (1860–1940) was an Italian mathematician and physicist, known for his contributions to mathematical biology and integral equations, and one of the founders of functional analysis. 
Weak solution  A weak solution (also called a generalized solution) to an ordinary or partial differential equation is a function for which the derivatives may not all exist but which is nonetheless deemed to satisfy the equation in some precisely defined sense. There are many different definitions of weak solution, appropriate for different classes of equations. One of the most important is based on the notion of the weak formulation, and the solutions to it are called weak solutions. 
Wilkinson's polynomial  Wilkinson's polynomial is a specific polynomial which was used by James H. Wilkinson (1919–1986) in 1963 to illustrate a difficulty that arises when finding the roots of a polynomial. 