Linear Systems of ODEs

The tutorial accompanies the textbook Applied Differential Equations. The Primary Course by Vladimir Dobrushkin, CRC Press, 2015; http://www.crcpress.com/product/isbn/9781439851043

We consider a system of n first order differential equations in normal form (when the derivatives are isolated):

\[ \begin{cases} {\text d}x_1 /{\text d}t &= g_1 (t, x_1 , x_2 , \ldots , x_n ), \\ {\text d}x_2 /{\text d}t &= g_2 (t, x_1 , x_2 , \ldots , x_n ), \\ \vdots & \qquad \vdots \\ {\text d}x_n /{\text d}t &= g_n (t, x_1 , x_2 , \ldots , x_n ), \end{cases} \]
where \( g_k (t, x_1 , x_2 , \ldots , x_n ) , \quad k=1,2,\ldots , n, \quad \) is a given function of \( n+1 \) variables. Instead of \( {\text d}x /{\text d}t \) we will use either the shorter notation \( x' \) or the more customary notation \( \dot{x} \) to denote the derivative of \( x(t) \) with respect to the variable t, associated with time. If the slope functions \( g_k (t, x_1 , x_2 , \ldots , x_n ) , \quad k=1,2,\ldots , n, \quad \) do not depend on the time variable t, the system of differential equations is referred to as autonomous:

\[ \begin{cases} {\text d}x_1 /{\text d}t &= g_1 (x_1 , x_2 , \ldots , x_n ), \\ {\text d}x_2 /{\text d}t &= g_2 (x_1 , x_2 , \ldots , x_n ), \\ \vdots & \qquad \vdots \\ {\text d}x_n /{\text d}t &= g_n (x_1 , x_2 , \ldots , x_n ), \end{cases} \]
Differential equations are usually subject to the initial conditions:
\[ x_1 (t_0 ) = x_{10} , \quad x_2 (t_0 ) = x_{20} , \quad \ldots , \quad x_n (t_0 ) = x_{n0} , \]

where \( t_0 \) is a specified value of t and \( x_{10} , x_{20} , \ldots , x_{n0} \) are prescribed constants. The problem of finding a solution to a system of differential equations that satisfies the given initial conditions is called an initial value problem.

When the functions \( g_k (t, x_1 , x_2 , \ldots , x_n ) , \quad k=1,2,\ldots , n, \quad \) are linear functions with respect to n dependent variables \( x_1 (t), x_2 (t), \ldots , x_n (t), \) we obtain the general system of first order linear differential equations in normal form:

\[ \begin{cases} \dot{x}_1 &= p_{11} (t)\,x_1 + p_{12} (t)\, x_2 + \cdots + p_{1n} (t)\, x_n + f_1 (t), \\ \dot{x}_2 &= p_{21} (t)\,x_1 + p_{22} (t)\, x_2 + \cdots + p_{2n} (t)\, x_n +f_2 (t), \\ \vdots & \qquad \vdots \\ \dot{x}_n &= p_{n1} (t)\,x_1 + p_{n2} (t)\, x_2 + \cdots + p_{nn} (t)\, x_n +f_n (t). \end{cases} \]

In this system of differential equations, the \( n^2 \) coefficients \( p_{11}, p_{12}, \ldots , p_{nn} \) and the n functions \( f_1 (t) , f_2 (t) , \ldots , f_n (t) \) are assumed to be known. If the coefficients \( p_{ij} \) are constants, we have a constant coefficient system of equations. Otherwise, we have a linear system of differential equations with variable coefficients. The system is said to be homogeneous or undriven if \( f_1 (t) \equiv f_2 (t) \equiv \cdots \equiv f_n (t) \equiv 0. \)

The linear system of differential equations can be written in compact vector form:

\[ \dot{\bf x} = {\bf P} (t)\, {\bf x} + {\bf f}(t) , \]
where \( {\bf P} (t) \) denotes the following square matrix:
\[ {\bf P} (t) = \begin{bmatrix} p_{11} (t) & p_{12} (t) & \cdots & p_{1n} (t) \\ p_{21} (t) & p_{22} (t) & \cdots & p_{2n} (t) \\ \vdots & \vdots & \vdots & \vdots \\ p_{n1} (t) & p_{n2} (t) & \cdots & p_{nn} (t) \\ \end{bmatrix} \]
Here \( {\bf x} (t) \) and \( {\bf f} (t) \) are n-dimensional vector-functions that are assumed to be columns.
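For illustration, consider the (arbitrarily chosen) planar system \( \dot{x}_1 = x_1 - t\,x_2 + \sin t , \quad \dot{x}_2 = 2\,x_1 + x_2 ; \) in vector form it reads \( \dot{\bf x} = {\bf P} (t)\, {\bf x} + {\bf f}(t) \) with

\[ {\bf P} (t) = \begin{bmatrix} 1 & -t \\ 2 & 1 \end{bmatrix} , \qquad {\bf f} (t) = \begin{bmatrix} \sin t \\ 0 \end{bmatrix} , \qquad {\bf x} (t) = \begin{bmatrix} x_1 (t) \\ x_2 (t) \end{bmatrix} . \]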

 

Fundamental matrices

 

Reduction to a single equation and vice versa

 

Constant coefficient systems of ODEs

 

Planar phase portraits

Consider a system of linear differential equations \( \dot{\bf x} = {\bf A}\,{\bf x}. \) Its phase portrait is a geometric representation of the trajectories of a dynamical system in the phase plane. A sketch of a particular solution in the phase plane is called the trajectory of the solution. Solutions are plotted as parametric curves (with t as the parameter) on the Cartesian plane, tracing the path of each particular solution \( {\bf x} = ( x_1 (t) , x_2 (t) ), \ -\infty < t < \infty . \) Similar to a direction field for a single differential equation, a phase portrait is a graphical tool to visualize how the solutions of a given system of differential equations behave in the long run. Each set of initial conditions is represented by a different curve, or point. A phase portrait consists of a plot of typical trajectories in the state space, and it reveals information such as whether an attractor, a repeller, or a limit cycle is present for the chosen parameter values.

Recall that an equilibrium solution of the autonomous system \( \dot{\bf x} = {\bf f} ({\bf x}) \) is a point \( {\bf x}^{\ast} = ( x_1^{\ast} , x_2^{\ast} ) \) where the derivative of \( {\bf x}(t) \) is zero. An equilibrium solution is a constant solution of the system, and is usually called a critical point. For a linear system \( \dot{\bf x} = {\bf A}\,{\bf x}, \) an equilibrium solution occurs at each solution of the system (of homogeneous algebraic equations) \( {\bf A}\,{\bf x} = {\bf 0} . \) As we have seen, such a system has exactly one solution, located at the origin, if \( \det{\bf A} \ne 0 .\) If \( \det{\bf A} = 0 , \) then there are infinitely many solutions. As a rule, we will only consider systems of linear differential equations whose coefficient matrix A has nonzero determinant.
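The dichotomy between \( \det{\bf A} \ne 0 \) and \( \det{\bf A} = 0 \) is easy to check numerically. The following is a small Python sketch with numpy (the matrices are arbitrary examples, not taken from the text):

```python
import numpy as np

A = np.array([[1.0, 2.0], [2.0, 1.0]])   # det A = -3, nonsingular

# Nonzero determinant: A x = 0 has only the trivial solution, so the
# origin is the unique critical point of x' = A x.
assert np.linalg.det(A) != 0
x_eq = np.linalg.solve(A, np.zeros(2))   # the origin

B = np.array([[1.0, 2.0], [2.0, 4.0]])   # det B = 0, singular
# Every multiple of (2, -1) solves B x = 0: infinitely many equilibria.
null_vec = np.array([2.0, -1.0])
```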

We are going to classify the critical points of various systems of first order linear differential equations by their stability. In addition, due to the truly two-dimensional nature of the parametric curves, we will also classify the type of those critical points by their shapes (or, rather, by the shape formed by the trajectories about each critical point). Their classification is based on the eigenvalues of the coefficient matrix. Therefore, we consider several cases.

Case 1: Distinct real eigenvalues of the same sign. Then the general solution of the linear system \( \dot{\bf x} = {\bf A}\,{\bf x} \) is

\[ {\bf x} (t) = c_1 \,{\bf \xi} \, e^{\lambda_1 t} + c_2 \,{\bf \eta} \, e^{\lambda_2 t} , \]

where \( \lambda_1 \) and \( \lambda_2 \) are distinct real eigenvalues, \( {\bf \xi} \) and \( {\bf \eta} \) are corresponding eigenvectors, and \( c_1 , c_2 \) are arbitrary real constants.

When \( \lambda_1 \) and \( \lambda_2 \) are both positive or both negative, the phase portrait shows trajectories either moving away from the critical point toward infinity (when both eigenvalues are positive), or moving directly toward, and converging to, the critical point (when both eigenvalues are negative). The trajectories along the eigenvectors move in straight lines. The remaining trajectories move, initially near the critical point, roughly in the direction of the eigenvector of the eigenvalue with the smaller absolute value; farther away, they bend toward the direction of the eigenvector of the eigenvalue with the larger absolute value. This type of critical point is called a node. It is asymptotically stable if both eigenvalues are negative, unstable if both are positive.

Stability: It is unstable if both eigenvalues are positive; asymptotically stable if they are both negative.

Case 2: Distinct real eigenvalues of opposite signs. In this type of phase portrait, the trajectories given by the eigenvectors of the negative eigenvalue start at infinity, move toward, and eventually converge to the critical point. The trajectories that represent the eigenvectors of the positive eigenvalue move in exactly the opposite way: they start at the critical point and then diverge to infinity. Every other trajectory starts at infinity, moves toward but never converges to the critical point, and then changes direction and moves back out to infinity, all the while roughly following the two sets of eigenvectors. This type of critical point is called a saddle point. It is always unstable.

Stability: It is always unstable.

Case 3: Repeated real eigenvalue. Then we have two subcases: the eigenvalue is either non-defective or defective. In the former case, there are two linearly independent eigenvectors \( {\bf \xi} \) and \( {\bf \eta} .\) Then the general solution is

\[ {\bf x} (t) = c_1 \,{\bf \xi} \, e^{\lambda\, t} + c_2 \,{\bf \eta} \, e^{\lambda\, t} , \]

where \( \lambda \) is the repeated eigenvalue and \( c_1 , c_2 \) are arbitrary real constants.

Every nonzero solution traces a straight-line trajectory, in the direction given by the vector \( c_1 \,{\bf \xi} + c_2 \,{\bf \eta} .\) The phase portrait thus has a distinct star-burst shape. The trajectories either move directly away from the critical point to infinity (when \( \lambda >0 \)), or move directly toward, and converge to, the critical point (when \( \lambda <0 \)). This type of critical point is called a proper node (or a star point). It is asymptotically stable if \( \lambda <0 ,\) unstable if \( \lambda >0 .\)

Stability: It is unstable if the eigenvalue is positive; asymptotically stable if the eigenvalue is negative.

Example. For \( 2 \times 2 \) systems of linear differential equations, this occurs if and only if the coefficient matrix A is a constant multiple of the identity matrix:

\[ \alpha \, \begin{bmatrix} 1&0 \\ 0&1 \end{bmatrix} = \begin{bmatrix} \alpha &0 \\ 0&\alpha \end{bmatrix} \quad \mbox{ where } \alpha \quad \mbox{is an arbitrary nonzero constant} . \]

 

When there is only one linearly independent eigenvector \( {\bf \xi} , \) the eigenvalue λ is defective, and the general solution is

\[ {\bf x} (t) = c_1 \,{\bf \xi} \, e^{\lambda\, t} + c_2 \,e^{\lambda\, t} \left( t\,{\bf \xi} + {\bf \eta} \right) , \]

where \( {\bf \eta} \) is the so-called generalized eigenvector. The phase portrait shares characteristics with that of a node. With only one eigenvector, it is a degenerate-looking node that is a cross between a node and a spiral point (see Case 4 below). The trajectories either all diverge away from the critical point to infinity (when \( \lambda >0 \)), or all converge to the critical point (when \( \lambda <0 \)). This type of critical point is called an improper node. It is asymptotically stable if \( \lambda <0 ,\) unstable if \( \lambda >0 .\)

Case 4: Complex conjugate eigenvalues. When the real part of λ is zero, the trajectories neither converge to the critical point nor move off to infinity. Rather, they stay in constant, elliptical (or, rarely, circular) orbits. This type of critical point is called a center. It has a unique stability classification shared by no other: stable (or neutrally stable).

When the real part of λ is nonzero, the trajectories still retain the elliptical traces as in the previous case. However, with each revolution, their distances from the critical point grow or decay exponentially according to the term \( e^{\Re\lambda\,t} , \) where \( \Re\lambda \) is the real part of the complex λ. Therefore, the phase portrait shows trajectories that spiral away from the critical point to infinity (when \( \Re\lambda >0 \)), or trajectories that spiral toward, and converge to, the critical point (when \( \Re\lambda <0 \)). This type of critical point is called a spiral point. It is asymptotically stable if \( \Re\lambda <0 ,\) unstable if \( \Re\lambda >0 . \)
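The case analysis above can be condensed into a short routine. The sketch below is plain Python with numpy (not Sage, and the function name is my own); it does not distinguish proper from improper nodes, since that requires counting eigenvectors, not just eigenvalues:

```python
import numpy as np

def classify_critical_point(A, tol=1e-10):
    """Classify the origin of x' = A x (2x2 matrix A with det A != 0)."""
    lam = np.linalg.eigvals(np.asarray(A, dtype=float))
    re = np.sort(lam.real)
    if np.max(np.abs(lam.imag)) > tol:            # complex conjugate pair
        if abs(re[0]) <= tol:
            return "center (stable)"
        kind = "asymptotically stable" if re[0] < 0 else "unstable"
        return "spiral point (%s)" % kind
    l1, l2 = re                                   # real eigenvalues, l1 <= l2
    if l1 * l2 < 0:
        return "saddle point (unstable)"
    kind = "asymptotically stable" if l2 < 0 else "unstable"
    label = "repeated-eigenvalue node" if abs(l1 - l2) <= tol else "node"
    return "%s (%s)" % (label, kind)
```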

Example. Consider a system of ordinary differential equations

\[ \frac{{\text d}}{{\text d} t} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 1&2 \\ 2&1 \end{bmatrix} \, \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} . \]

The coefficient matrix \( {\bf A} = \begin{bmatrix} 1&2 \\ 2&1 \end{bmatrix} \) has two distinct real eigenvalues \( \lambda_1 =3 \) and \( \lambda_2 =-1 . \) Therefore, the critical point, which is the origin, is a saddle point, unstable. We plot the corresponding phase portrait using the following code:

Code:
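The tutorial's plotting code is omitted here. As a stand-in, the following plain-Python sketch (numpy only; the function name and step counts are my own choices) computes a few trajectories of this saddle with classical Runge-Kutta; passing each returned array's columns to matplotlib's plt.plot would draw the portrait.

```python
import numpy as np

def trajectories(A, starts, t_final=2.0, n_steps=400):
    """Integrate x' = A x from several starting points with classical RK4.

    Returns one (n_steps + 1, 2) array per start; plot the two columns of
    each array against one another to obtain the phase portrait.
    """
    A = np.asarray(A, dtype=float)
    h = t_final / n_steps
    paths = []
    for x0 in starts:
        x = np.asarray(x0, dtype=float)
        path = [x.copy()]
        for _ in range(n_steps):
            k1 = A @ x
            k2 = A @ (x + 0.5 * h * k1)
            k3 = A @ (x + 0.5 * h * k2)
            k4 = A @ (x + h * k3)
            x = x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
            path.append(x.copy())
        paths.append(np.array(path))
    return paths

# The saddle above: (1, 1) is the eigenvector for lambda = 3 (growing),
# (1, -1) the eigenvector for lambda = -1 (decaying).
A = [[1, 2], [2, 1]]
paths = trajectories(A, [(0.1, 0.1), (0.1, -0.1), (0.0, 0.5), (0.0, -0.5)])
```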

Example. Consider a system of ordinary differential equations

\[ \frac{{\text d}}{{\text d} t} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 1&-1 \\ 2&4 \end{bmatrix} \, \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} . \]

The coefficient matrix \( {\bf A} = \begin{bmatrix} 1&-1 \\ 2&4 \end{bmatrix} \) has two distinct real eigenvalues \( \lambda_1 =3 \) and \( \lambda_2 =2 . \) Therefore, the critical point, which is the origin, is a node point, unstable. We plot the corresponding phase portrait using the following code:

Code:

Example. Consider a system of ordinary differential equations

\[ \frac{{\text d}}{{\text d} t} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 1&-4 \\ 2&-5 \end{bmatrix} \, \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} . \]

The coefficient matrix \( {\bf A} = \begin{bmatrix} 1&-4 \\ 2&-5 \end{bmatrix} \) has two distinct real eigenvalues \( \lambda_1 =-3 \) and \( \lambda_2 =-1 . \) Therefore, the critical point, which is the origin, is a nodal sink, asymptotically stable. We plot the corresponding phase portrait using the following code:

Code:

Example. Consider a system of ordinary differential equations

\[ \frac{{\text d}}{{\text d} t} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} -5&3 \\ -3&1 \end{bmatrix} \, \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} . \]

The coefficient matrix \( {\bf A} = \begin{bmatrix} -5&3\\ -3&1 \end{bmatrix} \) has a double real eigenvalue \( \lambda =-2 . \) Therefore, the critical point, which is the origin, is an improper node, asymptotically stable. We plot the corresponding phase portrait using the following code:

Code:

Example. Consider a system of ordinary differential equations

\[ \frac{{\text d}}{{\text d} t} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 1&-4 \\ 1&5 \end{bmatrix} \, \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} . \]

The coefficient matrix \( {\bf A} = \begin{bmatrix} 1&-4 \\ 1&5 \end{bmatrix} \) has a double real eigenvalue \( \lambda =3 . \) Therefore, the critical point, which is the origin, is an improper node point, unstable. We plot the corresponding phase portrait using the following code:

Code:

Example. Consider a system of ordinary differential equations

\[ \frac{{\text d}}{{\text d} t} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} -4&5 \\ -2&2 \end{bmatrix} \, \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} . \]

The coefficient matrix \( {\bf A} = \begin{bmatrix} -4&5 \\ -2&2 \end{bmatrix} \) has two complex conjugate eigenvalues \( \lambda_{1,2} = -1 \pm {\bf j} . \) Therefore, the critical point, which is the origin, is a spiral point, asymptotically stable. We plot the corresponding phase portrait using the following code:

Code:

Example. Consider a system of ordinary differential equations

\[ \frac{{\text d}}{{\text d} t} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 2&-3 \\ 3&2 \end{bmatrix} \, \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} . \]

The coefficient matrix \( {\bf A} = \begin{bmatrix} 2&-3 \\ 3&2 \end{bmatrix} \) has two complex conjugate eigenvalues \( \lambda_{1,2} = 2\pm 3{\bf j} . \) Therefore, the critical point, which is the origin, is a spiral point, unstable. We plot the corresponding phase portrait using the following code:

Code:

Example. Consider a system of ordinary differential equations

\[ \frac{{\text d}}{{\text d} t} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} -2&5 \\ -4&2 \end{bmatrix} \, \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} . \]

The coefficient matrix \( {\bf A} = \begin{bmatrix} -2&5 \\ -4&2 \end{bmatrix} \) has two purely imaginary complex conjugate eigenvalues \( \lambda_{1,2} = \pm 4{\bf j} . \) Therefore, the critical point, which is the origin, is a center, stable. We plot the corresponding phase portrait using the following code:

Code:

 

Variation of parameters

Suppose that we know a fundamental matrix \( {\bf X} (t) \) of the homogeneous part of the vector system of linear differential equations:

\[ \dot{\bf x} = {\bf P} (t)\, {\bf x} + {\bf f}(t) , \]
Here \( {\bf P} (t) \) is an \( n\times n \) matrix with continuous entries, \( {\bf x} (t) \) is an \( n \)-column vector of unknown functions to be determined, and \( {\bf f} (t) \) is a given driving column vector.

The variation of parameters method suggests representing a particular solution of the given nonhomogeneous system of differential equations in the form
\[ {\bf x}_p (t) = {\bf X} (t)\, {\bf u}(t) . \]
To determine the unknown column-vector \( {\bf u} (t), \) we substitute this form of solution into the driven vector equation to obtain
\[ \dot{\bf x}_p (t) = \dot{\bf X} (t)\,{\bf u}(t) + {\bf X} (t)\, \dot{\bf u}(t) = {\bf P} (t)\,{\bf X} (t)\, {\bf u}(t) + {\bf f} (t). \]
Since \( \dot{\bf X} (t) = {\bf P} (t)\, {\bf X}(t), \) we have
\[ {\bf X} (t)\, \dot{\bf u}(t) = {\bf f} (t) \qquad\mbox{or} \qquad \dot{\bf u}(t) = {\bf X}^{-1} (t)\, {\bf f} (t) . \]
Integration yields
\[ {\bf u} (t) = \int {\bf X}^{-1} (t)\,{\bf f}(t) \, {\text d}t + {\bf c} , \]
where \( {\bf c} \) is a column vector of arbitrary constants of integration. Then the general solution of the given driven system of differential equations becomes
\[ {\bf x} (t) = {\bf X} (t)\,\int {\bf X}^{-1} (t)\,{\bf f}(t) \, {\text d}t + {\bf X} (t)\,{\bf c} . \]

Here the term \( {\bf x}_h (t) = {\bf X} (t)\,{\bf c} \) is the general solution of the corresponding homogeneous vector equation, \( \dot{\bf x} (t) = {\bf P} (t)\,{\bf x}, \) and \( {\bf x}_p (t) = {\bf X} (t)\,\int {\bf X}^{-1} (t)\,{\bf f}(t) \, {\text d}t \) is a particular solution of the nonhomogeneous vector equation \( \dot{\bf x} (t) = {\bf P} (t)\,{\bf x} + {\bf f} (t). \)

Example 2.2.1:

 

When the matrix \( {\bf P} (t) \) does not depend on the time variable t, a fundamental matrix can be constructed explicitly. Now we consider the nonhomogeneous system of differential equations with constant coefficients:

\[ \dot{\bf y} = {\bf A} \, {\bf y} + {\bf f}(t) , \]

where \( {\bf A} \) is a constant \( n \times n \) square matrix.

The fundamental matrix for the corresponding homogeneous vector equation
\[ \dot{\bf y} = {\bf A} \, {\bf y} \]
is \( e^{{\bf A}\,t} . \) Then the solution to the initial value problem for the driven vector equation
\[ \dot{\bf y} = {\bf A} \, {\bf y} + {\bf f}(t) , \qquad {\bf y} (t_0 ) = {\bf y}_0 , \]
can be expressed as \[ {\bf y}(t) = e^{{\bf A} \,(t-t_0 )} \, {\bf y}_0 + \int_{t_0}^t e^{{\bf A} \,(t-s )} \, {\bf f}(s)\,{\text d} s = e^{{\bf A} \,(t-t_0 )} \, {\bf y}_0 + e^{{\bf A} \,t} \,\int_{t_0}^t e^{-{\bf A} \,s} \, {\bf f}(s)\,{\text d} s . \]
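As a numerical sanity check of this formula, here is a plain-Python sketch with numpy (not part of the tutorial). The matrix is chosen so that \( {\bf A}^2 = -{\bf I} \), which makes \( e^{{\bf A}t} \) an explicit rotation matrix and keeps the sketch dependency-free:

```python
import numpy as np

# A satisfies A @ A = -I, so e^{A t} is the plane rotation
# [[cos t, sin t], [-sin t, cos t]].
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
b = np.array([0.0, 1.0])          # constant forcing f(t) = b
y0 = np.array([2.0, 0.0])
t = 1.5

def expA(tau):
    c, s = np.cos(tau), np.sin(tau)
    return np.array([[c, s], [-s, c]])

# y(t) = e^{A(t - t0)} y0 + int_{t0}^{t} e^{A(t - s)} f(s) ds, with t0 = 0,
# the integral evaluated by the composite trapezoid rule.
s = np.linspace(0.0, t, 2001)
vals = np.array([expA(t - si) @ b for si in s])
integral = 0.5 * np.sum((vals[1:] + vals[:-1]) * np.diff(s)[:, None], axis=0)
y = expA(t) @ y0 + integral

# Closed-form check: the steady state y* solves A y* = -b, and the exact
# solution is y(t) = e^{A t} (y0 - y*) + y*.
y_star = np.array([1.0, 0.0])
y_exact = expA(t) @ (y0 - y_star) + y_star
```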

Example 2.2.1:

Method of undetermined coefficients

 

Laplace transform

 

Second order linear differential equations

 

 

Solving Algebraic Equations

Computing the solution of some given equation is one of the fundamental problems of numerical analysis. If a real-valued function is differentiable and the derivative is known, then Newton's method may be used to find an approximation to its roots. In Newton's method, an initial guess x_0 for a root of the differentiable function f: R -> R is made. The consecutive iterative steps are defined by

\[ x_{k+1} = x_k - \frac{f(x_k )}{f' (x_k )} , \qquad k=0,1,2,\ldots \]

The function newton_method is used to generate a list of the iteration points. Sage contains a preparser that makes it possible to use certain mathematical constructs such as
f(x) = f
that would be syntactically invalid in standard Python. In the program, the last iteration value is printed and the iteration steps are tabulated. The accuracy goal 2h for the root is reached when

\[ f(x_n -h)\, f(x_n +h) < 0 . \]

In order to avoid an infinite loop, the maximum number of iterations is limited by the parameter maxn.

sage: def newton_method(f, c, maxn, h):
          f(x) = f
          iterates = [c]
          j = 1
          while True:
              c = c - f(c)/derivative(f(x))(x=c)
              iterates.append(c)
              if f(c-h)*f(c+h) < 0 or j == maxn:
                  break
              j += 1
          return c, iterates
sage: f(x) = x^2 - 3
sage: h = 10^-5
sage: initial = 2.0
sage: maxn = 10
sage: z, iterates = newton_method(f, initial, maxn, h/2.0)
sage: print "Root =", z

 

Riemann and trapezoid sums for integrals

For the numerical approximation of \(\int_a^b f(x)\, dx\), where \(f\) is a piecewise defined function, Sage can

  • compute (for plotting purposes) the piecewise linear function defined by the trapezoid rule for numerical integration, based on a subdivision into \(N\) subintervals,
  • compute the approximation given by the trapezoid rule,
  • compute (for plotting purposes) the piecewise constant function defined by the Riemann sums (left-hand, right-hand, or midpoint) in numerical integration, based on a subdivision into \(N\) subintervals,
  • compute the approximation given by the Riemann sum.
sage: f1(x) = x^2
sage: f2(x) = 5-x^2
sage: f = Piecewise([[(0,1),f1],[(1,2),f2]])
sage: f.trapezoid(4)
Piecewise defined function with 4 parts, [[(0, 1/2), 1/2*x],
[(1/2, 1), 9/2*x - 2], [(1, 3/2), 1/2*x + 2],
[(3/2, 2), -7/2*x + 8]]
sage: f.riemann_sum_integral_approximation(6,mode="right")
19/6
sage: f.integral()
Piecewise defined function with 2 parts,
[[(0, 1), x |--> 1/3*x^3], [(1, 2), x |--> -1/3*x^3 + 5*x - 13/3]]
sage: f.integral(definite=True)
3
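The same approximations can be reproduced outside Sage; below is a numpy sketch (the helper names are my own) for the piecewise function used above, whose exact integral is 3:

```python
import numpy as np

# The same piecewise function: x^2 on (0, 1) and 5 - x^2 on (1, 2).
def f(x):
    x = np.asarray(x, dtype=float)
    return np.where(x < 1.0, x**2, 5.0 - x**2)

def riemann_sum(f, a, b, n, mode="left"):
    """Left-, right-, or midpoint Riemann sum on n equal subintervals."""
    h = (b - a) / n
    if mode == "left":
        pts = a + h * np.arange(n)
    elif mode == "right":
        pts = a + h * np.arange(1, n + 1)
    else:  # midpoint
        pts = a + h * (np.arange(n) + 0.5)
    return h * np.sum(f(pts))

def trapezoid_sum(f, a, b, n):
    """Composite trapezoid rule on n equal subintervals."""
    h = (b - a) / n
    vals = f(a + h * np.arange(n + 1))
    return h * (0.5 * vals[0] + np.sum(vals[1:-1]) + 0.5 * vals[-1])
```

With a fine subdivision both rules approach the exact value 3.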

Euler Methods

Differentiation:

sage: var('x k w')
(x, k, w)
sage: f = x^3 * e^(k*x) * sin(w*x); f
x^3*e^(k*x)*sin(w*x)
sage: f.diff(x)
w*x^3*cos(w*x)*e^(k*x) + k*x^3*e^(k*x)*sin(w*x) + 3*x^2*e^(k*x)*sin(w*x)
sage: latex(f.diff(x))
w x^{3} \cos\left(w x\right) e^{\left(k x\right)} + k x^{3} e^{\left(k x\right)} \sin\left(w x\right) + 3 \, x^{2} e^{\left(k x\right)} \sin\left(w x\right)

If you type view(f.diff(x)) another window will open up displaying the compiled output. In the notebook, you can enter

var('x k w')
f = x^3 * e^(k*x) * sin(w*x)
show(f)
show(f.diff(x))

into a cell and press shift-enter for a similar result. You can also differentiate and integrate using the commands

R = PolynomialRing(QQ,"x")
x = R.gen()
p = x^2 + 1
show(p.derivative())
show(p.integral())

in a notebook cell, or

sage: R = PolynomialRing(QQ,"x")
sage: x = R.gen()
sage: p = x^2 + 1
sage: p.derivative()
2*x
sage: p.integral()
1/3*x^3 + x

on the command line. At this point you can also type view(p.derivative()) or view(p.integral()) to open a new window with output typeset by LaTeX.

Polynomial Approximations

You can find critical points of a piecewise defined function:

sage: x = PolynomialRing(RationalField(), 'x').gen()
sage: f1 = x^0
sage: f2 = 1-x
sage: f3 = 2*x
sage: f4 = 10*x-x^2
sage: f = Piecewise([[(0,1),f1],[(1,2),f2],[(2,3),f3],[(3,10),f4]])
sage: f.critical_points()
[5.0]

Runge--Kutta Methods

Taylor series:

sage: var('f0 k x')
(f0, k, x)
sage: g = f0/sinh(k*x)^4
sage: g.taylor(x, 0, 3)
-62/945*f0*k^2*x^2 + 11/45*f0 - 2/3*f0/(k^2*x^2) + f0/(k^4*x^4)
sage: maxima(g).powerseries('_SAGE_VAR_x',0)    # TODO: write this without maxima
16*_SAGE_VAR_f0*('sum((2^(2*i1-1)-1)*bern(2*i1)*_SAGE_VAR_k^(2*i1-1)*_SAGE_VAR_x^(2*i1-1)/factorial(2*i1),i1,0,inf))^4

Of course, you can view the LaTeX-ed version of this using view(g.powerseries('x',0)).

The Maclaurin and power series of \(\log({\frac{\sin(x)}{x}})\):

sage: f = log(sin(x)/x)
sage: f.taylor(x, 0, 10)
-1/467775*x^10 - 1/37800*x^8 - 1/2835*x^6 - 1/180*x^4 - 1/6*x^2
sage: [bernoulli(2*i) for i in range(1,7)]
[1/6, -1/30, 1/42, -1/30, 5/66, -691/2730]
sage: maxima(f).powerseries(x,0)    # TODO: write this without maxima
'sum((-1)^i3*2^(2*i3-1)*bern(2*i3)*_SAGE_VAR_x^(2*i3)/(i3*factorial(2*i3)),i3,1,inf)
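These coefficients can be spot-checked numerically in plain Python (a small sketch, not from the tutorial): the degree-10 Maclaurin polynomial should agree with \(\log(\sin x / x)\) to high accuracy for small x.

```python
import math

# Coefficients of the Maclaurin series of log(sin(x)/x) up to x^10,
# matching the Sage output above.
coeffs = {2: -1/6, 4: -1/180, 6: -1/2835, 8: -1/37800, 10: -1/467775}

def maclaurin(x):
    return sum(c * x**k for k, c in coeffs.items())

x = 0.2
exact = math.log(math.sin(x) / x)
approx = maclaurin(x)
```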

Integration

Numerical integration is discussed in Riemann and trapezoid sums for integrals above.

Convolution

You can find the convolution of any piecewise defined function with another (outside the domain of definition, they are assumed to be zero). Here are \(f\), \(f*f\), and \(f*f*f\), where \(f(x)=1\), \(0<x<1\):

sage: x = PolynomialRing(QQ, 'x').gen()
sage: f = Piecewise([[(0,1),1*x^0]])
sage: g = f.convolution(f)
sage: h = f.convolution(g)
sage: P = f.plot(); Q = g.plot(rgbcolor=(1,1,0)); R = h.plot(rgbcolor=(0,1,1))

To view this, type show(P+Q+R).
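The convolution can also be checked numerically: sampling \(f\) and scaling a discrete convolution by the step size approximates the continuous \(f*f\), which is the triangle function on \((0,2)\) with peak value 1. A numpy sketch (the sample count is an arbitrary choice):

```python
import numpy as np

n = 2000
dx = 1.0 / n
f_samples = np.ones(n)                      # f = 1 on (0, 1)
g = np.convolve(f_samples, f_samples) * dx  # approximates (f*f) on (0, 2)
x = dx * np.arange(len(g))

peak = g.max()            # (f*f)(1) = 1
area = g.sum() * dx       # integral of f*f = (integral of f)^2 = 1
```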