# Preface

This section presents the fascinating method credited to Alexander Lyapunov (1857--1918).

Introduction to Linear Algebra with Mathematica

# Lyapunov Second Method

The Lyapunov second method was discovered by Alexander Lyapunov in 1892. It is also referred to as the direct method because no knowledge of the solution of the system of autonomous equations is required:
$$\label{EqLyapunov.1} \dot{\bf x} = {\bf f}\left( {\bf x} \right) ,$$
where the overdot stands for the derivative with respect to the time variable t, $$\dot{\bf x} = {\text d}{\bf x}/{\text d}t .$$ The vector x(t) defines a point in an n-dimensional state space for every t.
Alexander M. Lyapunov (1857--1918), a student of P. Chebyshev at St. Petersburg (Russia), taught at the University of Kharkov from 1885 to 1901, when he became an academician in applied mathematics at the St. Petersburg Academy of Science. His surname is variously romanized as Ljapunov, Liapunov, Liapounoff, or Ljapunow. In 1917 he moved to Odessa, where his brother lived, because of his wife's frail health. On the day of his wife's death, Alexander shot himself with a revolver; he died three days later.

Preliminary terminology

Application of the direct method of A. Lyapunov to the autonomous system of differential equations \eqref{EqLyapunov.1} is based on a smooth function V(x) that possesses certain properties.
The function V : ℝⁿ → ℝ is called positive definite in S ⊂ ℝⁿ, with respect to a point x₀ ∈ S, if V has continuous partial derivatives, V(x₀) = 0, and V(x) > 0 for all x in S with x ≠ x₀.
The function V : ℝⁿ → ℝ is called positive semi-definite in S ⊂ ℝⁿ, with respect to x₀, if V has continuous partial derivatives, V(x₀) = 0, and V(x) ≥ 0 for all x in S with x ≠ x₀.
Similarly, a function V(x) is called negative definite/negative semi-definite if −V(x) is positive definite/positive semi-definite.

Without any loss of generality, we can assume that the isolated critical point of the vector equation \eqref{EqLyapunov.1} is the origin. A particular example of positive (or negative) definite functions is provided by quadratic forms, which can be written as

$$\label{EqLyapunov.2} V\left( {\bf x} \right) = {\bf x}^{\mathrm T} {\bf P}\, {\bf x} ,$$
for some n×n real-valued constant matrix P.
A real-valued matrix A is called positive definite/positive semi-definite if all its eigenvalues are positive/nonnegative. Correspondingly, we call a matrix B negative definite if −B is positive definite (so all eigenvalues of B are negative real numbers).
Note that our definition is slightly different from the one used in mathematics---there it is additionally required that the matrix P be self-adjoint (P = P∗). In the case of real-valued matrices, this condition becomes P = PT, so the matrix must be symmetric. The reason why mathematicians impose symmetry on the matrix P is that the inequality
$$\label{EqLyapunov.3} V\left( {\bf x} \right) = {\bf x}^{\mathrm T} {\bf P}\, {\bf x} > 0 \qquad (\mbox{or} \quad{\bf x}^{\mathrm T} {\bf P}\, {\bf x} ≥ 0)$$
holds for such matrices. When a matrix is not symmetric but has positive eigenvalues, it may or may not satisfy inequality \eqref{EqLyapunov.3}. We relax the condition of self-adjointness to be consistent with future applications (Sturm--Liouville problems).

Theorem 1: Let V(x) = xTPx, where P is a symmetric real-valued matrix, P = PT. Then V(x) is positive definite if and only if all eigenvalues of P are positive. Correspondingly, V(x) is positive semi-definite if and only if all eigenvalues of P are non-negative.

In order to apply Theorem 1 to an arbitrary matrix, we need to symmetrize the given matrix first:
${\bf P}^{\mathrm H} = \frac{1}{2} \left( {\bf P} + {\bf P}^{\mathrm T} \right)$
If symmetric matrix PH has positive eigenvalues, then matrix P generates a positive quadratic form V(x).
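This recipe is easy to automate. Below is a small cross-check in Python (a sketch; the text's own computations use Mathematica). Instead of computing eigenvalues, it applies Sylvester's criterion (all leading principal minors of the symmetrization must be positive), which for symmetric matrices is equivalent to the eigenvalue test of Theorem 1.

```python
def det(M):
    """Determinant by Laplace expansion along the first row
    (adequate for the small matrices used here)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def generates_pd_form(P):
    """Return True when V(x) = x^T P x is positive definite:
    symmetrize P, then apply Sylvester's criterion to P_H."""
    n = len(P)
    PH = [[(P[i][j] + P[j][i]) / 2 for j in range(n)] for i in range(n)]
    # leading principal minors of PH must all be positive
    return all(det([row[:k] for row in PH[:k]]) > 0 for k in range(1, n + 1))
```

For instance, the matrix of Example 1 below passes this test, while the triangular matrix of Example 2 fails it, even though both have positive eigenvalues.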

Example 1: Consider the following 3×3 matrix

${\bf P} = \left[ \begin{array}{ccc} \phantom{-}7&0&-4\\ -2&4&5 \\ \phantom{-}1&0&\phantom{-}2 \end{array} \right] .$
Obviously, this matrix is not symmetric. It has positive eigenvalues
Eigenvalues[{{7, 0, -4}, {-2, 4, 5}, {1, 0, 2}}]
{6, 4, 3}
So the eigenvalues are 6, 4, and 3, which are all positive, so our matrix is positive definite.

We calculate the inner (scalar) product

$\left( {\bf P}{\bf x}, {\bf x} \right)$
and show that it is positive for any nonzero vector x :  (Px, x) > 0. Since
${\bf P} {\bf x} = \langle 7x_1 -4x_3 , -2x_1 + 4x_2 + 5x_3 , x_1 + 2x_3 \rangle^{\mathrm T},$
we have
\begin{align*} \left( {\bf P}{\bf x}, {\bf x} \right) &= 7x_1^2 + 4x_2^2 + 2x_3^2 - 2x_1 x_2 - 3x_1 x_3 + 5x_2 x_3 . \end{align*}
Also
$\left( \frac{3}{2}\,x_2 + x_3 - \frac{3}{2}\,x_1 \right)^2 = \frac{9}{4} \,x_2^2 + x_3^2 + \frac{9}{4} \,x_1^2 + 3x_2 x_3 -3x_1 x_3 - \frac{9}{2} \,x_1 x_2 .$
Therefore
\begin{align*} \left( {\bf P}{\bf x}, {\bf x} \right) &= \left( \frac{3}{2}\,x_2 + x_3 - \frac{3}{2}\,x_1 \right)^2 + \frac{19}{4}\,x_1^2 + \frac{7}{4}\,x_2^2 +x_3^2 + \frac{5}{2}\,x_1 x_2 + 2x_2 x_3 . \end{align*}
Since 2 x2 x3 = (x2 + x3)² − x2² − x3², we have
\begin{align*} \left( {\bf P}{\bf x}, {\bf x} \right) &= \left( \frac{3}{2}\,x_2 + x_3 - \frac{3}{2}\,x_1 \right)^2 + \frac{19}{4}\,x_1^2 + \frac{3}{4}\,x_2^2 + \frac{5}{2}\,x_1 x_2 + (x_2 + x_3 )^2 . \end{align*}
Similarly, from
$\frac{5}{2}\,x_1 x_2 = \frac{5}{8} \left( 2x_1 + x_2 \right)^2 - \frac{5}{2}\,x_1^2 - \frac{5}{8}\,x_2^2 ,$
we get
\begin{align*} \left( {\bf P}{\bf x}, {\bf x} \right) &= \left( \frac{3}{2}\,x_2 + x_3 - \frac{3}{2}\,x_1 \right)^2 + \frac{9}{4}\,x_1^2 + \frac{1}{8}\,x_2^2 + \frac{5}{8} \left( 2x_1 + x_2 \right)^2 + (x_2 + x_3 )^2 , \end{align*}
which is obviously positive for any vector x = ⟨ x1, x2, x3T.
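Hand computations like the one above are error-prone, so here is a brief numerical sanity check in Python (not part of the original text): it compares the expanded quadratic form (Px, x) with the final sum-of-squares decomposition at a few sample points.

```python
def Pxx(x1, x2, x3):
    # (P x, x) expanded for P = [[7, 0, -4], [-2, 4, 5], [1, 0, 2]]
    return 7*x1**2 + 4*x2**2 + 2*x3**2 - 2*x1*x2 - 3*x1*x3 + 5*x2*x3

def sum_of_squares(x1, x2, x3):
    # the decomposition obtained by completing squares,
    # with 9/4 = 2.25, 1/8 = 0.125, 5/8 = 0.625
    return ((1.5*x2 + x3 - 1.5*x1)**2 + 2.25*x1**2 + 0.125*x2**2
            + 0.625*(2*x1 + x2)**2 + (x2 + x3)**2)
```

The two expressions agree at every sample point, confirming the decomposition.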

Its symmetrization is

P = {{7, 0, -4}, {-2, 4, 5}, {1, 0, 2}}
PH = (P + Transpose[P])/2
{{7, -1, -(3/2)}, {-1, 4, 5/2}, {-(3/2), 5/2, 2}}
${\bf P}^{\mathrm H} = \frac{1}{2} \left( {\bf P} + {\bf P}^{\mathrm T} \right) = \frac{1}{2} \begin{bmatrix} 14&-2&-3 \\ -2&8&5 \\ -3&5&4 \end{bmatrix} .$
Its eigenvalues are all positive, as Mathematica confirms. So the matrix P generates the positive definite quadratic form \eqref{EqLyapunov.2}.
N[Eigenvalues[PH]]
{8.17922, 4.58759, 0.233191}
■

Corollary 1: The function V(x, y) = a x² + bxy + cy² is positive definite if and only if

$a > 0 \qquad\mbox{and} \qquad 4\,ac - b^2 > 0;$
and is negative definite if and only if
$a < 0 \qquad\mbox{and} \qquad 4\,ac - b^2 > 0.$
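The corollary is simple enough to state as code; the following Python function (an illustrative sketch) classifies the form by the sign of a and of the discriminant 4ac − b². Note that the discriminant must be positive in both definite cases.

```python
def classify_form(a, b, c):
    """Classify V(x, y) = a x^2 + b x y + c y^2 via Corollary 1."""
    d = 4 * a * c - b * b          # discriminant of the form
    if a > 0 and d > 0:
        return "positive definite"
    if a < 0 and d > 0:
        return "negative definite"
    return "indefinite or semi-definite"
```

For instance, the form of Example 2 below, with a = c = 1 and b = −5, has discriminant 4 − 25 < 0 and so is not definite.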

Example 2: Consider the following 2×2 triangular matrix

${\bf A} = \begin{bmatrix} 1& -5 \\ 0&\phantom{-}1 \end{bmatrix} .$
Obviously, this matrix is not symmetric. Its eigenvalues are 1, 1, so according to our definition it is positive definite. However, the associated quadratic form
$V({\bf x}) = \left\langle {\bf A}\,{\bf x}, {\bf x} \right\rangle = x_1^2 + x_2^2 - 5\,x_1 x_2 = \left( x_1 - x_2 \right)^2 - 3\, x_1 x_2$
is not positive definite. Indeed, V([1, 1]) = −3 < 0.

The symmetrized matrix

${\bf A}^{\mathrm H} = \frac{1}{2} \left( {\bf A} + {\bf A}^{\mathrm T} \right) = \begin{bmatrix} 1&-5/2 \\ -5/2 & 1 \end{bmatrix}$
has eigenvalues of opposite signs, 7/2 and −3/2.
Eigenvalues[{{1, -5/2}, {-5/2, 1}}]
{7/2, -(3/2)}
Hence, the given 2×2 matrix does not generate a positive quadratic form.    ■

If x(t) is a solution curve of the autonomous equation \eqref{EqLyapunov.1}, then V(x(t)) represents the corresponding values of V along the trajectory. To determine whether V(x(t)) is increasing or decreasing, we evaluate the derivative
$$\label{EqLyapunov.4} \dot{V} \left( {\bf x} \right) = \frac{{\text d}V({\bf x}(t))}{{\text d}t} = \nabla V({\bf x}(t)) \cdot \dot{\bf x} ,$$
where $\nabla V = \left[ \frac{\partial V}{\partial x_1} \ \frac{\partial V}{\partial x_2} \ \cdots \ \frac{\partial V}{\partial x_n} \right]$ is the gradient of V. The vector $${\bf T} = \dot{\bf x}(t)$$ is the tangent vector to the trajectory.
The derivative \eqref{EqLyapunov.4} is called the derivative of function V(x) with respect to solution of the differential equation dx/dt = f(x) or the Lyapunov derivative.
Using the chain rule, we compute the derivative of V(x(t)): $\dot{V}({\bf x}) = \frac{\partial V}{\partial x_1} \, \dot{x}_1 + \cdots + \frac{\partial V}{\partial x_n} \, \dot{x}_n = \nabla V \cdot \dot{\bf x} = \nabla V \cdot {\bf f} .$
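In code, the Lyapunov derivative is a single dot product. The snippet below is a Python sketch; the system and the function V are illustrative choices, not taken from the text.

```python
def lyapunov_derivative(grad_V, f, x):
    """Evaluate dV/dt = grad V(x) . f(x) at the state x."""
    return sum(g * v for g, v in zip(grad_V(x), f(x)))

# Illustrative system x' = y, y' = -x - y with V = (x^2 + y^2)/2:
f = lambda s: (s[1], -s[0] - s[1])
grad_V = lambda s: (s[0], s[1])          # grad V = (x, y)
# Here dV/dt = x*y + y*(-x - y) = -y**2 <= 0, so V never increases.
```

Evaluating `lyapunov_derivative(grad_V, f, (1.0, 2.0))` gives −4, in agreement with −y² at y = 2.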

Theorem 2 (Lyapunov): A real matrix A is asymptotically stable, that is, all eigenvalues of A have negative real parts, if and only if, for any symmetric positive definite matrix Q, the solution P of the continuous Lyapunov matrix equation ${\bf A}^{\mathrm T} {\bf P} + {\bf P}\,{\bf A} = -{\bf Q}$ is symmetric positive definite.

To use the Lyapunov theorem, select an arbitrary symmetric positive definite matrix Q, for example, the identity matrix I. Then solve the Lyapunov equation for a symmetric matrix P = PT. If P is positive definite, the matrix A generates a positive definite quadratic form V(x) = xTPx, and A is asymptotically stable. If P is not positive definite, then A is not asymptotically stable.
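This procedure can be sketched in Python (assuming NumPy is available). The solver below vectorizes the matrix equation by hand rather than calling a library routine, and the matrix A is an illustrative stable example, not one from the text.

```python
import numpy as np

def lyapunov_solve(A, Q):
    """Solve the continuous Lyapunov equation A^T P + P A = -Q
    by rewriting it as an n^2 x n^2 linear system in the entries of P."""
    A = np.asarray(A, dtype=float)
    Q = np.asarray(Q, dtype=float)
    n = A.shape[0]
    # With row-major vec: vec(A^T P + P A) = (kron(A^T, I) + kron(I, A^T)) vec(P)
    K = np.kron(A.T, np.eye(n)) + np.kron(np.eye(n), A.T)
    return np.linalg.solve(K, -Q.reshape(n * n)).reshape(n, n)

# A has eigenvalues -1 and -2, so it should be asymptotically stable:
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
P = lyapunov_solve(A, np.eye(2))
```

Here P comes out as [[5/4, 1/4], [1/4, 1/4]], which is symmetric with positive eigenvalues, so this A is asymptotically stable, in agreement with Theorem 2.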


Direct Method

If the origin is asymptotically stable, we seek a smooth function V(x) that attains a strict minimum at the equilibrium of interest relative to all other points in some neighborhood of it. Along any trajectory of the system, the value of V never increases. In order for V(x(t)) not to increase, we require $$\dot{V}({\bf x}(t)) \le 0 .$$

Theorem (Lyapunov): Let x* be a fixed point for the vector differential equation

$\dot{\bf x} = {\bf f}\left( {\bf x} \right)$
and V(x) be a differentiable function defined on some neighborhood W of x* such that
1. V(x*) = 0 and V(x) > 0 if x ≠ x*;
2. $$\dot{V} ({\bf x}) \le 0$$ in W ∖ { x* }.
Then the critical point is stable. If, in addition, $$\dot{V} ({\bf x}) < 0$$ in W ∖ { x* }, then the critical point is asymptotically stable.    ▣

Lyapunov second method on the plane

If a trajectory is converging to xe = 0, it should be possible to find a nested set of closed curves V(x1, x2) = c, c ≥ 0, such that decreasing values of c yield level curves shrinking in on the equilibrium state xe = 0.
el = Graphics[{Thick, Blue, Circle[{0, 0}, {2, 1}]}];
line1 = Graphics[{Dashed, Line[{{-1.99, 0.42}, {-0.5, 1.2}}]}];
ar1 = Graphics[{Thick, Black, Arrow[{{-1.45, 0.7}, {-1.85, 1.3}}]}];
ar2 = Graphics[{Thick, Black, Arrow[{{-1.45, 0.7}, {-1.45, 0.0}}]}];
txt1 = Graphics[Text[Style["\[Theta]", FontSize -> 18, Purple], {-1.77, 0.7}]];
txt2 = Graphics[Text[Style["\[Del]V", FontSize -> 18, Black], {-1.47, 1.18}]];
txt3 = Graphics[Text[Style["T", FontSize -> 18, Black], {-1.45, -0.18}]];
Show[line1, el, ar1, ar2, txt1, txt2, txt3]
A level curve V(x1, x2) = c with the gradient ∇V, the tangent vector T, and the angle θ between them.

Applications

Many electrical circuits, complicated mechanical contrivances, and black boxes in general are governed by a second order differential equation
$\ddot{x} + p\left( x, \dot{x} \right) \dot{x} + q(x) = f(t),$
where the overdot stands for the derivative with respect to the time variable t. The function f(t) is usually called the input and x(t) the response.

It is usually important that x(t) remains bounded for all t ≥ 0. To this end, we usually require q(x) ≥ 0 for x ≥ 0, q(x) < 0 for x < 0, and |q(x)| > δ for all sufficiently large |x| (mechanically speaking, we want q to be a restoring force). Further, we require $$p \left( x, \dot{x} \right) > 0$$ (i.e., we want damping). If f(t) = 0 for t > 0 (i.e., if the system is undriven), these conditions by themselves are sufficient to prevent x(t) and its derivative from becoming unbounded as t → ∞.

Theorem: Suppose
1. p: ℝ² → ℝ is a continuous function with p(u,v) ≥ 0 for all u,v ∈ ℝ;
2. q: ℝ → ℝ is a continuous function with u q(u) ≥ 0 for all u ∈ ℝ and $$\int_0^y q(u)\,{\text d}u \to \infty \quad \mbox{as} \quad |y| \to \infty .$$
Then, if x: ℝ → ℝ is twice differentiable and satisfies $$\ddot{x} + p\left( x, \dot{x} \right) \dot{x} + q(x) = 0,$$ there exists a positive constant K (depending only on the initial displacement and initial velocity) such that $$|x(t)|, \ |\dot{x}(t)| \le K$$ for all t ≥ 0.    ▣
Its proof is based on the Lyapunov function
$V(t) = \frac{1}{2}\,\dot{x}^2 (t) + \int_0^{x(t)} q(u)\,{\text d}u .$

Example 1: Consider the motion of a particle of mass m attached to a nonlinear spring whose restoring force is $$k \left( x + x^3 \right) ,$$ where x is the displacement from the equilibrium position. The differential equation governing the system is

$m\,\ddot{x} + k \left( x + x^3 \right) = 0 \qquad \Longleftrightarrow \qquad \ddot{x} + x + \varepsilon\, x^3 = 0$
upon an appropriate change of variables. Transforming the second order equation into a system of first order equations, we get
$\begin{split} \dot{x} &= y , \\ \dot{y} &= - \frac{k}{m} \left( x + x^3 \right) . \end{split}$
The associated total energy of the spring system becomes
$E(x,y) = \frac{m}{2}\, y^2 + k \left( \frac{x^2}{2} + \frac{x^4}{4} \right) .$
This function is also a Lyapunov function for the system because E(0,0) = 0 at the unique equilibrium solution (x, y) = (0,0) and E(x, y) > 0 for (x, y) ≠ (0,0). Moreover, the derivative of the Lyapunov function with respect to the system of equations is
$\dot{E}(x,y) = \frac{{\text d}}{{\text d}t} \, E(x,y) = \frac{\partial E}{\partial x} \,\dot{x} + \frac{\partial E}{\partial y} \,\dot{y} \equiv 0 .$
Thus, the critical point (0,0) is neutrally stable.

Now add some damping α > 0 to the spring system, so that the differential equations of motion become

$\begin{split} \dot{x} &= y , \\ \dot{y} &= - \frac{k}{m} \left( x + x^3 \right) - \alpha\,y. \end{split}$
The same Lyapunov function yields
$\dot{E}(x,y) = - \alpha \,m\, y^2 ,$
which is only negative semi-definite: it vanishes along the whole x-axis (where y = 0), not only at the origin. Therefore, we modify the above Lyapunov function slightly:
$V(x,y) = \frac{m}{2}\, y^2 + k \left( \frac{x^2}{2} + \frac{x^4}{4} \right) + \beta \left( xy + \alpha \,\frac{x^2}{2} \right) .$
Then its derivative with respect to the Duffing equation becomes
$\dot{V}(x,y) = - \beta \,\frac{k}{m} \left( x^2 + x^4 \right) - \left( \alpha m - \beta \right) y^2 .$
If we choose β > 0 sufficiently small, V remains positive definite and its derivative is strictly negative for all (x, y) ≠ (0,0). Hence, the origin is globally asymptotically stable for α > 0.    ■
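As a numerical illustration of this example, the Python sketch below integrates the damped system with a classical Runge-Kutta step and records the energy E along the trajectory. The parameter values m = k = 1 and α = 0.5 are assumptions made only for illustration.

```python
def rk4_step(f, s, h):
    """One classical fourth-order Runge-Kutta step for s' = f(s)."""
    k1 = f(s)
    k2 = f([x + h / 2 * k for x, k in zip(s, k1)])
    k3 = f([x + h / 2 * k for x, k in zip(s, k2)])
    k4 = f([x + h * k for x, k in zip(s, k3)])
    return [x + h / 6 * (a + 2 * b + 2 * c + d)
            for x, a, b, c, d in zip(s, k1, k2, k3, k4)]

m, k, alpha = 1.0, 1.0, 0.5                      # assumed illustrative values
f = lambda s: [s[1], -(k / m) * (s[0] + s[0] ** 3) - alpha * s[1]]
E = lambda s: m / 2 * s[1] ** 2 + k * (s[0] ** 2 / 2 + s[0] ** 4 / 4)

s, h = [1.0, 0.0], 0.01                          # start from x = 1, y = 0
energies = [E(s)]
for _ in range(2000):                            # integrate up to t = 20
    s = rk4_step(f, s, h)
    energies.append(E(s))
```

Since Ė = −αmy² ≤ 0, the recorded energies form a non-increasing sequence and decay toward zero, as the direct method predicts.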

Example 2:

We present an example credited to the Uruguayan dissident and mathematician José Luis Massera (1915--2002).

Consider the linear differential equation

$\frac{{\text d}x}{{\text d}t} = \frac{g' (t)}{g(t)} \, x(t)$
where
$g(t) = \sum_{k\ge 1} \frac{1}{1 + k^4 \left( t - k \right)^2} .$
It is clear that g(t) is continuous and has a continuous derivative
$g'(t) = - \sum_{k\ge 1} \frac{2k^4 \left( t - k \right) }{\left( 1 + k^4 \left( t - k \right)^2 \right)^2} .$
because of the uniform convergence of the corresponding series. It is easy to see that g(m) > 1 for each integer m ≥ 1 (the term with k = m contributes exactly 1) and
$g(t) < 2 + \sum_{k\ge 1} k^{-4} = M , \qquad -\infty < t < \infty .$
The general solution of the given linear equation is
$x(t) = g(t)\,c,$
where c is an arbitrary constant.

Since g(m) > 1 for each integer m ≥ 1, the equilibrium solution x(t) ≡ 0 cannot be asymptotically stable.
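A quick numeric sanity check of this claim in Python (the series is truncated at K terms, a cutoff chosen here only for illustration):

```python
def g(t, K=200):
    """Partial sum of Massera's series g(t) = sum_{k>=1} 1/(1 + k^4 (t - k)^2)."""
    return sum(1.0 / (1.0 + k ** 4 * (t - k) ** 2) for k in range(1, K + 1))
```

For every integer m with 1 ≤ m ≤ K, the k = m term contributes exactly 1, so the partial sum already exceeds 1; at the same time g stays bounded, so the solutions x(t) = c g(t) remain bounded but do not tend to zero.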

Define the Lyapunov function

$V(x,t) = \frac{x^2}{g^2 (t)} \left[ M^2 + \int_t^{\infty} g^2 (u)\,{\text d} u \right] .$
Since
$g^2 (t) < M^2 + \int_t^{\infty} g^2 (u)\,{\text d} u ,$
we have
$V(x,t) \ge x^2 .$
Thus, V(x, t) is positive definite. Moreover, its derivative along the solution is
$\frac{\text d}{{\text d}t}\,V(x,t) = - x^2 .$
Therefore, V(x, t) is a positive definite Lyapunov function for the given differential equation, and its derivative along solutions is negative definite. Nevertheless, the origin is not asymptotically stable: for a non-autonomous equation, Lyapunov's asymptotic stability theorem requires an additional condition on V (it must be decrescent, i.e., bounded above by a time-independent positive definite function), and this condition fails here.    ■