Systems of Nonlinear Differential Equations

This page supports the main stream of the web site by providing basic information about systems of nonlinear differential equations. We demonstrate the capabilities of MuPAD for this topic.

Systems of Autonomous Equations

An ordinary differential equation (ODE) is an equation of the form

\[ \dot{\bf x} = {\bf f} (t , {\bf x}, \lambda ) , \qquad (1.1) \]
where the dot denotes differentiation with respect to the independent variable t (usually a measure of time), the dependent variable x is a vector of state variables, and λ is a vector of parameters. As convenient terminology, especially when we are concerned with the components of a vector differential equation, we will say that equation (1.1) is a system of differential equations. Also, if we are interested in changes with respect to parameters, then the differential equation is called a family of differential equations.

In this context, the words “trajectory,” “phase curve,” “orbit,” and “integral curve” are also used to refer to solutions of the vector differential equation (1.1). However, it is useful to have a term that refers to the image of the solution in \( \mathbb{R}^n . \) Thus, if φ is a solution defined on an interval J0 and taking values in the state space U, we define the orbit of φ to be the set {φ(t) ∈ U : t ∈ J0 }.
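As a simple illustration (a standard example, not tied to any particular model on this page), the function φ(t) = (cos t, sin t) solves the planar system
\[ \dot{x}_1 = -x_2 , \qquad \dot{x}_2 = x_1 , \]
and its orbit is the unit circle in \( \mathbb{R}^2 ; \) infinitely many distinct solutions, obtained by shifting time, share this same orbit.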

When a differential equation is used to model the evolution of a state variable for a physical process, a fundamental problem is to determine the future values of the state variable from its initial value. The mathematical model is then given by a pair of equations
\[ \dot{\bf x} = {\bf f}(t, {\bf x}, \lambda ) , \qquad {\bf x}(t_0) = {\bf x}_0 \]
where the second equation is called an initial condition. If the differential equation is defined as equation (1.1) and (t0 , x0 ) ∈ J × U, then the pair of equations is called an initial value problem. Of course, a solution of this initial value problem is just a solution φ of the differential equation such that φ(t0 ) = x0 .
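For instance, a scalar initial value problem can be solved symbolically in MuPAD with the ode and solve commands. The following is a minimal sketch; the equation \( \dot{x} = -2x , \ x(0) = 3 \) is chosen purely for illustration:

// construct the initial value problem x'(t) = -2 x(t), x(0) = 3
ivp := ode({x'(t) = -2*x(t), x(0) = 3}, x(t)):
// solve it symbolically; the result is the solution x(t) = 3 exp(-2 t)
solve(ivp)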

If we view the differential equation (1.1) as a family of differential equations depending on the parameter vector and perhaps also on the initial condition, then we can consider corresponding families of solutions---if they exist---by listing the variables under consideration as additional arguments.

The fundamental issues of the general theory of differential equations are the existence, uniqueness, extensibility, and continuity with respect to parameters of solutions of initial value problems. Fortunately, all of these issues are resolved by the following foundational results of the subject: Every initial value problem has a unique solution that is smooth with respect to initial conditions and parameters. Moreover, the solution of an initial value problem can be extended in time until it either reaches the boundary of the domain of definition of the differential equation or blows up to infinity.

The existence and uniqueness theorem is so fundamental in science that it is sometimes called the “principle of determinism.” The idea is that if we know the initial conditions, then we can predict the future states of the system. The principle of determinism is of course validated by the proof of the existence and uniqueness theorem. However, the interpretation of this principle for physical systems is not as clear as it might seem. The problem is that solutions of differential equations can be very complicated. For example, the future state of the system might depend sensitively on the initial state of the system. Thus, if we do not know the initial state exactly, the final state may be very difficult (if not impossible) to predict.

An autonomous differential equation is given by
\[ \dot{\bf x} = {\bf f}({\bf x}, \lambda ) \]
that is, the function f does not depend explicitly on the independent variable. If the function f does depend explicitly on t, then the corresponding differential equation is called nonautonomous.
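As a simple illustration, the logistic equation \( \dot{x} = x(1-x) \) is autonomous, while the forced equation \( \dot{x} = x(1-x) + \cos t \) is nonautonomous because its right-hand side depends explicitly on t.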

In physical applications, we often encounter equations containing second, third, or higher order derivatives with respect to the independent variable. These are called second order differential equations, third order differential equations, and so on, where the order of the equation refers to the order of the highest order derivative with respect to the independent variable that appears explicitly. In this hierarchy, equation (1.1) is called a first order differential equation.

Recall that Newton’s second law---the rate of change of the linear momentum of a body is equal to the sum of the forces acting on the body---involves the second derivative of the position of the body with respect to time. Thus, in many physical applications the most common differential equations used as mathematical models are second order differential equations. For example, the natural physical derivation of van der Pol’s equation leads to a second order differential equation of the form
\[ \ddot{u} +b \left( u^2 -1 \right) \dot{u} + \omega^2 u = a\,\cos \Omega t \]
An essential fact is that every ordinary differential equation is equivalent to a first order system. To illustrate, let us consider the conversion of van der Pol’s equation to a first order system. For this, we simply define a new variable \( v := \dot{u} \) so that we obtain the following system:
\[ \dot{u} = v, \qquad \dot{v} = -\omega^2 u + b \left( 1- u^2 \right) v + a\,\cos \Omega t \]
Clearly, this system is equivalent to the second order equation in the sense that every solution of the system determines a solution of the second order van der Pol equation, and every solution of the van der Pol equation determines a solution of this first order system. Let us note that there are many possibilities for the construction of equivalent first order systems---we are not required to define \( v := \dot{u} . \) For example, if we define \( v := c\,\dot{u} \) where c is a nonzero constant, and follow the same procedure used to obtain the above system, then we will obtain a family of equivalent first order systems. Of course, a differential equation of order m can be converted to an equivalent first order system by defining m - 1 new variables in the obvious manner. If our model differential equation is a nonautonomous differential equation of the form \( \dot{\bf x} = {\bf f} (t, {\bf x}), \) where we have suppressed the possible dependence on parameters, then there is an “equivalent” autonomous system obtained by defining a new variable as follows:
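As a brief illustration of how such a first order system can be handled numerically, the following MuPAD sketch integrates the unforced van der Pol system (a = 0) with the illustrative parameter values b = 1 and ω = 1, using the library routine numeric::odesolve; the initial state and final time are chosen arbitrarily:

// van der Pol as a first order system: u' = v, v' = -u + (1 - u^2) v
// (unforced case a = 0, with b = 1 and omega = 1 chosen for illustration)
f := (t, Y) -> [Y[2], -Y[1] + (1 - Y[1]^2)*Y[2]]:
// numeric::odesolve returns the state [u, v] at the final time t = 20,
// starting from the initial state [u(0), v(0)] = [0.5, 0]
numeric::odesolve(f, 0..20, [0.5, 0])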
\[ \dot{x} = f(t,x), \qquad \dot{t} = 1 \]
In particular, every solution of the nonautonomous differential equation can be obtained from a solution of the autonomous system. We have just seen that all ordinary differential equations correspond to first order autonomous systems. As a result, we will pay special attention to the properties of autonomous systems. In most cases, the conversion of a higher order differential equation to a first order system is useful. On the other hand, the conversion of nonautonomous equations (or systems) to autonomous systems is not always wise. However, there is one notable exception. Indeed, if a nonautonomous system is given by \( \dot{\bf x} = {\bf f} (t, {\bf x}) , \) where f is a periodic function of t, then, as we will see, the conversion to an autonomous system is very often the best way to analyze the system.
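To make the construction concrete, applying it to the forced van der Pol system above (and writing τ for the time variable regarded as an additional state variable) yields the three-dimensional autonomous system
\[ \dot{u} = v, \qquad \dot{v} = -\omega^2 u + b \left( 1- u^2 \right) v + a\,\cos \Omega \tau , \qquad \dot{\tau} = 1 , \]
whose right-hand side is periodic in τ with period \( 2\pi / \Omega . \)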