Differential Operators

Brown University, Applied Mathematics


Operators and Linear Independence

The Differential as an Operator

You will remember taking derivatives in calculus. At first glance, the derivative is usually explained as a rate of change, or as the slope of the line tangent to a point on a graph. You might have memorized that the derivative of sine is cosine, and that the derivative of velocity is acceleration. When solving differential equations in APMA 33, however, we will be thinking of the derivative in a much more abstract sense.

Much as addition, subtraction, multiplication, and division are all "operators", we will treat the derivative as an operator too. In standard calculus notation, the operator and the function being differentiated are expressed in the same fraction, such as \( \frac{{\text d}X(t)}{{\text d}t} \) or \( \frac{{\text d}Y(x)}{{\text d}x} .\) When thinking about the differential as an operator, we are going to separate the function being acted upon from the operator itself. For example, \( \frac{{\text d}X(t)}{{\text d}t} \) would become \( \frac{\text d}{{\text d}t}\,X(t) \) . In this example, \( \frac{\text d}{{\text d}t} \) is our differential operator. If you have taken multivariable calculus, you may remember a similar notion when calculating the gradient, divergence, or curl of a function. It may seem strange at first to have a "dangling derivative" just sitting on your page without anything to take the derivative of. This is okay - you will get more comfortable with it in time.
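To make this concrete, here is a minimal MuPAD sketch (the name X and the choice \( X(t) = t^2 \) are ours, purely for illustration) of the operator \( \frac{\text d}{{\text d}t} \) acting on a function of \( t \):

X := t -> t^2:    // an example function, X(t) = t^2
diff(X(t), t)     // the operator d/dt applied to X(t)

2*t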

Linearity of the Differential

Another notation for the differential operator is the big \( \texttt{D} \) notation. This is yet another level of abstraction - we suppress the variable that the derivative is being taken with respect to, so \( \frac{\text d}{{\text d}t} \) becomes \( \texttt{D} .\) Thus we could write \( \texttt{D}X(t) , \) which would be equal to \( \frac{\text d}{{\text d}t}\,X(t) \) . You may be wondering whether this mathematics is sound - how can we take a derivative without knowing what the independent variable is? We are going to be talking about taking derivatives in a very abstract, general sense. Bear with the notation, and you will understand more soon.
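MuPAD itself provides an operator written \( \texttt{D} \) that behaves this way on functions. A small sketch (the output lines are what we expect from the standard behavior of D):

D(sin)       // the derivative of the sine function, returned as a function

cos

D(sin)(x)    // evaluate that derivative function at the point x

cos(x)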

It is important to note that the differential is a linear operator. This means that addition and multiplication by a constant behave in the usual way. For example, \( \texttt{D}(f+g) = \texttt{D}(f) + \texttt{D}(g), \) where \( f \) and \( g \) are differentiable functions. In the same fashion, \( \texttt{D}(af) = a\,\texttt{D}(f) , \) where \( a \) is a constant and \( f \) is a differentiable function. The differential operator is NOT commutative, however: \( \texttt{D}f \) is not equal to \( f\texttt{D} , \) so the differential must always be written first. For higher-order derivatives, we will find that as the order of the derivative grows, so does the power of the big \( \texttt{D} \) operator: \( \frac{{\text d}^2}{{\text d}t^2} \) is written \( \texttt{D}^2 \) and, similarly, \( \frac{{\text d}^n}{{\text d}t^n} \) is written \( \texttt{D}^n . \)
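As a worked illustration of these rules (the particular functions are chosen only for the example):
\[ \texttt{D}\left( 3\sin t + 5t^2 \right) = 3\,\texttt{D}(\sin t) + 5\,\texttt{D}\left( t^2 \right) = 3\cos t + 10\,t , \qquad \texttt{D}^2 \left( t^3 \right) = \texttt{D}\left( 3t^2 \right) = 6\,t . \]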

Linear Independence

It is important to know whether two functions are linearly independent of one another. In a nutshell, for a pair of functions linear independence comes down to whether one is simply the other scaled by a constant factor. If one is just a constant multiple of the other, the two are NOT linearly independent - they are linearly dependent. So \( 3X+7 \) and \( 6X + 14\) are linearly dependent, not linearly independent, since \( 6X + 14 = 2\,(3X+7) . \)
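As a quick sanity check in MuPAD (a sketch using the built-in simplify function; the factor 2 is the constant relating the two expressions), subtracting twice the first expression from the second leaves nothing:

simplify((6*x + 14) - 2*(3*x + 7))    // zero means the two differ only by the factor 2

0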

In the most general case, it can be hard to tell whether two or more functions are linearly independent once they get more complex. For two functions the answer is easy: they are linearly dependent if and only if one is a constant multiple of the other. When the functions are solutions of the same linear differential equation (not necessarily with constant coefficients), there is an excellent tool called the Wronskian. Is \( \sin(2x) \) linearly independent of \( \cos(2x) \) ? To find out, we calculate the Wronskian, which tells us about the linear dependence of a set of functions. If the Wronskian of our functions is identically 0, we know that the functions are linearly dependent. We can compare more than two functions at once; the only thing we have to make sure of is that the functions we are using are sufficiently differentiable.
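To answer the question about \( \sin(2x) \) and \( \cos(2x) \) right away, here is a minimal sketch built from MuPAD's matrix, diff, and linalg::det (the name Wsc is ours), following the determinant construction described below:

// Wronskian matrix: the functions on top, their first derivatives below
Wsc := matrix([[sin(2*x), cos(2*x)], [diff(sin(2*x), x), diff(cos(2*x), x)]]):
simplify(linalg::det(Wsc))

-2

Since the Wronskian is the nonzero constant \( -2 , \) the two functions are linearly independent.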

Example. Find the Wronskian of the pair of functions \( x \) and \( x\,e^{-2x} \) using the Wronskian solver:

W := ode::wronskian([x, x*exp(-2*x)], x)

\[ x \left( e^{-2x} -2\,x\,e^{-2x} \right) -x\,e^{-2x} \]
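The raw output is easier to read after simplification. Assuming W holds the scalar expression shown above, a short follow-up:

simplify(W)

\[ -2\,x^2\,e^{-2x} \]

This is nonzero whenever \( x \neq 0 , \) so \( x \) and \( x\,e^{-2x} \) are linearly independent.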

 

To calculate the Wronskian, we take the determinant of a matrix built from our functions and their successive derivatives. As we mentioned earlier, if the result is identically 0, then the set of functions must be linearly dependent.

To set up the matrix for the determinant, write each function across the top row - this determines the size of our \( N \times N \) matrix, where \( N \) is the number of functions you have. Then differentiate each function \( N-1 \) times, writing each new derivative in the row below the previous one to fill out the rest of the matrix. For example, if we are checking the linear independence of \( 2x \) and \( \cos (x) , \) we would write:
\[ \begin{bmatrix} 2x & \cos (x) \\ 2 & -\sin (x) \end{bmatrix} \]
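In general, for \( N \) differentiable functions \( f_1, f_2, \ldots , f_N \) the Wronskian is the determinant
\[ W\left( f_1, \ldots , f_N \right) (x) = \det \begin{bmatrix} f_1 (x) & f_2 (x) & \cdots & f_N (x) \\ f_1' (x) & f_2' (x) & \cdots & f_N' (x) \\ \vdots & \vdots & \ddots & \vdots \\ f_1^{(N-1)} (x) & f_2^{(N-1)} (x) & \cdots & f_N^{(N-1)} (x) \end{bmatrix} . \]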
So each function you are testing creates a new column, and you differentiate it \( N-1 \) times down that same column. Doing this by hand for more than three functions is a hassle (especially since you will have to keep differentiating each function again and again), so we can use MuPAD to calculate the Wronskian easily. To define a matrix in MuPAD, use the following syntax:
A:= matrix([[2,-1,0],[0,-3,3],[0,2,2]])

Notice that there are TWO bracket sets; you will get an error message if you leave one out. This syntax lets you make a matrix of any dimensions - just make sure that every row has the same length. In our case, we want a matrix full of functions. We then differentiate them with the "diff" function and take the determinant by calling the "det" function from the linear algebra library (linalg). We could do it like this:

 

f1 := 2*x:

f2 := cos(x):

A:= matrix([[f1, f2],[diff(f1, x), diff(f2, x)]])

matrix([[2*x, cos(x)], [2, -sin(x)]])

Wronskian := linalg::det(A)

- 2*cos(x) - 2*x*sin(x)

 

In our case, since the Wronskian is not identically zero, we know that \( 2x \) and \( \cos(x) \) are linearly independent. Now that we have the tools we need, we can move on to solving differential equations.
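Before moving on, note that the same determinant setup scales to any number of functions. Here is a sketch of the three-function case (the functions \( 1 , \) \( x , \) and \( x^2 \) are our own choice for illustration):

// three functions across the top row, then first and second derivatives below
f1 := 1:  f2 := x:  f3 := x^2:
A := matrix([[f1, f2, f3],
             [diff(f1, x), diff(f2, x), diff(f3, x)],
             [diff(f1, x, x), diff(f2, x, x), diff(f3, x, x)]]):
linalg::det(A)

2

Since the determinant is the nonzero constant 2, the three functions are linearly independent.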

