This is a tutorial made solely for educational purposes; it was designed for students taking Applied Math 0330. It is primarily for students who have very little experience with, or have never used, Mathematica before and would like to learn the basics of this computer algebra system. As a friendly reminder, don't forget to clear variables in use and/or the kernel.

Finally, the commands in this tutorial are all written in bold black font, while Mathematica output is in normal font. This means that you can copy and paste all commands into Mathematica, change the parameters, and run them. You, as the user, are free to use the scripts for your needs to learn the Mathematica program, and have the right to distribute this tutorial and refer to it as long as it is credited appropriately.


Heaviside function

The Heaviside function was defined previously:

\[ H(t) = \begin{cases} 1, & \quad t > 0 , \\ 1/2, & \quad t=0, \\ 0, & \quad t < 0. \end{cases} \]

The objective of this section is to show how the Heaviside function can be used to determine the Laplace transforms of piecewise continuous functions. The main tool to achieve this is the shifted Heaviside function H(t-a), where a is an arbitrary positive number. So first we plot this function:

a = Plot[HeavisideTheta[t - 3], {t, 0, 7}, PlotStyle -> Thick]
b = Graphics[{Blue, Arrowheads[0.07], Arrow[{{3.7, 1}, {3, 1}}]}]
c = Graphics[{Blue, Arrowheads[0.07], Arrow[{{2, 0}, {3, 0}}]}]
d = Graphics[{PointSize[Large], Point[{3, 1/2}]}]
a1 = Graphics[{Blue, Thick, Line[{{-1.99, 0}, {2.9, 0}}]}]
Show[a, a1, b, c, d, PlotRange -> {{-2.1, 7}, {-0.2, 1.2}}]
We present a property of the Heaviside function that is not immediately obvious:
\[ H\left( t^2 - a^2 \right) = 1- H(t+a) + H(t-a) = \begin{cases} 1, & \quad t < -a , \\ 0, & \quad -a < t < a, \\ 1, & \quad t > a > 0. \end{cases} \]
The most important property of shifted Heaviside functions is that their difference, W(a,b) = H(t-a) - H(t-b), acts as a window over the interval (a,b): the difference equals 1 on this interval and is zero outside the closed interval [a,b]:
a = Plot[HeavisideTheta[t - 2] - HeavisideTheta[t - 5], {t, 0, 8}, PlotStyle -> Thick]
b = Graphics[{Blue, Arrowheads[0.07], Arrow[{{3.7, 1}, {2, 1}}], Arrow[{{3.7, 1}, {5, 1}}]}]
c = Graphics[{Blue, Arrowheads[0.07], Arrow[{{-1, 0}, {2, 0}}]}]
d = Graphics[{PointSize[Large], Point[{2, 1/2}], Point[{5, 1/2}]}]
a1 = Graphics[{Blue, Thick, Line[{{-1.99, 0}, {1.9, 0}}], Line[{{5.01, 0}, {8, 0}}]}]
c2 = Graphics[{Blue, Arrowheads[0.07], Arrow[{{7, 0}, {5, 0}}]}]
Show[a, a1, b, c, c2, d, PlotRange -> {{-2.1, 8}, {-0.2, 1.2}}]
The Laplace transform of the shifted Heaviside function is
\[ \left( {\cal L} \,H(t-a) \right) (\lambda ) = \int_a^{\infty} e^{-\lambda t} \,{\text d} t = \frac{e^{-a\lambda}}{\lambda} \qquad\Longrightarrow \qquad \left( {\cal L} \,W(a,b) \right) (\lambda ) = \frac{e^{-a\lambda} - e^{-b\lambda}}{\lambda} . \]
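These transforms are easy to confirm in Mathematica, where the Heaviside function is HeavisideTheta; the window endpoints a = 2 and b = 5 below are chosen purely for illustration:

```mathematica
(* Laplace transform of a shifted Heaviside function; should give E^(-2 s)/s *)
LaplaceTransform[HeavisideTheta[t - 2], t, s]
(* Laplace transform of the window W(2,5) = H(t-2) - H(t-5) *)
LaplaceTransform[HeavisideTheta[t - 2] - HeavisideTheta[t - 5], t, s]
```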

Example: Consider the piecewise continuous function

\[ f(t) = \begin{cases} 1, & \quad 0 < t < 1 , \\ t, & \quad 1 < t < 2 , \\ t^2 , & \quad 2 < t < 3 , \\ 0, & \quad 3 < t . \end{cases} \]
Of course, we can find its Laplace transform directly
\[ f^L (\lambda ) = \int_0^{\infty} f(t)\,e^{-\lambda \,t} \,{\text d} t = \int_0^{1} \,e^{-\lambda \,t} \,{\text d} t + \int_1^{2} t \,e^{-\lambda \,t} \,{\text d} t + \int_2^{3} t^2 \,e^{-\lambda \,t} \,{\text d} t . \]
However, we can also find its Laplace transform using the shift rule. First, we represent the given function f(t) as the sum
\[ f(t) = H(t) - H(t-1) + t \left[ H(t-1) - H(t-2) \right] + t^2 \left[ H(t-2) - H(t-3) \right] . \]
We consider each term on the right-hand side separately.
\begin{align*} t \, H(t-1) &= (t-1+1) \, H(t-1) = \left( t-1 \right) H(t-1) + H(t-1) \qquad\Longrightarrow \qquad {\cal L} \left[ t \, H(t-1) \right] = \frac{1}{\lambda^2} \, e^{-\lambda} + \frac{1}{\lambda}\, e^{-\lambda} , \\ t \, H(t-2) &= \left( t-2 +2 \right) H(t-2) = \left( t-2 \right) H(t-2) + 2\, H(t-2) \qquad\Longrightarrow \qquad {\cal L} \left[ t \, H(t-2) \right] = \frac{1}{\lambda^2} \, e^{-2\lambda} + \frac{2}{\lambda}\, e^{-2\lambda} , \\ t^2 \, H(t-2) &= \left( t-2 +2 \right)^2 H(t-2) = \left( t-2 \right)^2 H(t-2) + 4 \left( t-2 \right) H(t-2) + 4\, H(t-2) \qquad\Longrightarrow \qquad {\cal L} \left[ t^2 \, H(t-2) \right] = \frac{2}{\lambda^3} \, e^{-2\lambda} + \frac{4}{\lambda^2}\, e^{-2\lambda} + \frac{4}{\lambda}\, e^{-2\lambda} , \\ t^2 \, H(t-3) &= \left( t-3 +3 \right)^2 H(t-3) = \left( t-3 \right)^2 H(t-3) + 6 \left( t-3 \right) H(t-3) + 9\, H(t-3) \qquad\Longrightarrow \qquad {\cal L} \left[ t^2 \, H(t-3) \right] = \frac{2}{\lambda^3} \, e^{-3\lambda} + \frac{6}{\lambda^2}\, e^{-3\lambda} + \frac{9}{\lambda}\, e^{-3\lambda} . \end{align*}
Collecting all terms, we obtain
\[ f^L (\lambda ) = \frac{2}{\lambda^3} \left( e^{-2\lambda} - e^{-3\lambda} \right) + \frac{1}{\lambda^2} \left( e^{-\lambda} + 3\, e^{-2\lambda} -6\,e^{-3\lambda} \right) + \frac{1}{\lambda} \left( 1 + 2\,e^{-2\lambda} - 9\,e^{-3\lambda} \right) . \]
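Mathematica can produce the same transform directly from the piecewise definition, which gives a quick check of the hand computation (Simplify may arrange the terms differently than the formula above):

```mathematica
(* the piecewise function from this example *)
f[t_] = Piecewise[{{1, 0 < t < 1}, {t, 1 < t < 2}, {t^2, 2 < t < 3}}];
LaplaceTransform[f[t], t, s] // Simplify
```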


Dirac delta function

Paul Dirac.
Paul Adrien Maurice Dirac (1902--1984) was an English theoretical physicist who made fundamental contributions to the early development of both quantum mechanics and quantum electrodynamics. Paul Dirac was born in Bristol, England, to a Swiss father and an English mother. Paul admitted that he had an unhappy childhood, but did not mention it for 50 years; he learned to speak French, German, and Russian. He received his Ph.D. degree in 1926. Dirac's work concerned mathematical and theoretical aspects of quantum mechanics. He began work on the new quantum mechanics as soon as it was introduced by Heisenberg in 1925 -- independently producing a mathematical equivalent, which consisted essentially of a noncommutative algebra for calculating atomic properties -- and wrote a series of papers on the subject. Among other discoveries, he formulated the Dirac equation, which describes the behavior of fermions and predicted the existence of antimatter. Dirac shared the 1933 Nobel Prize in Physics with Erwin Schrödinger "for the discovery of new productive forms of atomic theory."
   Dirac had traveled extensively and studied at various foreign universities, including Copenhagen, Göttingen, Leyden, Wisconsin, Michigan, and Princeton. In 1937 he married Margit Wigner, of Budapest. Dirac was regarded by his friends and colleagues as unusual in character for his precise and taciturn nature. In a 1926 letter to Paul Ehrenfest, Albert Einstein wrote of Dirac, "This balancing on the dizzying path between genius and madness is awful." Dirac openly criticized the political purpose of religion. He said: "I cannot understand why we idle discussing religion. If we are honest---and scientists have to be---we must admit that religion is a jumble of false assertions, with no basis in reality." He spent the last decade of his life at Florida State University.
   The Dirac delta function was introduced as a "convenient notation" by Paul Dirac in his influential 1930 book, "The Principles of Quantum Mechanics," which was based on his most celebrated result, the relativistic equation for the electron, published in 1928. He called it the "delta function" since he used it as a continuous analogue of the discrete Kronecker delta \( \delta_{n,k} . \) Dirac predicted the existence of the positron, which was first observed in 1932. Historically, Paul Dirac used the δ-function to model the density of an idealized point mass or point charge: a function that is equal to zero everywhere except at zero and whose integral over the entire real line is equal to one. Dirac's cautionary remarks (and the efficient simplicity of his idea) notwithstanding, some mathematically well-bred people did from the outset take strong exception to the δ-function. In the vanguard of this group was the Hungarian-American mathematician John von Neumann (born into a Jewish family as János Neumann, 1903--1957), who dismissed the δ-function as a "fiction."

   As there is no function that has these properties, the computations done by theoretical physicists appeared to mathematicians as nonsense. It took a while for mathematicians to give a strict definition of this phenomenon. In 1938, the Russian mathematician Sergey Sobolev (1908--1989) showed that the Dirac function is a derivative (in a generalized sense) of the Heaviside function. To define derivatives of discontinuous functions, Sobolev introduced a new definition of differentiation and the corresponding set of generalized functions that were later called distributions. The French mathematician Laurent-Moïse Schwartz (1915--2002) further extended Sobolev's work by pioneering the theory of distributions, and he was awarded the Fields Medal in 1950 for it. Because of his sympathy for Trotskyism, Schwartz encountered serious problems trying to enter the United States to receive the medal; however, he was ultimately successful. But it was news without major consequence, for Schwartz's work remained inaccessible to all but the most determined of mathematical physicists.

Sergey Sobolev (left) and Laurent Schwartz (right).

   In 1955, the British applied mathematician George Frederick James Temple (1901--1992) published what he called a "less cumbersome vulgarization" of Schwartz's theory based on Jan Geniusz Mikusiński's (1913--1987) sequential approach. However, the definition of the δ-function can be traced back to the early 1820s and the work of Joseph Fourier on what we now know as Fourier integrals. In 1828, the δ-function intruded for a second time into a physical theory through George Green, who noticed that the solution of the nonhomogeneous Poisson equation can be expressed through the solution of a special equation containing the delta function. The history of the theory of distributions can be found in "The Prehistory of the Theory of Distributions" by Jesper Lützen (University of Copenhagen, Denmark), Springer-Verlag, 1982.

    Outside of quantum mechanics, the delta function is also known in engineering and signal processing as the unit impulse symbol. Mechanical systems and electrical circuits are often acted upon by an external force of large magnitude that acts only for a very short period of time. For example, all strike phenomena (caused by either a piano hammer or a tennis racket) involve impulse functions. It is also useful to consider discontinuous idealizations, such as the mass density of a point mass, which has a finite amount of mass stuffed inside a single point of space; the density must therefore be infinite at that point and zero everywhere else. The delta function can be defined as the derivative of the Heaviside function, which (when formally evaluated) is zero for all \( t \ne 0 \) and undefined at the origin. Now it is time to explain what a generalized function, or distribution, means.

In our everyday life, we all use functions, which we learn in school as maps or transformations of one set (usually called the input) into another set (called the output, which is usually a set of numbers). For example, when we take our annual physical examinations, the medical staff measure our blood pressure, height, and weight; all of these are functions that can be described as nondestructive testing. However, not all functions are as nice as those just mentioned. For instance, a biopsy is a much less pleasant option, and it is hard to call it a function unless we label it a destructive testing function. Before the procedure, we consider the patient as a probe function, but after the biopsy, when some tissue has been taken from the patient's body, we have a completely different person. Therefore, while we get biopsy laboratory results (usually represented as numbers), the biopsy itself represents destructive testing. Now let us turn to another example. Suppose you visit a store and want to purchase a soft drink, i.e., a bottle of soda. You observe that the liquid levels in the bottles differ, and you wonder whether the bottles were filled with different volumes of soda or whether the dimensions of the bottles differ from one another. So you decide to measure the volume of soda in a particular bottle. Of course, one can find the outside dimensions of a bottle, but to measure the volume of soda inside, there is no option but to open the bottle. In other words, you have to destroy (modify) the product by opening it. Measuring the soda by opening the bottle again represents destructive testing.

Now consider an electron. Nobody has ever seen one, and we do not know exactly what it looks like. However, we can make some measurements regarding the electron. For example, we can determine its position by observing the point where the electron strikes a screen. By doing this we destroy the electron as a particle and convert its energy into visible light to determine its position in space. Such an operation would be another example of a destructive testing function, because we actually transform the electron into another form of matter and lose it as a particle. Therefore, in the real world we have and use nondestructive testing functions that measure items without destroying or modifying them (as when we measure velocity or voltage). On the other hand, we can measure some items only by completely destroying them or transforming them into something else; these are destructive testing functions. Mathematically, such a measurement can be done by integration (recall the definition from calculus):

\[ \int_{-\infty}^{\infty} f(x)\,g(x)\,{\text d}x , \]
where f(x) is a nice (probe) function and g(x) represents a (bad or unpleasant) operation on our probe function. As the set of probe functions, it is convenient to choose smooth functions on the line with compact support (which means that they are zero outside some finite interval). As for the electron, we do not know what the multiplier g(x) looks like; all we know is the value of the integral, which represents a measurement. In this case, we say that g(x) acts on the probe function, and we call this operation a functional. Physicists denote it as
\[ \langle g \vert f \rangle = \int_{-\infty}^{\infty} f(x)\,g(x)\,{\text d}x \qquad\mbox{or simply} \qquad \langle g, f \rangle . \]
(for simplicity, we consider only real-valued functions). Mathematicians also follow this notation; however, the integral on the right-hand side is mostly a show of respect to people who studied functions at school, and it has no literal meaning because we do not know the exact expression for g(x)---all we know or measure is the result of the integration. Objects such as g(x) are now called distributions, or generalized functions, but they are actually all functionals: g acts on any probe function by mapping it into a number (real or complex). So, strictly speaking, instead of the integral \( \int_{-\infty}^{\infty} f(x)\,g(x)\,{\text d}x \) we have to write the formula
\[ g\,:\, \mbox{set of probe functions } \mapsto \, \mbox{numbers}; \qquad g\,: \, f \, \mapsto \, \mathbb{R} \quad \mbox{or}\quad \mathbb{C} . \] Therefore, the notation g(x) makes no literal sense because the value of g at any point x is undefined; here x is a dummy variable, an invitation to consider functions depending on x. It is more appropriate to write \( g(f) \) because it is a number that is assigned to a probe function f by the distribution g. Nevertheless, it is customary to say that a generalized function g(x) is zero for x from some interval [a,b] if, for every probe function f that is zero outside the given interval,
\[ \langle g , f \rangle = \int_a^b f(x)\, g(x)\, {\text d} x =0. \]
However, it is completely inappropriate to say that a generalized function has a particular value at some point (recall that the integral does not care about a particular value of integrable function). Following Sobolev, we define a derivative g' of a distribution g by the equation
\[ \langle g' , f \rangle = -\int_a^b f'(x)\, g(x)\, {\text d} x , \]
which is valid for every smooth probe function f that is identically zero outside some finite interval. Now we define the derivative of the Heaviside function using new definition (because old calculus definition of derivative is useless).

Let f(x) be a smooth function that vanishes at infinity. Using the relation \( \delta = H' \) and integration by parts, we evaluate the integral

\begin{align*} \int_{-\infty}^{\infty} f(x)\,\delta (x)\, {\text d} x &= \left[ f(x) \, H(x) \right]_{x=-\infty}^{x=\infty} - \int_{-\infty}^{\infty} f' (x) \, H(x) \, {\text d} x \\ &= - \int_0^{\infty} f' (x) \, {\text d} x = \left[ - f(x) \right]_{x=0}^{x=\infty} \\ &= f(0) . \end{align*}

The definition of the delta function can be extended to piecewise continuous functions:

\[ \int_a^b \delta (x-x_0 ) \, f (x)\,{\text d} x = \begin{cases} \frac{1}{2} \left[ f(x_0 +0) + f(x_0 -0) \right] , & \ \mbox{ if } x_0 \in (a,b) , \\ \frac{1}{2}\, f(x_0 +0) , & \ \mbox{ if } x_0 =a, \\ \frac{1}{2}\, f(x_0 -0) , & \ \mbox{ if } x_0 = b, \\ 0 , & \ \mbox{ if } x_0 \notin [a,b] . \end{cases} \]

To understand the behavior of the Dirac delta function, we introduce the rectangular pulse function

\[ \delta_h (x,a) = \begin{cases} h, & \ \mbox{ if } \ a- \frac{1}{2h} < x < a+ \frac{1}{2h} , \\ 0, & \ \mbox{ otherwise. } \end{cases} \]
We plot the pulse function with the following Mathematica commands:
f[x_] = Piecewise[{{1, 2 < x < 3}}]
Labeled[Plot[f[x], {x, 0, 7}, Exclusions -> None, PlotStyle -> Thick,
Ticks -> {{{2, "a-1/2h"}, {3, "a+1/2h"}}, {Automatic, {1, "h"}}}], "The pulse function"]

As can be seen from the figure, the amplitude of the pulse becomes very large and its width becomes very small as \( h \to \infty . \) Moreover, for any value of h, the integral of the rectangular pulse

\[ \int_{\alpha}^{\beta} \delta_h (x,a)\, {\text d} x = 1 \]
if the interval of definition \( \left( a- \frac{1}{2h} , a+ \frac{1}{2h} \right) \) lies in the interval (α , β), and zero if the range of integration does not contain the pulse. Now we can define the delta function located at the point x=a as the limit (in generalized sense):
\[ \delta (x-a) = \lim_{h\to \infty} \delta_h (x,a) . \]

Instead of large parameter h, one can choose a small one:

\[ \delta (x) = \lim_{\epsilon \to 0} \delta (x, \epsilon ) , \qquad\mbox{where} \quad \delta (x, \epsilon ) = \begin{cases} 0 , & \ \mbox{ for } \ |x| > \epsilon /2 , \\ \epsilon^{-1} , & \ \mbox{ for } \ |x| < \epsilon /2 . \end{cases} \]
This means that for every probe function f (that is smooth and zero outside some finite interval), we have
\[ \langle \delta \vert f \rangle = \int_{-\infty}^{\infty} \delta (x)\,f(x) \,{\text d}x = \lim_{\epsilon \to 0} \langle \delta (x, \epsilon ) \vert f \rangle = \lim_{\epsilon \to 0} \int_{-\infty}^{\infty} \delta (x, \epsilon )\,f(x) \,{\text d}x . \]
Let f(x) be a continuous function and let F'(x) = f(x). We compute the integral
\begin{align*} \int_{-\infty}^{\infty} f(x)\,\delta (x)\, {\text d} x &= \lim_{\epsilon \to 0} \,\frac{1}{\epsilon} \, \int_{-\epsilon /2}^{\epsilon /2} f(x)\, {\text d} x \\ &= \lim_{\epsilon \to 0} \,\frac{1}{\epsilon} \left[ F(x) \right]_{x= -\epsilon /2}^{x=\epsilon /2} \\ &= \lim_{\epsilon \to 0} \,\frac{F(\epsilon /2) - F(-\epsilon /2)}{\epsilon} \\ &= F' (0) = f(0) . \end{align*}
The delta function has many representations as limits (of course, in generalized sense) of regular functions; one may want to use another approximation:
\[ \delta (x, \epsilon ) = \frac{1}{\sqrt{2\pi\epsilon}} \, e^{-x^2 /(2\epsilon )} \qquad \mbox{or} \qquad \delta (x, \epsilon ) = \frac{1}{\pi x} \,\sin \left( \frac{x}{\epsilon} \right) . \]
For any of these choices of \( \delta (x, \epsilon ), \) we have
\begin{align*} \int_{-\infty}^{\infty} \delta (x, \epsilon ) \,{\text d}x &= 1, \\ \lim_{\epsilon \to 0} \,\int_{-\infty}^{\infty} \delta (x-a, \epsilon ) \,f(x) \,{\text d}x &= f(a) , \end{align*}
for any smooth integrable function f(x). The latter limit can be written more precisely as
\[ \lim_{n \to \infty} \, \sqrt{\frac{n}{\pi}} \, \int_{-\infty}^{\infty} e^{-n(x-a)^2} \, f(x) \, {\text d} x = \frac{1}{2} \,f(a+0) + \frac{1}{2} \, f(a-0) . \]
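To see how such regularizations concentrate near a point, one may plot the Gaussian approximation for several decreasing values of ε (the particular values 0.1, 0.02, 0.005 are arbitrary):

```mathematica
(* Gaussian approximations to the delta function: taller and narrower as eps decreases *)
deltaApprox[x_, eps_] := Exp[-x^2/(2 eps)]/Sqrt[2 Pi eps]
Plot[Evaluate[Table[deltaApprox[x, eps], {eps, {0.1, 0.02, 0.005}}]],
 {x, -1, 1}, PlotRange -> All, PlotStyle -> Thick]
```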

Although the delta function is a distribution (a functional on a set of probe functions) and the notation \( \delta (x) \) makes no sense from a mathematician's point of view, it is customary to manipulate the delta function \( \delta (x) \) as a regular function, keeping in mind that it should be applied to a probe function. Dirac remarked: "There are a number of elementary equations which one can write down about δ-functions. These equations are essentially rules of manipulation for algebraic work involving δ-functions. The meaning of any of these equations is that its two sides give equivalent results [when used] as factors in an integrand." Examples of such equations are

\begin{align*} \delta (-x) &= \delta (x) , \\ x^n \delta (x) &= 0 \qquad\mbox{for any positive integer } n, \\ \delta (ax) &= a^{-1} \delta (x) , \qquad a > 0, \\ \delta \left( x^2 - a^2 \right) &= \frac{1}{2a} \left[ \delta (x-a) + \delta (x+a) \right] , \qquad a > 0, \\ \int \delta (a-x)\, {\text d} x \, \delta (x-b) &= \delta (a-b) , \\ f(x)\,\delta (x-a) &= f(a)\, \delta (x-a) , \\ \delta \left( g(x) \right) &= \sum_n \frac{\delta (x - x_n )}{| g' (x_n )|} , \end{align*}
where the summation extends over all simple roots x_n of the equation g(x) = 0. Note that the above formula is valid provided that \( g' (x_n ) \ne 0 . \) Of course, the Heaviside function and the δ-function stand in a close relationship supplied by calculus:
\[ H (t-a) = \int_{-\infty}^t \delta (x-a)\,{\text d} x \qquad \Longleftrightarrow \qquad \frac{{\text d}}{{\text d} t}\,H(t-a) = \delta (t-a) . \]
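Mathematica is aware of this distributional relationship, and the sifting property can be checked on a concrete probe function (the Gaussian below is just one convenient choice):

```mathematica
(* derivative of the shifted Heaviside function; should return DiracDelta[t - a] *)
D[HeavisideTheta[t - a], t]
(* sifting property with a concrete probe function; should return E^(-1) *)
Integrate[Exp[-x^2] DiracDelta[x - 1], {x, -Infinity, Infinity}]
```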

Theorem: The convolution of the delta function with a continuous function reproduces that function:

\[ f(t) * \delta (t) = \int_{-\infty}^{\infty} f(\tau )\, \delta (t-\tau ) \, {\text d}\tau = \delta (t) * f(t) = \int_{-\infty}^{\infty} f(t-\tau )\, \delta (\tau ) \, {\text d}\tau = f(t) . \]

Theorem: The Laplace transform of the Dirac delta function:

\[ {\cal L} \left[ \delta (t-a)\right] = \int_0^{\infty} e^{-\lambda\,t} \delta (t-a) \, {\text d}t = e^{-\lambda\,a} , \qquad a \ge 0. \qquad ■ \]
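This transform can be reproduced in Mathematica; the assumption a > 0 keeps the impulse strictly inside the range of integration:

```mathematica
(* Laplace transform of the shifted delta function; should give E^(-a s) *)
Assuming[a > 0, LaplaceTransform[DiracDelta[t - a], t, s]]
```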

Example: Find the Laplace transform of the convolution of the function \( f(t) = t^2 -1 \) with the shifted delta function \( \delta (t-3) . \)

According to the definition of convolution,

\[ f(t) * \delta (t-3) = \int_{-\infty}^{\infty} f(\tau )\, \delta (t-3 -\tau ) \, {\text d}\tau = \int_{-\infty}^{\infty} (\tau^2 -1 )\, \delta (t-3 -\tau ) \, {\text d}\tau = f(t-3) = (t-3)^2 -1 . \]
Actually, we have to multiply f(t-3) by a shifted Heaviside function, so the correct answer would be \( f(t-3)\, H(t-3) \) because the original function was \( \left[ t^2 -1 \right] H(t) . \) Now we apply the Laplace transform:
\begin{align*} {\cal L} \left[ f(t) * \delta (t-3) \right] &= {\cal L} \left[ f(t) \right] \cdot {\cal L} \left[ \delta (t-3) \right] = \left( \frac{2}{\lambda^3} -\frac{1}{\lambda} \right) e^{-3\lambda} \\ &= {\cal L} \left[ f(t-3)\, H(t-3) \right] = {\cal L} \left[ f(t) \right] e^{-3\lambda} = \frac{2 - \lambda^2}{\lambda^3} \, e^{-3\lambda} . \end{align*}
We check the answer with Mathematica:
LaplaceTransform[ Integrate[(tau^2 - 1)*DiracDelta[t - 3 - tau], {tau, 0, t}], t, s]
-((E^(-3 s) (-2 + s^2))/s^3)

Example: A spring-mass system with mass 1, damping 2, and spring constant 10 is subject to a hammer blow at time t = 0. The blow imparts a total impulse of 1 to the system, which is initially at rest. Find the response of the system. The situation is modeled by

\[ y'' +2\, y' +10\,y = \delta (t), \qquad y(0) =0, \quad y' (0) =0 . \]
Application of the Laplace transform to both sides, utilizing the initial conditions, yields
\[ \lambda^2 y^L +2\,\lambda \, y^L +10\,y^L = 1 , \]
where \( y^L = {\cal L} \left[ y(t) \right] = \int_0^{\infty} e^{-\lambda\, t} y(t) \,{\text d}t \) is the Laplace transform of the unknown function. Solving for yL, we obtain
\[ y^L (\lambda ) = \frac{1}{\lambda^2 + 2\lambda + 10} . \]
We can use the formula from the table to determine the system response
\[ y (t ) = {\cal L}^{-1} \left[ \frac{1}{\lambda^2 + 2\lambda + 10} \right] = \frac{1}{3}\, e^{-t}\, \sin (3t) \, H(t) , \]
where H(t) is the Heaviside function.
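The response can be confirmed either by inverting the transform or by handing the initial value problem to DSolve; both commands below sketch that check:

```mathematica
(* invert y^L; should give (1/3) E^-t Sin[3 t] *)
InverseLaplaceTransform[1/(s^2 + 2 s + 10), s, t]
(* or solve the impulse-driven initial value problem directly *)
DSolve[{y''[t] + 2 y'[t] + 10 y[t] == DiracDelta[t],
  y[0] == 0, y'[0] == 0}, y[t], t]
```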


