Preface


This section is devoted to reducing higher order differential equations to lower order (in particular, first order) equations.


Reduction of Higher Order ODEs


Sometimes higher order differential equations can be reduced to lower order or even first order equations. We consider two classes of equations for which this is possible.

 

Dependent Variable Missing


For a differential equation of the form \( y^{(n)} = f(x, y^{(n-1)} ) , \) the substitution \( p = y^{(n-1)} , \ p' = y^{(n)} \) leads to a first order equation of the form \( p' = f(x,p) . \) If this equation can be solved for p, then y can be obtained by integrating \( {\text d}^{n-1} y/{\text d}x^{n-1} = p . \) Note that one arbitrary constant is obtained in solving the first order equation for p, and n-1 constants are introduced in the integration for y.

Example: Consider the following differential equation with the dependent variable y missing:

\[ x^2 \frac{{\text d}^2 y}{{\text d}x^2} + 2x\,\frac{{\text d} y}{{\text d}x} = 3x^2 +1 , \qquad x> 0. \]

Upon setting p = y', we reduce the given differential equation to a first order linear differential equation
\[ x^2 p' + 2x\,p = 3x^2 +1 . \]
We seek its solution by the Bernoulli method in the form p = u v, where u is a solution of the homogeneous equation
\[ x^2 u' + 2x\,u = 0 \qquad \Longrightarrow \qquad \frac{{\text d}u}{u} = -2\,\frac{{\text d}x}{x} . \]
Integration yields \( u = x^{-2} . \) Then for v we get
\[ x^2 u\, v' = 3x^2 +1 \qquad \Longrightarrow \qquad v = x^3 + x + C_1 , \]
where C1 is a constant of integration. Then we solve for p and obtain
\[ y' = p = x^{-2} v = x + x^{-1} + C_1 x^{-2} \qquad \Longrightarrow \qquad y = \frac{x^2}{2} + \ln x - C_1 x^{-1} + C_2 , \]
where C2 is another constant of integration.
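We can confirm this general solution with Mathematica; the output should agree with the formula above up to the labeling of the two arbitrary constants:
DSolve[x^2*y''[x] + 2*x*y'[x] == 3*x^2 + 1, y[x], x]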

 

Independent Variable Missing


Consider a second order differential equation of the form \( y'' = f(y,y') , \) in which the independent variable t does not appear explicitly. If we let \( v = y' = \dot{y} , \) then we obtain \( {\text d} v/{\text d}t = f(y,v) . \) Since the right-hand side of this equation depends on y and v, rather than on t and v, this equation contains too many variables. However, if we think of y as the independent variable, then by the chain rule, \( {\text d} v/{\text d}t = \left( {\text d} v/{\text d}y \right) \left( {\text d} y/{\text d}t \right) = v \left( {\text d} v/{\text d}y \right) . \) Hence the original differential equation can be written as \( v \left( {\text d} v/{\text d}y \right) = f(y,v). \) Provided that this first order equation can be solved, we obtain v as a function of y. A relation between y and t then results from solving \( {\text d} y/{\text d}t = v(y) , \) which is a separable equation. Again, there are two arbitrary constants in the final answer.

Example: Consider the differential equation \( y'' + y \left( y' \right)^3 =0 . \) Upon setting \( y' =v , \) we reduce the given second order differential equation to the first order equation \( v \, \frac{{\text d}v}{{\text d}y} + y \, v^3 =0 . \) Assuming that \( v \ne 0 \) and dividing by v, we obtain
\[ \frac{{\text d}v}{{\text d}y} + y \, v^2 = 0 \qquad \Longrightarrow \qquad - \frac{{\text d}v}{v^2} = y\,{\text d}y . \]
Integration yields
\[ \frac{1}{v} = \frac{y^2 + C_1}{2} \qquad \Longrightarrow \qquad v = \frac{2}{y^2 + C_1} = \frac{{\text d}y}{{\text d}t} , \]
where the constant of integration is written as C1/2.
Next, separation of variables leads to
\[ 2\,{\text d}t = \left( y^2 + C_1 \right) {\text d}y \qquad \Longrightarrow \qquad 2t + C_2 = \frac{y^3}{3} + C_1 y , \]
which is the general solution of the given differential equation in implicit form.
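Since the solution is implicit, it is convenient to verify it by implicit differentiation rather than with DSolve. A minimal check (c1 and c2 denote the arbitrary constants C1 and C2):
F = y^3/3 + c1*y - 2*t - c2;       (* the implicit solution F(t, y) = 0 *)
yp = -D[F, t]/D[F, y];             (* dy/dt by implicit differentiation *)
ypp = D[yp, y]*yp;                 (* second derivative of y via the chain rule *)
Simplify[ypp + y*yp^3]             (* should return 0 *)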

Example: In 1963, D.H. Parsons derived the following differential equation for the potential V in a planar vacuum diode:
\[ \frac{{\text d}^2 V}{{\text d} x^2} = \frac{a}{\sqrt{V + \varepsilon^2}} \quad (0 < x<\ell ), \qquad V(0) =0, \quad V(\ell ) = V_{\ell} , \]
where \( \displaystyle a = \frac{4\pi I}{\left( 2e/m \right)^{1/2}} ; \) here m is the mass and e is the magnitude of the charge of an electron, and I is the current per unit plate area.

 The above second order differential equation admits two integrating factors, one less obvious than the other:

\[ \mu_1 = - 2\, \frac{{\text d}V}{{\text d}x} \qquad \mbox{and} \qquad \mu_2 = 3\left[ \left( \frac{{\text d}V}{{\text d}x} \right)^2 - 2a\,\sqrt{V + \varepsilon^2} \right] . \]
Multiplication by each integrating factor will reduce the given second order differential equation to an exact equation. Then simple integration yields
\begin{eqnarray*} 4a\,\sqrt{V + \varepsilon^2} - \left( \frac{{\text d}V}{{\text d}x} \right)^2 &=& A >0, \\ \left( \frac{{\text d}V}{{\text d}x} \right)^3 - 6a \,\frac{{\text d}V}{{\text d}x}\,\sqrt{V + \varepsilon^2} + 6a^2 x &=& B , \end{eqnarray*}
where A and B are arbitrary constants.
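Both first integrals can be verified with Mathematica; here is a minimal check in which the rule ode imposes the diode equation (eps stands for ε):
ode = V''[x] -> a/Sqrt[V[x] + eps^2];
intA = 4 a Sqrt[V[x] + eps^2] - V'[x]^2;
intB = V'[x]^3 - 6 a V'[x] Sqrt[V[x] + eps^2] + 6 a^2 x;
Simplify[{D[intA, x], D[intB, x]} /. ode]   (* both derivatives should reduce to 0 *)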
Example: The famous problem of finding the planar curve along which a body subjected only to the force of gravity slides (without friction) between two points in the least possible time was first posed by Galileo in 1638. He proved that the time along two chords AC, CB, where C is any point on a circular arc joining A and B, is less than the time along the single chord AB. As we increase the number of chords by choosing more points on the arc, the time of travel decreases, and in the limit, as the number of chords tends to infinity, the time becomes the least. He concluded that the quickest path is along the arc of a circle with one of its ends at the endpoint of a vertical diameter of the circle. Galileo, in his letter to his friend Guidobaldo del Monte of 29 November 1602, described the descent of heavy bodies along the arcs of circles and the chords subtended by them. Now we know that Galileo's conclusion that the curve is a circle is wrong.

Brachistochrone compared to other curves

However, in June 1696 the brilliant Swiss scientist Johann Bernoulli (1667--1748) challenged mathematicians with a problem published in Acta Eruditorum. He called it the brachistochrone problem, and it gave birth to the calculus of variations. The word ``brachistochrone'' is derived from two Greek words: brachistos (βραχιστος), meaning ``shortest,'' and chronos (χρονος), meaning ``time.'' The problem he posed was the following:

Given two points A and B in a vertical plane, with B below and to the right of A, what is the curve traced out by a point acted on only by constant gravity, which starts at A and reaches B in the shortest time.
Besides him, five other mathematicians independently solved the brachistochrone problem: Johann's older brother Jakob Bernoulli, Gottfried Leibniz, Isaac Newton, Ehrenfried Walther von Tschirnhaus, and Guillaume de l'Hopital (whose solution remained unknown until 1988).

Imagine a metal bead with a wire threaded through a hole in it, so that the bead can slide with no friction along the wire. How can one choose the shape of the wire so that the time of descent under gravity (from rest) is as small as possible? Choosing Cartesian coordinates with the y-axis pointing downward and the x-axis passing through the initial point, we are looking for a function y(x) such that y(0) = 0 and for which the time T of descent is minimized.

    Brachistochrone curves
cycloid[a_, t_] := {a*t - a*Sin[t], -a + a*Cos[t]};   (* cycloid through the origin generated by a wheel of radius a *)
c1 = ParametricPlot[cycloid[1, t], {t, 0, 1*Pi}, PlotStyle -> Thick, Axes -> False];
FindRoot[t - Sin[t] == Pi*4/3, {t, 3}]   (* parameter at which the cycloid with a = 3/4 reaches the vertical line x = Pi (used in c4) *)
c2 = ParametricPlot[cycloid[2, t], {t, 0, 2.3098814600100575}, PlotStyle -> Thick, Axes -> False];
c3 = ParametricPlot[cycloid[1/2, t], {t, 0, 6.283173937949478}, PlotStyle -> Thick, Axes -> False];
c4 = ParametricPlot[cycloid[3/4, t], {t, 0, 3.6778596325786834`}, PlotStyle -> Thick, Axes -> False];
line = Graphics[Line[{{Pi, 0}, {Pi, -2.5}}]];   (* vertical line x = Pi *)
txt1 = Graphics[Text[Style["A", FontSize -> 16], {0.25, -0.1}]];
txt2 = Graphics[Text[Style["B", FontSize -> 16], {3.25, -1.4}]];
Show[c1, c2, c3, c4, line, txt1, txt2]
The steeper the curve, the faster the bead moves; however, some of this speed must be converted into motion in the x-direction in order to travel the horizontal distance b, because y(b) must equal B, the final destination. The travel time along a path is given by the integral of the ratio of the arclength element to the particle's speed. Since the motion of the bead conserves energy, we get
\[ \frac{m}{2}\, v^2 - mg\,y = 0 \qquad \Longrightarrow \qquad v = \sqrt{2g\,y} , \]
where m is the mass of the bead, v is its velocity, and g is the acceleration due to gravity. Newton's law implies that for a mass m traveling on the arc s with angle θ from the downward direction
\[ m\,\frac{{\text d}^2 s}{{\text d}t^2} = mg\,\cos \theta = mg\,\frac{{\text d}y}{{\text d}s} . \]
The mass cancels and
\[ \frac{1}{2}\,\frac{\text d}{{\text d}t} \left( \frac{{\text d}s}{{\text d}t} \right)^2 = \frac{{\text d}^2 s}{{\text d} t^2} \,\frac{{\text d}s}{{\text d}t} = g\,\frac{{\text d}y}{{\text d}t} , \]
so we conclude that for some constant C,
\[ \left( \frac{{\text d}s}{{\text d}t} \right)^2 = 2g\,y + C. \]
But at t = 0, both the speed and displacement are zero; thus, C = 0, and we have
\[ \frac{{\text d}s}{{\text d}t} = \sqrt{2g\,y} . \]
The arc length element is
\[ {\text d}s = \sqrt{{\text d}x^2 + {\text d}y^2} = \sqrt{1 + \left( y' \right)^2} \,{\text d}x . \]
The time of descent T is therefore given by
\[ T(y) = \int_0^T {\text d}t = \int_0^S \frac{{\text d}s}{v} = \frac{1}{\sqrt{2g}} \, \int_0^b \frac{\sqrt{1 + \left( y'(x) \right)^2}}{\sqrt{y(x)}} \,{\text d}x . \]
This is a functional (mapping a function into a number) with the Lagrangian
\[ L = \frac{1}{\sqrt{2g}} \,\sqrt{\frac{1 + (y' )^2}{y- y_0}} , \]
where for simplicity we set the initial point at the origin: y0 = 0. Therefore, the above functional acts on functions \( y \in C^1 ([0,b]) \) that satisfy y(0) = 0 and y(b) = B. Observe that the Lagrangian does not depend explicitly on the independent variable x, so the Euler--Lagrange equation admits a first integral (the Beltrami identity) that determines a stationary curve; in general, such a curve need not provide even a local maximum or minimum. Thus we get
\[ y' \,\frac{\partial L}{\partial y'} - L = c, \]
for some constant c. This can be simplified to
\[ y'^2 \left( 1 + y'^2 \right)^{-1/2} y^{-1/2} - \left( 1 + y'^2 \right)^{1/2} y^{-1/2} = c_1 = c\,\sqrt{2g} \]
because
\[ \frac{\partial L}{\partial y'} = \frac{1}{\sqrt{2g}} \, y' \left( 1 + y'^2 \right)^{-1/2} y^{-1/2} . \]
Next, we express the derivative in implicit form
\[ c_1 \left( 1 + y'^2 \right)^{1/2} y^{1/2} = -1. \]
For there to be a solution we must have c1 < 0. Let c1 = -1/α, and we have
\[ \left( 1 + y'^2 \right)^{1/2} y^{1/2} = \alpha . \]
Expressing the derivative explicitly, we obtain the brachistochrone equation
\[ \frac{{\text d}y}{{\text d}x} = \sqrt{\frac{\alpha^2 - y}{y}} , \]
where we have used the positive root. Although the brachistochrone equation has a singular solution, it is of no interest here because either the singular solution does not satisfy the boundary conditions or, when it does, the descent time along it is not minimal. The above equation is separable, so we have
\[ \int \sqrt{\frac{y}{\alpha^2 - y}} \, {\text d}y = \int {\text d}x . \]
Of course, we can integrate with the aid of Mathematica
Integrate[Sqrt[y/(a^2 - y)], y]
\[ \alpha^2 \arcsin \frac{\sqrt{y}}{\alpha} - \left( \alpha^2 - y \right) \sqrt{\frac{y}{\alpha^2 - y}} = x+C , \]
but the resulting implicit solution is unwieldy. Instead, we make the substitution:
\[ y = \alpha^2 \sin^2 \frac{\theta}{2} \qquad \Longrightarrow \qquad {\text d}y = \alpha^2 \sin \frac{\theta}{2} \,\cos \frac{\theta}{2} \,{\text d}\theta . \]
Then
\[ \int \sqrt{\frac{y}{\alpha^2 - y}} \, {\text d}y = \int \tan \frac{\theta}{2} \, \alpha^2 \sin \frac{\theta}{2} \,\cos \frac{\theta}{2} \,{\text d}\theta \]
because
\[ \frac{y}{\alpha^2 - y} = \tan^2 \frac{\theta}{2} . \]
Simplify[(Sin[x/2])^2 /(1 - (Sin[x/2])^2)]
Tan[x/2]^2
So
\[ x+c = \alpha^2 \int {\text d}\theta \,\sin^2 \frac{\theta}{2} = \frac{\alpha^2}{2} \left( \theta - \sin \theta \right) . \]
Integrate[(Sin[y/2])^2, y]
y/2 - Sin[y]/2
We know that dy/dx is the reciprocal of dx/dy, so
\[ \frac{{\text d}y}{{\text d}x} = \sqrt{\frac{\alpha^2 - y}{y}} \qquad\mbox{and} \qquad \frac{{\text d}x}{{\text d}y} = \sqrt{\frac{y}{\alpha^2 - y}} . \]
From the relation
\[ \frac{y}{\alpha^2 - y} = \tan^2 \frac{\theta}{2} \qquad \Longrightarrow \qquad y\,\cos^2 \frac{\theta}{2} = \left( \alpha^2 - y \right) \sin^2 \frac{\theta}{2} , \]
we get
\[ y = \alpha^2 \sin^2 \frac{\theta}{2} = \frac{\alpha^2}{2} \left( 1 - \cos \theta \right) . \]
Therefore, upon setting r = α²/2 and choosing the constant of integration so that the curve passes through the origin (c = 0), our solution becomes
\[ \begin{split} x&= r \left( \theta - \sin \theta \right) , \\ y&= r \left( 1 - \cos \theta \right) . \end{split} \]
This is a parametric representation of a curve called a cycloid; here r is the radius of the "wheel" that generates the cycloid and 0 ≤ θ ≤ Θ, where Θ is the unique solution of
\[ \frac{\Theta - \sin\Theta}{1-\cos\Theta} = \frac{b}{B} \qquad\mbox{and}\qquad r = \frac{B}{1-\cos\Theta} . \]
One arch is produced by letting θ take on values from 0 to 2π. The bottom point of this arch occurs when θ = π and has coordinates (rπ, 2r). Thus, the slope m = B/b of the line from the cusp at the origin to the bottom point is 2r/(rπ) = 2/π. One can show that there is a unique cycloid passing through any pair of points (0,0) and (b,B) with b, B > 0 (the first half of the arch of the cycloid joining the origin with the point B is invertible), and that this unique cycloid is a global minimum for the functional T(y).
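For a given terminal point, the values of Θ and r can be found numerically with FindRoot. A minimal sketch; the endpoint coordinates b0 = 3 and yB = 2 are chosen purely for illustration:
b0 = 3; yB = 2;                  (* hypothetical endpoint below and to the right of the origin *)
sol = FindRoot[(th - Sin[th])/(1 - Cos[th]) == b0/yB, {th, 3}];
Theta = th /. sol;               (* parameter value at the endpoint *)
rad = yB/(1 - Cos[Theta]);       (* radius of the generating wheel *)
ParametricPlot[{rad (th - Sin[th]), -rad (1 - Cos[th])}, {th, 0, Theta}, PlotStyle -> Thick]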
cycloid[a_, t_] := {a*t - a*Sin[t], -a + a*Cos[t]};
(* endpoint before the lowest point of the arch: chord slope steeper than 2/Pi *)
c1 = ParametricPlot[cycloid[1, t], {t, 0, 0.6*Pi}, PlotStyle -> Thick, Axes -> False];
l1 = Graphics[{Thick, Line[{{0, 0}, {0.6*Pi - Sin[0.6*Pi], -1 + Cos[0.6*Pi]}}]}];
txt1 = Graphics[Text[Style["A", FontSize -> 16, FontColor -> Purple, Bold], {0.08, -0.02}]];
txt2 = Graphics[Text[Style["B", FontSize -> 16, FontColor -> Purple, Bold], {0.97, -1.28}]];
t1a = Graphics[Text[Style["Shortest", FontSize -> 16, FontColor -> Purple, Bold], {0.58, -0.52}]];
t1b = Graphics[Text[Style["distance", FontSize -> 16, FontColor -> Purple, Bold], {0.68, -0.62}]];
Show[c1, l1, txt1, txt2, t1a, t1b]
(* endpoint at the lowest point of the arch: chord slope equal to 2/Pi *)
c1 = ParametricPlot[cycloid[1, t], {t, 0, Pi}, PlotStyle -> Thick, Axes -> False];
l1 = Graphics[{Thick, Line[{{0, 0}, {Pi, -1 + Cos[Pi]}}]}];
txt1 = Graphics[Text[Style["A", FontSize -> 16, FontColor -> Purple, Bold], {0.28, -0.02}]];
txt2 = Graphics[Text[Style["B", FontSize -> 16, FontColor -> Purple, Bold], {3.15, -1.8}]];
t1a = Graphics[Text[Style["Shortest", FontSize -> 16, FontColor -> Purple, Bold], {1.9, -0.8}]];
t1b = Graphics[Text[Style["distance", FontSize -> 16, FontColor -> Purple, Bold], {2.1, -0.97}]];
Show[c1, l1, txt1, txt2, t1a, t1b]
(* endpoint past the lowest point of the arch: chord slope smaller than 2/Pi *)
c1 = ParametricPlot[cycloid[1, t], {t, 0, 1.3*Pi}, PlotStyle -> Thick, Axes -> False];
l1 = Graphics[{Thick, Line[{{0, 0}, {1.3*Pi - Sin[1.3*Pi], -1 + Cos[1.3*Pi]}}]}];
txt1 = Graphics[Text[Style["A", FontSize -> 16, FontColor -> Purple, Bold], {0.38, -0.02}]];
txt2 = Graphics[Text[Style["B", FontSize -> 16, FontColor -> Purple, Bold], {4.9, -1.8}]];
t1a = Graphics[Text[Style["Shortest", FontSize -> 16, FontColor -> Purple, Bold], {2.9, -0.5}]];
t1b = Graphics[Text[Style["distance", FontSize -> 16, FontColor -> Purple, Bold], {3.5, -0.85}]];
Show[c1, l1, txt1, txt2, t1a, t1b]
    m < 2/π
    m = 2/π
    m > 2/π

 

Fermat's Principle


Johann Bernoulli gave a brilliant insight into the brachistochrone problem by relating it to geometric optics, the branch of optics that describes ray propagation. When a light beam (or ray) strikes a smooth interface (a surface separating two transparent materials such as air and glass or water and glass), the wave is in general partly reflected and partly refracted (transmitted) into the second material, as shown in the figure below. For simplicity we draw only one ray.
Fermat's Principle of Least Time: light follows the path for which the time of travel is a minimum. The time T needed for light to travel from point A to point B is given by \[ T= \frac{\sqrt{a^2 + x^2}}{v_1} + \frac{\sqrt{b^2 + (c-x)^2}}{v_2} , \] where v1 and v2 are the speeds of light in the two materials. Since T is the minimum time to travel from A to B, the derivative of T with respect to x must be zero, and we have \[ \frac{x}{v_1 \sqrt{a^2 + x^2}} = \frac{c-x}{v_2 \sqrt{b^2 + (c-x)^2}} \qquad\mbox{or}\qquad \frac{\sin \alpha_1}{v_1} = \frac{\sin \alpha_2}{v_2} . \] This equation is called the law of refraction, or Snell's law, after the Dutch astronomer and mathematician Willebrord Snell (in Latin: Snellius) (1580--1626).
    Refractions of rays.
l1 = Graphics[{Thickness[0.01], Line[{{-1, 0.5}, {0, 0}, {1, -1}, {1.5, -2}}]}];
l2 = Graphics[{Thickness[0.005], Line[{{-1, 0}, {1, 0}}]}];
l3 = Graphics[{Thickness[0.005], Line[{{-1, -1}, {1.5, -1}}]}];
l4 = Graphics[{Dashed, Line[{{0, -0.5}, {0, 1}}]}];
l5 = Graphics[{Dashed, Line[{{1, -2}, {1, 0}}]}];
Show[l1, l2, l3, l4, l5]
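The computation behind Snell's law can also be checked symbolically. A minimal sketch, where the symbols a, b, c, v1, v2 are those appearing in the travel-time formula above:
time = Sqrt[a^2 + x^2]/v1 + Sqrt[b^2 + (c - x)^2]/v2;
Simplify[D[time, x]]
(* setting this derivative to zero gives  x/(v1 Sqrt[a^2 + x^2]) == (c - x)/(v2 Sqrt[b^2 + (c - x)^2]) *)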
Note that the Fermat principle can be derived from the Euler equation applied to the functional
\[ T(x,y) = \int_0^1 {\text d}t\, n \left( x(t), y(t) \right) \sqrt{x'^2 (t) + y'^2 (t)} = \int_0^1 {\text d}t\, n \| {\bf x}' \| , \]
where n is the refractive index, which describes the inhomogeneity of the medium. The corresponding Lagrangian is \( L = n\,\| {\bf x}'\| , \) where x = [x, y] in the planar case. Let us introduce the path length as a new parameter
\[ s(t) = \int_{t_0}^t {\text d}\tau\, \| {\bf x}' (\tau )\| = \int_{t_0}^t {\text d}\tau\, \sqrt{x'^2 (\tau ) + y'^2 (\tau )}. \]
Using this parameter and the chain rule, we obtain
\[ \frac{\text d}{{\text d}t} = \| {\bf x}' \| \,\frac{\text d}{{\text d}s} . \]
This allows us to rewrite the two Euler equations in a compact way (the fundamental equation of ray optics):
\[ \frac{\text d}{{\text d}s} \left( n\,\frac{{\text d}{\bf x}}{{\text d}s} \right) = \nabla n . \]
Let us consider the multilayer medium shown in the figure above. As the incident ray passes from one material to another, it is always bent toward or away from the normal according to Snell's law; namely, \[ \frac{\sin \alpha_1}{v_1} = \frac{\sin \alpha_2}{v_2} = \frac{\sin \alpha_3}{v_3} = \frac{\sin \alpha_4}{v_4} = \cdots . \] Johann Bernoulli recognized the similarity between the continuously increasing speed of a light ray and the increasing speed of a heavy mass falling in a uniform gravitational field. Therefore, he argued that, by following a law similar to that followed by a refracting light ray, the falling mass must take the path of minimum time. He substituted for the speed of light v in Snell's law the speed v acquired by a falling mass in a uniform gravitational field. The latter is easily obtained from the law of conservation of energy, which states that the sum of the potential and kinetic energies of a falling mass must remain constant; in particular, if the falling mass starts from rest, this sum must be zero at each point along the path. The proof is simple and elegant, combining geometry and physics.

Suppose that the width of the layers approaches zero as the number of layers increases to infinity. Then, in the limit, we have \[ \frac{\sin \alpha}{v} = \mbox{constant}. \] Let us return to the brachistochrone problem and express v and α in terms of the height y and the slope y'. Suppose that a bead (like a light ray) can choose the path along which to slide in such a way that the time of descent from the point A to the point B is a minimum. Then the above equation is valid.

Since the positive y-axis points downward, a bead that has descended a vertical distance y has lost the potential energy Π = mgy. From the principle of conservation of energy, it follows that \[ mgy = \frac{m v^2}{2} \qquad \mbox{or} \qquad v= \sqrt{2 g y}. \] On the other hand, \[ \sin\alpha = \cos\beta = \frac{1}{\sec\beta} = \frac{1}{\sqrt{1 + \tan^2 \beta}} = \frac{1}{\sqrt{1+ ( y' )^2}} , \] where β = π/2 - α is the complementary angle to α. Substitution of this expression for sin α and \( v=\sqrt{2gy} \) into the equation \( \frac{\sin \alpha}{v} = \mbox{constant} \) yields \[ y\left[ 1+ ( y' )^2 \right] = C , \] where C is an arbitrary constant. This equation can also be obtained by applying the Euler equation to the total time T needed to slide down to the terminal position: \[ T (y) = \int \frac{\mbox{distance}}{\mbox{rate}} = \int \frac{{\text d}s}{v} = \int \frac{\sqrt{\dot{x}^2 + \dot{y}^2}}{\sqrt{2gy}}\,{\text d}t = \int_0^b \frac{\sqrt{1+ y'^2}}{\sqrt{2gy}}\,{\text d}x \, , \] where x = x(t), y = y(t) is a parametric representation of the sliding curve. Minimization of this functional T(y) leads to \( y\left[ 1+ ( y' )^2 \right] = C , \) which upon differentiation gives another form of the brachistochrone equation: \[ y' \left[ 1 + ( y' )^2 + 2y y'' \right] =0 . \]

Note that the general solution of the brachistochrone equation contains two arbitrary constants that should be chosen to satisfy the boundary conditions: the solution must pass through the end points A (which we choose as the origin for simplicity) and B. While the above equation has a singular solution, it is of no interest here because either the singular solution does not satisfy the boundary conditions or, when it does, the descent time along it is not minimal. From the brachistochrone equation, it follows that \( \displaystyle y' = \frac{{\text d}y}{{\text d}x} = \left( \frac{C}{y} -1 \right)^{1/2} . \) Separating variables leads to \[ {\text d}x = \left( \frac{y}{C-y} \right)^{1/2} \,{\text d}y . \] We change the variable y by setting \( \displaystyle \left[ \frac{y}{C-y} \right]^{1/2} = \tan\varphi . \) Then, since y = C sin² φ, we have

\begin{align*} {\text d}y &= 2C\, \sin\varphi\cos\varphi\,{\text d}\varphi = C\, \sin 2\varphi \,{\text d}\varphi , \\ {\text d}x &= \tan\varphi \,{\text d}y = 2C\,\sin^2 \varphi\,{\text d}\varphi = C(1-\cos 2\varphi )\,{\text d}\varphi . \end{align*}
Integration yields \[ x=\frac{C}{2}\,(2\varphi - \sin 2\varphi ) + C_1 , \] where C1 is an arbitrary constant. We assume that the bead starts sliding from the origin; thus φ = 0 at x = 0, y = 0, and hence C1 = 0. As a result, we have \[ x=\frac{C}{2}\,(2\varphi - \sin 2\varphi ) = C(\varphi - \sin \varphi\, \cos\varphi ) , \qquad y= C\,\sin^2 \varphi = \frac{C}{2} \,(1-\cos 2\varphi ) . \] Setting C/2 = r and 2φ = θ, we see that the curve solving the brachistochrone problem is again the cycloid. This curve thus provides a trade-off between steepness and path length.
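One can verify directly that the cycloid satisfies the brachistochrone equation \( y \left[ 1 + (y')^2 \right] = C \) with C = 2r. A minimal parametric check, keeping the symbol r unassigned:
slope = D[r (1 - Cos[t]), t]/D[r (t - Sin[t]), t];   (* dy/dx = (dy/dt)/(dx/dt) along the cycloid *)
Simplify[r (1 - Cos[t]) (1 + slope^2)]               (* should reduce to 2 r *)
   ■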
Example: The Friedmann equations are a set of equations in physical cosmology that govern the expansion of space in homogeneous and isotropic models of the universe within the context of general relativity. They were first derived by the Russian and Soviet physicist Alexander Friedmann (1888--1925) in 1922 from Einstein's field equations.

Let us begin with Newton’s mechanics, even though the General Relativity Theory (GRT) must certainly be involved. However, as will be seen, the GRT will come into play through a surprisingly simple artifice. For the sake of simplicity, we consider a spherical universe. However, it should be noted that the result holds in general.

Let r0 be the radius of the universe and ρ0 be the density of the masses at the present time t0. The radius as a function of time is

\[ r(t) = a(t) \cdot r_0 , \]
where the scale factor a(t) is dimensionless.

Assuming that the total mass of the universe is constant, we get

\[ \rho (t)\,\frac{4\pi}{3}\,r^3 (t) = \rho_0 \,\frac{4\pi}{3}\,r^3_0 . \]
Substituting \( r(t) = a(t)\, r_0 \) into this relation, we obtain \( \displaystyle \rho (t) = \rho_0 / a^3 (t) . \) Because of the homogeneity of the universe, we may assume that the total mass \( \displaystyle M = \rho_0 \cdot \frac{4\pi}{3} \,r_0^3 \) of the universe is concentrated at its center. Then Newton's second law together with his law of gravitation gives, for a point mass m at distance r from the center of the universe,
\[ m \cdot \ddot{r} = - \frac{G m M}{r^2} = - \frac{G m}{r^2} \,\frac{4\pi}{3} \cdot r_0^3 \cdot \rho_0 . \]
Using \( r(t) = a(t) \cdot r_0 , \) and dividing by m yields
\[ \ddot{a} = - \frac{4\pi G \rho_0}{3} \cdot \frac{1}{a^2} . \]
Multiplying by the integrating factor \( \displaystyle 2\dot{a} , \) we get
\[ \frac{\text d}{{\text d}t}\,\dot{a}^2 = \frac{8\pi G \rho_0}{3} \cdot \frac{\text d}{{\text d}t}\,a^{-1} . \]
Integrating both sides with respect to t yields the Friedmann equation
\[ \dot{a}^2 = \frac{8\pi G \rho_0}{3} \,a^{-1} - K\,c^2 . \]
We interpret the integration constant as \( -K c^2 \), where c is the velocity of light and K is the curvature of the four-dimensional space-time universe (dimension \( [K] = \mbox{m}^{-2} \)).
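The reduction step can be checked symbolically; here is a minimal sketch in which the rule accel imposes the acceleration equation (rho0 stands for ρ0):
accel = a''[t] -> -(4 Pi G rho0/3)/a[t]^2;
firstIntegral = a'[t]^2 - (8 Pi G rho0/3)/a[t];
Simplify[D[firstIntegral, t] /. accel]   (* should reduce to 0, so the combination above is constant in time *)
   ■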

 

 

Damped harmonic oscillator


In many practical problems, a second order differential equation can be reduced to an exact equation. For example, consider the equation of motion for the damped harmonic oscillator:

\[ \ddot{y} + 2\mu\,\dot{y} + \omega_0^2 y =0 , \]
where μ and ω0 are positive constants. Let
\[ \lambda_1 = \mu + \sqrt{\mu^2 - \omega_0^2} > \lambda_2 = \mu - \sqrt{\mu^2 - \omega_0^2} \]
be the roots of the quadratic equation \( \lambda^2 - 2\mu\, \lambda + \omega_0^2 =0 \) (their negatives are the roots of the characteristic equation of the oscillator); that is, \( \lambda_1 + \lambda_2 = 2\mu \) and \( \lambda_1 \lambda_2 = \omega_0^2 . \) Multiplying both sides of the harmonic oscillator equation by the integrating factor \( e^{2\mu t} \left( \dot{y} + \mu y \right) , \) we obtain an exact equation:
\[ \frac{1}{2} \, \frac{{\text d} A}{{\text d} t} =0 , \qquad\mbox{where} \quad A = e^{2\mu t} \left[ \left( \dot{y} + \mu\, y \right)^2 + \left( \omega_0^2 - \mu^2 \right) y^2 \right] . \]
Therefore, A is a constant of motion (which is also called a first integral of the equation). Note that A is similar to the energy integral for a conservative system. The connection is easily seen if μ = 0, corresponding to the undamped oscillator. Following Bohlin, it can be shown that the second order differential equation for the damped oscillator admits another first integral
\[ \frac{\left( \dot{y} + \lambda_1 y \right)^{\lambda_1}}{\left( \dot{y} + \lambda_2 y \right)^{\lambda_2}} =B >0 \]
or
\[ b = \ln B = \lambda_1 \ln\left( \dot{y} + \lambda_1 y \right) - \lambda_2 \ln \left( \dot{y} + \lambda_2 y \right) . \]
This result is easily established by considering
\[ \frac{{\text d} b}{{\text d} t} = \frac{\left( \lambda_1 - \lambda_2 \right) \dot{y}}{\left( \dot{y} + \lambda_1 y \right) \left( \dot{y} + \lambda_2 y \right)} \left[ \ddot{y} + \left( \lambda_1 + \lambda_2 \right) \dot{y} + \lambda_1 \lambda_2 y \right] , \]
which vanishes by virtue of the equation of motion.
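Both conserved quantities can be verified symbolically. Here is a minimal check for A, in which the rule eom imposes the equation of motion (mu and w0 stand for μ and ω0); the quantity b can be checked in the same way:
eom = y''[t] -> -2 mu y'[t] - w0^2 y[t];
A = Exp[2 mu t] ((y'[t] + mu y[t])^2 + (w0^2 - mu^2) y[t]^2);
Simplify[D[A, t] /. eom]   (* should reduce to 0 *)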

 The equation of motion for the damped harmonic oscillator may be derived from the Lagrangian

\[ {\cal L} (q, \dot{q},t) = e^{2\mu\,t} \, \frac{1}{2} \left[ \dot{q}^2 - \omega_0^2 q^2 \right] , \]
as one can easily verify. The canonical momentum associated with q is
\[ p = \frac{\partial {\cal L}}{\partial \dot{q}} = \dot{q} \, e^{2\mu\,t} . \]
The Hamiltonian is then
\[ H(q,p,t) = p\dot{q} - {\cal L} = \frac{1}{2}\, e^{-2\mu\,t} \,p^2 + \frac{\omega_0^2}{2} \,q^2 e^{2\mu\,t} . \]
Since this Hamiltonian depends explicitly on time, it does not represent a conserved quantity. This is just what we should expect because we are dealing with a dissipative system.

 Let us simplify the Hamiltonian with the canonical transformation

\[ Q = q \,e^{\mu\,t} , \qquad P= p\,e^{-\mu\,t} , \]
generated by \( F_2 (q, P, t) = q\,P\,e^{\mu\,t} \) (so that \( p = \partial F_2 / \partial q \) and \( Q = \partial F_2 / \partial P \)). The transformed Hamiltonian is
\[ K(Q,P,t) = H(q,p,t) + \frac{\partial F_2}{\partial t} = H(q,p,t) + \mu\, q\,P\,e^{\mu\,t} . \]
Upon some simplifications, we get
\[ K(Q,P,t) = \frac{P^2}{2} + \frac{\omega_0^2}{2} \,Q^2 + \mu\,P\,Q . \]
This formula is rather remarkable because the new Hamiltonian does not depend explicitly on time and is therefore conserved along the motion. If we express the new Hamiltonian in the old variables, we see that the expression
\[ e^{-2\mu\,t} \, \frac{p^2}{2} + \frac{\omega_0^2}{2} \, q^2 e^{2\mu\,t} + \mu\,pq \]
is an integral of the motion.
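As a check, one can verify that this quantity is constant along solutions of Hamilton's equations for H. A minimal sketch (mu and w0 again stand for μ and ω0):
H0 = Exp[-2 mu t] p[t]^2/2 + w0^2/2 Exp[2 mu t] q[t]^2;    (* the Hamiltonian H *)
flow = {q'[t] -> D[H0, p[t]], p'[t] -> -D[H0, q[t]]};      (* Hamilton's equations *)
invariant = H0 + mu p[t] q[t];
Simplify[D[invariant, t] /. flow]                          (* should reduce to 0 *)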

 

  1. Benson, D.C., An elementary solution of the brachistochrone problem, The American Mathematical Monthly, 1969, Vol. 76, No. 8, pp. 890--894.
  2. Bohlin, K., Integrationsmethode für lineare Differentialgleichungen, Astron. Iakttag. Undersökn. Stockholms Observ., 1908, Vol. 9, pp. 3--6.
  3. Brookfield, G., Yet another elementary solution of the brachistochrone problem, Mathematics Magazine, 2010, Vol. 83, No. 1, pp. 59--63.
  4. Haws, L., Kiser, T., Exploring the brachistochrone problem, The American Mathematical Monthly, 1995, Vol. 102, No. 4, pp. 328--336.
  5. Johnson, N., The brachistochrone problem, The College Mathematics Journal, 2004, Vol. 35, No. 3, pp. 192--197.
  6. Kimball, W.S., History of the brachistochrone, Pi Mu Epsilon Journal, 1955, Vol. 2, No. 2, pp. 57--77.
  7. Nahin, P.J., When Least is Best: How Mathematicians Discovered Many Clever Ways to Make Things as Small (or as Large) as Possible, Princeton University Press, Princeton, 2007.
  8. Parsons, D.H., Exact integration of the space-charge equation for a plane diode: A simplified theory, Nature, 1963, Vol. 200, pp. 126--127.
  9. Zeng, J., A note on the brachistochrone problem, The College Mathematics Journal, 1996, Vol. 27, No. 3, pp. 206--208.

 
