Preface


This section provides an introduction to the Magnus expansion for non-autonomous linear systems of differential equations.


Magnus approximations

The outstanding mathematician Wilhelm Magnus (1907--1990) made important contributions to a wide variety of fields in mathematics and mathematical physics [1]. Among them one can mention combinatorial group theory and his collaboration in the Bateman project on higher transcendental functions and integral transforms. The Magnus expansion (ME for short) was introduced as a tool to solve non-autonomous linear differential equations for linear operators. It is interesting to observe that although his seminal paper of 1954 [4] is essentially mathematical in nature, Magnus acknowledges that his work was stimulated by results of K.O. Friedrichs on the theory of linear operators in quantum mechanics. Furthermore, as the first antecedent of his proposal he quotes a paper by R.P. Feynman in the Physical Review.

Given the linear differential equation

\begin{equation} \label{EqMagnus.1} \mathbf{y}' (t) = \mathbf{A}(t)\, \mathbf{y} , \qquad \mathbf{y}(0) = \mathbf{y}_0 , \end{equation}
where A(t) is an n × n matrix and y(t) is an unknown n-dimensional column vector. It will be assumed that the matrix A(t) does not commute with itself at different times; that is, the commutator [A(t), A(s)] = A(t)A(s) − A(s)A(t) does not vanish in general. Here, as usual, the prime denotes the derivative with respect to the real independent variable, which we take as time t, although much of what will be said also applies to a complex independent variable.
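For instance, the family \( \mathbf{A}(t) = \begin{bmatrix} 0&1 \\ -t&0 \end{bmatrix} \) (an illustrative choice, not one taken from the text) fails to commute with itself at different times; the short Python sketch below makes the commutator explicit:

```python
def matA(t):
    # illustrative time-dependent family A(t) = [[0, 1], [-t, 0]]
    return [[0.0, 1.0], [-t, 0.0]]

def mul(x, y):
    # 2 x 2 matrix product
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def commutator(x, y):
    # [X, Y] = XY - YX
    xy, yx = mul(x, y), mul(y, x)
    return [[xy[i][j] - yx[i][j] for j in range(2)] for i in range(2)]

# [A(t), A(s)] = (t - s) * diag(1, -1), which is nonzero whenever t != s
print(commutator(matA(1.0), matA(2.0)))  # [[-1.0, 0.0], [0.0, 1.0]]
```

Since the commutator is nonzero for t ≠ s, the naive exponential formula discussed next does not apply to this family.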

If the matrix A(t) does commute with itself at different times, so that the commutator [A(t), A(s)] = 0, that is, A(t)A(s) = A(s)A(t), then the solution to the initial value problem \eqref{EqMagnus.1} is

\[ \mathbf{y}(t) = \exp \left\{ \int_0^t \mathbf{A}(s)\,{\text d} s \right\} \mathbf{y}_0 . \]
   
Example 1: Let \( \mathbf{A}(t) = t\,\mathbf{B} \) with the constant matrix \( \mathbf{B} = \begin{bmatrix} 0&1 \\ 0&0 \end{bmatrix} . \) Then A(t)A(s) = ts B² = A(s)A(t), so A(t) commutes with itself and the formula above applies:
\[ \mathbf{y}(t) = \exp \left\{ \int_0^t s\,\mathbf{B}\,{\text d}s \right\} \mathbf{y}_0 = \exp \left\{ \frac{t^2}{2}\,\mathbf{B} \right\} \mathbf{y}_0 = \left( \mathbf{I} + \frac{t^2}{2}\,\mathbf{B} \right) \mathbf{y}_0 , \]
where the exponential series terminates because B² = 0.    ■
End of Example 1
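The commuting-case formula is easy to check numerically. The Python sketch below (assuming the illustrative nilpotent choice A(t) = t·N with N = [[0, 1], [0, 0]], for which exp{(t²/2)N} = I + (t²/2)N) integrates \eqref{EqMagnus.1} with the classical Runge--Kutta scheme and compares the result against the exponential formula:

```python
def matA(t):
    # A(t) = t * N with nilpotent N = [[0, 1], [0, 0]]; then A(t)A(s) = A(s)A(t) = 0
    return [[0.0, t], [0.0, 0.0]]

def apply(a, v):
    # matrix-vector product for 2 x 2 matrices
    return [a[0][0] * v[0] + a[0][1] * v[1],
            a[1][0] * v[0] + a[1][1] * v[1]]

def rk4(y0, t_end, steps=1000):
    # classical 4th-order Runge-Kutta for y' = A(t) y, y(0) = y0
    h = t_end / steps
    y = list(y0)
    for i in range(steps):
        t = i * h
        k1 = apply(matA(t), y)
        k2 = apply(matA(t + h / 2), [y[j] + h / 2 * k1[j] for j in range(2)])
        k3 = apply(matA(t + h / 2), [y[j] + h / 2 * k2[j] for j in range(2)])
        k4 = apply(matA(t + h), [y[j] + h * k3[j] for j in range(2)])
        y = [y[j] + h / 6 * (k1[j] + 2 * k2[j] + 2 * k3[j] + k4[j]) for j in range(2)]
    return y

y0 = [1.0, 1.0]
t = 1.0
numeric = rk4(y0, t)
exact = [y0[0] + t**2 / 2 * y0[1], y0[1]]   # (I + (t^2/2) N) y0
print(numeric, exact)                        # both approximately [1.5, 1.0]
```

The two results agree to within the integrator's rounding error, confirming the exponential formula for this self-commuting family.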

We denote by U(t) the propagator of the initial value problem \eqref{EqMagnus.1}; then its solution becomes y(t) = U(t)y0. The propagator (or evolution operator) is a solution of the matrix equation

\begin{equation} \label{EqMagnus.2} \mathbf{U}' = \mathbf{A}\,\mathbf{U} , \qquad \mathbf{U}(0) = \mathbf{I} , \end{equation}
where I is the identity matrix. Magnus proposed that the propagator has the exponential form
\begin{equation} \label{EqMagnus.3} \mathbf{U}(t) = \exp \Omega (t) , \qquad \Omega (0) = 0 , \end{equation}
and a series expansion for the matrix in the exponent
\begin{equation} \label{EqMagnus.4} \Omega (t) = \sum_{k\ge 1} \Omega_k (t) . \end{equation}
This series is called the Magnus expansion. The first few terms are (with the abbreviation \( A_k \equiv A(t_k) \)):
\begin{align*} \Omega_1 (t) &= \int_0^t A( t_1 )\,{\text d}t_1 , \\ \Omega_2 (t) &= \frac{1}{2} \int_0^t {\text d} t_1 \int_0^{t_1} {\text d} t_2 \left[ A(t_1 ) , A(t_2 ) \right] , \\ \Omega_3 (t) &= \frac{1}{6} \int_0^t {\text d} t_1 \int_0^{t_1} {\text d} t_2 \int_0^{t_2} {\text d} t_3 \left( \left[ A_1 , \left[ A_2 , A_3 \right] \right] + \left[ A_3 , \left[ A_2 , A_1 \right] \right] \right) , \\ \Omega_4 (t) &= \frac{1}{12} \int_0^t {\text d}t_1 \int_0^{t_1} {\text d} t_2 \int_0^{t_2} {\text d} t_3 \int_0^{t_3} {\text d} t_4 \left( \left[ \left[ \left[ A_1 , A_2 \right] , A_3 \right] , A_4 \right] \right. \\ & \quad + \left[ A_1 , \left[ \left[ A_2 , A_3 \right] , A_4 \right] \right] + \left[ A_1 , \left[ A_2 , \left[ A_3 ,A_4 \right] \right] \right] \\ & \quad \left. + \left[ A_2 , \left[ A_3 , \left[ A_4 , A_1 \right] \right] \right] \right) . \end{align*}
The interpretation of these equations is clear: Ω₁(t) coincides exactly with the exponent in \( \displaystyle \quad \mathbf{y}(t) = \exp \left\{ \int_0^t A(s)\,{\text d}s \right\} \mathbf{y}_0 . \quad \) However, this term alone cannot give the whole solution when A(t) does not commute with itself at different times. So, if one insists on an exponential solution, the exponent has to be corrected. The remaining terms of the ME \eqref{EqMagnus.4} provide precisely the correction needed to keep the exponential form of the solution.
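To see the correction at work, one can compare \( e^{\Omega_1} \) and \( e^{\Omega_1 + \Omega_2} \) with a numerically computed propagator. The Python sketch below uses the illustrative choice \( A(t) = \begin{bmatrix} 0&1 \\ -t&0 \end{bmatrix} \) (an assumption for this demo, not from the text), for which the iterated integrals evaluate in closed form: [A(t₁), A(t₂)] = (t₁ − t₂) diag(1, −1), so Ω₁(t) = [[0, t], [−t²/2, 0]] and Ω₂(t) = (t³/12) diag(1, −1):

```python
import math

I2 = [[1.0, 0.0], [0.0, 1.0]]

def matA(t):
    # illustrative non-commuting family A(t) = [[0, 1], [-t, 0]]
    return [[0.0, 1.0], [-t, 0.0]]

def mul(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def lin(a, x, b, y):
    # elementwise a*X + b*Y
    return [[a * x[i][j] + b * y[i][j] for j in range(2)] for i in range(2)]

def expm(x, terms=30):
    # matrix exponential via truncated Taylor series; adequate for these small matrices
    result, term = I2, I2
    for k in range(1, terms + 1):
        term = mul(term, [[x[i][j] / k for j in range(2)] for i in range(2)])
        result = lin(1.0, result, 1.0, term)
    return result

def propagator(t_end, steps=2000):
    # RK4 reference solution of U' = A(t) U, U(0) = I
    h = t_end / steps
    u = I2
    for i in range(steps):
        t = i * h
        k1 = mul(matA(t), u)
        k2 = mul(matA(t + h / 2), lin(1.0, u, h / 2, k1))
        k3 = mul(matA(t + h / 2), lin(1.0, u, h / 2, k2))
        k4 = mul(matA(t + h), lin(1.0, u, h, k3))
        u = lin(1.0, u, h / 6, lin(1.0, lin(1.0, k1, 2.0, k2), 1.0, lin(2.0, k3, 1.0, k4)))
    return u

def frob_dist(x, y):
    # Frobenius norm of X - Y
    return math.sqrt(sum((x[i][j] - y[i][j]) ** 2 for i in range(2) for j in range(2)))

t = 1.0
omega1 = [[0.0, t], [-t**2 / 2, 0.0]]                            # integral of A over [0, t]
omega12 = lin(1.0, omega1, 1.0, [[t**3 / 12, 0.0], [0.0, -t**3 / 12]])
u_ref = propagator(t)
err1 = frob_dist(expm(omega1), u_ref)       # first-order Magnus: exp(Omega_1)
err2 = frob_dist(expm(omega12), u_ref)      # with the Omega_2 correction
print(err1, err2)                           # adding Omega_2 shrinks the error markedly
```

The remaining discrepancy after adding Ω₂ is due to the neglected terms Ω₃, Ω₄, …, which are of higher order in t.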

  1. Arnal, A., Casas, F., and Chiralt, C., A general formula for the Magnus expansion in terms of iterated integrals of right-nested commutators, Journal of Physics Communications, 2018, Vol. 2, 035024.
  2. Blanes, S., Casas, F., Oteo, J.A., and Ros, J., The Magnus expansion and some of its applications, 2008. https://arxiv.org/pdf/0810.5488
  3. Magnus, W., Algebraic aspects in the theory of systems of linear differential equations, Technical Report BR-3, New York University, 1953.
  4. Magnus, W., On the exponential solution of differential equations for a linear operator, Communications on Pure and Applied Mathematics, 1954, Vol. 7, 649--673.
