# Preface

This tutorial was made solely for educational purposes and is designed for students taking Applied Math 0340. It is primarily for students who have some experience using Mathematica. If you have never used Mathematica before and would like to learn the basics of this computer algebra system, it is strongly recommended that you look at the APMA 0330 tutorial. As a friendly reminder, don't forget to clear variables in use and/or the kernel. The Mathematica commands in this tutorial are all written in bold black font, while Mathematica output is in normal font.

Finally, you can copy and paste all commands into your Mathematica notebook, change the parameters, and run them, because the tutorial is distributed under the terms of the GNU General Public License (GPL). You, as the user, are free to use the scripts to learn the Mathematica program, and you have the right to distribute and refer to this tutorial as long as it is credited appropriately. The tutorial accompanies the textbook Applied Differential Equations. The Primary Course by Vladimir Dobrushkin, CRC Press, 2015; http://www.crcpress.com/product/isbn/9781439851043


# Fundamental Matrices

A fundamental matrix of a system of n homogeneous linear ordinary differential equations
$\dot{\bf x} (t) = {\bf P}(t)\,{\bf x}(t) , \qquad {\bf x}(t) = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}, \quad {\bf P}(t) = \begin{pmatrix} p_{11} (t) & p_{12} (t) & \cdots & p_{1n} (t) \\ p_{21} (t) & p_{22} (t) & \cdots & p_{2n} (t) \\ \vdots & \vdots & \ddots & \vdots \\ p_{n1} (t) & p_{n2} (t) & \cdots & p_{nn} (t) \end{pmatrix} ,$
is any nonsingular matrix function Ψ(t) (its determinant is nonzero for all t) that satisfies the matrix differential equation:
$\dot{\bf \Psi} (t) = {\bf P}(t)\,{\bf \Psi}(t) .$
Here the dot stands for the derivative with respect to the time variable t. In other words, a fundamental matrix has n linearly independent columns, each of which is a solution of the homogeneous vector equation $$\dot{\bf x} (t) = {\bf P}(t)\,{\bf x}(t) .$$ Once a fundamental matrix is determined, every solution to the system can be written as $${\bf x} (t) = {\bf \Psi}(t)\,{\bf c} ,$$ for some constant vector c (written as a column vector of height n). The product of a fundamental matrix and a nonsingular constant matrix is again a fundamental matrix; therefore, a fundamental matrix is not unique.
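Although this tutorial works in Mathematica, the definition can also be cross-checked numerically. The following sketch (in Python with NumPy/SciPy, which are assumptions outside this tutorial) builds a fundamental matrix for a sample constant-coefficient system by integrating the equation from n independent initial vectors, then confirms the resulting matrix is nonsingular:

```python
# Numerical sketch: build a fundamental matrix for x'(t) = P x(t)
# by integrating from the standard basis vectors e1, e2.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

P = np.array([[2.0, 1.0], [6.0, 3.0]])   # a sample constant coefficient matrix

def rhs(t, x):
    return P @ x

t_end = 1.0
cols = []
for e in np.eye(2):                       # one integration per initial vector
    sol = solve_ivp(rhs, (0.0, t_end), e, rtol=1e-10, atol=1e-12)
    cols.append(sol.y[:, -1])
Psi = np.column_stack(cols)               # fundamental matrix evaluated at t_end

# nonsingular: the determinant is nonzero
assert abs(np.linalg.det(Psi)) > 1e-8
# for constant P this fundamental matrix coincides with the matrix exponential
assert np.allclose(Psi, expm(P * t_end), rtol=1e-6)
```

Since the columns start from linearly independent vectors and each solves the system, the resulting matrix solves the matrix equation and stays nonsingular.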
Theorem: If X(t) is a solution of the n × n matrix differential equation $$\dot{\bf X} (t) = {\bf P}(t)\,{\bf X}(t) ,$$ then for any constant column-vector c, the n-vector u = X(t) c is a solution of the vector equation $$\dot{\bf x} (t) = {\bf P}(t)\,{\bf x}(t) .$$

Theorem: If an n × n matrix P(t) has continuous entries on an open interval, then the vector differential equation $$\dot{\bf x} (t) = {\bf P}(t)\,{\bf x}(t)$$ has an n × n fundamental matrix $${\bf X} (t) = \left\{ {\bf x}_1 (t) , {\bf x}_2 (t) , \ldots , {\bf x}_n (t) \right\}$$ on the same interval. Every solution x(t) to this system can be written as a linear combination of the column vectors of the fundamental matrix in a unique way:
${\bf x} (t) = c_1 {\bf x}_1 (t) + c_2 {\bf x}_2 (t) + \cdots + c_n {\bf x}_n (t) \qquad\mbox{or in matrix form} \quad {\bf x} (t) = {\bf X} (t)\, {\bf c}$
for appropriate constants c1, c2, ... , cn, where $${\bf c} = \left\langle c_1 , c_2 , \ldots , c_n \right\rangle^{\mathrm T}$$ is a column vector of these constants.

The above representation of a solution as a linear combination of linearly independent vector functions is referred to as the general solution to the homogeneous vector differential equation $$\dot{\bf x} (t) = {\bf P}(t)\,{\bf x}(t) .$$

Theorem: The general solution of a nonhomogeneous linear vector equation $$\dot{\bf x} (t) = {\bf P}(t)\,{\bf x}(t) + {\bf f} (t)$$ is the sum of the general solution of the complementary homogeneous equation $$\dot{\bf x} (t) = {\bf P}(t)\,{\bf x}(t)$$ and a particular solution of the inhomogeneous equation. That is, every solution to $$\dot{\bf x} (t) = {\bf P}(t)\,{\bf x}(t) + {\bf f} (t)$$ is of the form
${\bf x} (t) = c_1 {\bf x}_1 (t) + c_2 {\bf x}_2 (t) + \cdots + c_n {\bf x}_n (t) + {\bf x}_p (t)$
for some constants c1, c2, ... , cn, where
${\bf x}_h (t) = c_1 {\bf x}_1 (t) + c_2 {\bf x}_2 (t) + \cdots + c_n {\bf x}_n (t)$
is the general solution of the homogeneous linear equation $$\dot{\bf x} (t) = {\bf P}(t)\,{\bf x}(t)$$ and xp (t) is a particular solution of the nonhomogeneous equation $$\dot{\bf x} (t) = {\bf P}(t)\,{\bf x}(t) + {\bf f} (t) .$$

Theorem (Superposition Principle for inhomogeneous equations): Let P(t) be an n × n matrix function that is continuous on an interval [a,b], and let x1(t) and x2(t) be vector solutions of the nonhomogeneous equations
$\dot{\bf x}_1 (t) = {\bf P}(t)\,{\bf x}_1 (t) + {\bf f}_1 (t) , \qquad \dot{\bf x}_2 (t) = {\bf P}(t)\,{\bf x}_2 (t) + {\bf f}_2 (t) , \quad t \in [a,b] ,$
respectively. Then their sum $${\bf x} (t) = {\bf x}_1 (t) + {\bf x}_2 (t)$$ is a solution of the nonhomogeneous equation $$\dot{\bf x} (t) = {\bf P}(t)\,{\bf x}(t) + {\bf f}_1 (t) + {\bf f}_2 (t) .$$ ■

Corollary: The difference between any two solutions of the nonhomogeneous vector equation $$\dot{\bf x} (t) = {\bf P}(t)\,{\bf x}(t) + {\bf f} (t)$$ is a solution of the complementary homogeneous equation $$\dot{\bf x} (t) = {\bf P}(t)\,{\bf x}(t) .$$

It is not hard to verify that the vector functions

${\bf x}_1 (t) = \begin{bmatrix} 1 \\ t \end{bmatrix} , \qquad {\bf x}_2 (t) = \begin{bmatrix} t^2 \\ t \end{bmatrix}$
are two linearly independent solutions to the following homogeneous vector differential equation
$\dot{\bf x} (t) = {\bf P} (t)\, {\bf x} (t) , \qquad {\bf P} (t) = \frac{1}{t \left( 1 - t^2 \right)} \begin{bmatrix} -2t^2 & 2t \\ 0 & 1- t^2 \end{bmatrix} .$
Therefore, the corresponding fundamental matrix is
${\bf X} (t) = \begin{bmatrix} 1 & t^2 \\ t & t \end{bmatrix}, \qquad \det {\bf X} (t) = t - t^3 = t \left( 1 - t^2 \right) .$
The determinant $$W(t) = \det\,{\bf X}(t)$$ of a square matrix $${\bf X}(t) = \left[ {\bf x}_1 (t) , {\bf x}_2 (t) , \ldots , {\bf x}_n (t) \right]$$ formed from the set of n vector functions x1, x2, ... , xn, is called the Wronskian of these column vectors x1, x2, ... , xn.
Theorem: [N. Abel] Let P(t) be an n × n matrix function with entries pij(t) (i,j = 1,2, ... ,n) that are continuous functions on some interval. Let xk(t), k = 1,2, ... , n, be n solutions to the homogeneous vector differential equation $$\dot{\bf x} (t) = {\bf P}(t)\,{\bf x}(t) .$$ Then the Wronskian of the set of vector solutions is
$W(t) = \det {\bf X}(t) = W(t_0 ) \,\exp \left\{ \int_{t_0}^t \mbox{tr}\,{\bf P}(s)\,{\text d}s \right\} ,$
with t0 being a point within an interval where the trace tr P(t) = p11 + p22 + ... + pnn is continuous. ■

Corollary: Let x1(t), x2(t), ... , xn(t) be column solutions of the homogeneous vector equation $$\dot{\bf x} (t) = {\bf P}(t)\,{\bf x}(t)$$ on some interval |a,b|, where the n × n matrix function P(t) is continuous. Then the corresponding matrix $${\bf X}(t) = \left[ {\bf x}_1 (t) , {\bf x}_2 (t) , \ldots , {\bf x}_n (t) \right]$$ of these column vectors is either singular for all t ∈ |a,b| or else nonsingular for all t ∈ |a,b|. In other words, det X(t) either is identically zero or never vanishes on the interval |a,b|.

Corollary: Let P(t) be an n × n matrix function that is continuous on an interval |a,b|. If $$\left\{ {\bf x}_1 (t) , {\bf x}_2 (t) , \ldots , {\bf x}_n (t) \right\}$$ is a linearly independent set of solutions to the homogeneous differential equation $$\dot{\bf x} (t) = {\bf P}(t)\,{\bf x}(t)$$ on |a,b|, then the Wronskian
$W(t) = \det {\bf X}(t) = \det \left[ {\bf x}_1 (t) , {\bf x}_2 (t) , \ldots , {\bf x}_n (t) \right]$
is nonzero at every point t in |a,b|. ■

The matrix

${\bf P} (t) = \frac{1}{t \left( 1 - t^2 \right)} \begin{bmatrix} -2t^2 & 2t \\ 0 & 1- t^2 \end{bmatrix} \qquad (t \ne 0,1,-1)$
has the trace tr $${\bf P} = \frac{1 - 3\,t^2}{t \left( 1 - t^2 \right)} .$$ Integrating the trace, we get
$\int \mbox{tr}\,{\bf P} (t) \,{\text d} t = \int \frac{1 - 3\,t^2}{t \left( 1 - t^2 \right)} \,{\text d} t = \ln \left\vert t- t^3 \right\vert .$
From Abel's theorem, it follows that the Wronskian is
$W(t) = C\, e^{\int \mbox{tr}\,{\bf P}(t)\,{\text d}t} = C\, \left( t- t^3 \right) .$
On the other hand, direct calculations show that the Wronskian of the given functions x1(t) and x2(t) is
$W(t) = \det \begin{bmatrix} 1& t^2 \\ t&t \end{bmatrix} = t - t^3 \ne 0 \quad \mbox{for} \quad t \ne 0, 1, -1 .$
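This hand computation can be cross-checked numerically. A sketch in Python with SciPy (an assumption; the tutorial itself uses Mathematica) evaluates Abel's formula for this example on an interval avoiding the singular points t = 0, ±1:

```python
# Numerical check of Abel's formula W(t) = W(t0) * exp(int tr P)
# for the 2x2 example, where W(t) = det X(t) = t - t^3.
import numpy as np
from scipy.integrate import quad

def trace_P(t):
    # tr P(t) = (1 - 3 t^2) / (t (1 - t^2)) for the example matrix P(t)
    return (1 - 3 * t**2) / (t * (1 - t**2))

def W(t):
    return t - t**3                        # Wronskian of x1, x2

t0, t1 = 2.0, 3.0                          # interval away from t = 0, 1, -1
integral, _ = quad(trace_P, t0, t1)
assert np.isclose(W(t1), W(t0) * np.exp(integral))
```

On [2, 3] the integral equals ln|3 − 27| − ln|2 − 8| = ln 4, so Abel's formula reproduces W(3) = 4 W(2) = −24.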

Let us consider the initial value problem

$\frac{{\text d}{\bf x}}{{\text d}t} = {\bf P}(t)\, {\bf x} (t) , \qquad {\bf x} (t_0 ) = {\bf x}_0 .$
The general solution of the homogeneous equation is
${\bf x} (t) = {\bf X} (t)\, {\bf c} ,$
where $${\bf c} = \left\langle c_1 , c_2 , \ldots , c_n \right\rangle^{\mathrm T}$$ is the column vector of arbitrary constants. To satisfy the initial condition, we set
${\bf X} (t_0 )\, {\bf c} = {\bf x}_0 \qquad\mbox{or} \qquad {\bf c} = {\bf X}^{-1} (t_0 )\, {\bf x}_0 .$
Therefore, the solution to the initial value problem becomes
${\bf x}(t) = {\bf \Phi} (t, t_0 )\,{\bf x}_0 = {\bf X} (t)\,{\bf X}^{-1} (t_0 )\, {\bf x}_0 .$
The square matrix $${\bf \Phi} (t, s) = {\bf X} (t)\, {\bf X}^{-1} (s)$$ is usually referred to as a propagator matrix.
Theorem: Let X(t) be a fundamental matrix for the homogeneous linear system $$\dot{\bf x} = {\bf P}(t)\,{\bf x} (t) ,$$ meaning that X(t) is a solution of the matrix equation $$\dot{\bf X} = {\bf P}(t)\,{\bf X} (t)$$ and det X(t) ≠ 0. Then the unique solution of the initial value problem
$\dot{\bf x}(t) = {\bf P}(t)\,{\bf x} (t) , \qquad {\bf x} (t_0 ) = {\bf x}_0$
is given by $${\bf x}(t) = {\bf \Phi} (t, t_0 )\,{\bf x}_0 .$$

Corollary: For a fundamental matrix X(t), the propagator matrix Φ(t, t0) is the unique solution of the following matrix initial value problem
$\frac{\text d}{{\text d}t}\,{\bf \Phi} \left( t, t_0 \right) = {\bf P}(t)\, {\bf \Phi} \left( t, t_0 \right) , \qquad {\bf \Phi} \left( t_0 , t_0 \right) = {\bf I} ,$
where I is the identity matrix. Hence, Φ(t, t0) is a fundamental matrix of the homogeneous vector differential equation $$\dot{\bf x} = {\bf P}(t)\,{\bf x} (t) .$$
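The defining properties of the propagator can be checked numerically for the earlier 2×2 example. A sketch in Python with NumPy (an assumption; the tutorial itself uses Mathematica) verifies Φ(t0, t0) = I and the composition rule:

```python
# Numerical sketch: propagator Phi(t, s) = X(t) X(s)^{-1} for the
# fundamental matrix X(t) = [[1, t^2], [t, t]] of the earlier example.
import numpy as np

def X(t):
    return np.array([[1.0, t**2], [t, t]])

def Phi(t, s):
    return X(t) @ np.linalg.inv(X(s))      # X(s) is nonsingular away from 0, 1, -1

# Phi(t0, t0) = I
assert np.allclose(Phi(2.0, 2.0), np.eye(2))
# composition rule: Phi(t, s) Phi(s, r) = Phi(t, r)
assert np.allclose(Phi(4.0, 3.0) @ Phi(3.0, 2.0), Phi(4.0, 2.0))
```

The composition rule follows directly from the definition, since the inner factors X(s)^{-1} X(s) cancel.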

Corollary: Let X(t) and Y(t) be two fundamental matrices of the homogeneous vector equation $$\dot{\bf x} = {\bf P}(t)\,{\bf x} (t) .$$ Then there exists a nonsingular constant square matrix C such that $${\bf X} (t) = {\bf Y} (t)\, {\bf C} , \ \det{\bf C} \ne 0 .$$ This means that a single fundamental matrix determines every other one up to multiplication on the right by a nonsingular constant matrix. ■

Consider the initial value problem

$\dot{\bf x} (t) = {\bf P}(t)\,{\bf x} (t) , \quad {\bf x} (2) = {\bf x}_0 , \qquad\mbox{where} \quad {\bf P}(t) = \frac{1}{t \left( 1 - t^2 \right)} \begin{bmatrix} -2t^2 & 2t \\ 0 & 1- t^2 \end{bmatrix} , \quad {\bf x}_0 = \begin{bmatrix} 2 \\ 1 \end{bmatrix} .$
We know from the previous example that a fundamental matrix for the corresponding homogeneous vector equation is
${\bf X} (t) = \begin{bmatrix} 1 & t^2 \\ t & t \end{bmatrix} .$
Since
${\bf X}^{-1} (t) = \frac{1}{t \left( 1 - t^2 \right)} \begin{bmatrix} t & -t^2 \\ -t & 1 \end{bmatrix} \qquad \Longrightarrow \qquad {\bf X}^{-1} (2) = \frac{1}{6} \begin{bmatrix} -2&4 \\ 2& -1 \end{bmatrix} ,$
we get the propagator matrix
${\bf \Phi} (t,2) = {\bf X} (t) {\bf X}^{-1} (2) = \frac{1}{6} \begin{bmatrix} 2t^2 -2 & 4- t^2 \\ 0 & 3t \end{bmatrix} .$
Then the solution of the given initial value problem becomes
${\bf x} (t) = {\bf \Phi} (t,2) \, {\bf x}_0 = \frac{1}{6} \begin{bmatrix} 2t^2 -2 & 4- t^2 \\ 0 & 3t \end{bmatrix} \begin{bmatrix} 2 \\ 1 \end{bmatrix} = \frac{1}{2} \begin{bmatrix} t^2 \\ t \end{bmatrix} .$
X[t_] = {{1, t^2}, {t, t}}   (* fundamental matrix *)
P[t_] = {{-((2 t^2)/(t - t^3)), (2 t)/(t - t^3)}, {0, 1/(t - t^3) - t^2/(t - t^3)}}
Simplify[D[X[t], t] - P[t].X[t]]   (* verify that X'(t) = P(t) X(t) *)
Inverse[{{1, t^2}, {t, t}}] /. t -> 2   (* X^{-1}(2) *)
X[t].{{-(1/3), 2/3}, {1/3, -(1/6)}}   (* the propagator matrix Phi(t, 2) *)
Simplify[%.{{2}, {1}}]   (* apply it to the initial vector x0 *)
{{t^2/2}, {t/2}}
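The closed-form answer can also be cross-checked numerically. A short sketch in Python with NumPy (an assumption; the tutorial itself uses Mathematica) evaluates Φ(t, 2) x0 at several points and compares it with (t²/2, t/2):

```python
# Numerical check that Phi(t, 2) x0 = (t^2/2, t/2) for the IVP above.
import numpy as np

def Phi(t):
    # propagator Phi(t, 2) computed in the example
    return np.array([[2 * t**2 - 2, 4 - t**2], [0.0, 3 * t]]) / 6.0

x0 = np.array([2.0, 1.0])
for t in (2.0, 3.0, 5.0):
    assert np.allclose(Phi(t) @ x0, [t**2 / 2, t / 2])
```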

# Exponential Matrices

Consider an autonomous linear vector differential equation of the form

$\dot{\bf y} (t) = {\bf A}\, {\bf y} (t) ,$
where A is a square n × n matrix and y(t) is an (n × 1)-column vector of n unknown functions. Here we use a dot to denote the derivative with respect to t. A solution of the above equation is a curve in n-dimensional space; it is called an integral curve, a trajectory, a streamline, or an orbit. When the independent variable t is associated with time (which is usually the case), we call a solution y(t) the state of the system at time t. Since a constant matrix A is continuous on any interval, all solutions of the system $$\dot{\bf y} (t) = {\bf A} \, {\bf y} (t)$$ are defined on ( -∞ , ∞ ). Therefore, when we speak of solutions to the vector equation $$\dot{\bf y} (t) = {\bf A} \, {\bf y} (t) ,$$ we consider solutions on the whole real axis.

For a constant matrix A, the exponential matrix is a fundamental matrix:

${\bf \Phi}(t) = e^{{\bf A}\,t} .$
Then the propagator is expressed as
${\bf \Phi}(t,s) = e^{{\bf A}\,(t-s)} .$
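For constant A these formulas can be checked numerically. A sketch in Python with SciPy (an assumption; SciPy's `expm` plays the role of Mathematica's MatrixExp) verifies the propagator identities:

```python
# Numerical sketch: for constant A the propagator is Phi(t, s) = expm(A (t - s)).
import numpy as np
from scipy.linalg import expm

A = np.array([[2.0, 1.0], [6.0, 3.0]])   # the example matrix used below
t, s = 0.7, 0.2

# semigroup property: expm(A (t - s)) = expm(A t) expm(-A s),
# valid here because A commutes with itself
assert np.allclose(expm(A * (t - s)), expm(A * t) @ expm(-A * s))
# Phi(s, s) = I
assert np.allclose(expm(A * 0.0), np.eye(2))
```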

Mathematica has a couple of options for determining a fundamental matrix. It has a built-in command MatrixExp[A t] that determines a fundamental matrix for any square matrix A. Another way to find a fundamental matrix (when A has a full set of eigenvectors) is to use a two-line approach:

{roots,vectors} = Eigensystem[A]
Phi[t_] = Transpose[Exp[roots t]*vectors]

Consider a linear system of differential equations

$\dot{\bf y}(t) = {\bf A}\,{\bf y}(t), \qquad\mbox{where}\quad {\bf y}(t) = \begin{bmatrix} y_1 (t) \\ y_2 (t) \end{bmatrix}, \quad {\bf A} = \begin{bmatrix} 2&1 \\ 6&3 \end{bmatrix} .$
First, we find eigenvalues and eigenvectors:
Eigenvalues[{{2, 1}, {6, 3}}]
{5, 0}
Eigenvectors[{{2, 1}, {6, 3}}]
{{1, 3}, {-1, 2}}
We check with Mathematica that the vectors v1 = [1, 3] and v2 = [-1, 2] are eigenvectors corresponding to the eigenvalues λ1 = 5 and λ2 = 0, respectively.
A = {{2, 1}, {6, 3}};
v1 = {1, 3};
v2 = {-1, 2};
A.v1 - 5*v1
{0, 0}
A.v2
{0, 0}
Now we use Mathematica to determine the fundamental matrix. First, we define two linearly independent solutions:
lambda1=5; lambda2=0; y1[t_] = Exp[lambda1*t]*v1
Out[13]= {E^(5 t), 3 E^(5 t)}
y2[t_] = Exp[lambda2*t]*v2
Out[14]= {-1, 2}
The general solution:
y[t_] = c1*y1[t]+c2*y2[t]
Out[15]= {-c2 + c1 E^(5 t), 2 c2 + 3 c1 E^(5 t)}
(* check *)
Simplify[y'[t]-A.y[t]=={0,0}]
Out[16]= True
To find the fundamental matrix:
W[t_]=Transpose[{y1[t],y2[t]}]
Out[17]= {{E^(5 t), -1}, {3 E^(5 t), 2}}
Det[W[t]]
Out[18]= 5 E^(5 t)
Simplify[W'[t]-A.W[t]==0,Trig->False]
Out[19]= {{0, 0}, {0, 0}} == 0
Simplify[W'[t] - A.W[t] == {{0, 0}, {0, 0}}]
Out[20]= True
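The same eigenpair construction can be reproduced outside Mathematica. A sketch in Python with NumPy (an assumption) builds the fundamental matrix W(t) = [e^{5t} v1, v2] and checks that it satisfies the matrix equation with a nonvanishing determinant:

```python
# Numerical sketch: fundamental matrix from the eigenpairs of A = [[2,1],[6,3]].
import numpy as np

A = np.array([[2.0, 1.0], [6.0, 3.0]])
v1, v2 = np.array([1.0, 3.0]), np.array([-1.0, 2.0])   # eigenvectors for 5 and 0

def W(t):
    return np.column_stack((np.exp(5 * t) * v1, v2))

def dW(t):                                 # derivative, column by column
    return np.column_stack((5 * np.exp(5 * t) * v1, 0 * v2))

for t in (0.0, 0.5, 1.0):
    assert np.allclose(dW(t), A @ W(t))    # W'(t) = A W(t)
# det W(t) = 5 e^{5t} never vanishes
assert np.isclose(np.linalg.det(W(0.0)), 5.0)
```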

Consider the following example:

A:= {{3, 2, 4}, {2, 0, 2}, {4, 2, 3}};
Eigenvalues[A]
Out[2]= {8, -1, -1}
MatrixExp[A t]
Out[3]= {{1/9 E^-t (5 + 4 E^(9 t)), 2/9 E^-t (-1 + E^(9 t)),
4/9 E^-t (-1 + E^(9 t))}, {2/9 E^-t (-1 + E^(9 t)),
1/9 E^-t (8 + E^(9 t)),
2/9 E^-t (-1 + E^(9 t))}, {4/9 E^-t (-1 + E^(9 t)),
2/9 E^-t (-1 + E^(9 t)), 1/9 E^-t (5 + 4 E^(9 t))}}
Dt[MatrixExp[A t], t];
Simplify[%]
Out[5]= {{1/9 E^-t (-5 + 32 E^(9 t)), 2/9 E^-t (1 + 8 E^(9 t)),
4/9 E^-t (1 + 8 E^(9 t))}, {2/9 E^-t (1 + 8 E^(9 t)),
8/9 E^-t (-1 + E^(9 t)),
2/9 E^-t (1 + 8 E^(9 t))}, {4/9 E^-t (1 + 8 E^(9 t)),
2/9 E^-t (1 + 8 E^(9 t)), 1/9 E^-t (-5 + 32 E^(9 t))}}
To check that the exponential matrix is the solution of the matrix differential equation:
Simplify[Dt[MatrixExp[A t], t] - A.MatrixExp[A t]]
Out[6]= {{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}
subject to the initial conditions:
Print[MatrixExp[A 0]]
Out[7]= {{1,0,0},{0,1,0},{0,0,1}}
Note that instead of Dt, we can use the partial derivative operator: D[function,t]

The general solution:

CC := {c1, c2, c3} (* vector of arbitrary constants *)
(* note that the symbol C is reserved in Mathematica, so we cannot use it *)
MatrixExp[A t].CC
Out[9]= {1/9 E^-t (5 + 4 E^(9 t)) c1 + 2/9 E^-t (-1 + E^(9 t)) c2 +
 4/9 E^-t (-1 + E^(9 t)) c3,
 2/9 E^-t (-1 + E^(9 t)) c1 + 1/9 E^-t (8 + E^(9 t)) c2 +
 2/9 E^-t (-1 + E^(9 t)) c3,
 4/9 E^-t (-1 + E^(9 t)) c1 + 2/9 E^-t (-1 + E^(9 t)) c2 +
 1/9 E^-t (5 + 4 E^(9 t)) c3}

A = {{3, 2, 1}, {5, 4, 5}, {3, 5, 5}};
B = IdentityMatrix[3]*\[Lambda]
Out[2]= {{\[Lambda], 0, 0}, {0, \[Lambda], 0}, {0, 0, \[Lambda]}}
Resolvent = Inverse[B - A]
Out[3]=
{{(-5 - 9 \[Lambda] + \[Lambda]^2)/(
22 + 9 \[Lambda] - 12 \[Lambda]^2 + \[Lambda]^3), (-5 +
2 \[Lambda])/(22 + 9 \[Lambda] - 12 \[Lambda]^2 + \[Lambda]^3), (
6 + \[Lambda])/(
22 + 9 \[Lambda] - 12 \[Lambda]^2 + \[Lambda]^3)}, {(-10 +
5 \[Lambda])/(22 + 9 \[Lambda] - 12 \[Lambda]^2 + \[Lambda]^3), (
12 - 8 \[Lambda] + \[Lambda]^2)/(
22 + 9 \[Lambda] - 12 \[Lambda]^2 + \[Lambda]^3), (-10 +
5 \[Lambda])/(22 + 9 \[Lambda] - 12 \[Lambda]^2 + \[Lambda]^3)}, {(
13 + 3 \[Lambda])/(
22 + 9 \[Lambda] - 12 \[Lambda]^2 + \[Lambda]^3), (-9 +
5 \[Lambda])/(22 + 9 \[Lambda] - 12 \[Lambda]^2 + \[Lambda]^3), (
2 - 7 \[Lambda] + \[Lambda]^2)/(
22 + 9 \[Lambda] - 12 \[Lambda]^2 + \[Lambda]^3)}}
Eigenvalues[A]
Out[4]= {11, 2, -1}
eig = Eigenvalues[A];
\[Lambda]1 = eig[[1]]
Out[6]= 11
\[Lambda]2 = eig[[2]]
Out[7]= 2
\[Lambda]3 = eig[[3]]
Out[8]= -1

Sylvester auxiliary matrices:

Z1 = ((A - eig[[2]]*IdentityMatrix[3]).(A - eig[[3]]*IdentityMatrix[3]))/((eig[[1]] - eig[[2]]) (eig[[1]] - eig[[3]]))
Out[9]=
{{17/108, 17/108, 17/108}, {5/12, 5/12, 5/12}, {23/54, 23/54, 23/54}}

Z2 = ((A - eig[[1]]*IdentityMatrix[3]).(A - eig[[3]]*IdentityMatrix[3]))/((eig[[2]] - eig[[1]]) (eig[[2]] - eig[[3]]))
Out[10]=
{{19/27, 1/27, -(8/27)}, {0, 0, 0}, {-(19/27), -(1/27), 8/27}}

Z3 = ((A - eig[[1]]*IdentityMatrix[3]).(A - eig[[2]]*IdentityMatrix[3]))/((eig[[3]] - eig[[1]]) (eig[[3]] - eig[[2]]))
Out[11]=
{{5/36, -(7/36), 5/36}, {-(5/12), 7/12, -(5/12)}, {5/18, -(7/18), 5/18}}
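The Sylvester construction can be cross-checked numerically. A sketch in Python with NumPy (an assumption; this tutorial itself uses Mathematica) builds the auxiliary matrices with matrix products, not entrywise products, and verifies their defining properties:

```python
# Sylvester's auxiliary matrices for A with distinct eigenvalues l1, l2, l3:
# Z_k = prod_{j != k} (A - l_j I) / (l_k - l_j), using MATRIX products (Dot).
import numpy as np

A = np.array([[3.0, 2.0, 1.0], [5.0, 4.0, 5.0], [3.0, 5.0, 5.0]])
l = [11.0, 2.0, -1.0]                      # eigenvalues found above
I3 = np.eye(3)

def Z(k):
    M = I3.copy()
    for j in range(3):
        if j != k:
            M = M @ (A - l[j] * I3) / (l[k] - l[j])
    return M

Z1, Z2, Z3 = Z(0), Z(1), Z(2)
assert np.allclose(Z1 + Z2 + Z3, I3)                 # resolution of the identity
assert np.allclose(Z1 @ Z2, np.zeros((3, 3)))        # mutually annihilating
assert np.allclose(A, 11 * Z1 + 2 * Z2 - 1 * Z3)     # spectral decomposition
```

These three properties (they sum to the identity, annihilate each other, and reproduce A) are exactly what makes Φ = Σ e^{λ_k t} Z_k the exponential matrix.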

Exponential matrix:

\[CapitalPhi] = Exp[11 t]*Z1 + Exp[2 t]*Z2 + Exp[-t]*Z3

Out[12]=
{{(5 E^-t)/36 + (19 E^(2 t))/27 + (17 E^(11 t))/108, -((7 E^-t)/36) +
 E^(2 t)/27 + (17 E^(11 t))/108, (5 E^-t)/36 - (8 E^(2 t))/27 + (17 E^(11 t))/108},
 {-((5 E^-t)/12) + (5 E^(11 t))/12, (7 E^-t)/12 + (5 E^(11 t))/12,
 -((5 E^-t)/12) + (5 E^(11 t))/12},
 {(5 E^-t)/18 - (19 E^(2 t))/27 + (23 E^(11 t))/54, -((7 E^-t)/18) -
 E^(2 t)/27 + (23 E^(11 t))/54, (5 E^-t)/18 + (8 E^(2 t))/27 + (23 E^(11 t))/54}}

Simplify[%]
Out[13]=
{{1/108 E^-t (15 + 76 E^(3 t) + 17 E^(12 t)),
 1/108 E^-t (-21 + 4 E^(3 t) + 17 E^(12 t)),
 1/108 E^-t (15 - 32 E^(3 t) + 17 E^(12 t))},
 {5/12 E^-t (-1 + E^(12 t)), 1/12 E^-t (7 + 5 E^(12 t)),
 5/12 E^-t (-1 + E^(12 t))},
 {1/54 E^-t (15 - 38 E^(3 t) + 23 E^(12 t)),
 1/54 E^-t (-21 - 2 E^(3 t) + 23 E^(12 t)),
 1/54 E^-t (15 + 16 E^(3 t) + 23 E^(12 t))}}

Finding the exponential matrix using the diagonalization procedure:

A = {{3, 2, 1}, {5, 4, 5}, {3, 5, 5}};
eig = Eigenvalues[A]
Out[2]= {11, 2, -1}
D1t = DiagonalMatrix[Exp[eig*t]]
Out[3]= {{E^(11 t), 0, 0}, {0, E^(2 t), 0}, {0, 0, E^-t}}
vec = Eigenvectors[A]
Out[4]= {{17, 45, 46}, {-1, 0, 1}, {1, -3, 2}}
S = Transpose[vec]
Out[5]= {{17, -1, 1}, {45, 0, -3}, {46, 1, 2}}

Now we are ready to define the exponential matrix:

S.D1t.Inverse[S]
Out[6]=
{{(5 E^-t)/36 + (19 E^(2 t))/27 + (17 E^(11 t))/108, -((7 E^-t)/36) +
E^(2 t)/27 + (17 E^(11 t))/108, (5 E^-t)/36 - (8 E^(2 t))/27 + (
17 E^(11 t))/108}, {-((5 E^-t)/12) + (5 E^(11 t))/12, (7 E^-t)/
12 + (5 E^(11 t))/12, -((5 E^-t)/12) + (5 E^(11 t))/12}, {(5 E^-t)/
18 - (19 E^(2 t))/27 + (23 E^(11 t))/54, -((7 E^-t)/18) - E^(2 t)/
27 + (23 E^(11 t))/54, (5 E^-t)/18 + (8 E^(2 t))/27 + (
23 E^(11 t))/54}}

MatrixExp[A*t]
Out[7]=
{{1/108 E^-t (15 + 76 E^(3 t) + 17 E^(12 t)),
1/108 E^-t (-21 + 4 E^(3 t) + 17 E^(12 t)),
1/108 E^-t (15 - 32 E^(3 t) + 17 E^(12 t))}, {5/
12 E^-t (-1 + E^(12 t)), 1/12 E^-t (7 + 5 E^(12 t)),
5/12 E^-t (-1 + E^(12 t))}, {1/
54 E^-t (15 - 38 E^(3 t) + 23 E^(12 t)),
1/54 E^-t (-21 - 2 E^(3 t) + 23 E^(12 t)),
1/54 E^-t (15 + 16 E^(3 t) + 23 E^(12 t))}}

To check the answer, we type:

Simplify[MatrixExp[A*t] - S.D1t.Inverse[S]]
Out[8]= {{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}
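The diagonalization route can also be checked outside Mathematica. A sketch in Python with NumPy/SciPy (an assumption) compares S diag(e^{λt}) S^{-1} against the matrix exponential:

```python
# Numerical sketch of the diagonalization procedure:
# e^{A t} = S diag(e^{l1 t}, e^{l2 t}, e^{l3 t}) S^{-1}.
import numpy as np
from scipy.linalg import expm

A = np.array([[3.0, 2.0, 1.0], [5.0, 4.0, 5.0], [3.0, 5.0, 5.0]])
lam, S = np.linalg.eig(A)                  # columns of S are eigenvectors
t = 0.3
E = S @ np.diag(np.exp(lam * t)) @ np.linalg.inv(S)
assert np.allclose(E, expm(A * t))
```

NumPy normalizes eigenvectors differently from Mathematica, but the product S diag(e^{λt}) S^{-1} is independent of that scaling.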

A := {{0, 1}, {-1, 0}}
Eigenvalues[A]
Out[2]= {I, -I}
Simplify[ComplexExpand[MatrixExp[A t]]]
Out[3]= {{Cos[t], Sin[t]}, {-Sin[t], Cos[t]}}
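The rotation-matrix result above has the same numerical counterpart. A sketch in Python with SciPy (an assumption) confirms that the exponential of this skew-symmetric matrix is a rotation:

```python
# e^{A t} for A = [[0, 1], [-1, 0]] is the rotation matrix
# [[cos t, sin t], [-sin t, cos t]], matching the Mathematica output.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
t = 0.8
R = np.array([[np.cos(t), np.sin(t)], [-np.sin(t), np.cos(t)]])
assert np.allclose(expm(A * t), R)
```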
diag = DiagonalMatrix[{2, -1, 4}]
Out[4]= {{2, 0, 0}, {0, -1, 0}, {0, 0, 4}}
y[t_] = MatrixExp[diag t]
Out[5]= {{E^(2 t), 0, 0}, {0, E^-t, 0}, {0, 0, E^(4 t)}}
y[0]
Out[6]= {{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}
y'[t] - diag.y[t]
Out[7]= {{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}
DiagonalMatrix[{2, 3}, -1]
Out[8]= {{0, 0, 0}, {2, 0, 0}, {0, 3, 0}}
DiagonalMatrix[{2, 3}, 1] // MatrixForm
Out[9]//MatrixForm=
{{0, 2, 0},
 {0, 0, 3},
 {0, 0, 0}}

1. Chi-Tsong Chen, Linear System Theory and Design (3rd ed.), New York: Oxford University Press, 1998. ISBN 978-0195117776.
2. Vladimir Dobrushkin, Applied Differential Equations. The Primary Course, CRC Press, 2015; http://www.crcpress.com/product/isbn/9781439851043.