This first part of the tutorial is devoted to matrix algebra---topics that will be used to solve and analyze linear and nonlinear systems of ordinary differential equations. We begin by exploring topics from linear algebra, for which we need to load a linear algebra package. Maple actually has two such packages: linalg and LinearAlgebra. The linalg package is older and considered obsolete; it was replaced by LinearAlgebra in 1999. The LinearAlgebra package is more powerful and is the recommended one; however, for our needs, the linalg package is sufficient.
The commands to invoke a linear algebra package are either with(linalg): or with(LinearAlgebra): (note the trailing colon, which suppresses output). If you replace the colon with a semicolon when invoking a package, you will see the names of all subroutines in the package---usually this information is not needed. When the corresponding package is not loaded, either a typed command will not be recognized, or a different command with the same name will be used.

Since the commands in the two packages differ slightly, we present them separately. Note that the LinearAlgebra package uses commands that start with an uppercase initial letter, similar to Mathematica, whereas the older package uses lowercase letters.

with(linalg):
A := matrix(2,2,[5,4,6,3]);
B := matrix([[-1,2],[2,-3]]);

\[ A = \begin{bmatrix} 5 & 4 \\ 6 & 3 \end{bmatrix}, \qquad B = \begin{bmatrix} -1 & 2 \\ 2 & -3 \end{bmatrix} \]
                
with(LinearAlgebra):
A := Matrix(2,2,[5,4,6,3]);
B := Matrix([[-1,2],[2,-3]]);
                
            

The objective of this chapter is to define a function of a square matrix. Specifically, suppose that A is a square matrix and f(λ) is an admissible function (this term will be clarified later) of a real or complex variable λ. Although Maple has a dedicated command, MatrixFunction in the LinearAlgebra package, this chapter presents several approaches to defining a matrix function f(A). For example, the following script can be used to build a matrix function for \( f(\lambda ) = \sin \left( \sqrt{\lambda}\,t \right) / \sqrt{\lambda} \) and a 3-by-3 matrix:
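The script below is a sketch of one such computation using the LinearAlgebra package; the particular 3-by-3 matrix chosen here is illustrative, not prescribed by the text.

```maple
with(LinearAlgebra):
# An illustrative 3-by-3 matrix (any square matrix would do)
A := Matrix([[1, 1, 0], [0, 1, 1], [0, 0, 1]]):
# Build f(A) for f(lambda) = sin(sqrt(lambda)*t)/sqrt(lambda);
# MatrixFunction evaluates the expression in v at the matrix A
F := MatrixFunction(A, sin(sqrt(v)*t)/sqrt(v), v);
```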

 

The topics to be covered in this chapter:

1.1: How to define vectors

This section presents basic definitions and operations with vectors, including dot product, inner product, and vector product.
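As a quick sketch of these operations in the LinearAlgebra package (the sample vectors are illustrative):

```maple
with(LinearAlgebra):
u := Vector([1, 2, 3]):
v := Vector([4, 5, 6]):
DotProduct(u, v);      # dot (inner) product: 1*4 + 2*5 + 3*6 = 32
CrossProduct(u, v);    # vector (cross) product <-3, 6, -3>, defined for 3-vectors
```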

1.2: How to define matrices

Understanding matrices is crucial for almost all applications, especially for computer modeling. Indeed, a laptop or smartphone monitor is the most common example of a matrix filled with pixels. A wide range of applications includes the numerical solution of sets of linear algebraic equations, and all numerical algorithms for solving differential equations rely on solving algebraic matrix problems. Every matrix can be considered as an array, or a vector whose entries are themselves arrays; in this sense, a matrix is the next generalization of a vector. In this section, you will learn how to define matrices with Maple, as well as some other manipulation tools.

1.3: Basic operations with matrices

In this section, you will learn how to execute the basic arithmetic operations (addition, subtraction, and multiplication) with matrices as well as some other matrix manipulation tools.
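Using the matrices A and B defined earlier, these operations look like the following in the LinearAlgebra package:

```maple
with(LinearAlgebra):
A := Matrix([[5, 4], [6, 3]]):
B := Matrix([[-1, 2], [2, -3]]):
A + B;     # entrywise addition
A - B;     # entrywise subtraction
A . B;     # matrix multiplication (the dot operator)
2*A;       # multiplication by a scalar
```

Note that matrix multiplication uses the dot operator; the asterisk is reserved for commutative (scalar) multiplication.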

1.4: Linear systems of equations

This auxiliary section reminds the reader when a system of linear algebraic equations has a solution. Such systems are called consistent.
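A minimal sketch of solving a consistent system A x = b with LinearSolve (the right-hand side b is illustrative):

```maple
with(LinearAlgebra):
A := Matrix([[5, 4], [6, 3]]):
b := Vector([1, 2]):
x := LinearSolve(A, b);   # solves A . x = b; here x = <5/9, -4/9>
```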

1.5: Determinants and inverses

The determinant is a special scalar-valued function defined on the set of square matrices. Although it still has a place in many areas of mathematics and physics, our primary application of determinants is to define eigenvalues and characteristic polynomials for a square matrix A. Because determinants are hard to motivate, are not intuitive, and lack a simple definition, there is a tendency to present linear algebra without the classical involvement of determinants.
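A brief sketch with the LinearAlgebra package, using the matrix A from above:

```maple
with(LinearAlgebra):
A := Matrix([[5, 4], [6, 3]]):
Determinant(A);                        # 5*3 - 4*6 = -9
MatrixInverse(A);                      # exists because the determinant is nonzero
CharacteristicPolynomial(A, lambda);   # lambda^2 - 8*lambda - 9
```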

1.6: Special matrices

This section presents a collection of special matrices that are very useful in many applications.
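A few of the built-in constructors for special matrices in the LinearAlgebra package:

```maple
with(LinearAlgebra):
IdentityMatrix(3);            # 3-by-3 identity matrix
ZeroMatrix(2, 3);             # 2-by-3 matrix of zeroes
DiagonalMatrix([1, 2, 3]);    # diagonal matrix with the given entries
```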

1.7: Eigenvalues and Eigenvectors

An eigenvector of a linear transformation is a non-zero vector that changes by only a scalar factor when that linear transformation is applied to it. This scalar factor is called the eigenvalue. Every square matrix has at least one (possibly complex) eigenvalue and a corresponding eigenvector.
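For the matrix A used throughout this page, the eigenvalues and eigenvectors can be computed as follows (a sketch):

```maple
with(LinearAlgebra):
A := Matrix([[5, 4], [6, 3]]):
Eigenvalues(A);       # the eigenvalues 9 and -1 (order may vary)
Eigenvectors(A);      # eigenvalues together with a matrix of eigenvectors
```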

1.8: Diagonalization procedure

This section presents a classical approach to defining a function of a matrix via the diagonalization procedure. If a matrix is diagonalizable, all operations with it can be reduced to corresponding operations with its eigenvalues.
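A sketch of the diagonalization P⁻¹AP = Λ in Maple, using the matrix A from above:

```maple
with(LinearAlgebra):
A := Matrix([[5, 4], [6, 3]]):
v, P := Eigenvectors(A):      # eigenvalues v and a matrix P of eigenvectors
MatrixInverse(P) . A . P;     # diagonal matrix with the eigenvalues 9 and -1
```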

1.9: Sylvester formula

This section presents the most efficient formula for defining a matrix function, which uses Sylvester's auxiliary matrices. It applies only to diagonalizable matrices.
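For a 2-by-2 matrix with distinct eigenvalues λ₁ and λ₂, the auxiliary matrices are Z₁ = (A − λ₂I)/(λ₁ − λ₂) and Z₂ = (A − λ₁I)/(λ₂ − λ₁), and f(A) = f(λ₁)Z₁ + f(λ₂)Z₂. A sketch for the matrix A above (eigenvalues 9 and −1), with f(λ) = exp(λt):

```maple
with(LinearAlgebra):
A := Matrix([[5, 4], [6, 3]]):        # eigenvalues 9 and -1
Id := IdentityMatrix(2):
Z1 := (A + Id)/(9 - (-1)):            # Sylvester's auxiliary matrix for lambda = 9
Z2 := (A - 9*Id)/(-1 - 9):            # Sylvester's auxiliary matrix for lambda = -1
expAt := exp(9*t)*Z1 + exp(-t)*Z2;    # the matrix exponential exp(A*t)
```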

1.10: The resolvent method

This section presents the most powerful method for defining a matrix function; it works regardless of whether the matrix is diagonalizable.
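A sketch of the resolvent method in Maple: form the resolvent R(λ) = (λI − A)⁻¹ and sum the residues of f(λ)R(λ) over the eigenvalues (here f(λ) = exp(λt) and A is the matrix used above):

```maple
with(LinearAlgebra):
A := Matrix([[5, 4], [6, 3]]):                        # eigenvalues 9 and -1
R := MatrixInverse(lambda*IdentityMatrix(2) - A):     # the resolvent of A
F := exp(lambda*t)*R:
# f(A) is the sum of the residues of f(lambda)*R(lambda) at the eigenvalues
expAt := map(residue, F, lambda = 9) + map(residue, F, lambda = -1);
```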

1.11: Polynomial interpolation

Another powerful technique for defining matrix functions, based on knowledge of the matrix's minimal or characteristic polynomial.
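As a sketch, for a 2-by-2 matrix with eigenvalues 9 and −1 one seeks f(A) = a₀I + a₁A, where the coefficients interpolate f at the eigenvalues:

```maple
with(LinearAlgebra):
A := Matrix([[5, 4], [6, 3]]):                       # eigenvalues 9 and -1
# interpolate f(lambda) = exp(lambda*t) at the eigenvalues 9 and -1
s := solve({a0 + 9*a1 = exp(9*t), a0 - a1 = exp(-t)}, {a0, a1}):
expAt := eval(a0*IdentityMatrix(2) + a1*A, s);       # the matrix exponential exp(A*t)
```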

1.12: Positive matrices

This section presents positive definite matrices in a slightly different way from the classical approach.
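Maple can test definiteness directly; a brief sketch (the sample matrix is illustrative):

```maple
with(LinearAlgebra):
P := Matrix([[2, 1], [1, 2]]):
IsDefinite(P);                                 # true: P is positive definite
IsDefinite(P, query = 'negative_definite');    # false
```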

1.13: Roots

Four previously introduced methods to define matrix functions (diagonalization, Sylvester's formula, the resolvent method, and polynomial interpolation) are applied to find square roots of matrices.
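As a sketch, a square root can also be obtained directly with MatrixFunction and checked by squaring (the sample matrix, with eigenvalues 3 and 1, is illustrative):

```maple
with(LinearAlgebra):
P := Matrix([[2, 1], [1, 2]]):         # eigenvalues 3 and 1
S := MatrixFunction(P, sqrt(v), v):    # a square root of P
simplify(S . S);                       # recovers P
```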