
MATLAB TUTORIAL, part 2.1: Vectors

How to define Vectors

A vector is a quantity that has magnitude and direction and that is commonly represented by a directed line segment whose length represents the magnitude and whose orientation in space represents the direction. In mathematics, it is always assumed that vectors can be added or subtracted, and multiplied by a scalar (a real or complex number). It is also assumed that there exists a unique zero vector (of zero magnitude and no direction), which can be added to or subtracted from any vector without changing the outcome. The zero vector is not the same as the number zero. Wind, for example, has both a speed and a direction and, hence, is conveniently expressed as a vector. The same can be said of moving objects, momentum, forces, electromagnetic fields, and weight. (Weight is the force produced by the acceleration of gravity acting on a mass.)

The first thing we need to know is how to define a vector so it will be clear to everyone. Today more than ever, information technologies are an integral part of our everyday lives. That is why we need a tool to model vectors on computers. One of the common ways to do this is to introduce a system of coordinates, either Cartesian or any other, which includes unit vectors in each direction, usually referred to as an ordered basis. In engineering, we traditionally use the Cartesian coordinate system, which specifies any point by an ordered tuple of numbers. Each coordinate measures the distance from the point to its perpendicular projection onto one of the mutually perpendicular hyperplanes. The invention of Cartesian coordinates in 1637 by René Descartes (Latinized name: Cartesius) revolutionized mathematics by providing the first systematic link between Euclidean geometry and algebra.

Let us start with our familiar three dimensional space in which the Cartesian coordinate system consists of an ordered triplet of lines (the axes) that go through a common point (the origin) and are pairwise perpendicular; it also includes an orientation for each axis and a single unit of length for all three axes. Every point is assigned distances to three mutually perpendicular planes, called its coordinates. The reverse construction determines the point given its three coordinates. Each pair of axes defines a coordinate plane. These planes divide space into eight trihedra, called octants. The coordinates are usually written as three numbers (or algebraic formulas) surrounded by parentheses and separated by commas, as in (-2.1, 0.5, 7). Thus, the origin has coordinates (0,0,0), and the unit points on the three axes are (1,0,0), (0,1,0), and (0,0,1).

There are no universal names for the coordinates along the three axes. However, the horizontal axis is traditionally called the abscissa, borrowed from New Latin (short for linea abscissa, literally, "cut-off line"), and usually denoted by x. The next axis is called the ordinate, which came from New Latin (linea ordinata), literally, "line applied in an orderly manner"; we will usually label it by y. The last axis is called the applicate and is usually denoted by z. Correspondingly, the unit vectors are denoted by i (abscissa), j (ordinate), and k (applicate), and together they are called the basis. Once rectangular coordinates (or a basis) are set up, any vector can be expanded through these unit vectors. In the three dimensional case, every vector can be expanded as \( {\bf v} = v_1 {\bf i} + v_2 {\bf j} + v_3 {\bf k} ,\) where \( v_1, v_2 , v_3 \) are called the coordinates of the vector v. Coordinates are always specified relative to an ordered basis. When a basis has been chosen, a vector can be expanded with respect to the basis vectors and identified with an ordered n-tuple of n real (or complex) numbers, its coordinates. The set of all ordered n-tuples of real (or complex) numbers is denoted by \( \mathbb{R}^n \) (or \( \mathbb{C}^n \)). In general, a vector in an infinite dimensional space is identified by a sequence of numbers. Finite dimensional coordinate vectors can be represented as either a column vector (which is usually the case) or a row vector. We will denote column vectors by lower case letters in bold font, and row vectors by lower case letters with an arrow above.

matlab automatically distinguishes row vectors from column vectors. All vectors are written within square brackets; if the entries are separated by semicolons, then you have a column vector. The entries in a row vector are separated either by commas or by spaces.
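For instance (a minimal sketch; the variable names r and c are illustrative):

r = [1, 2, 3]    % row vector: entries separated by commas (or spaces)
c = [1; 2; 3]    % column vector: entries separated by semicolons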

In mathematics and applications, it is customary to distinguish column vectors

\[ {\bf v} = \left( \begin{array}{c} v_1 \\ v_2 \\ \vdots \\ v_m \end{array} \right) \qquad \mbox{also written as } \qquad {\bf v} = \left[ \begin{array}{c} v_1 \\ v_2 \\ \vdots \\ v_m \end{array} \right] , \]
for which we use lowercase letters in boldface type, from row vectors (ordered n-tuples)
\[ \vec{v} = \left[ v_1 , v_2 , \ldots , v_n \right] . \]
Here the entries \( v_i \) are known as the components of the vector. Column vectors and row vectors can be defined as special cases of matrices: an \( n\times 1 \) matrix and a \( 1\times n \) matrix, respectively.

A set of vectors is usually called a vector space (also a linear space), which is an abstract notion in mathematics. Historically, the first ideas leading to vector spaces can be traced back as far as the 17th century; however, the idea crystallized with the work of the German mathematician Hermann Günther Grassmann (1809--1877), who published a paper in 1862. A vector space is a collection of objects called vectors, which may be added together and multiplied ("scaled") by numbers, called scalars in this context. Scalars are often taken to be real numbers, but there are also vector spaces with scalar multiplication by complex numbers, rational numbers, or generally any field. The operations of vector addition and scalar multiplication must satisfy certain requirements, called axioms (they can be found on the web page).

I. Basic properties

Start up matlab, and for now just type in the Command Window. The variables we define show up in the Workspace on the right of your screen. If you omit the ; at the end of the lines you type, the results also show up in the Command Window. Matlab, like a typical programming language, uses variables. As its name suggests, its staple diet is matrices. So the first thing to learn is how to define matrices (and their subset, vectors) in matlab. Let's define a simple matrix, called a row vector:
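The interactive cell that accompanied this text is not preserved here; a plausible sketch of a long row vector typed entry by entry (the values are illustrative):

u = [0 5 10 15 20 25 30 35 40 45 50]
u =
     0     5    10    15    20    25    30    35    40    45    50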

Is there an easier way to define such a long vector? Indeed, there is. Try this:
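The original cell is not preserved; a plausible sketch (the particular numbers are illustrative):

u = [0:5:50]
u =
     0     5    10    15    20    25    30    35    40    45    50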
In this case, [a:b:c] defines a vector from a to c with steps of size b. What does [0:5:200] define then? To define a column vector, matlab offers several options. One can define a row vector (commas are optional) and then take its transposition with the operator .' , or type the column directly with semicolons, as in [5;6;7;8]. You can drop the period if your vector has only real-valued entries; if the vector contains complex entries, then the command b' provides the complex conjugate column vector. There is also a dedicated command for transposition: transpose, as in transpose([5,6,7,8]). All of these options are sketched below.
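A minimal sketch of these options (the entries are illustrative):

b = [5, 6, 7, 8]   % a row vector; commas are optional
b.'                % transposition gives a column vector
b'                 % same as b.' for real entries; conjugates complex ones
transpose(b)       % dedicated command, equivalent to b.'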

A column vector with three components can be defined as

v=[3;-1;2]
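matlab echoes the result as a column (output as it would appear in the Command Window):

v =
     3
    -1
     2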

Using the Symbolic Math Toolbox in matlab, you can include symbolic variables in vectors.
syms x
v= [1,2^6, sin(x)]
v =

[ 1, 64, sin(x)]
Actually, it is a \( 1 \times 3 \) matrix (see the next section). If you try to annex a \( 4 \times 1 \) matrix (which is a column 4-vector) to the \( 1 \times 3 \) matrix v, matlab will reject it squarely, giving you an error message:

v = [v ones(4,1)]
Error using horzcat
Dimensions of arrays being concatenated are not consistent.

(In older matlab releases this message reads: All matrices on a row in the bracketed expression must have the same number of rows.)

If the specified element index of an array exceeds the defined range, matlab will reject it.

v(4)
Index exceeds the number of array elements (3).

matlab does not accept nonpositive or noninteger indices:

v(0)=1
Array indices must be positive integers or logical values.

matlab starts counting indices at 1:

v(1.5)
Array indices must be positive integers or logical values.

matlab enables us to handle vector operations in almost the same way as scalar operations. However, we must make sure of the dimensional compatibility between vectors, and we must put a dot (.) in front of the operator for termwise (element-by-element) operations, as in the sketch below.
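A minimal sketch of termwise operations (the vectors are illustrative):

x = [1 2 3]; y = [4 5 6];
x + y     % addition is already termwise: [5 7 9]
x .* y    % termwise product: [4 10 18]
x .^ 2    % termwise square: [1 4 9]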

Example: Write the three-dimensional vector \( {\bf a}=2{\bf i}+3{\bf j}-4{\bf k} \) as the sum of two vectors, one parallel and one perpendicular to \( {\bf b}=2{\bf i}-{\bf j}-3{\bf k} .\) Here i, j, and k are unit vectors along the abscissa, ordinate, and applicate, respectively.

	a=[2 3 -4]
a=
     2 3 -4
b=[2 -1 -3]
b=
      2 -1 -3
parallel = (((a(1)*b(1))+(a(2)*b(2))+(a(3)*b(3)))/(((b(1)).^2)+((b(2)).^2)+((b(3)).^2)))*b
parallel=
      1.8571 -0.9286 -2.7857
perpendicular= a-parallel

perpendicular =
       0.1429   3.9286   -1.2143

You can find the parallel vector directly, using the dot and norm commands:

p = (dot(a,b)/norm(b)^2)*b
Example: Let a = [1,3,-4] and b = [-1,1,-2], find the dot product of the vector a with the cross product of a and b: \( a \cdot (a \times b) . \)
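The original code cell is not preserved; a minimal sketch using matlab's dot and cross commands:

a = [1,3,-4]; b = [-1,1,-2];
dot(a, cross(a,b))
ans =
     0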

This answer holds for any two vectors: the cross product \( {\bf a} \times {\bf b} \) is always perpendicular to a, so its dot product with a is zero.     ■
Let S be a set of vectors \( {\bf v}_1 , \ {\bf v}_2 , \ \ldots , \ {\bf v}_k .\) A vector v is said to be a linear combination of the vectors from S if and only if there are scalars \( c_1 , \ c_2 , \ \ldots , \ c_k , \) such that \( {\bf v} = c_1 {\bf v}_1 + c_2 {\bf v}_2 + \cdots + c_k {\bf v}_k .\) That is, a linear combination of vectors from S is a sum of scalar multiples of those vectors. Let S be a nonempty subset of a vector space V. Then the span of S in V is the set of all possible (finite) linear combinations of the vectors in S (including the zero vector). It is usually denoted by span(S). In other words, the span of a set of vectors in a vector space is the intersection of all subspaces containing that set.

Example. The vector [-2, 8, 5, 0] is a linear combination of the vectors [3, 1, -2, 2], [1, 0, 3, -1], and [4, -2, 1, 0], because

\[ 2\,[3,\, 1,\, -2,\,2] + 4\,[1,\,0,\,3,\,-1] -3\,[4,\,-2,\, 1,\, 0] = [-2,\,8,\, 5,\, 0] . \qquad ■ \]
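This is easy to check in matlab (a minimal sketch; the variable names are illustrative):

v1 = [3, 1, -2, 2]; v2 = [1, 0, 3, -1]; v3 = [4, -2, 1, 0];
2*v1 + 4*v2 - 3*v3
ans =
    -2     8     5     0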

Let S be a subset of a vector space V.

(1) S is a linearly independent subset of V if and only if no vector in S can be expressed as a linear combination of the other vectors in S.
(2) S is a linearly dependent subset of V if and only if some vector v in S can be expressed as a linear combination of the other vectors in S.

Theorem: A nonempty set \( S = \{ {\bf v}_1 , \ {\bf v}_2 , \ \ldots , \ {\bf v}_r \} \) in a vector space V is linearly independent if and only if the only coefficients satisfying the vector equation

\[ k_1 {\bf v}_1 + k_2 {\bf v}_2 + \cdots + k_r {\bf v}_r = {\bf 0} \]
are \( k_1 =0, \ k_2 =0, \ \ldots , \ k_r =0 . \)

Theorem: A nonempty set \( S = \{ {\bf v}_1 , \ {\bf v}_2 , \ \ldots , \ {\bf v}_r \} \) in a vector space V is linearly independent if and only if the matrix whose columns are the vectors from S has rank r. ■
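In matlab, this rank test is immediate with the rank command. A minimal sketch, using as columns the three vectors from the previous example:

A = [3 1 4; 1 0 -2; -2 3 1; 2 -1 0];   % columns are [3,1,-2,2], [1,0,3,-1], [4,-2,1,0]
rank(A)
ans =
     3

Since the rank equals the number of vectors (r = 3), this set is linearly independent.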

If \( S = \{ {\bf v}_1 , \ {\bf v}_2 , \ \ldots , \ {\bf v}_n \} \) is a set of vectors in a finite-dimensional vector space V, then S is called a basis for V if:

  • S spans V;
  • S is linearly independent. ■

matlab supports three multiplications for vectors: the dot and outer products (for arbitrary vectors), and the cross product (for three dimensional vectors).

The dot product of two vectors of the same size \( {\bf x} = \left[ x_1 , x_2 , \ldots , x_n \right] \) and \( {\bf y} = \left[ y_1 , y_2 , \ldots , y_n \right] \) (regardless of whether they are columns or rows) is the number, denoted either by \( {\bf x} \cdot {\bf y} \) or \( \left\langle {\bf x} , {\bf y} \right\rangle ,\)

\[ \left\langle {\bf x} , {\bf y} \right\rangle = {\bf x} \cdot {\bf y} = x_1 y_1 + x_2 y_2 + \cdots + x_n y_n , \]
when entries are real, or
\[ \left\langle {\bf x} , {\bf y} \right\rangle = {\bf x} \cdot {\bf y} = \overline{x_1} y_1 + \overline{x_2} y_2 + \cdots + \overline{x_n} y_n , \]

when entries are complex. The dot product was first introduced by the American physicist and mathematician Josiah Willard Gibbs (1839--1903) in the 1880s. The outer product of two coordinate vectors \( {\bf u} = \left[ u_1 , u_2 , \ldots , u_m \right] \) and \( {\bf v} = \left[ v_1 , v_2 , \ldots , v_n \right] , \) denoted \( {\bf u} \otimes {\bf v} , \) is the m-by-n matrix W whose entries satisfy \( w_{i,j} = u_i v_j . \) The outer product \( {\bf u} \otimes {\bf v} \) is equivalent to the matrix multiplication \( {\bf u} \, {\bf v}^{\ast} \) (or \( {\bf u} \, {\bf v}^{\mathrm T} \) if the vectors are real), provided that u is represented as an \( m \times 1 \) column vector and v as an \( n \times 1 \) column vector. Here \( {\bf v}^{\ast} = \overline{{\bf v}^{\mathrm T}} . \)
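In matlab, the dot product is computed with the dot command, and the outer product as the matrix product of a column by a row (a minimal sketch; the vectors are illustrative):

x = [1; 2; 3]; y = [4; 5; 6];
dot(x, y)    % 1*4 + 2*5 + 3*6 = 32
x * y'       % outer product: the 3-by-3 matrix with entries x(i)*y(j)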

For three dimensional vectors \( {\bf a} = a_1 \,{\bf i} + a_2 \,{\bf j} + a_3 \,{\bf k} = \left[ a_1 , a_2 , a_3 \right] \) and \( {\bf b} = b_1 \,{\bf i} + b_2 \,{\bf j} + b_3 \,{\bf k} = \left[ b_1 , b_2 , b_3 \right] , \) it is possible to define a special multiplication, called the cross product:
\[ {\bf a} \times {\bf b} = \det \left[ \begin{array}{ccc} {\bf i} & {\bf j} & {\bf k} \\ a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \end{array} \right] = {\bf i} \left( a_2 b_3 - b_2 a_3 \right) - {\bf j} \left( a_1 b_3 - b_1 a_3 \right) + {\bf k} \left( a_1 b_2 - a_2 b_1 \right) . \]
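matlab's cross command implements this formula (a minimal sketch, reusing the vectors a and b from the example above):

a = [2 3 -4]; b = [2 -1 -3];
cross(a, b)
ans =
   -13    -2    -8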

Example. If m = 4 and n = 3, then

\[ {\bf u} \otimes {\bf v} = {\bf u} \, {\bf v}^{\mathrm T} = \begin{bmatrix} u_1 \\ u_2 \\ u_3 \\ u_4 \end{bmatrix} \begin{bmatrix} v_1 & v_2 & v_3 \end{bmatrix} = \begin{bmatrix} u_1 v_1 & u_1 v_2 & u_1 v_3 \\ u_2 v_1 & u_2 v_2 & u_2 v_3 \\ u_3 v_1 & u_3 v_2 & u_3 v_3 \\ u_4 v_1 & u_4 v_2 & u_4 v_3 \end{bmatrix} . \]

An inner product of two vectors of the same size, usually denoted by \( \left\langle {\bf x} , {\bf y} \right\rangle ,\) is a generalization of the dot product; it must satisfy the following properties:

  • \( \left\langle {\bf v}+{\bf u} , {\bf w} \right\rangle = \left\langle {\bf v} , {\bf w} \right\rangle + \left\langle {\bf u} , {\bf w} \right\rangle . \)
  • \( \left\langle {\bf v} , \alpha {\bf u} \right\rangle = \alpha \left\langle {\bf v} , {\bf u} \right\rangle \) for any scalar α.
  • \( \left\langle {\bf v} , {\bf u} \right\rangle = \overline{\left\langle {\bf u} , {\bf v} \right\rangle} , \) where overline means complex conjugate.
  • \( \left\langle {\bf v} , {\bf v} \right\rangle \ge 0 , \) with equality if and only if \( {\bf v} = {\bf 0} . \)

The fourth condition in the list above is known as the positive-definite condition. A vector space together with an inner product is called an inner product space. Every inner product space is a metric space: the distance between u and v is \( \| {\bf u} - {\bf v} \| , \) where the norm is given by

\[ \| {\bf u} \| = \sqrt{\left\langle {\bf u} , {\bf u} \right\rangle} . \]
The nonzero vectors u and v of the same size are orthogonal (or perpendicular) when their inner product is zero: \( \left\langle {\bf u} , {\bf v} \right\rangle = 0 . \) We abbreviate it as \( {\bf u} \perp {\bf v} . \) If A is an \( n \times n \) positive definite matrix and u and v are n-vectors, then we can define the weighted Euclidean inner product
\[ \left\langle {\bf u} , {\bf v} \right\rangle = {\bf A} {\bf u} \cdot {\bf v} = {\bf u} \cdot {\bf A}^{\ast} {\bf v} \qquad\mbox{and} \qquad {\bf u} \cdot {\bf A} {\bf v} = {\bf A}^{\ast} {\bf u} \cdot {\bf v} . \]
In particular, if w1, w2, ... , wn are positive real numbers, which are called weights, and if u = ( u1, u2, ... , un) and v = ( v1, v2, ... , vn) are vectors in \( \mathbb{R}^n , \) then the formula
\[ \left\langle {\bf u} , {\bf v} \right\rangle = w_1 u_1 v_1 + w_2 u_2 v_2 + \cdots + w_n u_n v_n \]
defines an inner product on \( \mathbb{R}^n , \) that is called the weighted Euclidean inner product with weights w1, w2, ... , wn.
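In matlab, a weighted Euclidean inner product reduces to termwise multiplication and a sum (a minimal sketch; the weights are illustrative):

w = [1 2 3]; u = [1 0 2]; v = [4 -1 5];
sum(w .* u .* v)    % w1*u1*v1 + w2*u2*v2 + w3*u3*v3 = 4 + 0 + 30
ans =
    34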

Example. The Euclidean inner product and the weighted Euclidean inner product (when \( \left\langle {\bf u} , {\bf v} \right\rangle = \sum_{k=1}^n a_k u_k v_k \) for some positive numbers \( a_k \ (k=1,2,\ldots , n) \) ) are special cases of a general class of inner products on \( \mathbb{R}^n \) called the matrix inner product. Let A be an invertible n-by-n matrix. Then the formula

\[ \left\langle {\bf u} , {\bf v} \right\rangle = {\bf A} {\bf u} \cdot {\bf A} {\bf v} = {\bf v}^{\mathrm T} {\bf A}^{\mathrm T} {\bf A} {\bf u} \]
defines an inner product generated by A.

Example. In the set of integrable functions on an interval [a,b], we can define the inner product of two functions f and g as

\[ \left\langle f , g \right\rangle = \int_a^b \overline{f} (x)\, g(x) \, {\text d}x \qquad\mbox{or} \qquad \left\langle f , g \right\rangle = \int_a^b f(x)\,\overline{g} (x) \, {\text d}x . \]
Then the norm \( \| f \| \) (also called the 2-norm) becomes the square root of
\[ \| f \|^2 = \left\langle f , f \right\rangle = \int_a^b \left\vert f(x) \right\vert^2 \, {\text d}x . \]
In particular, the 2-norm of the function \( f(x) = 5x^2 +2x -1 \) on the interval [0,1] is
\[ \| 5 x^2 +2x -1 \| = \sqrt{\int_0^1 \left( 5x^2 +2x -1 \right)^2 {\text d}x } = \sqrt{7} . \]
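This value can be verified with the Symbolic Math Toolbox (a minimal sketch):

syms x
sqrt(int((5*x^2 + 2*x - 1)^2, x, 0, 1))
ans =
7^(1/2)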

Example. Consider the set of polynomials of degree at most n. If

\[ {\bf p} = p(x) = p_0 + p_1 x + p_2 x^2 + \cdots + p_n x^n \quad\mbox{and} \quad {\bf q} = q(x) = q_0 + q_1 x + q_2 x^2 + \cdots + q_n x^n \]
are two polynomials, and if \( x_0 , x_1 , \ldots , x_n \) are distinct real numbers (called sample points), then the formula
\[ \left\langle {\bf p} , {\bf q} \right\rangle = p(x_0 ) q(x_0 ) + p (x_1 )q(x_1 ) + \cdots + p(x_n ) q(x_n ) \]
defines an inner product, which is called the evaluation inner product at \( x_0 , x_1 , \ldots , x_n . \)

With the dot product, we can assign a length to a vector, which is also called the Euclidean norm or 2-norm:

\[ \| {\bf x} \| = \sqrt{ {\bf x}\cdot {\bf x}} = \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2} . \]
In linear algebra, functional analysis, and related areas of mathematics, a norm is a function that assigns a strictly positive length or size to each vector in a vector space—save for the zero vector, which is assigned a length of zero. On an n-dimensional complex space \( \mathbb{C}^n ,\) the most common norm is
\[ \| {\bf z} \| = \sqrt{ {\bf z}\cdot {\bf z}} = \sqrt{\overline{z_1} \,z_1 + \overline{z_2}\,z_2 + \cdots + \overline{z_n}\,z_n} = \sqrt{|z_1|^2 + |z_2 |^2 + \cdots + |z_n |^2} . \]
A unit vector u is a vector whose length equals one: \( {\bf u} \cdot {\bf u} =1 . \) We say that two vectors x and y are perpendicular if their dot product is zero. Many other norms are known; among them we mention the taxicab (or Manhattan) norm, also called the 1-norm:
\[ \| {\bf x} \| = \sum_{k=1}^n | x_k | = |x_1 | + |x_2 | + \cdots + |x_n | . \]
For the Euclidean norm (indeed, for any norm induced by an inner product), the Cauchy--Bunyakovsky--Schwarz (or simply CBS) inequality holds:
\[ | {\bf x} \cdot {\bf y} | \le \| {\bf x} \| \, \| {\bf y} \| . \]
The inequality for sums was published by Augustin-Louis Cauchy (1789--1857) in 1821, while the corresponding inequality for integrals was first proved by Viktor Yakovlevich Bunyakovsky (1804--1889) in 1859. The modern proof (which essentially repeats Bunyakovsky's) of the integral inequality was given by Hermann Amandus Schwarz (1843--1921) in 1888. With the Euclidean norm, we can define the dot product as
\[ {\bf x} \cdot {\bf y} = \| {\bf x} \| \, \| {\bf y} \| \, \cos \theta , \]
where \( \theta \) is the angle between two vectors. ■
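In matlab, the norm command computes both of these norms, so the CBS inequality is easy to check numerically (a minimal sketch; the vectors are illustrative):

x = [3 4]; y = [1 2];
norm(x)        % Euclidean norm: 5
norm(x, 1)     % taxicab norm: |3| + |4| = 7
abs(dot(x,y)) <= norm(x)*norm(y)    % CBS; returns logical 1 (true)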

 


