Before we turn to vector products, we recall the basic product that is not tied to any vector structure. For any two sets A and B, their Cartesian product consists of all ordered pairs (𝑎, b) such that 𝑎 ∈ A and b ∈ B,
\[ A \times B = \left\{ (a,b)\,:\ a \in A, \quad b\in B \right\} . \]
If the sets A and B carry some algebraic structure (in our case, they are vector spaces), then we can define a suitable structure on the product set as well. So a direct product is like a Cartesian product, but with some additional structure. In our case, we equip it with the addition operation
\[ \left( a_1 , b_1 \right) + \left( a_2 , b_2 \right) = \left( a_1 + a_2 , b_1 + b_2 \right) \]
and scalar multiplication
\[ k \left( a , b \right) = \left( k\,a , k\,b \right) , \qquad k \in \mathbb{F}. \]
With these operations (addition and scalar multiplication), the direct product of two vector spaces becomes a vector space that is isomorphic to their direct sum: V × W ≌ V ⊕ W. However, neither the direct product nor the direct sum is bilinear in its arguments. The tensor product V ⊗ W fixes this deficiency.

Here 𝔽 is a field of scalars (either ℚ, the rational numbers, ℝ, the real numbers, or ℂ, the complex numbers). It is customary to denote the direct product of two copies of a scalar field as 𝔽² or, in the case of n factors, as 𝔽ⁿ.

In 1844, Hermann Grassmann (1809--1877) published, at his own expense, a book on geometric algebra not tied to dimension two or three. Grassmann developed several products, including a cross product represented then by brackets.

In 1877, to emphasize the fact that the result of a dot product is a scalar while the result of a cross product is a vector, William Kingdon Clifford (1845--1879) coined the alternative names scalar product and vector product for the two operations. These alternative names are still widely used in the literature.

In 1881, Josiah Willard Gibbs (1839--1903), and independently Oliver Heaviside (1850--1925), introduced the now-standard notation: a dot (a · b) for the dot product and a cross (a × b) for the cross product.

The German physicist Woldemar Voigt (1850--1919), a student of Ernst Neumann, used tensors for a description of stress and strain on crystals in 1898 (Die fundamentalen physikalischen Eigenschaften der Krystalle in elementarer Darstellung, Verlag von Veit & Comp., Leipzig, 1898). Tensor comes from the Latin tendere, which means “to stretch”.

The Italian mathematician Gregorio Ricci-Curbastro (1853--1925) and his student Tullio Levi-Civita (1873--1941) are credited with the invention and popularization of tensor calculus. One of Albert Einstein's most notable contributions to mathematics is his application of tensors in the general theory of relativity. The abstract mathematical formulation was given in the middle of the twentieth century by Alexander Grothendieck (1928--2014).

The wedge product notation took about a century to mature, from Hermann Grassmann's work (The Theory of Linear Extension, a New Branch of Mathematics, 1844) to Élie Cartan's book on differential forms published in 1945. The wedge symbol ∧ seems to have originated with Claude Chevalley (1909--1984) sometime between 1951 and 1954 and gained widespread use after that.

Vector Products

For high-dimensional mathematics and physics, it is important to have the right tools and symbols with which to work. This section provides an introduction to constructing a large variety of vector spaces from known spaces. Besides direct products, we consider several of their generalizations.

Let V be a vector space over the field 𝔽, where 𝔽 is either ℚ (rational numbers), ℝ (real numbers), or ℂ (complex numbers). Bilinear functions from V × V into 𝔽 were considered in the sections on the dot product and the inner product. In this section, we consider two important vector products, known as the tensor product and the cross product, as well as their generalization, the wedge product (also known as the exterior product). Our exposition is an attempt to bridge the gap between the elementary and advanced understandings of the tensor product, including the wedge product.

 

Tensor product


The tensor product is an operation combining two smaller vector spaces into one larger vector space. Elements of this larger space are called tensors. Unfortunately, mathematicians and physicists often present tensors and the tensor product in very different ways, sometimes making it difficult for a reader to see that authors in different fields are talking about the same thing. This short subsection cannot embrace the diversity of tensor applications, so it follows a more abstract approach, one more closely related to the linear algebra course. A section in Part 7 presents a particular application of antisymmetric tensors to vector calculus and the Stokes theorem.

Let V and U be two finite dimensional vector spaces over the same field of scalars 𝔽. Let α = { v1, v2, … , vn } and β = { u1, u2, … , um } be their bases, respectively. Then the tensor product of the vector spaces V and U, denoted by VU, is spanned by the basis { viuj : i = 1, 2, … , n; j = 1, 2, … , m }. Elements of VU are called tensors; they are linear combinations \[ \sum_{i,j} \tau^{i,j} v_i \otimes u_j , \qquad \tau^{i,j} \in \mathbb{F} , \] of these nm basis vectors, where the operation ⊗ satisfies the following two (bilinearity) axioms: \begin{equation} \label{EqTensor.1} c\left( {\bf v} \otimes {\bf u} \right) = \left( c\, {\bf v} \right) \otimes {\bf u} = {\bf v} \otimes \left( c\,{\bf u} \right) , \qquad c\in \mathbb{F}, \end{equation} and \begin{equation} \label{EqTensor.2} \begin{split} {\bf a} \otimes {\bf u} + {\bf b} \otimes {\bf u} &= \left( {\bf a} + {\bf b} \right) \otimes {\bf u} , \\ {\bf v} \otimes {\bf u} + {\bf v} \otimes {\bf w} &= {\bf v} \otimes \left( {\bf u} + {\bf w} \right) . \end{split} \end{equation}
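Both axioms are easy to check numerically with Mathematica, whose built-in TensorProduct command realizes ⊗ for explicit vectors; here is a minimal sketch (the particular vectors and scalar are ours):
a = {1, 2}; b = {3, 4}; c = 5;
TensorProduct[c a, b] == c TensorProduct[a, b] == TensorProduct[a, c b]
True
TensorProduct[a + {1, 1}, b] == TensorProduct[a, b] + TensorProduct[{1, 1}, b]
True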

This definition does not specify the structure of the basis elements viuj; instead, it characterizes them through a universal mapping property. Therefore, the tensor product can be applied to a great variety of objects and structures, including vectors, matrices, tensors, vector spaces, algebras, topological vector spaces, and modules, among others. The most familiar case is, perhaps, when A = ℝm and B = ℝn. Then ℝm⊗ℝn ≌ ℝmn (see Example 4).

From a mathematical viewpoint it is, of course, not a good idea to introduce a new concept in a basis-dependent way. However, there is a known basis-independent definition of VW: it is the space of bilinear maps V* × W* ⇾ 𝔽 from the direct product of the dual spaces into the scalar field. In either formulation, dim(VW) = dim(V) · dim(W).

Two interpretations of the tensor product are known, and both are widely used. Since the tensor product of an n-dimensional vector \( \displaystyle {\bf a} = a_1 {\bf u}_1 + a_2 {\bf u}_2 + \cdots + a_n {\bf u}_n \) and an m-dimensional vector \( \displaystyle {\bf b} = b_1 {\bf v}_1 + b_2 {\bf v}_2 + \cdots + b_m {\bf v}_m \) has dimension nm, it is natural to represent this product in matrix form:

\begin{equation} \label{EqTensor.3} {\bf a} \otimes {\bf b} = {\bf a} \,\overline{{\bf b}^{\mathrm{T}}} = \begin{bmatrix} a_1 b_1^{\ast} & a_1 b_2^{\ast} & \cdots & a_1 b_m^{\ast} \\ a_2 b_1^{\ast} & a_2 b_2^{\ast} & \cdots & a_2 b_m^{\ast} \\ \vdots & \vdots & \ddots & \vdots \\ a_n b_1^{\ast} & a_n b_2^{\ast} & \cdots & a_n b_m^{\ast} \end{bmatrix} \in \mathbb{F}^{n,m} . \end{equation}
Here \( \displaystyle \overline{b} = b^{\ast} = x -{\bf j}\,y \) is the complex conjugate of the complex number b = x + jy, and j is the imaginary unit on the complex plane ℂ, so j² = −1.

This approach is common, for instance, in digital image processing where images are represented by rectangular matrices. Hence, matrix representation \eqref{EqTensor.3} of the tensor product deserves a special name:

The outer product of two vectors \( {\bf u} = \left[ u_1 , u_2 , \ldots , u_m \right] \) and \( {\bf v} = \left[ v_1 , v_2 , \ldots , v_n \right] \), denoted by \( {\bf u} \otimes {\bf v} \), is the m-by-n matrix W whose coordinates satisfy \( w_{i,j} = u_i v_j^{\ast} \). The outer product \( {\bf u} \otimes {\bf v} \) is equivalent to the matrix multiplication \( {\bf u} \, {\bf v}^{\ast} \) (or \( {\bf u} \, {\bf v}^{\mathrm T} \) if the vectors are real), provided that u is represented as an \( m \times 1 \) column vector and v as an \( n \times 1 \) column vector. Here \( {\bf v}^{\ast} = \overline{{\bf v}^{\mathrm T}} . \)

In particular, uv is a matrix of rank 1, which means that most matrices cannot be written as tensor products of two vectors. A special case is constituted by the elements eiej: each forms the matrix that is 1 at position (i, j) and 0 elsewhere, and the set of all such matrices forms a basis for the set of m × n matrices, denoted by ℝm,n. Note that we use the field of real numbers as an example.
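We can confirm both observations with Mathematica (the sample vectors are ours): the outer product of any two nonzero vectors has matrix rank 1, and the outer product of two standard basis vectors is a matrix unit.
u = {1, 2, 3}; v = {4, 5};
W = Outer[Times, u, v]
{{4, 5}, {8, 10}, {12, 15}}
MatrixRank[W]
1
Outer[Times, {0, 1, 0}, {1, 0}]   (* the matrix unit with 1 in position (2, 1) *)
{{0, 0}, {1, 0}, {0, 0}}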

On the other hand, in quantum mechanics it is customary to represent the tensor product of two vectors as a long (column) vector of length n + m, similar to an element of the direct product. For example, the state of a two-particle system can be described by something called a density matrix ρ on the tensor product of their respective spaces ℂn⊗ℂn. A density matrix is a generalization of a unit vector that accounts for interactions between the two particles.

The vector representation of the tensor product captures all the ways in which basis elements can "interact" with each other. We can visualize this in the following picture, showing interactions of the basis elements in the tensor product of 2- and 3-vectors.

   ℝ2⊗ℝ3 ≌ ℝ6
     
Print["\nCenters of Large Circles"] lgPts = Join[Transpose[{{1, 1}, {1.5, 2}}], Transpose[{{2, 2}, {1.5, 2}}]] Print["\nCenters of Small Circles"] smPts = Join[Transpose[{{1, 1, 1}, {0, 1, .5}}], Transpose[{{2, 2, 2}, {0, 1, .5}}]] lgCir = Graphics[{Red, Circle[#, .05]} & /@ lgPts]; smCir = Graphics[{Blue, Circle[#, .02]} & /@ smPts]; lines = Graphics[{ Line[{lgPts[[1]], smPts[[4]]}], Line[{lgPts[[1]], smPts[[5]]}], Line[{lgPts[[1]], smPts[[6]]}], Line[{smPts[[2]], lgPts[[4]]}], Line[{smPts[[2]], lgPts[[3]]}]}]; Show[lgCir, smCir, lines, Axes -> False, ImageSize -> 100]

It is possible to represent the tensor product of two vectors of lengths m and n as a single vector of length mn. However, this approach, known as the Kronecker product, is less economical than the vector representation inherited from the direct product. Therefore, the Kronecker product is described in the matrix part of this course, where it originated.
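As a brief illustration of this identification (assuming the usual row-major flattening of the outer-product matrix), flattening u ⊗ v produces a vector of length mn:
u = {3, 2, 1}; v = {1, 2, 3, 4};
Flatten[Outer[Times, u, v]]
{3, 6, 9, 12, 2, 4, 6, 8, 1, 2, 3, 4}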
Example 1: Let us consider two vectors
\[ {\bf v} = \begin{pmatrix} 3 \\ 2 \\ 1 \end{pmatrix} \qquad \mbox{and} \qquad {\bf u} = \begin{pmatrix} 1 \\ 2 \\ 3 \\ 4 \end{pmatrix} \]
Then their tensor product in matrix form is
\[ {\bf v} \otimes {\bf u} = \begin{bmatrix} 3 & 6 & 9 & 12 \\ 2 & 4 & 6 & 8 \\ 1 & 2 & 3 & 4 \end{bmatrix} . \]
Example 2: Let us consider two vectors
\[ {\bf v} = \begin{pmatrix} 3 \\ 2 \\ 1 \end{pmatrix} \qquad \mbox{and} \qquad {\bf u} = \begin{pmatrix} 1 \\ 2 \\ 3 \\ 4 \end{pmatrix} \]
For ℝ³, we have the standard basis:
\[ {\bf e}_1 = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} , \qquad {\bf e}_2 = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} , \qquad {\bf e}_3 = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} ; \]
So the given vector v can be expanded as
\[ {\bf v} = 3\,{\bf e}_1 + 2\,{\bf e}_2 + {\bf e}_3 . \]
For ℝ4, we have a similar standard basis:
\[ {\bf e}^1 = \begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \end{pmatrix} , \qquad {\bf e}^2 = \begin{pmatrix} 0 \\ 1 \\ 0 \\ 0 \end{pmatrix} , \qquad {\bf e}^3 = \begin{pmatrix} 0 \\ 0 \\ 1 \\ 0 \end{pmatrix} , \qquad {\bf e}^4 = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \end{pmatrix} . \]
This allows us to represent vector u as
\[ {\bf u} = {\bf e}^1 + 2\,{\bf e}^2 + 3\,{\bf e}^3 + 4\, {\bf e}^4 . \]
From these basis vectors for ℝ³ and ℝ4, we build 12 basis vectors for the tensor product by concatenating (stacking) pairs of them:
\[ {\bf e}_1 \otimes {\bf e}^1 = \left( \begin{array}{c} 1 \\ 0 \\ 0 \\ \hline 1 \\ 0 \\ 0 \\ 0 \end{array} \right) , \quad {\bf e}_2 \otimes {\bf e}^1 = \left( \begin{array}{c} 0 \\ 1 \\ 0 \\ \hline 1 \\ 0 \\ 0 \\ 0 \end{array} \right) , \quad {\bf e}_3 \otimes {\bf e}^1 = \left( \begin{array}{c} 0 \\ 0 \\ 1 \\ \hline 1 \\ 0 \\ 0 \\ 0 \end{array} \right) , \]
and
\[ {\bf e}_1 \otimes {\bf e}^2 = \left( \begin{array}{c} 1 \\ 0 \\ 0 \\ \hline 0 \\ 1 \\ 0 \\ 0 \end{array} \right) , \quad {\bf e}_2 \otimes {\bf e}^2 = \left( \begin{array}{c} 0 \\ 1 \\ 0 \\ \hline 0 \\ 1 \\ 0 \\ 0 \end{array} \right) , \quad {\bf e}_3 \otimes {\bf e}^2 = \left( \begin{array}{c} 0 \\ 0 \\ 1 \\ \hline 0 \\ 1 \\ 0 \\ 0 \end{array} \right) , \]
and
\[ {\bf e}_1 \otimes {\bf e}^3 = \left( \begin{array}{c} 1 \\ 0 \\ 0 \\ \hline 0 \\ 0 \\ 1 \\ 0 \end{array} \right) , \quad {\bf e}_2 \otimes {\bf e}^3 = \left( \begin{array}{c} 0 \\ 1 \\ 0 \\ \hline 0 \\ 0 \\ 1 \\ 0 \end{array} \right) , \quad {\bf e}_3 \otimes {\bf e}^3 = \left( \begin{array}{c} 0 \\ 0 \\ 1 \\ \hline 0 \\ 0 \\ 1 \\ 0 \end{array} \right) , \]
and
\[ {\bf e}_1 \otimes {\bf e}^4 = \left( \begin{array}{c} 1 \\ 0 \\ 0 \\ \hline 0 \\ 0 \\ 0 \\ 1 \end{array} \right) , \quad {\bf e}_2 \otimes {\bf e}^4 = \left( \begin{array}{c} 0 \\ 1 \\ 0 \\ \hline 0 \\ 0 \\ 0 \\ 1 \end{array} \right) , \quad {\bf e}_3 \otimes {\bf e}^4 = \left( \begin{array}{c} 0 \\ 0 \\ 1 \\ \hline 0 \\ 0 \\ 0 \\ 1 \end{array} \right) , \]
Then we can express the tensor product vu through these basis vectors:
\begin{align*} {\bf v} \otimes {\bf u} &= \left( 3\,{\bf e}_1 + 2\,{\bf e}_2 + {\bf e}_3 \right) \otimes \left( {\bf e}^1 + 2\,{\bf e}^2 + 3\,{\bf e}^3 + 4\, {\bf e}^4 \right) \\ &= 3\,{\bf e}_1 \otimes {\bf e}^1 + 6\, {\bf e}_1 \otimes {\bf e}^2 + 9\,{\bf e}_1 \otimes {\bf e}^3 + 12\,{\bf e}_1 \otimes {\bf e}^4 \\ &\quad + 2\,{\bf e}_2 \otimes {\bf e}^1 + 4\, {\bf e}_2 \otimes {\bf e}^2 + 6\,{\bf e}_2 \otimes {\bf e}^3 + 8\,{\bf e}_2 \otimes {\bf e}^4 \\ &\quad + {\bf e}_3 \otimes {\bf e}^1 + 2\, {\bf e}_3 \otimes {\bf e}^2 + 3\,{\bf e}_3 \otimes {\bf e}^3 + 4\,{\bf e}_3 \otimes {\bf e}^4 . \end{align*}
If you tried to rewrite this tensor as a single 7-dimensional vector by adding up the stacked columns, most information about the tensor product would be lost:
\[ {\bf v} \otimes {\bf u} = \mbox{wrong } \left( \begin{array}{c} 30 \\ 20 \\ 10 \\ \hline 6 \\ 12 \\ 18 \\ 24 \end{array} \right) . \]
End of Example 2
There is a similarity between the direct product V × U and the tensor product VU: the space V × U consists of pairs (v, u) with vV and uU, while VU is built from vectors vu. Multiplication by scalars in the tensor product is defined according to the rule (axiom) in Eq.\eqref{EqTensor.1}, and addition is formulated in Eq.\eqref{EqTensor.2}. So we see that these two operations are completely different from the addition and scalar multiplication in the direct product.

These seemingly innocuous changes clearly have huge implications for the structure of the tensor space. It can be shown that the tensor product VU is a vector space because it is a quotient space of a free vector space built on the product space (we do not prove it here; see, for instance, the works of Phil Lucht).

Example 3: Let us simplify
\[ (2, 1) \otimes (1, 4) + (2, -3) \otimes (-2, 3) . \]
Let us introduce the basis vectors x = (1, 0) and y = (0, 1). Then
\begin{align*} {\bf u} \otimes {\bf v} &= (2, 1) \otimes (1, 4) + (2, -3) \otimes (-2, 3) \\ &= \left( 2x + y \right) \otimes \left( x + 4y \right) + \left( 2x -3y \right) \otimes \left( -2x + 3y \right) \\ &= 2 x \otimes x + 8 x \otimes y + y \otimes x + 4 y \otimes y - 4 x \otimes x + 6 x \otimes y + 6 y \otimes x - 9 y \otimes y \\ &= - 2 x \otimes x + 14 x \otimes y + 7 y \otimes x - 5 y \otimes y . \end{align*}
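The same bookkeeping can be done in Mathematica by adding the two outer-product matrices; the entries reproduce the four coefficients found above:
Outer[Times, {2, 1}, {1, 4}] + Outer[Times, {2, -3}, {-2, 3}]
{{-2, 14}, {7, -5}}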
Example 4: First, we consider the simplest case, m = n = 1, and find ℝ⊗ℝ. It is spanned by the single basis vector ij, where i and j are unit vectors in ℝ (hence, they are the same). Therefore, ℝ⊗ℝ can be visualized as the line (x, x) drawn in the plane ℝ².

In general, we have ℝm,1⊗ℝ1,n ≌ ℝm,n with the product defined to be uv = u v, the matrix product of a column and a row vector. Of course, the two spaces 𝔽m,n and 𝔽n,m are naturally isomorphic to each other, 𝔽m,n ≌ 𝔽n,m, so in that sense they are the same; but we would never write vu when we mean uv.

End of Example 4
Example 5: For instance, if m = 4 and n = 3, then
\[ {\bf u} \otimes {\bf v} = {\bf u} \, {\bf v}^{\mathrm T} = \begin{bmatrix} u_1 \\ u_2 \\ u_3 \\ u_4 \end{bmatrix} \begin{bmatrix} v_1 & v_2 & v_3 \end{bmatrix} = \begin{bmatrix} u_1 v_1 & u_1 v_2 & u_1 v_3 \\ u_2 v_1 & u_2 v_2 & u_2 v_3 \\ u_3 v_1 & u_3 v_2 & u_3 v_3 \\ u_4 v_1 & u_4 v_2 & u_4 v_3 \end{bmatrix} . \]
In Mathematica, the outer product has a special command:
Outer[Times, {1, 2, 3, 4}, {a, b, c}]
Out[1]= {{a, b, c}, {2 a, 2 b, 2 c}, {3 a, 3 b, 3 c}, {4 a, 4 b, 4 c}}
If we take two complex-valued vectors \( {\bf u} = [1 + {\bf j}, 2, -1 -2{\bf j}, 2 -{\bf j}] \) and \( {\bf v} = [3 + {\bf j}, -1 + {\bf j}, 2 -{\bf j}] , \) then their outer product becomes
\[ {\bf u} \otimes {\bf v} = {\bf u} \, \overline{{\bf v}^{\mathrm T}} = \begin{bmatrix} 4+2{\bf j} & - 2{\bf j}&1 + 3{\bf j} \\ 6-2{\bf j} & -2-2{\bf j} & 4+2{\bf j} \\ -5 -5{\bf j} & -1 + 3{\bf j} & -5{\bf j} \\ 5 - 5{\bf j} & -3-{\bf j} & 5 \end{bmatrix} , \]
Outer[Times, {1 + I, 2, -1 - 2*I, 2 - I} , Conjugate[{3 + I, -1 + I, 2 - I}]]
{{1 + I}, {2}, {-1 - 2*I}, { 2 - I}} .Conjugate[{{3 + I, -1 + I, 2 - I}}]
{{4 + 2 I, -2 I, 1 + 3 I}, {6 - 2 I, -2 - 2 I, 4 + 2 I}, {-5 - 5 I, -1 + 3 I, -5 I}, {5 - 5 I, -3 - I, 5}}
MatrixRank[%]
Out[3]= 1
which confirms that it is a rank-1 matrix.
End of Example 5
Example 6: Let V = 𝔽[x] be the vector space of polynomials in one variable. Then VV = 𝔽[x₁, x₂] is the space of polynomials in two variables, where the product is defined by p(x)⊗q(x) = p(x₁) q(x₂).

Note: this is not a commutative product, because in general

\[ p(x)\otimes q(x) = p\left( x_1 \right) q\left( x_2 \right) \ne q \left( x_1 \right) p \left( x_2 \right) = q\left( x \right) \otimes p \left( x \right) . \]
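A small Mathematica illustration of this noncommutativity (the sample polynomials p and q are ours):
p[x_] := 1 + 2 x; q[x_] := x^2;
p[x1] q[x2]   (* represents p(x) ⊗ q(x) *)
(1 + 2 x1) x2^2
q[x1] p[x2]   (* represents q(x) ⊗ p(x) *)
x1^2 (1 + 2 x2)
Expand[p[x1] q[x2]] === Expand[q[x1] p[x2]]
False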
End of Example 6

Example 7: If V is any vector space over 𝔽, then V ⊗ 𝔽 ≌ V. In this case, tensor product, ⊗, is just scalar multiplication.
End of Example 7
Example 8: If V and U are finite dimensional vector spaces over a field 𝔽, then V* = ℒ(V, 𝔽) is the set of all linear functionals on V. Hence, V* ⊗ U = ℒ(V, U), where φ⊗u ∈ ℒ(V, U) is the linear transformation from V to U defined by (φ⊗u)(v) = φ(v)⋅u, for φ ∈ V* and uU.

This result is just the abstract version of Example 4. If V = 𝔽n,1 and U = 𝔽m,1, then V* = 𝔽1,n. From Example 4, VU is identified with 𝔽m,n, which is in turn identified with ℒ(V, U), the set of all linear transformations from V to U.

Note: If V and U are both infinite dimensional, then V* ⊗ U is a subspace of ℒ(V, U), but not equal to it, because linear combinations include only a finite number of terms. Specifically, V* ⊗ U = { T ∈ ℒ(V, U) : dim range(T) < ∞ } is the set of finite-rank linear transformations in ℒ(V, U), the space of all linear maps from V to U. Recall that the tensor product V* ⊗ U consists only of finite linear combinations of the rank-1 elements φ⊗u.

End of Example 8
Example 9: Let us consider ℚn and ℝ as vector spaces over the field of rational numbers. Then ℚn⊗ℝ is isomorphic as a vector space over ℚ to ℝn, where the multiplication is just scalar multiplication on ℝn.

Let α = { e1, e2, … , en } be the standard basis for ℚn and let β be an infinite basis for ℝ over ℚ. First we show that α⊗β spans ℝn. Let (𝑎1, 𝑎2, … , 𝑎n) ∈ ℝn. Every component can be expanded into a finite sum \( \displaystyle a_i = \sum_j b_{ij} x_j , \quad i=1,2,\ldots , n , \) where xj ∈ β. Thus, \( \displaystyle ( a_1 , a_2 , \ldots , a_n ) = \sum_{ij} b_{ij} {\bf e}_i \otimes x_j . \)

Next we show that α⊗β is linearly independent. Suppose the opposite: \( \displaystyle \sum_{i,j} c_{ij} {\bf e}_i \otimes x_j = 0 . \) Since \[ \sum_{i,j} c_{ij} {\bf e}_i \otimes x_j = \left( \sum_j c_{1,j} x_j , \sum_j c_{2,j} x_j , \ldots , \sum_j c_{n,j} x_j \right) , \] we have \( \displaystyle \sum_{j} c_{ij} x_j = 0 \) for all i. Since the { xj } are linearly independent, cij = 0, as required.

End of Example 9

 

Vector or Cross product


It is well known that a real vector space can be equipped with a multiplication of vectors (besides vector addition) that makes it a field only in dimensions 1 and 2. The cross product is a successful attempt to implement a product in a three-dimensional vector space, at the cost of losing the commutativity of multiplication. On the other hand, the outer product assigns a matrix to two vectors of arbitrary sizes.

For any two vectors in ℝ³, \( \displaystyle {\bf a} = \left[ a_1 , a_2 , a_3 \right] \) and \( \displaystyle {\bf b} = \left[ b_1 , b_2 , b_3 \right] , \) their cross product is the vector
\begin{equation} \label{EqCross.1} {\bf a} \times {\bf b} = \det \begin{bmatrix} \hat{\bf e}_1 & \hat{\bf e}_2 & \hat{\bf e}_3 \\ a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \end{bmatrix} = \begin{bmatrix} a_2 b_3 - a_3 b_2 \\ a_3 b_1 - a_1 b_3 \\ a_1 b_2 - a_2 b_1 \end{bmatrix} . \end{equation}
Here \( \displaystyle \hat{\bf e}_1 , \ \hat{\bf e}_2 , \ \hat{\bf e}_3 \ \) are unit vectors in ℝ³ that are also usually denoted by i, j, and k, respectively.
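The component formula \eqref{EqCross.1} is easy to check against the built-in Cross command; the helper crossManual below is our own name:
crossManual[a_, b_] := {a[[2]] b[[3]] - a[[3]] b[[2]], a[[3]] b[[1]] - a[[1]] b[[3]], a[[1]] b[[2]] - a[[2]] b[[1]]};
crossManual[{1, -1, 2}, {-2, 3, 1}] == Cross[{1, -1, 2}, {-2, 3, 1}]
True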
The cross product of two vectors a and b from ℝ³ can equivalently be defined by
\begin{equation} \label{EqCross.2} {\bf a} \times {\bf b} = \| {\bf a} \| \, \| {\bf b} \|\,\sin (\theta )\,{\bf n} , \end{equation}
where
  • θ is the angle between a and b in the plane containing them (hence, it is between 0 and π);
  • ∥ x ∥ is the Euclidean norm, so \( \displaystyle \| {\bf x} \| = \| (x_1 , x_2 , x_3 )\| = \sqrt{x_1^2 + x_2^2 + x_3^2} ; \)
  • n is a unit vector perpendicular to the plane containing a and b, with direction such that the ordered set (a, b, n) is positively oriented. Conventionally, the direction of n is given by the right-hand rule: one points the forefinger of the right hand in the direction of a and the middle finger in the direction of b; then the vector n comes out of the thumb (see the adjacent picture).

Right-hand rule.
Cross product.

We plot the cross product of two vectors with Mathematica using the Manipulate and DynamicModule commands, so you can see the value of the cross product, which equals the area of the parallelogram formed by the two given vectors (it is marked in blue).
mani1 = Manipulate[
  DynamicModule[{vv, ww, angles, cros},
   vv = Normalize[v]; ww = Normalize[w];
   cros = First[PadLeft[v, 3]\[Cross]PadLeft[w, 3]];
   angles = Sort[N@{ArcTan @@ vv, ArcTan @@ ww}];
   Graphics[{
     {If[showPar, {If[cros > 0, RGBColor[0.6, 0.7, 1], RGBColor[1, 0.6, 0.7]],
        Polygon[{{0, 0}, v, v + w, w}]}]},
     {If[cros < 0, Hue[0.7], RGBColor[1, 0.47, 0]],
      Circle[{0, 0}, .3, If[-Subtract @@ angles < \[Pi], angles, Reverse[angles] + {0, 2 \[Pi]}]]},
     {GrayLevel[0], Arrow[{{0, 0}, v}]},
     {GrayLevel[0], Arrow[{{0, 0}, w}]},
     {If[cros > 0, RGBColor[1, 0.47, 0], RGBColor[0.4156862745098039, 0.3529411764705882, 0.803921568627451]],
      Arrow[{{0, 0}, cros {-1/2, -1/2}}]},
     Text["\!\(\*OverscriptBox[\(v\), \(\[RightVector]\)]\)", v 3/4, {.3, 1}],
     Text["\!\(\*OverscriptBox[\(w\), \(\[RightVector]\)]\)", 3 w/4, {1.75, 1}],
     Text["\!\(\*OverscriptBox[\(v\), \(\[RightVector]\)]\)\[ThinSpace]\[Cross]\[ThinSpace]\!\(\*OverscriptBox[\(w\), \(\[RightVector]\)]\)",
      cros {-1/2, -1/2}, {1, 1}]},
    Axes -> True, Ticks -> False, PlotRange -> 3, ImageSize -> {400, 400},
    PlotLabel -> Grid[{{If[cros > 0,
         Style["acute or obtuse angle", RGBColor[1, 0.47, 0]],
         Style["reflex angle", RGBColor[0.4156862745098039, 0.3529411764705882, 0.803921568627451]]],
        Norm["\!\(\*OverscriptBox[\(v\), \(\[RightVector]\)]\)\[ThinSpace]\[Cross]\[ThinSpace]\!\(\*OverscriptBox[\(w\), \(\[RightVector]\)]\)"] ==
         DecimalForm[Abs[cros], {4, 3}]}}, ItemSize -> 10]]],
  {{showPar, True, "show parallelogram"}, {True, False}},
  {{v, {1., 0.5}}, {-3, -3}, {3, 3}, Locator, Appearance -> None, Exclusions -> {0, 0}},
  {{w, {0.4, 1.53}}, {-3, -3}, {3, 3}, Locator, Appearance -> None, Exclusions -> {0, 0}}]
Example 10: We consider two vectors
\[ {\bf v} = (-1, 4, 14) \qquad \mbox{and} \qquad {\bf w} = (-7, 21, 24) . \]
Their cross product can be evaluated with the aid of Mathematica:
v = {-1, 4, 14}; w={-7, 21, 24}; Cross[w, v]
{198, 74, -7}
\[ {\bf w} \times {\bf v} = \det \begin{bmatrix} {\bf i} & {\bf j} & {\bf k} \\ -7 & 21 & 24 \\ -1 & 4 & 14 \end{bmatrix} = \begin{bmatrix} 198 \\ 74 \\ -7 \end{bmatrix} . \]
Its length is
\[ \| {\bf w} \times {\bf v} \| = \sqrt{198^2 + 74^2 + 7^2} = \sqrt{44729} \approx 211.492 \]
Norm[%]
Sqrt[44729]
On the other hand, the lengths of the given two vectors are
\[ \| {\bf w} \| = \sqrt{7^2 + 21^2 + 24^2} = \sqrt{1066} \approx 32.6497, \qquad \| {\bf v} \| = \sqrt{1 + 16+ 14^2} = \sqrt{213} \approx 14.5945. \]
With this information, we can find sine of the angle between these two vectors:
\[ \sin\theta = \frac{\|{\bf w} \times {\bf v} \| }{\| {\bf v} \| \cdot \| {\bf w} \|} = \frac{\sqrt{44729}}{\sqrt{1066} \cdot \sqrt{213}} = \sqrt{\frac{44729}{227058}} \approx 0.44384 . \]
Therefore, the angle between these two vectors v and w is
\[ \theta = \arcsin 0.44384 \approx 0.459879 \mbox{ radians}. \]
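The built-in VectorAngle command confirms this value (the angle here is acute, so the arcsine of its sine recovers it):
N[VectorAngle[v, w]]
0.459879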
Finally, we plot the cross product of the given two vectors:
v = {-1.4285726530612253`, 4.285717959183676`, 14.062499776785712`};
w = {-7.142857551020407`, 21.428572653061224`, 24.999998214285714`};
k = Cross[v, w];
vplot = Graphics3D[{
    {Arrowheads[.02], Red, Thick, Arrow[{{0, 0, 0}, v}],
     Text[StyleForm["v", {Red, "Times-Bold", 12}], {{v[[1]], v[[2]], v[[3]]} + {1.9, -1, 1.9}}]},
    {Arrowheads[.02], Red, Thick, Arrow[{{0, 0, 0}, w}],
     Text[StyleForm["w", {Red, "Times-Bold", 12}], {{w[[1]], w[[2]], w[[3]]} + {.3, 2.9, .3}}]},
    {Blue, Thick, Arrow[{{0, 0, 0}, Cross[v, w]}],
     Text[StyleForm["v \[Cross] w", {"Times-Bold", 12}], {.8 k[[1]], .6 k[[2]], k[[3]]}]}}];
poly = Polygon[{{0, 0, 0}, v, v + w, w}];
polGr = Graphics3D[{LightBlue, Opacity[.5], poly}];
show1 = Show[polGr, vplot, Axes -> False, Lighting -> False, Boxed -> False,
  ViewPoint -> {3.3150892772343012`, 0.11273490348561471`, 0.6689349187344378}]
   Cross product.

The perpendicularity relationship is slightly more pronounced from this viewpoint:

show2 = Show[polGr, vplot, Axes -> False, Lighting -> False, Boxed -> False, ViewPoint -> {2.9675671762254416`, -1.3765773178764422`, 0.8652051447474535}]
   Cross product.

The cross product is indeed perpendicular to both of the original vectors:

VectorAngle[v, k]/Degree
90.
VectorAngle[w, k]/Degree
90.
End of Example 10

Despite the fact that the cross product is identified uniquely by three entries, there is something very peculiar about the vector u × v. If u and v are orthogonal unit vectors, then the vectors u, v, u × v form a right-handed coordinate system. But if R is the linear transformation of ℝ³ that mirrors vectors in the u, v plane, then { Ru, Rv, R(u×v) } = { u, v, −u×v } forms a left-handed coordinate system. In general, the cross product obeys the following identity under a matrix transformation:

\begin{equation} \label{EqCross.3} \left( {\bf M}\,{\bf u} \right) \times \left( {\bf M}\,{\bf v} \right) = (\det {\bf M} )\,{\bf M}^{-\mathrm{T}} \left( {\bf u} \times {\bf v} \right) , \end{equation}
where M is a 3×3 matrix and M−T is the transpose of the inverse matrix.

Thus, u×v does not really transform as a vector. This anomaly alerts us to the fact that the cross product is not a true vector; such a quantity is called a pseudovector (or axial vector). In fact, the cross product transforms more like a tensor than a vector.

Cross product of negative vectors: (−u) × (−v) = u × v.

Example 11: Let us consider two vectors a = [ 2, −1, −1 ] and b = [ −1, 2, 1 ]. Their cross product is
\[ {\bf a} \times {\bf b} = \begin{pmatrix} \phantom{-}2 \\ -1 \\ -1 \end{pmatrix} \times \begin{pmatrix} -1 \\ \phantom{-}2 \\ \phantom{-}1 \end{pmatrix} = \begin{pmatrix} \phantom{-}1 \\ -1 \\ \phantom{-}3 \end{pmatrix} . \]
Cross[{2, -1, -1}, {-1, 2, 1}]
{1, -1, 3}

Let us consider the following transformation:

\[ {\bf M} = \begin{bmatrix} 3 & -1 & 11 \\ 2 & \phantom{-}2 & -3 \\ 2 & \phantom{-}1 & \phantom{-}1 \end{bmatrix} . \]
M = {{3, -1, 11}, {2, 2, -3}, {2, 1, 1}}
Its transposed inverse is
\[ {\bf M}^{-{\mathrm T}} = \begin{bmatrix} \phantom{-}5 & -8 & -2 \\ 12 & -19 & -5 \\ -19 & \phantom{-}31 & \phantom{-}8 \end{bmatrix} . \]
sM = Transpose[Inverse[M]]
{{5, -8, -2}, {12, -19, -5}, {-19, 31, 8}}
Now we multiply matrix M and vectors:
\[ {\bf M}\,{\bf a} = \begin{bmatrix} -4 \\ \phantom{-}5 \\ \phantom{-}2 \end{bmatrix} , \qquad {\bf M}\,{\bf b} = \begin{bmatrix} \phantom{-}6 \\ -1 \\ \phantom{-}1 \end{bmatrix} . \]
M = {{3, -1, 11}, {2, 2, -3}, {2, 1, 1}} ;
a = {2, -1, -1} ;
M.a
{-4, 5, 2}
and
M = {{3, -1, 11}, {2, 2, -3}, {2, 1, 1}} ;
b = {-1, 2, 1};
M.b
{6, -1, 1}
Their cross product becomes
\[ \left( {\bf M}\,{\bf a} \right) \times \left( {\bf M}\,{\bf b} \right) = \begin{pmatrix} -4 \\ \phantom{-}5 \\ \phantom{-}2 \end{pmatrix} \times \begin{pmatrix} \phantom{-}6 \\ -1 \\ \phantom{-}1 \end{pmatrix} = \begin{pmatrix} \phantom{-}7 \\ \phantom{-}16 \\ -26 \end{pmatrix} . \]
Cross[{-4, 5, 2}, {6, -1, 1}]
{7, 16, -26}
On the other hand, since det M = 1,
\[ \left( \det {\bf M} \right) {\bf M}^{-\mathrm{T}} \left( {\bf a} \times {\bf b} \right) = {\bf M}^{-\mathrm{T}} \begin{pmatrix} 1 \\ -1 \\ 3 \end{pmatrix} = \begin{pmatrix} 7 \\ 16 \\ -26 \end{pmatrix} , \]
in agreement with \eqref{EqCross.3}.
ab = {1, -1, 3};
sM = Transpose[Inverse[M]] ;
sM.ab
{7, 16, -26}
End of Example 11

One of the main motivations for the cross product comes from classical mechanics: the torque τ about a point, due to a force F applied at displacement r from that point, is expressed through the cross product: τ = r × F.

The cross product is convenient to define with the Levi-Civita symbol that was invented in the late 1800's by the Italian mathematician Tullio Levi-Civita (1873--1941), a student of Gregorio Ricci:

\begin{equation} \label{EqCross.4} \varepsilon_{i,j,k} = \begin{cases} \phantom{-}0 , & \quad\mbox{if any two labels are the same}, \\ -1, & \quad \mbox{if }i,\, j,\,k \mbox{ is an odd permutation of 1, 2, 3}, \\ \phantom{-}1 , & \quad \mbox{if } i,\, j,\,k \mbox{ is an even permutation of 1, 2, 3}. \end{cases} \end{equation}
The Levi-Civita symbol εijk is a tensor of rank three that is anti-symmetric in each pair of indices. The determinant of a matrix A with elements 𝑎ij can be written in terms of εijk:
\[ \det\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} = \sum_{i,j,k=1}^3 \varepsilon_{ijk} a_{1i} a_{2j} a_{3k} = \varepsilon_{ijk} a_{1i} a_{2j} a_{3k} , \]
upon dropping sign of summation using the Einstein summation rule. The Levi-Civita symbol possesses the following properties
  • The Levi-Civita tensor has 3×3×3 = 27 components;
  • 27 − 6 = 21 components are equal to 0 (those with a repeated index);
  • 3 components are equal to +1: ε123 = ε231 = ε312;
  • 3 components are equal to −1: ε321 = ε213 = ε132.
Cross product in index notation using Levi-Civita symbol is
\begin{equation} \label{EqCross.5} {\bf a} \times {\bf b} = \varepsilon_{ijk} \hat{\bf e}_i a_j b_k . \end{equation}
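Formula \eqref{EqCross.5} can be checked directly with Mathematica's built-in LeviCivitaTensor command (the sample vectors are ours):
eps = Normal[LeviCivitaTensor[3]];
a = {2, -1, -1}; b = {-1, 2, 1};
Table[Sum[eps[[i, j, k]] a[[j]] b[[k]], {j, 3}, {k, 3}], {i, 3}]
{1, -1, 3}
Cross[a, b]
{1, -1, 3}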

Given two space vectors, a and b, we can find a third space vector c, called the cross product of a and b, and denoted by c = a × b. The magnitude of c is defined by |c| = |a| |b| sin(θ), where θ is the angle between a and b. The direction of c is given by the right-hand rule: If a is turned to b (note the order in which a and b appear here) through the angle between a and b, a (right-handed) screw that is perpendicular to a and b will advance in the direction of a × b. This definition implies that

\[ {\bf a} \times {\bf b} = - {\bf b} \times {\bf a} . \]
This property is described by saying that the cross product is antisymmetric. The definition also implies that
\[ {\bf a} \cdot \left( {\bf a} \times {\bf b}\right) = {\bf b} \cdot \left( {\bf a} \times {\bf b}\right) = 0 , \]
where the dot stands for the dot product. That is, a × b is perpendicular to both a and b. The vector product has the following properties:
\begin{align*} {\bf a} \times \left( \alpha {\bf b}\right) &= \left( \alpha {\bf a} \right) \times {\bf b} = \alpha \left( {\bf a} \times {\bf b}\right) , \\ {\bf a} \times \left( {\bf b} + {\bf c} \right) &= {\bf a} \times {\bf b} + {\bf a} \times {\bf c} , \\ {\bf a} \times {\bf a} &= {\bf 0} . \end{align*}
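These properties (together with the antisymmetry noted above) are easy to verify numerically with sample vectors of our choosing:
a = {1, 2, 3}; b = {4, 5, 6}; c = {7, 8, 9};
Cross[a, b] == -Cross[b, a]
True
Cross[a, b + c] == Cross[a, b] + Cross[a, c]
True
Cross[a, a]
{0, 0, 0}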
Using these properties, we can write the vector product of two vectors in terms of their components. We are interested in a more general result valid in other coordinate systems as well. So, rather than using x, y, and z as subscripts for unit vectors, we use the numbers 1, 2, and 3. In that case, our results can also be used for spherical and cylindrical coordinates which we shall discuss shortly.
\begin{align*} {\bf a} \times {\bf b} &= \left( \alpha_1 \hat{\bf e}_1 + \alpha_2 \hat{\bf e}_2 + \alpha_3 \hat{\bf e}_3 \right) \times \left( \beta_1 \hat{\bf e}_1 + \beta_2 \hat{\bf e}_2 + \beta_3 \hat{\bf e}_3 \right) \\ &= \alpha_1 \beta_1 \hat{\bf e}_1 \times \hat{\bf e}_1 + \alpha_1 \beta_2 \hat{\bf e}_1 \times \hat{\bf e}_2 + \alpha_1 \beta_3 \hat{\bf e}_1 \times \hat{\bf e}_3 \\ & \quad + \alpha_2 \beta_1 \hat{\bf e}_2 \times \hat{\bf e}_1 + \alpha_2 \beta_2 \hat{\bf e}_2 \times \hat{\bf e}_2 + \alpha_2 \beta_3 \hat{\bf e}_2 \times \hat{\bf e}_3 \\ & \quad + \alpha_3 \beta_1 \hat{\bf e}_3 \times \hat{\bf e}_1 + \alpha_3 \beta_2 \hat{\bf e}_3 \times \hat{\bf e}_2 + \alpha_3 \beta_3 \hat{\bf e}_3 \times \hat{\bf e}_3 \end{align*}
Using the antisymmetry property of cross product, we have
\[ \hat{\bf e}_1 \times \hat{\bf e}_1 = \hat{\bf e}_2 \times \hat{\bf e}_2 = \hat{\bf e}_3 \times \hat{\bf e}_3 = {\bf 0}. \]
Also, if we assume that \( \hat{\bf e}_1 , \ \hat{\bf e}_2 , \ \hat{\bf e}_3 \) form a so-called right-handed set, i.e., if
\begin{align} \notag \hat{\bf e}_1 \times \hat{\bf e}_2 &= - \hat{\bf e}_2 \times \hat{\bf e}_1 = \hat{\bf e}_3 , \\ \hat{\bf e}_1 \times \hat{\bf e}_3 &= - \hat{\bf e}_3 \times \hat{\bf e}_1 = - \hat{\bf e}_2 , \label{EqCross.6} \\ \hat{\bf e}_2 \times \hat{\bf e}_3 &= - \hat{\bf e}_3 \times \hat{\bf e}_2 = \hat{\bf e}_1 , \notag \end{align}
then we obtain
\[ {\bf a} \times {\bf b} = \left( \alpha_2 \beta_3 - \alpha_3 \beta_2 \right) \hat{\bf e}_1 + \left( \alpha_3 \beta_1 - \alpha_1 \beta_3 \right) \hat{\bf e}_2 + \left( \alpha_1 \beta_2 - \alpha_2 \beta_1 \right) \hat{\bf e}_3 . \]
Theorem 1: \[ {\bf a} \times \left( {\bf b} \times {\bf c} \right) = {\bf b} \left( {\bf a}\cdot {\bf c} \right) - {\bf c} \left( {\bf a}\cdot {\bf b} \right) . \]
Let
\[ {\bf d} = {\bf a} \times \left( {\bf b} \times {\bf c} \right) , \]
then its m-th component is
\begin{align*} d_m &= \varepsilon_{mni} a_n \left( \varepsilon_{ijk} b_j c_k \right) \\ &= \varepsilon_{mni} \varepsilon_{ijk} a_n b_j c_k = \left( \delta_{mj} \delta_{nk} - \delta_{mk} \delta_{nj} \right) a_n b_j c_k \\ &= b_m a_k c_k - c_m a_j b_j \\ &= \left[{\bf b} \left( {\bf a}\cdot {\bf c} \right) \right]_m - \left[ {\bf c} \left( {\bf a}\cdot {\bf b} \right) \right]_m . \end{align*}
It immediately follows that
\[ \left[ \nabla \times \left( \nabla \times {\bf v} \right) \right]_i = \left[ \nabla \left( \nabla \cdot {\bf v} \right) - \nabla^2 {\bf v} \right]_i . \]
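A quick numerical sanity check of Theorem 1 (often called the BAC-CAB rule) with sample vectors of our choosing:
a = {1, 2, 3}; b = {4, 5, 6}; c = {7, 8, 9};
Cross[a, Cross[b, c]] == b (a.c) - c (a.b)
True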
Actually, a cross product of two vectors yielding a vector does not exist in spaces of dimension higher than 3 (apart from a special construction in dimension 7). The fact that the cross product of two 3-dimensional vectors gives an object with 3 independent components is a coincidence of dimension three, since 3(3 − 1)/2 = 3.
Theorem 2: The cross product in 3 dimensions is actually a tensor of rank 2 with 3 independent coordinates.
We have
\begin{align*} \left( {\bf a} \times {\bf b} \right)_{ij} &= a_i b_j - a_j b_i = c_{ij} \\ &= \begin{bmatrix} 0 & a_1 b_2 - a_2 b_1 & a_1 b_3 - a_3 b_1 \\ a_2 b_1 - a_1 b_2 & 0 & a_2 b_3 - a_3 b_2 \\ a_3 b_1 - a_1 b_3 & a_3 b_2 - a_2 b_3 & 0 \end{bmatrix} \\ &= \begin{bmatrix} 0 & c_3 & -c_2 \\ -c_3 & 0 & c_1 \\ c_2 & -c_1 & 0 \end{bmatrix} , \end{align*} where \( c_1 = a_2 b_3 - a_3 b_2 , \ c_2 = a_3 b_1 - a_1 b_3 , \ c_3 = a_1 b_2 - a_2 b_1 \) are the components of the cross product c = a × b.
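In Mathematica, this antisymmetric matrix is the difference of two outer products; its three independent entries match the components of the cross product up to sign (sample vectors ours):
a = {2, -1, -1}; b = {-1, 2, 1};
Outer[Times, a, b] - Outer[Times, b, a]
{{0, 3, 1}, {-3, 0, 1}, {-1, -1, 0}}
Cross[a, b]
{1, -1, 3}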
Example 13: From the definition of the vector product, it follows that
\[ \left\vert {\bf a} \times {\bf b} \right\vert = \mbox{ area of the parallelogram defined by} \quad {\bf a} \mbox{ and } {\bf b} . \]
So we can use Eq.\eqref{EqCross.1} to find the area of a parallelogram defined by two vectors directly in terms of their components. For instance, the area defined by a = (1, −1, 2) and b = (−2, 3, 1) can be found by calculating their vector product
\[ {\bf a} \times {\bf b} = \det \begin{bmatrix} \hat{\bf e}_1 & \hat{\bf e}_2 & \hat{\bf e}_3 \\ 1 & -1 & 2 \\ -2 & 3 & 1 \end{bmatrix} = -7\,{\bf i} -5\,{\bf j} + {\bf k} \]
Cross[{1,-1,2}, {-2,3,1}]
{-7, -5, 1}
Its norm, which equals the area of the parallelogram, is
\[ \| (-7,-5,1) \| = \sqrt{7^2 + 5^2 + 1} = \sqrt{49 + 25 + 1} = \sqrt{75} = 5\sqrt{3} \approx 8.66025 . \]
Norm[%]
5 Sqrt[3]

Example 14: The volume of a parallelepiped defined by three non-coplanar vectors a, b, and c is given by \( \displaystyle \left\vert {\bf a} \cdot \left( {\bf b} \times {\bf c} \right) \right\vert . \) The absolute value is taken to ensure the positivity of the volume. In terms of components we have
\[ \mbox{volume} = \left\vert \det \begin{bmatrix} \alpha_1 & \alpha_2 & \alpha_3 \\ \beta_1 & \beta_2 & \beta_3 \\ \gamma_1 & \gamma _2 & \gamma_3 \end{bmatrix} \right\vert . \]
Taking three vectors a = (1,2,-3), b = (4, -2, 1), and c = (2, -1,-5), we find
\[ \mbox{volume} = \left\vert \det \begin{bmatrix} 1 & \phantom{-}2 & -3 \\ 4 & -2 & \phantom{-}1 \\ 2 & -1 & -5 \end{bmatrix} \right\vert = 55. \]
Det[{{1,2,-3},{4,-2,1},{2,-1,-5}}]
55
End of Example 14

Coordinates are “functions” that specify points of a space. The smallest number of these functions necessary to specify a point P is called the dimension of that space. There are three widely used coordinate systems in a three-dimensional vector space ℝ³: Cartesian (x(P), y(P), z(P)), cylindrical (ρ(P), φ(P), z(P)), and spherical (r(P), θ(P), φ(P)). The latter φ(P) is called the azimuth or the azimuthal angle of P, while θ(P) is called its polar angle.

   Spherical coordinates
For r > 0, 0 ≤ θ ≤ π, 0 ≤ φ < 2π, we have spherical coordinates
\[ \begin{split} x &= r\,\sin\theta \,\cos\varphi , \\ y &= r\,\sin\theta \,\sin\varphi , \\ z &= r\,\cos\theta , \end{split} \]
where

\[ r = \sqrt{x^2 + y^2 + z^2}, \]
and
\[ \theta = \arccos \frac{z}{r} = \begin{cases} \mbox{indeterminate} , & \quad \mbox{ if } x=0 \ \mbox{ and }\ y = 0 \ \mbox{ and }\ z = 0, \\ \arctan \frac{\sqrt{x^2 + y^2}}{z} , & \quad \mbox{ if } \ z > 0, \\ \pi + \arctan \frac{\sqrt{x^2 + y^2}}{z} , & \quad \mbox{ if } \ z < 0, \\ + \frac{\pi}{2} , & \quad \mbox{ if } z = 0 \ \mbox{ and }\ (x, y) \ne (0, 0) , \end{cases} \]
\[ \varphi = \mbox{sign}(y)\,\arccos \frac{x}{\sqrt{x^2 + y^2}} = \begin{cases} \mbox{indeterminate} , & \quad \mbox{ if } x=0 \ \mbox{ and }\ y = 0, \\ \arctan \left( \frac{y}{x} \right) , & \quad \mbox{ if } x > 0, \\ \arctan \left( \frac{y}{x} \right) + \pi , & \quad \mbox{ if } x < 0 \ \mbox{ and }\ y \ge 0, \\ \arctan \left( \frac{y}{x} \right) - \pi , & \quad \mbox{ if } x < 0 \ \mbox{ and }\ y < 0, \\ + \frac{\pi}{2} , & \quad \mbox{ if } x = 0 \ \mbox{ and }\ y > 0, \\ - \frac{\pi}{2} , & \quad \mbox{ if } x = 0 \ \mbox{ and }\ y < 0. \end{cases} \]
The cross product of two vectors given in spherical coordinates, expressed in Cartesian components, is given by the formula:
\[ \begin{bmatrix} r_1 \\ \theta_1 \\ \varphi_1 \end{bmatrix} \,\times \, \begin{bmatrix} r_2 \\ \theta_2 \\ \varphi_2 \end{bmatrix} = r_1 r_2 \left[ \begin{array}{c} \sin (\theta_1 )\,\sin (\varphi_1 )\,\cos (\theta_2 ) - \cos (\theta_1 )\,\sin (\theta_2 )\,\sin (\varphi_2 ) \\ \cos (\theta_1 )\,\sin (\theta_2 )\,\cos (\varphi_2 ) - \sin (\theta_1 )\,\cos (\varphi_1 )\,\cos (\theta_2 ) \\ \sin (\theta_1 )\, \sin (\theta_2 ) \,\sin (\varphi_2 - \varphi_1 ) \end{array} \right] . \]
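Since such formulas are easy to mistype, here is a symbolic Mathematica check; the converter sph2xyz (our name) maps spherical coordinates to Cartesian ones:
sph2xyz[{r_, th_, ph_}] := r {Sin[th] Cos[ph], Sin[th] Sin[ph], Cos[th]};
lhs = Cross[sph2xyz[{r1, th1, ph1}], sph2xyz[{r2, th2, ph2}]];
rhs = r1 r2 {Sin[th1] Sin[ph1] Cos[th2] - Cos[th1] Sin[th2] Sin[ph2], Cos[th1] Sin[th2] Cos[ph2] - Sin[th1] Cos[ph1] Cos[th2], Sin[th1] Sin[th2] Sin[ph2 - ph1]};
Simplify[lhs - rhs]
{0, 0, 0}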
In cylindrical coordinates, we have
\[ x = \rho\,\cos\varphi , \qquad y = \rho\,\sin\varphi , \qquad z = z , \quad \varphi \in [0, 2 \pi ), \]
where
\[ \rho = \sqrt{x^2 + y^2} , \qquad \varphi = \begin{cases} \mbox{indeterminate} , & \quad \mbox{ if } x=0 \ \mbox{ and }\ y = 0, \\ \arcsin \left( \frac{y}{\rho} \right) , & \quad \mbox{ if } x\ge 0 , \\ \pi - \arcsin \left( \frac{y}{\rho} \right) , & \quad \mbox{ if } x < 0 \ \mbox{ and }\ y \ge 0 , \\ -\pi - \arcsin \left( \frac{y}{\rho} \right) , & \quad \mbox{ if } x < 0 \ \mbox{ and }\ y < 0 , \end{cases} \]
or
\[ \varphi = \begin{cases} \mbox{indeterminate} , & \quad \mbox{ if } x=0 \ \mbox{ and }\ y = 0, \\ \arctan \left( \frac{y}{x} \right) , & \quad \mbox{ if } x > 0 , \\ \arctan \left( \frac{y}{x} \right) + \pi , & \quad \mbox{ if } x< 0 , \ \mbox{ and }\ y \ge 0 , \\ \arctan \left( \frac{y}{x} \right) - \pi , & \quad \mbox{ if } x < 0 , \ \mbox{ and }\ y < 0 , \\ + \frac{\pi}{2} , & \quad \mbox{ if } x = 0 \ \mbox{ and }\ y > 0 , \\ - \frac{\pi}{2} , & \quad \mbox{ if } x = 0 \ \mbox{ and }\ y < 0 , \end{cases} \]
The cross product of two vectors given in cylindrical coordinates, expressed in Cartesian components, is
\[ \begin{bmatrix} \rho_1 \\ \varphi_1 \\ z_1 \end{bmatrix} \,\times \, \begin{bmatrix} \rho_2 \\ \varphi_2 \\ z_2 \end{bmatrix} = \left[ \begin{array}{c} z_2 \rho_1 \sin\varphi_1 - z_1 \rho_2 \sin\varphi_2 \\ z_1 \rho_2 \cos\varphi_2 - z_2 \rho_1 \cos\varphi_1 \\ \rho_1 \rho_2 \,\sin \left( \varphi_2 - \varphi_1 \right) \end{array} \right] . \]
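Again a symbolic check, with a hypothetical converter cyl2xyz from cylindrical to Cartesian coordinates:
cyl2xyz[{rho_, ph_, z_}] := {rho Cos[ph], rho Sin[ph], z};
lhs = Cross[cyl2xyz[{rho1, ph1, z1}], cyl2xyz[{rho2, ph2, z2}]];
rhs = {z2 rho1 Sin[ph1] - z1 rho2 Sin[ph2], z1 rho2 Cos[ph2] - z2 rho1 Cos[ph1], rho1 rho2 Sin[ph2 - ph1]};
Simplify[lhs - rhs]
{0, 0, 0}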

Mathematica has three multiplication commands for vectors: the dot (or inner) and outer products (for arbitrary vectors), and the cross product (for three dimensional vectors).

The cross product can be applied to two vectors. It is important to note that the cross product is an operation that works only in three dimensions. It can be computed using the Cross[vector 1, vector 2] command or by generating a cross product operator between two vectors by pressing [Esc] cross [Esc] ([Esc] refers to the escape key).

Cross[{1,2,7}, {3,4,5}]
{-18,16,-2}

Example 17: Let us consider a rigid body which rotates around the origin O. To avoid having to carry out integrals, we will think of the rigid body as consisting of (a possibly large number of) mass points, labeled by an index α, each with mass mα, position vector rα and velocity vα. The total kinetic energy of this body is of course obtained by summing over the kinetic energy of all mass points, that is \( \displaystyle E = \frac{1}{2} \sum_{\alpha} m_{\alpha} {\bf v}_{\alpha}^2 . \)

Since the body is rigid, the velocities of the mass points are not independent but are related to their positions by vα = ω × rα, where ω is the angular velocity of the body. (The length |ω| is the angular speed and the direction of ω indicates the axis of rotation.) The kinetic energy of the rotating body can then be written as

\begin{align*} E &= \frac{1}{2} \sum_{\alpha} m_{\alpha} {\bf v}_{\alpha}^2 = \frac{1}{2} \sum_{\alpha} m_{\alpha} \left\vert \omega \times {\bf r}_{\alpha} \right\vert^2 = \frac{1}{2} \sum_{\alpha} m_{\alpha} \left[ |\omega |^2 |{\bf r}_{\alpha} |^2 - \left( \omega \cdot {\bf r}_{\alpha}\right)^2 \right] \\ &= \frac{1}{2} \sum_{\alpha} m_{\alpha} \left[ \omega_i \omega_j \delta_{ij} \left\vert {\bf r}_{\alpha} \right\vert^2 - \omega_i r_{\alpha i} \omega_j r_{\alpha j}\right] \\ &= \frac{1}{2} \omega_i \underbrace{\left[ \sum_{\alpha} m_{\alpha} \left( |{\bf r}_{\alpha} |^2 \delta_{ij} - r_{\alpha i} r_{\alpha j} \right) \right]}_{I_{ij}} \omega_j . \end{align*}
The object in the square bracket, denoted by Iij, is called the moment of inertia tensor, a characteristic quantity of the rigid body. It plays a role in rotational motion analogous to that of regular mass in linear motion. In terms of the moment of inertia tensor, the total kinetic energy of the rigid body can be written as
\[ E = \frac{1}{2}\,I_{ij} \omega_i \omega_j . \]
This relation is of fundamental importance for the mechanics of rigid bodies, in particular for the motion of tops.
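The defining sum for Iij is short to code; here is a sketch for a system of point masses (the function name inertia and the sample data are ours), verifying that ½ ω·I·ω reproduces ½ Σα mα |ω × rα|²:
inertia[ms_, rs_] := Sum[ms[[k]] ((rs[[k]].rs[[k]]) IdentityMatrix[3] - Outer[Times, rs[[k]], rs[[k]]]), {k, Length[ms]}];
ms = {1, 2}; rs = {{1, 0, 0}, {0, 1, 1}}; w = {0, 0, 3};
(1/2) w.inertia[ms, rs].w == (1/2) Total[ms (Norm[Cross[w, #]]^2 & /@ rs)]
True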

 

Wedge or Exterior product


The wedge product of two vectors u and v measures the noncommutativity of their tensor product.
The wedge product (also known as the exterior product) of two vectors \( {\bf u} = \left[ u_1 , u_2 , \ldots , u_n \right] \) and \( {\bf v} = \left[ v_1 , v_2 , \ldots , v_n \right] \) of the same dimension, denoted by uv, is the square matrix \[ {\bf u} \wedge {\bf v} = {\bf u} \otimes {\bf v} - {\bf v} \otimes {\bf u} . \] Equivalently, \[ \left( {\bf u} \wedge {\bf v} \right)_{ij} = u_i v_j - u_j v_i . \]
Example 18: We are going to show that the wedge product uv has, up to sign, six, not four, distinct entries. Let u = [ 𝑎, b, c, d ] and v = [ α, β, γ, δ ] be any two vectors in ℝ4. Their wedge product is the square matrix:
\[ {\bf u} \wedge {\bf v} = \begin{bmatrix} 0 & a\beta - \alpha b & a\gamma - \alpha c & a\delta - \alpha d \\ b \alpha - \beta a & 0 & b \gamma - \beta c & b \delta - \beta d \\ c \alpha - \gamma a & c \beta - \gamma b & 0 & c \delta - \gamma d \\ d \alpha - \delta a & d \beta - \delta b & d \gamma - \delta c & 0 \end{bmatrix} . \]
Let us introduce six parameters:
\[ c_1 = a\beta - \alpha b , \quad c_2 = a\gamma - \alpha c , \quad c_3 = a\delta - \alpha d , \]
and
\[ c_4 = b \gamma - \beta c , \quad c_5 = b \delta - \beta d , \quad c_6 = c \delta - \gamma d . \]
Then the wedge product of these two vectors becomes
\[ {\bf u} \wedge {\bf v} = \begin{bmatrix} 0 & c_1 & c_2 & c_3 \\ -c_1 & 0 & c_4 & c_5 \\ - c_2 & - c_4 & 0 & c_6 \\ - c_3 & - c_5 & - c_6 & 0 \end{bmatrix} . \]
End of Example 18

It can be shown that in n dimensions, the antisymmetric matrix uv has n(n − 1)/2 unique entries. The wedge product is an antisymmetric 2-tensor, in any dimension.

Like the tensor product, the wedge product is defined for two vectors of arbitrary (equal) dimension. Notice, too, that the wedge product shares many properties with the cross product. For example, it is easy to verify directly from the definition of the wedge product as the difference of two tensor products that it obeys the following properties:

  1. \( \displaystyle {\bf u} \wedge {\bf u} = 0 ; \)
  2. \( \displaystyle {\bf v} \wedge {\bf u} = - {\bf u} \wedge {\bf v} \) (anticommutative);
  3. \( \displaystyle {\bf u} * \left( {\bf v} \wedge {\bf w} \right) \ne \left( {\bf u} \wedge {\bf v} \right) * {\bf w}^{\mathrm{T}} \) (nonassociative);
  4. \( \displaystyle c \left( {\bf u} \wedge {\bf v} \right) = \left( c\,{\bf u} \right) \wedge {\bf v} = {\bf u} \wedge \left( c\,{\bf v} \right) ; \)
  5. \( \displaystyle {\bf u} \wedge \left( {\bf v} + {\bf w} \right) = {\bf u} \wedge {\bf v} + {\bf u} \wedge {\bf w} \) (distributive);
  6. \( \displaystyle {\bf u} * \left( {\bf v} \wedge {\bf w} \right) + {\bf v} * \left( {\bf w} \wedge {\bf u} \right) + {\bf w} * \left( {\bf u} \wedge {\bf v} \right) = 0 \) (Jacobi identity).
The wedge product also shares some other important properties with the cross product. The defining characteristics of the cross product are captured by the formulas
  • \( \displaystyle {\bf u} * \left( {\bf u} \wedge {\bf v} \right) * {\bf u}^{\mathrm{T}} = {\bf v} * \left( {\bf u} \wedge {\bf v} \right) * {\bf v}^{\mathrm{T}} , \)
  • \( \displaystyle {\bf u} * \left( {\bf u} \wedge {\bf v} \right) * {\bf v}^{\mathrm{T}} = \left( {\bf u} \cdot {\bf u} \right) \left( {\bf v} \cdot {\bf v} \right) - \left( {\bf u} \cdot {\bf v} \right)^2 = \| {\bf u} \|^2 \, \| {\bf v} \|^2 \sin^2\theta . \)
Moreover, in three dimensions, the entries of the wedge product matrix uv are, up to sign, the same as the components of the cross product vector u × v.
\[ {\bf u} \wedge {\bf v} = \begin{bmatrix} 0 & u_1 v_2 - u_2 v_1 & u_1 v_3 - u_3 v_1 \\ u_2 v_1 - u_1 v_2 & 0 & u_2 v_3 - u_3 v_2 \\ u_3 v_1 - u_1 v_3 & u_3 v_2 - u_2 v_3 & 0 \end{bmatrix} . \]
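This relationship is easy to observe in Mathematica (sample vectors ours): the wedge product is the difference of the two outer products, and its entries reproduce the components of Cross[u, v] up to sign.
u = {1, 0, -1}; v = {2, 3, 4};
Outer[Times, u, v] - Outer[Times, v, u]
{{0, 3, 6}, {-3, 0, 3}, {-6, -3, 0}}
Cross[u, v]
{3, -6, 3}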


Example 19: Let
\[ \omega_1 = 3x\,{\text d}x - 5y\,{\text d}y \]
and
\[ \omega_2 = 2z\,{\text d}x + 4x\,{\text d}z . \]

Then

\begin{align*} \omega_1 \wedge \omega_2 &= \left( 3x\,{\text d}x - 5y\,{\text d}y \right) \wedge \left( 2z\,{\text d}x + 4x\,{\text d}z \right) \\ &= 12 x^2 \,{\text d}x \wedge {\text d}z -10yz \,{\text d}y \wedge {\text d}x -20xy \,{\text d}y \wedge {\text d}z \\ &= 12 x^2 \,{\text d}x \wedge {\text d}z +10yz \,{\text d}x \wedge {\text d}y -20xy \,{\text d}y \wedge {\text d}z . \end{align*}
Here we used identities dx∧dx = 0 and dy∧dx = − dx∧dy.

If \( \omega_3 = y\,{\text d}z - z\,{\text d}y \), then

\begin{align*} \omega_1 \wedge \omega_2 \wedge \omega_3 &= \left( 12 x^2 \,{\text d}x \wedge {\text d}z +10yz \,{\text d}x \wedge {\text d}y -20xy \,{\text d}y \wedge {\text d}z \right) \wedge \left( y\,{\text d}z - z\,{\text d}y \right) \\ &= -12 x^2 z \,{\text d}x \wedge {\text d}z \wedge {\text d}y + 10 y^2 z \,{\text d}x \wedge {\text d}y \wedge {\text d}z \\ &= \left( 12 x^2 + 10 y^2 \right) z \,{\text d}x \wedge {\text d}y \wedge {\text d}z , \end{align*} because every term containing a repeated differential vanishes and dx∧dz∧dy = − dx∧dy∧dz.
End of Example 19

The outer product operation can be extended to matrices. If A is an m×n matrix and B is a p×q matrix, then their outer product AB is the mp×nq matrix:

\[ {\bf A} \otimes {\bf B} = \begin{bmatrix} a_{1,1} {\bf B} & \cdots & a_{1,n} {\bf B} \\ \vdots & \ddots & \vdots \\ a_{m,1} {\bf B} & \cdots & a_{m,n} {\bf B} \end{bmatrix} . \]
Example 21: Let us consider two matrices:
\[ {\bf A} = \begin{bmatrix} 01 & 02 \\ 03 & 04 \end{bmatrix} , \qquad {\bf B} = \begin{bmatrix} 11 & 12 & 13 \\ 21 & 22 & 23 \\ 31 & 32 & 33 \end{bmatrix} . \]
Their outer product is
\[ {\bf A} \otimes {\bf B} = \begin{bmatrix} 01 \cdot 11 & 01 \cdot 12 & 01 \cdot 13 & 02 \cdot 11 & 02 \cdot 12 & 02 \cdot 13 \\ 01 \cdot 21 & 01 \cdot 22 & 01 \cdot 23 & 02 \cdot 21 & 02 \cdot 22 & 02 \cdot 23 \\ 01 \cdot 31 & 01 \cdot 32 & 01 \cdot 33 & 02 \cdot 31 & 02 \cdot 32 & 02 \cdot 33 \\ 03 \cdot 11 & 03 \cdot 12 & 03 \cdot 13 & 04 \cdot 11 & 04 \cdot 12 & 04 \cdot 13 \\ 03 \cdot 21 & 03 \cdot 22 & 03 \cdot 23 & 04 \cdot 21 & 04 \cdot 22 & 04 \cdot 23 \\ 03 \cdot 31 & 03 \cdot 32 & 03 \cdot 33 & 04 \cdot 31 & 04 \cdot 32 & 04 \cdot 33 \end{bmatrix} . \]
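Mathematica implements this extension directly via the built-in KroneckerProduct command:
A = {{1, 2}, {3, 4}}; B = {{11, 12, 13}, {21, 22, 23}, {31, 32, 33}};
KroneckerProduct[A, B]
{{11, 12, 13, 22, 24, 26}, {21, 22, 23, 42, 44, 46}, {31, 32, 33, 62, 64, 66}, {33, 36, 39, 44, 48, 52}, {63, 66, 69, 84, 88, 92}, {93, 96, 99, 124, 128, 132}}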

 

  1. With v = [ 1, 0, −1 ], find vv.
  2. Find [ 1, 0, −1 ] ⊗ [ 1, 1, 1 ].
  3. Given two vectors a and b from ℝ³, find a third (non-zero!) vector v perpendicular to the first two vectors. In other words, find v such that annihilates two dot products: v · a = 0 and v · b = 0.
  4. Find the cross product of the following pairs of vectors.
    1. [3, −1, 2] and [4, 2, −3];
    2. [−3, 6, 5] and [2, −4, 1].
  5. Two vectors u and v are separated by an angle of π/6, and |u|=3/2 and |v|=2/3. Find |u×v|.
  6. Two vectors u and v are separated by an angle of π/4, and |u|=4 and |v|=5. Find |u×v|.
  7. Define the triple product of three vectors, x, y, and z, to be the scalar x⋅(y×z). Show that three vectors lie in the same plane if and only if their triple product is zero. Verify that ⟨1,1,−3⟩, ⟨2,3,4⟩ and ⟨4,5,−2⟩ are coplanar.
  8. Find the area of the parallelogram with vertices (0,0), (2,1), (7,4), and (2,3).
  9. Find the area of the triangle with vertices (1,0,0), (3,7,4), and (−2,−3,1).
  10. Find and explain the values of (i × j) × k and (i + j) × (ij).

 

  1. Banchoff, T.F., Lovett, S., Differential Geometry of Curves and Surfaces, Third edition, CRC Press, Boca Raton, FL, 2023.
  2. Conrad, K., Exterior Powers, 2013.
  3. Lang, S., Algebra, 3rd Ed., Springer, New York, 2002.
  4. Fitzpatrick, S., Linear Algebra: A Second Course, Featuring Proofs and Python, 2023.
  5. Lucht, P., Tensor Products, Wedge Products and Differential Forms, Rimrock Digital Technology, Salt Lake City, Utah.
  6. Lucht, P., Tensor Analysis and Curvilinear Coordinates, 2016.
  7. Roman, S., Advanced Linear Algebra, 3rd Ed., Springer, New York, 2008.
  8. Shankar, R., Principles of Quantum Mechanics, 2nd Ed., Plenum Press, New York, 1994.
  9. Suter, J., Geometric Algebra Primer, 2003, www.jaapsuter.com/geometric-algebra.pdf.