
To enhance pedagogical effectiveness, the treatment of the dot product is presented in several distinct sections:

Dot product

Metric

Duality

Orthogonality

Projection

Solvability

 

Geometric Properties of the Dot Product


Many years before Gibbs's definition, the ancient Greeks discovered that, geometrically, the sum of the products of the corresponding entries of two sequences of numbers equals the product of their magnitudes and the cosine of the angle between them. This leads to the introduction of a metric (or length, or distance) in the Cartesian product ℝ³, turning it into a Euclidean space. Originally this meant the three-dimensional physical space, but in modern mathematics there are Euclidean spaces of any positive integer dimension n, called Euclidean n-spaces.

Geometric analysis yields further interesting properties of the dot product operation that can then be used in nongeometric applications. If we rewrite the Cauchy–Schwarz inequality as an equality with a parameter k:

\[ \mathbf{u} \bullet \mathbf{v} = \| \mathbf{u} \| \cdot \| \mathbf{v} \| \, k \qquad (-1 \le k \le 1) , \]
then, as the ancient Greeks discovered (see Example 13), the parameter k has a geometric meaning in physical space (ℝ³ or ℝ²). This leads to the equation
\begin{equation} \label{EqDot.9} \mathbf{u} \bullet \mathbf{v} = \| \mathbf{u} \| \cdot \| \mathbf{v} \| \, \cos\theta , \end{equation}
where θ is called the angle between vectors u and v because it is an abstract version of the three-dimensional case. Your question is anticipated: why does the sign of the dot product match the sign of the cosine of the angle between the two vectors?
Example 20: The cosine of an angle can range between -1 and +1 while the corresponding angle (θ) varies from 0 to π. First, we verify the extreme cases.

A non-zero vector v should form an angle 0 with itself; this is indeed the case (and can be seen as the motivation for using the cosine, rather than other mathematical functions such as the sine or a Chebyshev polynomial). When vectors have opposite directions, say u = −v, we have u • v = (−v) • v = −∥v∥², so cosθ = −1, which leads to θ = π, the expected result for the angle between a vector and its negative. We can also check that the definition of the angle is consistent with the geometric definition of the cosine function for the angles ±π/2 (orthogonality).

Let A(xa, ya, za), B(xb, yb, zb), and C(xc, yc, zc) be arbitrary points in ℝ³ such that the corresponding vectors \begin{align*} \vec{AB} &= \left( x_b - x_a , y_b - y_a , z_b - z_a \right) , \\ \vec{AC} &= \left( x_c - x_a , y_c - y_a , z_c - z_a \right) , \\ \vec{BC} &= \left( x_b - x_c , y_b - y_c , z_b - z_c \right) , \end{align*} are coplanar in three-dimensional space, so they can be drawn on a single plane. All three vectors are assumed to have positive lengths: \begin{align*} a = \| \vec{AB} \| &= +\sqrt{\left( x_b - x_a \right)^2 + \left( y_b - y_a \right)^2 + \left( z_b - z_a \right)^2} > 0, \\ b = \| \vec{AC} \| &= +\sqrt{\left( x_c - x_a \right)^2 + \left( y_c - y_a \right)^2 + \left( z_c - z_a \right)^2} > 0, \\ c = \| \vec{BC} \| &= +\sqrt{\left( x_b - x_c \right)^2 + \left( y_b - y_c \right)^2 + \left( z_b - z_c \right)^2 } > 0. \end{align*}

a = 2; b = -3; c = 1; x0 = -1; y0 = 2; z0 = 1;
F[x_, y_] = (a*x0 + b*y0 + c*z0 - a*x - b*y)/c;
plane = Plot3D[F[x, y], {x, -3, 3}, {y, -3, 3}, AxesLabel -> {x, y, z}];
ab = Graphics3D[{Black, Thickness[0.01], Line[{{0, -3, F[0, -3]}, {3, 2, F[3, 2]}}]}];
ac = Graphics3D[{Black, Thickness[0.01], Line[{{0, -3, F[0, -3]}, {-2, 2, F[-2, 2]}}]}];
bc = Graphics3D[{Black, Thickness[0.01], Line[{{3, 2, F[3, 2]}, {-2, 2, F[-2, 2]}}]}];
txt = Graphics3D[{Text[Style["\[Alpha]", FontSize -> 20, Bold, Blue], {0.05, -2.2, F[0.1, -2.2]}],
   Text[Style["\[Beta]", FontSize -> 20, Bold, Blue], {2.0, 1.2, F[2.0, 1.2]}],
   Text[Style["\[Gamma]", FontSize -> 20, Bold, Blue], {-1.5, 1.7, F[-1.5, 1.7]}],
   Text[Style["A", FontSize -> 18, Bold, Blue], {-0.7, -2.8, F[-0.7, -2.8]}],
   Text[Style["B", FontSize -> 18, Bold, Blue], {2.8, 2.5, F[3.0, 2.5]}],
   Text[Style["C", FontSize -> 18, Bold, Blue], {-2.7, 2.2, F[-2.7, 2.2]}]}];
Show[plane, ab, ac, bc, txt]
Triangle ΔABC lying in a plane in ℝ³

We proceed similarly to Example 13 and write three equations: \begin{align*} c &= b\,\cos\alpha + a\,\cos\beta , \\ a &= c\,\cos\beta + b\,\cos\gamma , \\ b &= c\,\cos\alpha + a\,\cos\gamma . \end{align*} This is a linear system with respect to three unknowns, X = cosα, Y = cosβ, and Z = cosγ, which we solve using Mathematica:

Solve[{c == b*X + a*Y, a == c*Y + b*Z, b == c*X + a*Z}, {X, Y, Z}]
{{X -> -((a^2 - b^2 - c^2)/(2 b c)), Y -> -((-a^2 + b^2 - c^2)/(2 a c)), Z -> -((-a^2 - b^2 + c^2)/(2 a b))}}
This yields \[ \cos\alpha = \frac{b^2 + c^2 - a^2}{2bc} , \quad \cos\beta = \frac{a^2 + c^2 - b^2}{2ac} , \quad \cos\gamma = \frac{a^2 + b^2 - c^2}{2ab} . \] Let us consider the first equation: \[ 2bc\,\cos\alpha = b^2 + c^2 - a^2 . \] According to Mathematica, we have \[ \frac{b^2 + c^2 - a^2}{2} = \left( x_a - x_c \right) \left( x_b - x_c \right) + \left( y_a -y_c \right) \left( y_b - y_c \right) + \left( z_a - z_c \right) \left( z_b - z_c \right) , \]
FullSimplify[(xc - xa)^2 + (yc - ya)^2 + (zc - za)^2 + (xb - xc)^2 + (yb - yc)^2 + (zb - zc)^2 - (xb - xa)^2 - (yb - ya)^2 - (zb - za)^2]
2 ((xa - xc) (xb - xc) + (ya - yc) (yb - yc) + (za - zc) (zb - zc))
which is the dot product of the two vectors \( \displaystyle \quad \vec{CA} = -\vec{AC} \quad\mbox{and} \quad \vec{BC} . \quad \) Therefore, upon canceling the common factor 2, we get \[ \| \vec{CA} \| \cdot \| \vec{BC} \| \,\cos\alpha = \vec{CA} \bullet \vec{BC} . \]    ■
End of Example 20
    If both vectors have unit length (∥u∥ = ∥v∥ = 1), then equation \eqref{EqDot.9} shows that their dot product equals the cosine of the angle between them:
\[ \cos\left( \theta_{uv} \right) = \mathbf{u} \bullet \mathbf{v} \qquad \iff \qquad \theta_{uv} = \mbox{arccos} \left( \mathbf{u} \bullet \mathbf{v} \right) . \]
In general, Eq.\eqref{EqDot.9} can be rewritten to give an expression for the angle between two vectors:

\begin{equation} \label{EqDot.10} \cos\left( \theta_{uv} \right) = \frac{\mathbf{u} \bullet \mathbf{v}}{\| \mathbf{u} \| \cdot \| \mathbf{v} \|} \qquad \iff \qquad \theta_{uv} = \mbox{arccos} \left( \frac{\mathbf{u} \bullet \mathbf{v}}{\| \mathbf{u} \| \cdot \| \mathbf{v} \|} \right) . \end{equation}
In statistics, the ratio \( \displaystyle \quad r = \frac{\mathbf{u} \bullet \mathbf{v}}{\| \mathbf{u} \| \cdot \| \mathbf{v} \|} \quad \) is known as the cosine similarity; when the entries of u and v are first centered by subtracting their means, it becomes the Pearson correlation coefficient, which measures the strength of a linear relationship between two variables. Eq.\eqref{EqDot.10} can be used to define an angle between two vectors in any higher-dimensional space.
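The two formulas in \eqref{EqDot.10} are easy to check numerically. Here is a minimal sketch in Python (the helper `angle` is our own, not part of the Mathematica sessions in this text), applied to the vectors u = (2, 4, −2) and v = (2, 1, 1) from the exercise at the end of this section:

```python
import math

def angle(u, v):
    """Angle between vectors via Eq. (10): arccos(u.v / (|u| |v|))."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return math.acos(dot / (norm_u * norm_v))

# cos(theta) = 6 / (sqrt(24) * sqrt(6)) = 1/2, so theta = pi/3 (60 degrees)
theta = angle([2, 4, -2], [2, 1, 1])
print(theta, math.degrees(theta))
```

The same function works for vectors of any dimension, since Eq.\eqref{EqDot.10} does not refer to the dimension of the space.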

   
Example 21: What is the angle between i and i + j + 2k?
\begin{align*} \theta &= \arccos \left( \frac{{\bf i} \cdot ({\bf i} + {\bf j} + 2 {\bf k})}{\| {\bf i} \| \cdot \| {\bf i} + {\bf j} + 2 {\bf k} \| } \right) \\ &= \arccos \left( \frac{1}{\sqrt{6}} \right) \approx 1.15026 \approx 65.9^{\circ} . \end{align*}
Mathematica knows how to evaluate the arccos, but the answer is given in radians, not degrees.
    N[ArcCos[1/Sqrt[6]]]
    1.15026
To convert this number into degrees, just make a simple multiplication
    %*180/Pi
    65.9052

   

From angle to dot product:   

Suppose we are given two vectors u, v with ∥u∥ = 2 and ∥v∥ = 7, and suppose the angle between u and v is π/6. Find u • v.

Solution:    From formula (9), we get \[ \mathbf{u} \bullet \mathbf{v} = \| \mathbf{u} \| \cdot \| \mathbf{v} \| \,\cos\theta = 2 \cdot 7\cdot\cos\left( \frac{\pi}{6} \right) = 7\,\sqrt{3} . \]    ■

End of Example 21

When θ = π/2 or −π/2, the cosine of a right angle is zero, so the scalar product is zero, regardless of the magnitudes of the vectors. This is such an important case that it has its own name: orthogonal. Mathematicians use a special symbol for perpendicularity, an upside-down "T," so you can write that two vectors are orthogonal as u ⊥ v.

When θ = 0 or π, the cosine of such an angle is either +1 or −1, so the dot product reduces to plus or minus the product of the magnitudes of the two vectors. The term for this situation is collinear (meaning on the same line). In particular, the dot product of a vector with itself (θ = 0) is simply ∥u∥², which is the same result obtained algebraically in Eq.(2).

The cosine is positive for acute angles (−π/2 < θ < π/2) and negative for obtuse angles (π/2 < θ < 3π/2). We visualize all these situations in the figures below:

Figure 2: Acute angle
     
Figure 3: Obtuse angle

ar1 = Graphics[{Blue, Thick, Arrow[{{0, 0}, {1, 0.4}}]}]; ar2 = Graphics[{Blue, Thick, Arrow[{{0, 0}, {0.4, 1}}]}]; txt = Graphics[{Black, Text[Style["|\[Theta]| < \[Pi]/2", FontSize -> 18, Bold], {0.6, 0.6}]}]; Show[ar1, ar2, txt, PlotLabel -> Style[Framed["Acute"], 20, Black, Background -> Lighter[Yellow]]]
ar1 = Graphics[{Purple, Thick, Arrow[{{0, 0}, {1, 0.4}}]}]; ar2 = Graphics[{Purple, Thick, Arrow[{{0, 0}, {-0.6, 0.4}}]}]; txt = Graphics[{Black, Text[Style["\[Pi]/2 < \[Theta] < 3\[Pi]/2", FontSize -> 18, Bold], {0.2, 0.4}]}]; Show[ar1, ar2, txt, PlotLabel -> Style[Framed["Obtuse"], 20, Black, Background -> Lighter[Brown]]]

Figure 4: Collinear when cos(θ) = 1
     
Figure 5: Collinear when cos(θ) = −1

ar1 = Graphics[{Purple, Thick, Arrow[{{0, 0}, {1, 0.5}}]}]; ar2 = Graphics[{Blue, Thick, Arrow[{{1, 0.5}, {1.5, 0.75}}]}]; txt = Graphics[{Black, Text[Style["\[Theta] = 0", FontSize -> 18, Bold], {0.4, 0.46}]}]; Show[ar1, ar2, txt, PlotLabel -> Style[Framed["Collinear"], 20, Black, Background -> Lighter[Brown]]]
ar1 = Graphics[{Brown, Thick, Arrow[{{0, 0}, {1, 0.5}}]}]; ar2 = Graphics[{Blue, Thick, Arrow[{{0, 0}, {-1.0, -0.5}}]}]; txt = Graphics[{Black, Text[Style["\[Theta] = \[Pi]", FontSize -> 18, Bold], {-0.1, 0.2}]}]; Show[ar1, ar2, txt, PlotLabel -> Style[Framed["Collinear"], 20, Black, Background -> Lighter[Yellow]]]
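The sign cases illustrated above can be summarized with a short Python check (a sketch with hand-picked plane vectors of our own, not taken from the figures):

```python
def dot(u, v):
    """Dot product of two real vectors."""
    return sum(a * b for a, b in zip(u, v))

assert dot([1, 0], [1, 1]) > 0     # acute angle: positive dot product
assert dot([1, 0], [0, 1]) == 0    # right angle: u and v are orthogonal
assert dot([1, 0], [-1, 1]) < 0    # obtuse angle: negative dot product
assert dot([3, 4], [3, 4]) == 25   # a vector with itself: |u|^2 = 3^2 + 4^2
print("all sign cases verified")
```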

 

Dot product in coordinate systems


Although many coordinate systems are known, we present the three most popular: polar, cylindrical, and spherical.

The concepts of angle and radius were already used by the ancient Greek astronomer and astrologer Hipparchus (190–120 BC). Grégoire de Saint-Vincent and Bonaventura Cavalieri independently introduced the system's concepts in the mid-17th century, though the actual term polar coordinates has been attributed to Gregorio Fontana in the 18th century.

The polar coordinate system specifies a given point P(x, y) in the plane by using a distance r and an angle θ as its two coordinates (r, θ), where r is the point's distance from a reference point called the pole, and θ is the point's direction from the pole relative to the direction of the abscissa. The distance r from the pole is called the radial coordinate, radial distance, or simply radius, and the angle θ is called the angular coordinate, polar angle, or azimuth, measured counterclockwise starting at zero from a reference direction (the abscissa). Note that the origin (0, 0) has no well-defined polar angle.

The polar coordinates r and θ can be converted to the Cartesian coordinates x and y by using the trigonometric functions sine and cosine or complex numbers:

\[ \begin{split} x & = r\,\cos\theta , \\ y &= r\,\sin\theta , \end{split} \qquad\mbox{or}\qquad z = x + {\bf j}\,y = r\,e^{{\bf j}\,\theta} \in \mathbb{C} , \]
with \( \displaystyle \quad r = \| \begin{bmatrix} x & y \end{bmatrix} \| = +\sqrt{x^2 + y^2} . \quad \) It makes no sense to define the dot product in polar coordinates by analogy with the Cartesian formula:
\[ \left( r_1 , \theta_1 \right) \bullet \left( r_2 , \theta_2 \right) \ne r_1 r_2 + \theta_1 \theta_2 \]
because the sum of products of components is not even dimensionally correct: the radial coordinates carry units of length while the angles are dimensionless. Upon introducing the radius vector \( \displaystyle \quad \mathbf{r} = r\,\hat{\bf r}(\theta ), \quad \) we have \( \displaystyle \quad \hat{\bf r}(\theta ) = \cos\theta\, \hat{\bf x} + \sin\theta\, \hat{\bf y} = \mathbf{i}\,\cos\theta + \mathbf{j}\,\sin\theta . \quad \) Then the scalar product of two vectors \( \displaystyle \quad \mathbf{u} = r_1 e^{{\bf j}\theta_1} = \| \mathbf{u} \|\,e^{{\bf j}\theta_1} \quad \mbox{and} \quad \mathbf{v} = r_2 e^{{\bf j}\theta_2} = \| \mathbf{v} \|\,e^{{\bf j}\theta_2} \quad \) in polar coordinates becomes
\begin{align*} r_1 e^{{\bf j}\,\theta_1} \bullet r_2 e^{{\bf j}\,\theta_2} &= r_1 \left( {\bf i}\,\cos \theta_1 + {\bf j}\,\sin \theta_1 \right) \bullet r_2 \left( {\bf i}\,\cos \theta_2 + {\bf j}\,\sin \theta_2 \right) \\ &= r_1 \cos \theta_1 \, r_2 \cos \theta_2 + r_1 \sin \theta_1 \, r_2 \sin \theta_2 \\ &= r_1 r_2 \left( \cos\theta_1 \cos\theta_2 + \sin\theta_1 \sin\theta_2 \right) = r_1 r_2 \cos\left( \theta_1 - \theta_2 \right) . \end{align*}
Here we used the trigonometric formula for the cosine of the difference of two angles. The computation can be verified with Mathematica:
({1,0}*Cos[theta1] + {0,1}*Sin[theta1]).({1,0}*Cos[theta2] + {0, 1}*Sin[theta2])
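The identity can also be checked numerically. The following Python sketch (with arbitrarily chosen polar data of our own) compares r₁r₂cos(θ₁ − θ₂) against the Cartesian dot product:

```python
import math

def polar_dot(r1, t1, r2, t2):
    """Dot product computed directly from polar coordinates."""
    return r1 * r2 * math.cos(t1 - t2)

def to_cartesian(r, t):
    """Convert polar (r, theta) to Cartesian (x, y)."""
    return (r * math.cos(t), r * math.sin(t))

r1, t1, r2, t2 = 2.0, 0.7, 3.0, -1.2
x1, y1 = to_cartesian(r1, t1)
x2, y2 = to_cartesian(r2, t2)
# both ways of computing u . v agree up to rounding
assert abs(polar_dot(r1, t1, r2, t2) - (x1 * x2 + y1 * y2)) < 1e-12
```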
   
Example 22: First, we randomly generate two vectors:
u = RandomInteger[{-9, 9}, 2]
{-2, -7}
v = RandomInteger[{-9, 9}, 2]
{5, -4}
\[ \mathbf{u} = \begin{bmatrix} -2&-7 \end{bmatrix} , \qquad \mathbf{v} = \begin{bmatrix} 5 & -4 \end{bmatrix} \] Their dot product is (if you trust Mathematica)
u . v
18
We convert these two vectors into polar form: \[ \mathbf{u} = r_1 e^{{\bf j} \theta_1} = \sqrt{53}\,e^{{\bf j} \theta_1} \approx 7.28\,e^{-{\bf j}1.849} \]
ToPolarCoordinates[u]
{Sqrt[53], -\[Pi] + ArcTan[7/2]}
N[%]
{7.28011, -1.8491}
and \[ \mathbf{v} = r_2 e^{{\bf j} \theta_2} = \sqrt{41}\,e^{{\bf j} \theta_2} \approx 6.4\,e^{-0.67{\bf j}} . \]
ToPolarCoordinates[v]
{Sqrt[41], -ArcTan[4/5]}
N[%]
{6.40312, -0.674741}
Since the arguments of these two vectors are θ₁ ≈ -1.8491 and θ₂ ≈ -0.674741, their difference is θ₁ − θ₂ ≈ -1.17436
-1.8490959858000078 + 0.6747409422235527
-1.17436
So the cosine of their difference is
Cos[%]
0.386138
Using Mathematica, we determine the dot product of vectors u and v in polar form:
Sqrt[53]*Sqrt[41]*0.386138
18.
which agrees with the exact answer 18 to the precision displayed.    ■
End of Example 22

The polar coordinate system can be extended to three dimensions in two ways: the cylindrical coordinate system adds a second distance coordinate, and the spherical coordinate system adds a second angular coordinate. These two extensions of the polar coordinate system suffer from the same drawback: the origin is a singular point with no well-defined angular coordinates. When vector u = [x, y, z] = xi + yj + zk ∈ ℝ³ is expressed via cylindrical coordinates, its components become

\[ x = r\,\cos\theta , \quad y = r\,\sin\theta , \quad z = z, \]
where
\[ r = \sqrt{x^2 + y^2} , \qquad \theta = \mbox{arctan} \left( \frac{y}{x} \right) \quad \mbox{(chosen in the quadrant of the point } (x, y)\mbox{)}. \]
The dot product in cylindrical coordinates is similar to the polar case:
\[ \left( r_1 , \theta_1 , z_1 \right) \bullet \left( r_2 , \theta_2 , z_2 \right) = r_1 r_2 \cos\left( \theta_1 - \theta_2 \right) + z_1 z_2 . \]
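A quick Python sketch (with made-up cylindrical triples of our own) confirms this formula against the Cartesian computation:

```python
import math

def cyl_dot(p, q):
    """Dot product from cylindrical triples (r, theta, z):
    r1 r2 cos(theta1 - theta2) + z1 z2."""
    (r1, t1, z1), (r2, t2, z2) = p, q
    return r1 * r2 * math.cos(t1 - t2) + z1 * z2

def cyl_to_cartesian(p):
    """Convert cylindrical (r, theta, z) to Cartesian (x, y, z)."""
    r, t, z = p
    return (r * math.cos(t), r * math.sin(t), z)

p, q = (2.0, 0.5, 3.0), (4.0, 2.0, -1.0)
u, v = cyl_to_cartesian(p), cyl_to_cartesian(q)
assert abs(cyl_dot(p, q) - sum(a * b for a, b in zip(u, v))) < 1e-12
```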
   
Example 23: As usual, we randomly generate two vectors:
u = RandomInteger[{-9, 9}, 3]
{-6, -1, 3}
v = RandomInteger[{-9, 9}, 3]
{1, -4, 9}
\[ \mathbf{u} = \begin{bmatrix} -6& -1& 3 \end{bmatrix} , \qquad \mathbf{v} = \begin{bmatrix} 1& -4& 9 \end{bmatrix} \] Their dot product is
u . v
25
We convert these two vectors into cylindrical coordinates. To achieve this, we extract the two-dimensional part of each vector and find their polar forms: \[ \mathbf{u} = \mathbf{u}_2 + 3\,\mathbf{k} , \qquad \mathbf{v} = \mathbf{v}_2 + 9\,\mathbf{k} , \] where k is the unit vector in the applicate (z-) direction and \[ \mathbf{u}_2 = \begin{bmatrix} -6&-1 \end{bmatrix} = - 6{\bf i} - {\bf j} , \qquad \mathbf{v}_2 = \begin{bmatrix} 1&-4 \end{bmatrix} = {\bf i} -4{\bf j} . \] Using Mathematica, we convert the two-dimensional vectors u₂ = [−6, −1] and v₂ = [1, −4] into polar form: \[ \mathbf{u}_2 = \| \mathbf{u}_2 \|\, e^{{\bf j} \theta_1} = \sqrt{37}\,e^{{\bf j} \theta_1} \approx 6.08276\,e^{-2.97644{\bf j}} \]
u2 = {-6, -1}; ToPolarCoordinates[u2]
{Sqrt[37], -\[Pi] + ArcTan[1/6]}
N[%]
{6.08276, -2.97644}
and \[ \mathbf{v}_2 = \| \mathbf{v}_2 \|\, e^{{\bf j} \theta_2} = \sqrt{17}\,e^{{\bf j} \theta_2} \approx 4.12311\,e^{-1.32582{\bf j}} . \]
v2 = {1, -4}; ToPolarCoordinates[v2]
{Sqrt[17], -ArcTan[4]}
N[%]
{4.12311, -1.32582}
Using the dot product formula for two-dimensional vectors in polar form \[ \mathbf{u}_2 \bullet \mathbf{v}_2 = \| \mathbf{u}_2 \| \cdot \| \mathbf{v}_2 \| \cdot \cos\left( \theta_1 - \theta_2 \right) , \] we obtain u₂ • v₂ ≈ −1.99984 (the exact value is −2).
Sqrt[37] * Sqrt[17] * Cos[-2.97644 + 1.32582]
-1.99984
Since 3k • 9k = 27, we find the scalar product of u and v to be \[ \mathbf{u} \bullet \mathbf{v} = 27 - 1.99984 = 25.0002 . \]    ■
End of Example 23

The radius vector of a point in space with spherical coordinates (ρ, 𝜃, 𝜙) can be written as

\[ \mathbf{r} = \rho\,\hat{\bf r} (\theta , \phi ) , \qquad \mbox{with}\quad \| \mathbf{r} \| = \rho , \]
because \( \displaystyle \quad \hat{\bf r} (\theta , \phi ) \quad \) is a unit vector:
\[ \hat{\bf r} (\theta , \phi ) = \sin\phi\,\cos\theta\,\hat{\bf x} + \sin\phi\,\sin\theta\,\hat{\bf y} + \cos\phi\,\hat{\bf z} . \]
In engineering, the above relation is usually written as
\[ \hat{\bf r} (\theta , \phi ) = \sin\phi\,\cos\theta\,{\bf i} + \sin\phi\,\sin\theta\,{\bf j} + \cos\phi\,{\bf k} , \]
with rectangular coordinate unit vectors i (abscissa), j (ordinate), and k (applicate). Thus, the components of the radius vector with respect to the "spherical basis" form a vector field because they vary from point to point. Moreover, the radius vector has components (ρ, 0, 0) with respect to this basis because θ and ϕ carry no physical dimension and cannot serve as components of a radius vector. The origin (0, 0, 0) has no well-defined spherical angles.

When spherical coordinates (ρ₁, θ₁, ϕ₁) and (ρ₂, θ₂, ϕ₂) are known for two vectors u and v, we have

\[ \mathbf{u} = \rho_1 \hat{\bf r} (\theta_1 , \phi_1 ) \qquad \mbox{and} \qquad \mathbf{v} = \rho_2 \hat{\bf r} (\theta_2 , \phi_2 ) , \]
with Euclidean norms ∥u∥ = ρ₁ and ∥v∥ = ρ₂. Their dot product is
\begin{align*} \mathbf{u} \bullet \mathbf{v} &= \left[ \rho_1 \hat{\bf r} (\theta_1 , \phi_1 ) \right] \bullet \left[ \rho_2 \hat{\bf r} (\theta_2 , \phi_2 ) \right] \\ &= \rho_1 \rho_2 \hat{\bf r} (\theta_1 , \phi_1 ) \bullet \hat{\bf r} (\theta_2 , \phi_2 ) \\ &= \rho_1 \rho_2 \left( \sin\phi_1 \cos\theta_1 \hat{\bf x} + \sin\phi_1 \sin\theta_1 \hat{\bf y} + \cos\phi_1 \hat{\bf z} \right) \bullet \left( \sin\phi_2 \cos\theta_2 \hat{\bf x} + \sin\phi_2 \sin\theta_2 \hat{\bf y} + \cos\phi_2 \hat{\bf z} \right) \\ &= \rho_1 \rho_2 \left( \sin\phi_1 \sin\phi_2 \cos\theta_1 \cos\theta_2 + \sin\phi_1 \sin\phi_2 \sin\theta_1 \sin \theta_2 + \cos\phi_1 \cos\phi_2 \right) \\ &= \rho_1 \rho_2 \left[ \sin\phi_1 \sin\phi_2 \cos \left( \theta_1 - \theta_2 \right) + \cos\phi_1 \cos\phi_2 \right] . \end{align*}
If we introduce the angle ω by
\[ \cos\omega = \sin\phi_1 \sin\phi_2 \cos \left( \theta_1 - \theta_2 \right) + \cos\phi_1 \cos\phi_2 , \]
then equation \eqref{EqDot.9} in spherical coordinates becomes
\[ \mathbf{u} \bullet \mathbf{v} = \| \mathbf{u} \| \cdot \| \mathbf{v} \| \,\cos\omega . \]
k = 0.5; Plot3D[{Sin[phi1] * Sin[phi2]*k + Cos[phi1]*Cos[phi2]}, {phi1, 0, 2*Pi}, {phi2, 0, 2*Pi}]
Figure 8: Plot of cosω as a function of ϕ₁ and ϕ₂, with cos(θ₁ − θ₂) fixed at 0.5
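The spherical-coordinate formula for the dot product can likewise be verified numerically. In this Python sketch the angles are chosen arbitrarily, with θ denoting the azimuth and ϕ the polar angle, as in the text:

```python
import math

def sph_to_cartesian(rho, theta, phi):
    """Spherical (rho, azimuth theta, polar angle phi) -> Cartesian (x, y, z)."""
    return (rho * math.sin(phi) * math.cos(theta),
            rho * math.sin(phi) * math.sin(theta),
            rho * math.cos(phi))

def sph_dot(p, q):
    """rho1 rho2 [sin(phi1) sin(phi2) cos(theta1 - theta2) + cos(phi1) cos(phi2)]."""
    (r1, t1, f1), (r2, t2, f2) = p, q
    cos_omega = (math.sin(f1) * math.sin(f2) * math.cos(t1 - t2)
                 + math.cos(f1) * math.cos(f2))
    return r1 * r2 * cos_omega

p, q = (3.0, 0.4, 1.1), (5.0, 2.3, 2.0)
u, v = sph_to_cartesian(*p), sph_to_cartesian(*q)
assert abs(sph_dot(p, q) - sum(a * b for a, b in zip(u, v))) < 1e-12
```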
   
Example 24: First, we randomly generate two vectors:
u = RandomInteger[{-9, 9}, 3]
{1, 1, -9}
v = RandomInteger[{-9, 9}, 3]
{-5, 6, -3}
\[ \mathbf{u} = \begin{bmatrix} 1&1&-9 \end{bmatrix} , \qquad \mathbf{v} = \begin{bmatrix} -5& 6& -3 \end{bmatrix} \] Their dot product is (if you trust Mathematica)
u . v
28
We convert these two vectors into spherical form: \[ \mathbf{u} = \sqrt{83}\left( \cos\theta_1 \sin\phi_1 , \sin\theta_1 \sin\phi_1 , \cos\phi_1 \right) \approx \left( 1.00001, 1.00001, -9. \right) \]
CoordinateTransformData[ "Cartesian" -> "Spherical", "Mapping", {1, 1, -9}]
{Sqrt[83], \[Pi] - ArcTan[Sqrt[2]/9], \[Pi]/4}
N[%]
{9.11043, 2.98573, 0.785398}
su = Sqrt[83]*{Sin[2.98573]*Cos[0.785398], Sin[2.98573]*Sin[0.785398], Cos[2.98573]}
{1.00001, 1.00001, -9.}
and \[ \mathbf{v} = \sqrt{70}\left( \cos\theta_2 \sin\phi_2 , \sin\theta_2 \sin\phi_2 , \cos\phi_2 \right) \approx \left( -4.99997, 6.00002, -3. \right) . \]
CoordinateTransformData[ "Cartesian" -> "Spherical", "Mapping", {-5, 6, -3}]
{Sqrt[70], \[Pi] - ArcTan[Sqrt[61]/3], \[Pi] - ArcTan[6/5]}
N[%]
{8.3666, 1.93753, 2.26553}
sv = Sqrt[70]*{Sin[1.93753]*Cos[2.26553], Sin[1.93753]*Sin[2.26553], Cos[1.93753]}
{-4.99997, 6.00002, -3.}
Now we calculate the dot product of these two vectors using spherical coordinate formula \begin{align*} \mathbf{u} \bullet \mathbf{v} &= \rho_1 \rho_2 \left[ \sin\phi_1 \sin\phi_2 \cos \left( \theta_1 - \theta_2 \right) + \cos\phi_1 \cos\phi_2 \right] \\ &= \sqrt{83}\cdot \sqrt{70} \left[ \sin\phi_1 \sin\phi_2 \cos \left( \theta_1 - \theta_2 \right) + \cos\phi_1 \cos\phi_2 \right] \end{align*} We calculate each term in the last expression separately.
Sin[2.98573] * Sin[1.93753] * Cos[0.785398 - 2.26553]
0.0131202
and
Cos[2.98573] * Cos[1.93753]
0.354222
Since the product of vector norms is \[ \| \mathbf{u} \| \cdot \| \mathbf{v} \| = \sqrt{83} \cdot \sqrt{70} \approx 76.2234 , \]
N[Sqrt[83] * Sqrt[70]]
76.2234
we are able to finish our calculations:
Sqrt[83] * Sqrt[70] * (0.0131202 + 0.354222)
28.0001
However, if we use the six-digit approximation of the norm product, we obtain a slightly less accurate answer (still correct to the precision displayed):
76.2234 * (0.0131202 + 0.354222)
28.0001
   ■
End of Example 24

It is not always guaranteed that one can use such special coordinate systems (polar coordinates are an example in which the local orthonormal basis of vectors is not the coordinate basis). However, the dot product between a vector x and a covector y is invariant under all invertible linear transformations because this product defines the linear functional generated by the covector y; the given dot product is just one representation of this functional in particular coordinates. Writing \( \mathbf{x} = \mathbf{A}\,\vec{\xi} \) for an invertible matrix A and \( \mathbf{y} = \mathbf{A}^{-T} \vec{\eta} \) for the covector, we get

\begin{align*} \mathbf{x} \bullet \mathbf{y} &= \sum_i x^i y_i = \left( \mathbf{A}\,\vec{\xi} \right) \bullet \left( \mathbf{A}^{-T} \vec{\eta} \right) \\ &= \sum_i \left( \sum_j A^i_j\, \xi^j \right) \left( \sum_k \left( \mathbf{A}^{-1} \right)^k_i \eta_k \right) \\ &= \sum_{j,k} \left( \sum_i \left( \mathbf{A}^{-1} \right)^k_i A^i_j \right) \xi^j \eta_k = \sum_{j,k} \delta^k_j\, \xi^j \eta_k = \sum_j \xi^j \eta_j . \end{align*}
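This invariance is easy to confirm numerically. The following Python sketch uses the matrix A of Example 25 together with the transpose of its inverse (matching the Mathematica output there); the test vectors x and y are our own arbitrary choice:

```python
A   = [[9, 4, -6], [1, 9, -9], [9, 5, -7]]               # matrix A of Example 25
AmT = [[9, 37, 38], [1, 9/2, 9/2], [-9, -75/2, -77/2]]   # transpose of A^{-1}

def matvec(M, v):
    """Matrix-vector product for 3x3 M."""
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def dot(u, v):
    """Dot product of two real vectors."""
    return sum(a * b for a, b in zip(u, v))

x, y = [1, 2, 3], [4, -5, 6]          # arbitrary test vector and covector
xi, eta = matvec(A, x), matvec(AmT, y)
# the pairing is preserved: (A x) . (A^{-T} y) = x . y
assert abs(dot(xi, eta) - dot(x, y)) < 1e-9
```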
   
Example 25: Using a randomly generated matrix
A = RandomInteger[{-9, 9}, {3, 3}]
{{9, 4, -6}, {1, 9, -9}, {9, 5, -7}}
\[ \mathbf{A} = \begin{bmatrix} 9& 4& -6 \\ 1& 9& -9 \\ 9& 5& -7 \end{bmatrix} , \] we can define the linear transformation \[ T\, : \ \mathbb{R}^3 \mapsto \mathbb{R}^3 , \qquad \vec{\xi} = \mathbf{A}\,\mathbf{x} \] because matrix A is not singular, as Mathematica confirms
Det[A]
-2
If vector x has components x = (x¹, x², x³), then application of the transformation T gives \[ \vec{\xi} = \mathbf{A}\,\mathbf{x} \qquad \Longrightarrow \qquad \begin{cases} \xi^1 &= 9x^1 + 4 x^2 -6 x^3 , \\ \xi^2 &= x^1 + 9 x^2 -9 x^3 , \\ \xi^3 &= 9 x^1 + 5 x^2 -7 x^3 . \end{cases} \] Using the inverse transpose matrix \[ \mathbf{A}^{-T} = \begin{bmatrix} 9& 37& 38 \\ 1& \frac{9}{2}& \frac{9}{2} \\ -9& -\frac{75}{2}& -\frac{77}{2} \end{bmatrix} , \]
itA = Transpose[Inverse[A]]
{{9, 37, 38}, {1, 9/2, 9/2}, {-9, -(75/2), -(77/2)}}
we define the dual transformation: \[ \vec{\eta} = \mathbf{A}^{-T} \mathbf{y} \qquad \Longrightarrow \qquad \begin{cases} \eta_1 &= 9\, y_1 + 37\,y_2 +38\, y_3 , \\ \eta_2 &= y_1 + \frac{9}{2}\, y_2 + \frac{9}{2}\, y_3 , \\ \eta_3 &= -9\, y_1 - \frac{75}{2}\, y_2 - \frac{77}{2}\, y_3 , \end{cases} \] where A−T denotes the transpose of the inverse matrix. The scalar product of vector x and covector y is \[ \mathbf{x} \bullet \mathbf{y} = x^1 y_1 + x^2 y_2 + x^3 y_3 . \] Upon transformation T, their dot product becomes \begin{align*} \vec{\xi} \bullet \vec{\eta} &= \left( \mathbf{A}\,\mathbf{x} \right) \bullet \left( \mathbf{A}^{-T} \mathbf{y} \right) \\ &= \xi^1 \eta_1 + \xi^2 \eta_2 + \xi^3 \eta_3 \\ &= \left( 9x^1 + 4 x^2 -6 x^3 \right) \left( 9\, y_1 + 37\,y_2 +38\, y_3 \right) \\ &\quad + \left( x^1 + 9 x^2 -9 x^3 \right) \left( y_1 + \frac{9}{2}\, y_2 + \frac{9}{2}\, y_3 \right) \\ &\quad + \left( 9 x^1 + 5 x^2 -7 x^3 \right) \left( -9\, y_1 - \frac{75}{2}\, y_2 - \frac{77}{2}\, y_3 \right) \\ &= x^1 y_1 + x^2 y_2 + x^3 y_3 . \end{align*} Mathematica confirms:
FullSimplify[Dot[A . {x1, x2, x3}, itA . {y1, y2, y3}]]
x1 y1 + x2 y2 + x3 y3
   ■
End of Example 25

 

  1. Calculate the angle between vectors u = (2, 4, −2) and v = (2, 1, 1).

 
