
To enhance pedagogical effectiveness, the treatment of the dot product is presented in several distinct sections:

Dot product

Geometrical interpretation

Duality

Orthogonality

Projection

Solvability

Metric

 

Metric via Dot Product


In mathematics, a metric space is a set where a notion of distance between any two elements (usually called points) is defined, satisfying specific axioms: non-negativity, identity of indiscernibles, symmetry, and the triangle inequality. Essentially, it's a set with a defined distance function that allows us to measure how "far apart" any two elements are.

A vector space, by definition, has no metric inside it, which is a very desirable property. It turns out that the scalar product can be used to define the length of a vector and the distance between vectors, turning ℝn into a metric space, known as the Euclidean space. In order to distinguish a metric space from a vector space equipped with a metric, mathematicians call the magnitude or length of a vector v its norm and denote it by ∥v∥.

Example 15: Let us start with a triangle ΔABC in the Euclidean plane, with vertices A, B, and C. Suppose we know the lengths of sides \( \displaystyle \quad a = \left\vert BC \right\vert , \ b = \left\vert AC \right\vert , \ c = \left\vert AB \right\vert . \quad \) Suppose that we have to determine the angles (or their cosines) of this triangle. Call α the angle at vertex A (respectively, β, γ for the other angles). We know that the length of the orthogonal projection of a segment is shrunk by a factor equal to the cosine of the angle between the two directions.
line = Graphics[{Purple, Thickness[0.01], Line[{{0, 0}, {2.7, 0}, {1, 1.2}, {0, 0}}]}] perp = Graphics[{Blue, Dashed, Thick, Line[{{1, 1.2}, {1, 0}}]}] txt = Graphics[{Black, Text[Style["\[Alpha]", FontSize -> 18, Bold], {0.25, 0.15}], Text[Style["\[Beta]", FontSize -> 18, Bold], {2.23, 0.15}], Text[Style["A", FontSize -> 18, Bold], {-0.1, -0.2}], Text[Style["a", FontSize -> 18, Bold], {2.1, 0.6}], Text[Style["b", FontSize -> 18, Bold], {0.3, 0.6}], Text[Style["a cos\[Beta]", FontSize -> 18, Bold], {1.8, -0.2}], Text[Style["b cos\[Alpha]", FontSize -> 18, Bold], {0.5, -0.2}], Text[Style["B", FontSize -> 18, Bold], {2.71, -0.2}], Text[Style["C", FontSize -> 18, Bold], {1.0, 1.33}]}]; Show[txt, perp, line]
Triangle

A first relation between the angles α and β is found by projecting the sides AC and BC onto AB: \[ c = b\,\cos\alpha + a\,\cos\beta . \] Projecting similarly onto the other sides, we are led to two further equations (which can be obtained from the first one by circular permutations). Here is the system \[ \begin{cases} b\,\cos\alpha + a\,\cos\beta &= c , \\ \phantom{b\,\cos\alpha + {}} c\,\cos\beta + b\,\cos\gamma &= a , \\ c\,\cos\alpha \phantom{{} + c\,\cos\beta} + a\,\cos\gamma &= b . \end{cases} \] It is linear in the three variables x₁ = cosα, x₂ = cosβ, x₃ = cosγ. Let us solve this system by Gaussian elimination: \begin{align*} \left( \begin{array}{ccc|l} b&a&0&c \\ 0&c&b&a \\ c&0&a&b \end{array} \right) &\sim \left( \begin{array}{ccc|l} b&a&0&c \\ 0&c&b&a \\ 0& -\frac{c}{b}\,a & a& b - \frac{c}{b}\,c \end{array} \right) \\ &\sim \left( \begin{array}{ccc|l} b&a&0&c \\ 0&c&b&a \\ 0&0& 2a & b - \frac{c^2}{b} + \frac{a^2}{b} \end{array} \right) \end{align*} This yields

Solve[{ b*Cos[al] + a*Cos[be] == c, c*Cos[be] + b*Cos[ga] == a, c*Cos[al] + a*Cos[ga] == b}, {Cos[al], Cos[be], Cos[ga]}]
{{Cos[al] -> -((a^2 - b^2 - c^2)/(2 b c)), Cos[be] -> -((-a^2 + b^2 - c^2)/(2 a c)), Cos[ga] -> -((-a^2 - b^2 + c^2)/(2 a b))}}
\[ 2a\,\cos\gamma = \frac{a^2 + b^2 - c^2}{b} , \qquad \cos\gamma = \frac{a^2 + b^2 - c^2}{2ab} ,
\] as well as two similar expressions for the other angles. We have obtained the law of cosines \[ c^2 = a^2 + b^2 - 2ab\,\cos\gamma . \] In particular, we get the Pythagorean theorem for right triangle: \[ c^2 = a^2 + b^2 \qquad \iff \qquad \cos\gamma = 0 . \]    ■
End of Example 15
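The computation in Example 15 can be cross-checked numerically. Below is a minimal Python sketch (illustrative only; the computations in this tutorial otherwise use Mathematica) that recovers the angles of the 3-4-5 right triangle from the law of cosines:

```python
import math

def triangle_angles(a, b, c):
    """Return the angles (alpha, beta, gamma) opposite sides a, b, c,
    computed from the law of cosines, e.g. cos(gamma) = (a^2 + b^2 - c^2)/(2ab)."""
    alpha = math.acos((b**2 + c**2 - a**2) / (2 * b * c))
    beta  = math.acos((a**2 + c**2 - b**2) / (2 * a * c))
    gamma = math.acos((a**2 + b**2 - c**2) / (2 * a * b))
    return alpha, beta, gamma

# The 3-4-5 right triangle: gamma, opposite the hypotenuse c = 5, is pi/2.
alpha, beta, gamma = triangle_angles(3, 4, 5)
print(gamma)                 # pi/2, since 3^2 + 4^2 = 5^2
print(alpha + beta + gamma)  # pi, as in any triangle
```

For the 3-4-5 triangle the angle opposite the hypotenuse comes out as π/2 (the Pythagorean case cos γ = 0), and the three angles sum to π, as expected.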

With the standard basis in ℝn

\[ \mathbf{e}_1 = \left( 1, 0, 0, \ldots , 0 \right) , \quad \mathbf{e}_2 = \left( 0, 1, 0, \ldots , 0 \right) , \quad \ldots , \quad \mathbf{e}_n = \left( 0, \ldots , 0, 1 \right) , \]
every vector is uniquely represented as a linear combination of these basis vectors
\[ \mathbf{v} = v_1 \mathbf{e}_1 + v_2 \mathbf{e}_2 + \cdots + v_n \mathbf{e}_n . \]
The Euclidean metric on the vector space ℝn is defined through the norm (or length)
\begin{equation} \label{EqDot.7} \| \mathbf{u} \| = + \sqrt{\mathbf{u} \bullet \mathbf{u}} = + \sqrt{u_1^2 + u_2^2 + \cdots + u_n^2} , \end{equation}
where the "plus" in front of the square root indicates that only the positive root is chosen out of the two branches. Here u = u₁e₁ + u₂e₂ + ⋯ + uₙeₙ is the expansion of the vector u with respect to the standard (ordered) basis.
In mathematics, the norm ∥·∥ is used to define the distance between two vectors:
\[ \| \mathbf{u} - \mathbf{v} \| = + \sqrt{\left( \mathbf{u} - \mathbf{v} \right) \bullet \left( \mathbf{u} - \mathbf{v} \right)} = \sqrt{\left( u_1 - v_1 \right)^2 + \cdots + \left( u_n - v_n \right)^2} . \]

In order to convince you that the above is a sensible notion of length, we consider a possible change in length under scalar (λ ∈ ℝ) multiplication of a vector:
\[ \| \lambda \mathbf{v} \| = \sqrt{\left( \lambda \mathbf{v} \right) \bullet \left( \lambda \mathbf{v} \right)} = \sqrt{\lambda^2 \left( \mathbf{v} \bullet \mathbf{v} \right)} = \left\vert \lambda \right\vert \| \mathbf{v} \| . \]
Evidently, if a vector is multiplied by a scalar λ, its length scales with the modulus |λ|. This certainly makes intuitive sense and explains why the square root has been included in Eq. (7). (Otherwise, the length would scale with the square of the scalar.)
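To make the scaling computation concrete, here is a short Python sketch (an illustrative cross-check, not part of the Mathematica session) that builds the Euclidean norm from the dot product and verifies ∥λv∥ = |λ| ∥v∥ for a sample vector:

```python
import math

def dot(u, v):
    """Dot product of two vectors given as sequences of numbers."""
    return sum(ui * vi for ui, vi in zip(u, v))

def norm(v):
    """Euclidean norm: the positive square root of v . v."""
    return math.sqrt(dot(v, v))

v = [-1.0, 3.0, -2.0, 2.0, 3.0]
lam = 2.71
print(norm(v))                                 # 3*sqrt(3), since v.v = 27
scaled = [lam * vi for vi in v]
print(abs(norm(scaled) - abs(lam) * norm(v)))  # ~0: the scaling property
```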

We summarize the properties of the Euclidean norm in the following statement.

Theorem 6: The Euclidean length (norm) ∥·∥ : ℝn → ℝ≥0 defined in Eq. (7) has the following properties, for all v, u ∈ ℝn and all λ ∈ ℝ.
  1. ∥v∥ > 0 for v ≠ 0          (positivity).
  2. ∥λv∥ = |λ| ∥v∥        (scaling).
  3. ∥v + u∥ ≤ ∥v∥ + ∥u∥     (triangle inequality).
  4. | ∥v∥ − ∥u∥ | ≤ ∥vu∥ ≤ ∥v∥ + ∥u∥.
  1. This property follows immediately from the definition \[ \mathbf{v} \bullet \mathbf{v} = v_1^2 + v_2^2 + \cdots + v_n^2 . \] A sum of squares is nonnegative, and it is zero only when every term is zero.
  2. It has been shown previously.
  3. By definition, \begin{align*} \| \mathbf{v} + \mathbf{u} \|^2 &= \left( \mathbf{v} + \mathbf{u} \right) \bullet \left( \mathbf{v} + \mathbf{u} \right) \\ &= \mathbf{v} \bullet \mathbf{v} + \mathbf{v} \bullet \mathbf{u} + \mathbf{u} \bullet \mathbf{v} + \mathbf{u} \bullet \mathbf{u} = \| \mathbf{v} \|^2 + \| \mathbf{u} \|^2 + 2\,\mathbf{v} \bullet \mathbf{u} . \end{align*} Using Cauchy's inequality, we get \[ \left\vert \mathbf{v} \bullet \mathbf{u} \right\vert \le \| \mathbf{v} \| \cdot \| \mathbf{u} \| . \] This yields \[ \| \mathbf{v} + \mathbf{u} \|^2 \le \| \mathbf{v} \|^2 + \| \mathbf{u} \|^2 + 2\,\| \mathbf{v} \| \cdot \| \mathbf{u} \| = \left( \| \mathbf{v} \| + \| \mathbf{u} \| \right)^2 , \] and taking square roots gives the triangle inequality.
  4. Similarly to the previous proof, \[ \| \mathbf{v} - \mathbf{u} \|^2 = \left( \mathbf{v} - \mathbf{u} \right) \bullet \left( \mathbf{v} - \mathbf{u} \right) = \| \mathbf{v} \|^2 + \| \mathbf{u} \|^2 - 2\left( \mathbf{v} \bullet \mathbf{u} \right) . \] Again from Cauchy's inequality, \[ - 2\left( \mathbf{v} \bullet \mathbf{u} \right) \ge -2\,\| \mathbf{v} \| \cdot \| \mathbf{u} \| , \] which leads to \[ \| \mathbf{v} - \mathbf{u} \|^2 \ge \| \mathbf{v} \|^2 + \| \mathbf{u} \|^2 - 2\, \| \mathbf{v} \| \cdot \| \mathbf{u} \| = \left( \| \mathbf{v} \| - \| \mathbf{u} \| \right)^2 . \] The remaining inequality \( \displaystyle \quad \| \mathbf{v} - \mathbf{u} \| \le \| \mathbf{v} \| + \| \mathbf{u} \| \quad \) follows from the triangle inequality (property 3).
   
Example 16: Mathematica is a very smart CAS: it can evaluate norms of vectors regardless of how you write them, either as an n-tuple or in matrix form (row or column).
    u = {{-1, 3, -2, 2, 3}}; Norm[u]
    3 Sqrt[3]
    uu = {-1, 3, -2, 2, 3}; Norm[uu]
    3 Sqrt[3]
    uuu = {{-1}, {3}, {-2}, {2}, {3}}; Norm[uuu]
    3 Sqrt[3]
We will use Mathematica to randomly generate vectors. However, its output is then given in matrix form. In order to convert the output to vector form (as an element of 𝔽n), use the Flatten command:
     u = Flatten[RandomInteger[{-7, 9}, {1, 6}]]; VectorQ[u]
     True

Using Mathematica, we verify properties included in Theorem 6 with the following examples.

  1. This formula follows from the identity \[ \mathbf{v} \bullet \mathbf{v} = v_1^2 + v_2^2 + \cdots + v_n^2 , \] which is zero only when all components of the vector v are zero.
  2. We generate a vector of size five:
         v = Flatten[RandomReal[{-1, 1}, {1, 5}]]
         {0.371645, -0.811594, 0.591386, -0.037053, 0.758338}
    Upon choosing a real scalar λ = 2.71, we multiply
         2.71*v
         {1.00716, -2.19942, 1.60266, -0.100414, 2.0551}
    Its norm is
         Norm[2.71*v]
         3.55722
    Now we calculate the norm of v and multiply the result by λ = 2.71,
         2.71*Norm[v]
         3.55722
  3. We generate two vectors of length five:
         v = RandomInteger[{-8, 8}, {1, 5}]
         {{-1, -7, 0, 6, 1}}
         u = RandomInteger[{-7, 8}, {1, 5}]
         {{-1, 3, -2, 2, 3}}
    In order to verify the triangle inequality, we evaluate both sides:
         N[Norm[u] + Norm[v]]
         14.5235
         N[Norm[u + v]]
         10.198
    \[ 10.198 \approx \| \mathbf{v} + \mathbf{u} \| < \| \mathbf{v} \| + \| \mathbf{u} \| \approx 14.5235 . \]
  4. We choose randomly two vectors
         v = Flatten[RandomInteger[{-7, 9}, {1, 6}]]
        u = Flatten[RandomInteger[{-7, 9}, {1, 6}]]
         {5, 9, -5, 1, 2, -2}
        {-1, -6, 3, -5, -3, -1}
    Their norms are
         Norm[v]
         2 Sqrt[35]
         Norm[u]
         9
    The difference of norms is
         N[Norm[v] - Norm[u]]
        2.83216
    The norm of their difference is
         N[Norm[v - u]]
         19.6723
    The sum of norms is
         N[Norm[v] + Norm[u]]
         20.8322
    These numbers confirm inequalities in part 4.
   ■
End of Example 16

The scaling property allows us to define, for any non-zero vector v ∈ ℝn, an associated normalized vector n with unit length, given by

\[ \hat{\mathbf{n}} = \frac{\mathbf{v}}{\| \mathbf{v} \|} \quad \Longrightarrow \quad \| \hat{\mathbf{n}} \| = \| \mathbf{v} \|^{-1} \| \mathbf{v} \| = 1 . \]
Each component of v is divided by the scalar value ∥v∥ > 0. Therefore, the unit vector n = n(v) represents the direction of the vector v. If v = 0, then its length is zero and the corresponding direction vector does not exist. Remember that you must check the value ∥v∥ before dividing to be sure it is greater than your zero-divide tolerance. The zero-divide tolerance is the absolute value of the smallest number by which you can divide confidently.
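The zero-divide check described above can be folded into a small helper. The following Python sketch is illustrative; the default tolerance value is an assumption, not a universal constant:

```python
import math

def normalize(v, tol=1e-12):
    """Return v / ||v|| when ||v|| exceeds the zero-divide tolerance tol
    (an assumed default), otherwise raise: the zero vector has no direction."""
    length = math.sqrt(sum(vi * vi for vi in v))
    if length <= tol:
        raise ValueError("vector too short to normalize reliably")
    return [vi / length for vi in v]

n = normalize([4.0, 3.0])
print(n)   # [0.8, 0.6], the unit vector in the direction of (4, 3)
```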
Example 17: Let us consider a vector v = (4, 3) ∈ ℝ². Its Euclidean norm is \( \displaystyle \quad \| \mathbf{v} \| = \sqrt{4^2 + 3^2} = \sqrt{16 + 9} = \sqrt{25} = 5 . \quad \) Then the corresponding direction vector becomes \[ \mathbf{n} = \frac{\mathbf{v}}{\| \mathbf{v} \|} = \frac{(4, 3)}{5} = \left( \frac{4}{5}, \frac{3}{5} \right) = \left( 0.8, 0.6 \right) . \]
n = Graphics[{Blue, Thickness[0.01], Arrowheads[0.1], Arrow[{{0, 0}, {0.8, 0.6}}]}]; v = Graphics[{Purple, Thickness[0.01], Arrowheads[0.1], Arrow[{{0.8, 0.6}, {4, 3}}]}]; ax = Graphics[{Black, Thick, Arrow[{{-0.3, 0}, {4.3, 0}}]}]; ay = Graphics[{Black, Thick, Arrow[{{0, -0.3}, {0, 3.3}}]}]; txt = Graphics[{Text[ Style["n", Blue, FontSize -> 18, Bold], {1, 0.5}], Text[Style["v", Purple, FontSize -> 18, Bold], {4, 2.6}], Text[Style["O", Black, FontSize -> 18, Bold], {-0.2, -0.2}], Text[Style["x-axis", Black, FontSize -> 18, Bold], {4, 0.3}], Text[Style["y-axis", FontSize -> 18, Bold], {0, 3.6}]}]; Show[n, v, ax, ay, txt]
Figure 15.1: Normalizing a vector

Since we have only scaled v by a positive amount ∥v∥ = 5, the direction of n is the same as v. There are infinitely many unit vectors. Imagine drawing them all, emanating from the origin. The figure that you will get is a circle of radius one!
a = 30; Graphics[Table[ {Line[{{Cos[\[Theta]]/5, Sin[\[Theta]]/ 5}, {Cos[\[Pi]/(2 a) + \[Theta]], Sin[\[Pi]/(2 a) + \[Theta]]}}], Line[{{1/5 Cos[\[Pi]/(2 a) + \[Theta]], 1/5 Sin[\[Pi]/(2 a) + \[Theta]]}, {Cos[\[Theta]], Sin[\[Theta]]}}], Line[{{Cos[\[Pi]/(2 a) + \[Theta]], Sin[\[Pi]/(2 a) + \[Theta]]}, {Cos[\[Pi]/a + \[Theta]], Sin[\[Pi]/a + \[Theta]]}}], Line[{{Cos[(3 \[Pi])/(4 a) + \[Theta]], Sin[(3 \[Pi])/(4 a) + \[Theta]]}, {6/ 5 Cos[(5 \[Pi])/(4 a) + \[Theta]], 6/5 Sin[(5 \[Pi])/(4 a) + \[Theta]]}}], Line[{{Cos[(7 \[Pi])/(4 a) + \[Theta]], Sin[(7 \[Pi])/(4 a) + \[Theta]]}, {6/ 5 Cos[(5 \[Pi])/(4 a) + \[Theta]], 6/5 Sin[(5 \[Pi])/(4 a) + \[Theta]]}}], Disk[{0, 0}, 2/10]}, {\[Theta], 0, 2 Pi, Pi/a} ], PlotRange -> All, Axes -> False]
Figure 15.2: Unit vectors

   ■
End of Example 17

Upon introducing the norm (meaning length or magnitude) of a vector, \( \displaystyle \quad \| {\bf v} \| = +\sqrt{{\bf v} \bullet {\bf v}} , \quad \) Cauchy's inequality can be written as

\begin{equation} \label{EqDot.8} -1 \leqslant \frac{\mathbf{u} \bullet \mathbf{v}}{\| \mathbf{u} \| \cdot \| \mathbf{v} \|} \leqslant 1 . \end{equation}
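Inequality (8) guarantees that this ratio is a valid cosine, so the angle between two nonzero vectors is its arccosine. Here is an illustrative Python sketch; the clamping guard is a practical assumption against floating-point round-off just outside [−1, 1]:

```python
import math

def angle(u, v):
    """Angle between nonzero vectors u and v. By Cauchy's inequality,
    the ratio (u . v) / (||u|| ||v||) always lies in [-1, 1]."""
    uv = sum(ui * vi for ui, vi in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    ratio = max(-1.0, min(1.0, uv / (nu * nv)))  # guard against round-off
    return math.acos(ratio)

print(angle([1.0, 0.0], [1.0, 1.0]))  # pi/4 between i and i + j
print(angle([1.0, 0.0], [0.0, 1.0]))  # pi/2: perpendicular vectors
```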
   
Example 18: Let us consider a rectangular parallelepiped (also known as a rectangular cuboid) with side lengths 𝑎, b, and c. We want to compute the angle between the diagonals on two adjacent faces. First, we plot the prism with two vectors along adjacent diagonals.
edge = Graphics[{Purple, Thick, Line[{{0, 0}, {-0.8485, -0.8485}, {1.2515, -0.8485}, {1.2515, 0.1515}, {2.1, 1}, {0, 1}, {0, 0}}]}]; edge2 = Graphics[{Purple, Thick, Line[{{0, 1}, {-0.8485, -0.1515}, {-0.8485, -0.8485}}]}]; edge3 = Graphics[{Purple, Thick, Line[{{2.1, 1}, {2.1, 0.0}, {1.2515, -0.8485}}]}]; line = Graphics[{Purple, Thick, Line[{{0, 0}, {2.1, 0.0}}]}]; line2 = Graphics[{Purple, Thick, Line[{{-0.8485, 0.1515}, {1.2515, 0.1515}}]}]; ar1 = Graphics[{Blue, Thickness[0.01], Arrow[{{-0.8485, -0.8485}, {0, 1}}]}]; ar2 = Graphics[{Blue, Thickness[0.01], Arrow[{{-0.8485, -0.8485}, {2.1, 0}}]}]; txt = Graphics[{Black, Text[Style["A", FontSize -> 18, Bold], {0.0, 1.12}], Text[Style["B", FontSize -> 18, Bold], {-0.9, -0.98}], Text[Style["C", FontSize -> 18, Bold], {2.2, 0.1}], Text[Style["\[Theta]", FontSize -> 18, Bold], {-0.5, -0.6}], Text[Style["a", FontSize -> 18, Bold], {-0.55, 0.6}], Text[Style["b", FontSize -> 18, Bold], {1.1, 1.12}], Text[Style["c", FontSize -> 18, Bold], {2.2, 0.5}]}]; circ = Graphics[{Red, Thick, Circle[{-0.8485, -0.8485}, 0.57, {0.3, 1.12}]}]; Show[edge, edge2, edge3, line, line2, ar1, ar2, txt, circ]
Figure 16.1: Cuboid

Using points A(0, 0, c), B(𝑎, 0, 0), and C(0, b, 0), we find vectors \[ \mathbf{u} = \vec{BA} = (-a, 0, c) \qquad\mbox{and} \qquad \mathbf{v} = \vec{BC} = (-a, b, 0) . \] Their norms \[ \| \mathbf{u} \| = \sqrt{a^2 + c^2} , \qquad \|\mathbf{v} \| = \sqrt{a^2 + b^2} \] and dot product \[ \mathbf{u} \bullet \mathbf{v} = a^2 \] show that formula (8) can be written as \[ \frac{\mathbf{u} \bullet \mathbf{v}}{\| \mathbf{u} \| \cdot \| \mathbf{v} \|} = \frac{a^2}{\sqrt{a^2 + c^2}\cdot\sqrt{a^2 + b^2}} = \cos\theta \] because the ratio lies between −1 and 1. For instance, if 𝑎 =1, b = 2, and c = 3, we get \[ \cos\theta = \frac{1}{\sqrt{5}\cdot \sqrt{10}} = \frac{1}{5\sqrt{2}} \qquad \Longrightarrow \qquad \theta = \mbox{arccos}\left( \frac{1}{5\sqrt{2}} \right) \approx 1.4289 . \]

N[ArcCos[1/(5 Sqrt[2])]]
1.4289
Converting radians into degrees, we multiply the latter by 180/π:
N[ArcCos[1/(5 Sqrt[2])]]*180/Pi
81.8699

Another angle problem:    We consider the same cuboid, but now we are going to determine the angle between a diagonal on a face and a diagonal of the cuboid. Upon plotting Figure 16.2, we are after the angle ∠ABD.

edge = Graphics[{Purple, Thick, Line[{{0, 0}, {-0.8485, -0.8485}, {1.2515, -0.8485}, {1.2515, 0.1515}, {2.1, 1}, {0, 1}, {0, 0}}]}]; edge2 = Graphics[{Purple, Thick, Line[{{0, 1}, {-0.8485, -0.1515}, {-0.8485, -0.8485}}]}]; edge3 = Graphics[{Purple, Thick, Line[{{2.1, 1}, {2.1, 0.0}, {1.2515, -0.8485}}]}]; line = Graphics[{Purple, Thick, Line[{{0, 0}, {2.1, 0.0}}]}]; line2 = Graphics[{Purple, Thick, Line[{{-0.8485, 0.1515}, {1.2515, 0.1515}}]}]; ar1 = Graphics[{Blue, Thickness[0.01], Arrow[{{-0.8485, -0.8485}, {0, 1}}]}]; ar2 = Graphics[{Blue, Thickness[0.01], Arrow[{{-0.8485, -0.8485}, {2.1, 1}}]}]; txt = Graphics[{Black, Text[Style["A", FontSize -> 18, Bold], {0.0, 1.12}], Text[Style["B", FontSize -> 18, Bold], {-0.9, -0.98}], Text[Style["D", FontSize -> 18, Bold], {2.2, 1.1}], Text[Style["\[Theta]", FontSize -> 18, Bold], {-0.53, -0.46}], Text[Style["D", FontSize -> 18, Bold], {2.2, 1.1}], Text[Style["\[Theta]", FontSize -> 18, Bold], {-0.53, -0.46}], Text[Style["a", FontSize -> 18, Bold], {-0.55, 0.6}], Text[Style["b", FontSize -> 18, Bold], {1.1, 1.12}], Text[Style["c", FontSize -> 18, Bold], {2.2, 0.5}]}]; circ = Graphics[{Red, Thick, Circle[{-0.8485, -0.8485}, 0.65, {0.59, 1.12}]}]; Show[edge, edge2, edge3, line, line2, ar1, ar2, txt, circ]
Figure 16.2: Cuboid

Since point D has coordinates (0, b, c), we find the vector \( \displaystyle \quad \mathbf{v} = \vec{BD} = \left( -a, b, c \right) . \quad \) Then the dot product of vectors u and v becomes \[ \mathbf{u} \bullet \mathbf{v} = \left( -a, 0, c \right) \bullet \left( -a, b, c \right) = a^2 + c^2 . \] Then formula (8) yields \[ \cos\theta = \frac{\mathbf{u} \bullet \mathbf{v}}{\| \mathbf{u} \| \, \| \mathbf{v} \|} = \frac{a^2 + c^2}{\sqrt{a^2 + c^2}\,\sqrt{a^2 + b^2 + c^2}} = \frac{\sqrt{a^2 + c^2}}{\sqrt{a^2 + b^2 + c^2}} . \] Taking the inverse cosine, we find the angle to be \[ \theta = \mbox{arccos} \left( \frac{\sqrt{a^2 + c^2}}{\sqrt{a^2 + b^2 + c^2}} \right) . \] For 𝑎 =1, b = 2, c = 3, we get \[ \theta = \mbox{arccos} \left( \frac{\sqrt{10}}{\sqrt{14}} \right) \approx 0.563943 \ \mbox{rad} \approx 32.3115^{\circ} . \]

N[ArcCos[Sqrt[5/7]]]
0.563943
%*180/Pi
32.3115
   ■
End of Example 18

Dot product in non-standard bases

When an ordered basis [e₁, e₂, … , eₙ] of ℝn is not the standard one, the dot product must be modified in order to preserve the regular (Euclidean) distance between points or vectors:

\[ \mathbf{u} \bullet \mathbf{v} = \sum_{\alpha , \beta} g_{\alpha , \beta} u^{\alpha} v^{\beta} = g_{\alpha , \beta} u^{\alpha} v^{\beta} , \]
where u = u¹e₁ + u²e₂ + ⋯ + uⁿeₙ, v = v¹e₁ + v²e₂ + ⋯ + vⁿeₙ, and the Einstein summation convention is employed. The metric tensor provides the scalar product of a pair of contravariant vectors
\[ g_{\alpha , \beta} = \mathbf{e}_{\alpha} \bullet \mathbf{e}_{\beta} . \]
For example, in ℝ², the metric tensor is the 2-by-2 matrix
\[ g_{\alpha , \beta} = \mathbf{e}_{\alpha} \bullet \mathbf{e}_{\beta} = \begin{bmatrix} \mathbf{e}_1 \bullet \mathbf{e}_1 & \mathbf{e}_1 \bullet \mathbf{e}_2 \\ \mathbf{e}_2 \bullet \mathbf{e}_1 & \mathbf{e}_2 \bullet \mathbf{e}_2 \end{bmatrix} , \]
and corresponding square norm becomes
\[ \| \mathbf{u} \|^2 = g_{1,1} \left( u^1 \right)^2 + g_{2,2} \left( u^2 \right)^2 + 2\,g_{1,2} u^1 u^2 . \]
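The metric tensor is simply the Gram matrix of the basis vectors. The following Python sketch (illustrative, with a sample basis e₁ = i, e₂ = 2j) builds this matrix and evaluates the squared norm by the formula above:

```python
def gram(basis):
    """Metric tensor g[i][j] = e_i . e_j for a list of basis vectors."""
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    return [[dot(ei, ej) for ej in basis] for ei in basis]

# Sample non-standard basis of R^2: e1 = i, e2 = 2j.
basis = [[1.0, 0.0], [0.0, 2.0]]
g = gram(basis)
print(g)  # [[1.0, 0.0], [0.0, 4.0]]

# Squared norm of u = 3 e1 + 2 e2 (that is, u = (3, 4) in standard coordinates):
u1, u2 = 3.0, 2.0
sq = g[0][0] * u1**2 + g[1][1] * u2**2 + 2 * g[0][1] * u1 * u2
print(sq)  # 25.0, so ||u|| = 5, matching the standard Euclidean norm of (3, 4)
```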

If [e¹, e², … , eⁿ] is the dual basis, then the inverse of the matrix [gᵢ,ⱼ] is the raised-indices metric tensor for the covector space:

\[ g^{i,j} = \mathbf{e}^i \bullet \mathbf{e}^j . \]
In the two-dimensional case, it is convenient to write the contravariant metric tensor in matrix form
\[ g^{i,j} = \begin{bmatrix} \mathbf{e}^1 \bullet \mathbf{e}^1 & \mathbf{e}^1 \bullet \mathbf{e}^2 \\ \mathbf{e}^2 \bullet \mathbf{e}^1 & \mathbf{e}^2 \bullet \mathbf{e}^2 \end{bmatrix} . \]
Then using the Einstein summation convention, the dot product of covectors is defined as
\[ \phi \bullet \psi = g^{i,j} \phi_i \psi_j = \sum_{i,j} g^{i,j} \phi_i \psi_j . \]
Since eᵢ • eʲ = δᵢʲ (Kronecker's delta), the dot product of vectors and covectors is just the standard scalar product:
\[ \phi \bullet \mathbf{v} = \left( \phi_1 , \phi_2 , \ldots , \phi_n \right) \bullet \left( v^1 , v^2 , \ldots v^n \right) = \sum_{i=1}^n \phi_{i} v^i . \]
   
Example 19: We consider a couple of examples of the dot product in the flat plane ℝ² using non-standard bases. We start with a simple basis of two vectors: \[ \mathbf{e}_1 = \mathbf{i} = (1, 0) , \qquad \mathbf{e}_2 = 2\,{\bf j} = (0, 2) , \] where i and j are the standard unit vectors along the abscissa and ordinate, respectively. Writing the metric tensor in matrix form, we get \[ {\bf g} = \left[ g_{i,j} \right] = \begin{bmatrix} \mathbf{e}_1 \bullet \mathbf{e}_1 & \mathbf{e}_1 \bullet \mathbf{e}_2 \\ \mathbf{e}_2 \bullet \mathbf{e}_1 & \mathbf{e}_2 \bullet \mathbf{e}_2 \end{bmatrix} = \begin{bmatrix} 1&0 \\ 0&4 \end{bmatrix} \] because \[ \mathbf{e}_2 \bullet \mathbf{e}_2 = (2\,{\bf j}) \bullet (2\,{\bf j}) = 4\left({\bf j} \bullet {\bf j} \right) = 4, \quad \mathbf{e}_1 \bullet \mathbf{e}_2 = 0 . \] Choosing a vector (point) u = (3, 4) = 3i + 4j ∈ ℝ², we find its magnitude \[ \| \mathbf{u} \|^2 = \mathbf{u} \bullet \mathbf{u} = 3^2 + 4^2 = 25 \quad \Longrightarrow \quad \| \mathbf{u} \| = 5 . \] Now we expand this vector in the basis α = [e₁, e₂] = [i, 2j]: \[ \mathbf{u} = 3\,\mathbf{e}_1 + 2\,\mathbf{e}_2 . \] Calculating its dot product, we find \begin{align*} \mathbf{u} \bullet \mathbf{u} &= \sum_{i,j=1}^2 g_{i,j} \, u^i \, u^j \\ &= g_{1,1} \, 3 \times 3 + g_{2,2} \, 2\times 2 = 1 \cdot 3 \times 3 + 4\cdot 2 \times 2 = 25 . \end{align*}

Another basis:    \[ \beta = \left[ \mathbf{b}_1 , \mathbf{b}_2 \right] = \left[ \mathbf{i} , \mathbf{i} + \mathbf{j} \right] . \] The vector v = (−3, 4) has the following coordinates in basis β: \[ \mathbf{v} = \left( -3, 4 \right) \qquad \Longrightarrow \qquad \left[\!\left[ \mathbf{v} \right]\!\right]_{\beta} = \left( v^1 , v^2 \right) = \left( -7 , 4 \right) \] because \[ \mathbf{v} = -3\mathbf{i} + 4\mathbf{j} = -3\mathbf{i} + 4 \left( \mathbf{i} + \mathbf{j} - \mathbf{i} \right) = -3\mathbf{i} + 4 \left( \mathbf{i} + \mathbf{j} \right) - 4\mathbf{i} = -7\,\mathbf{i} + 4 \left( \mathbf{i} + \mathbf{j} \right) . \] Here we used a mathematical trick: add and subtract the same value. The second basis vector j is replaced by j + i − i = b₂ − i. The dot product in basis β becomes \begin{align*} \mathbf{v} \bullet \mathbf{v} &= g_{1,1} v^1 v^1 + g_{1,2} v^1 v^2 + g_{2,1} v^2 v^1 + g_{2,2} v^2 v^2 \\ &= g_{1,1} \left( -7 \right)^2 + g_{1,2} \left( -7 \right) \left( 4 \right) + g_{2,1} \left( 4 \right) \left( -7 \right) + g_{2,2} \left( 4 \right)^2 \\ &= 49\, g_{1,1} - 28\, g_{1,2} - 28\, g_{2,1} + 16\, g_{2,2} . \end{align*} The components of the metric tensor are \begin{align*} g_{1,1} &= \mathbf{b}_1 \bullet \mathbf{b}_1 = \mathbf{i} \bullet \mathbf{i} = 1 , \\ g_{1,2} &= \mathbf{b}_1 \bullet \mathbf{b}_2 = \mathbf{i} \bullet \left( \mathbf{i} + \mathbf{j} \right) = \mathbf{i} \bullet \mathbf{i} + \mathbf{i} \bullet \mathbf{j} = 1 + 0 = 1 , \\ g_{2,1} &= \mathbf{b}_2 \bullet \mathbf{b}_1 = \left( \mathbf{i} + \mathbf{j} \right) \bullet \mathbf{i} = \mathbf{i} \bullet \mathbf{i} + \mathbf{j} \bullet \mathbf{i} = 1 + 0 = 1 , \\ g_{2,2} &= \mathbf{b}_2 \bullet \mathbf{b}_2 = \left( \mathbf{i} + \mathbf{j} \right) \bullet \left( \mathbf{i} + \mathbf{j} \right) = \mathbf{i} \bullet \mathbf{i} + 2\,\mathbf{i} \bullet \mathbf{j} + \mathbf{j} \bullet \mathbf{j} = 2 . \end{align*} Using these values, we calculate the dot product \begin{align*} \mathbf{v} \bullet \mathbf{v} &= 49 - 28 - 28 + 2\cdot 16 = 25 . \end{align*}

49 - 28 - 28 + 32
25
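The same arithmetic can be double-checked by assembling the Gram matrix of β programmatically. A short Python sketch (illustrative only):

```python
def dot(u, v):
    """Standard dot product of two plane vectors."""
    return sum(ui * vi for ui, vi in zip(u, v))

# Basis beta = [b1, b2] = [i, i + j], and the coordinates of v = (-3, 4) in beta.
b1, b2 = [1.0, 0.0], [1.0, 1.0]
coords = [-7.0, 4.0]

# Metric tensor (Gram matrix) of beta.
g = [[dot(b1, b1), dot(b1, b2)],
     [dot(b2, b1), dot(b2, b2)]]   # [[1, 1], [1, 2]]

# v . v computed in basis beta via the metric tensor ...
vv = sum(g[i][j] * coords[i] * coords[j] for i in range(2) for j in range(2))
print(vv)                               # 25.0

# ... agrees with the standard computation for v = (-3, 4):
print(dot([-3.0, 4.0], [-3.0, 4.0]))    # 25.0
```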
   ■
End of Example 19