Many abstract concepts that make linear algebra a powerful mathematical tool have their roots in plane geometry, so we begin our study of the dot product by reviewing basic properties of lengths and angles in the real two-dimensional plane ℝ². Guided by these geometric properties, we formulate properties of the dot product that will be generalized further in the following section. The dot product is also used to define the magnitude of a vector, which is called its norm in mathematics.
ℝ² Motivation
Classical Pythagorean Theorem:
If 𝑎 and b are the lengths of two sides of a right triangle, and if c is the length of its hypotenuse, then 𝑎² + b² = c².
Construct a square with side c and place four copies of the right triangle around it to form a larger square with side 𝑎 + b. The area of the larger square is equal to the area of the smaller square plus four times the area of the original triangle with legs 𝑎 and b:
\[
(a+b)^2 = c^2 + 4 \cdot \frac{ab}{2} ,
\]
which simplifies to 𝑎² + b² = c².
The length of any one side of a plane triangle (it need not be a right triangle) is determined by the lengths of its other two sides and the cosine of the angle between them. This is a consequence of the Pythagorean theorem.
Theorem (Law of cosines):
Let 𝑎 and b be the lengths of two sides of a plane triangle, let θ be the angle between these two sides, and let c be the length of the third side. Then 𝑎² + b² − 2𝑎b cos θ = c².
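As a quick numerical illustration of the law of cosines, here is a short Python sketch (the helper name `third_side` is ours, not part of the text):

```python
import math

def third_side(a, b, theta):
    """Length of the side opposite the enclosed angle theta,
    by the law of cosines: c^2 = a^2 + b^2 - 2ab cos(theta)."""
    return math.sqrt(a * a + b * b - 2 * a * b * math.cos(theta))

# When theta is a right angle, cos(theta) = 0 and the formula
# reduces to the Pythagorean theorem: the 3-4-5 triangle.
c = third_side(3.0, 4.0, math.pi / 2)
```

For θ = π/3 and two unit sides, the formula returns 1, as it must for an equilateral triangle.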
The following command finds the length (number of components) of a vector:
v = {3, 5};
Length[v]
Out[5]= 2
The law of cosines implies a familiar fact about plane triangles: the length of one side is not greater than the sum of the lengths of the other two sides.
Theorem (Triangle inequality):
Let 𝑎, b, and c be the lengths of the sides of a plane triangle. Then
\[
c \le a+b .
\]
Let θ be the angle between the sides whose lengths are 𝑎 and b. Since −cos θ ≤ 1, the law of cosines tells us that
\[
c^2 = a^2 + b^2 - 2ab\cos\theta \le a^2 + b^2 + 2ab = (a+b)^2 ,
\]
and taking positive square roots gives c ≤ 𝑎 + b.
The dot product defines the Euclidean norm of a vector:
\[
\| {\bf x} \|_2 = +\sqrt{{\bf x} \cdot {\bf x}} ,
\]
where we choose the positive branch of the square root (an analytic function). The Euclidean norm is usually supplied with the subscript "2" because many other norms are known (they are discussed in the following section).
a3 = {1, 2, 3};
TrueQ[Norm[a3] == Sqrt[a3.a3]]
True
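The same relationship between the norm and the dot product, together with the triangle inequality, can be sketched in Python (the helper names `dot` and `norm2` are ours):

```python
import math

def dot(x, y):
    # dot product of real vectors: componentwise products, summed
    return sum(a * b for a, b in zip(x, y))

def norm2(x):
    # Euclidean norm: the positive square root of x . x
    return math.sqrt(dot(x, x))

u = [3.0, 5.0]
v = [1.0, -2.0]
w = [a + b for a, b in zip(u, v)]  # the vector u + v

# triangle inequality: ||u + v|| <= ||u|| + ||v||
ok = norm2(w) <= norm2(u) + norm2(v)
```

This mirrors the Mathematica check above: `Norm` agrees with the square root of the vector dotted with itself.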
The prime example of the dot product in physics is work, defined as the scalar product of force and displacement. The factor cos θ ensures that the work done by a force perpendicular to the displacement is zero.
Example 1:
Suppose that a boat is sailing at 7 knots, following a course of 015° (that is, 15° east of north), on a bay that has a 3-knot current setting in the direction 073° (that is, 73° east of north).
In naval usage, the expression made good is standard navigation terminology for the actual course and speed of a vessel over the bottom. The velocity vectors s for the boat and c for the current are shown in the figure, in which the vertical axis points due north.
By adding s and c, we find the vector v = s + c representing the course and speed of the boat over the bottom, that is, the course and speed made good. With s = 7 (sin 15°, cos 15°) ≈ [1.81173, 6.76148] and c = 3 (sin 73°, cos 73°) ≈ [2.86891, 0.87712], we have v ≈ [4.68064, 7.63859]. Therefore, the speed made good is |v| ≈ 8.96 knots, on a course of about 031.5°.
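The vector addition in this example can be carried out in a few lines of Python (the helper name `velocity` is ours; bearings are measured in degrees east of north, as in the text):

```python
import math

def velocity(speed, bearing_deg):
    """Velocity vector [east, north] for a given speed and compass bearing."""
    t = math.radians(bearing_deg)
    return [speed * math.sin(t), speed * math.cos(t)]

s = velocity(7.0, 15.0)   # boat through the water
c = velocity(3.0, 73.0)   # current
v = [a + b for a, b in zip(s, c)]             # made good: v = s + c
speed = math.hypot(v[0], v[1])                # ~ 8.96 knots
course = math.degrees(math.atan2(v[0], v[1])) # ~ 31.5 degrees east of north
```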
Example 2:
Suppose that the captain of our boat realizes the importance of keeping track of the current. He wishes to reach, in 6 hours, a harbor that bears 110° and is 42 nautical miles away. That is, he wishes to make good the course 110° and the speed 42/6 = 7 knots. He knows from a tide and current table that the current is setting due south at 3 knots.
In the vector diagram, we again represent the course and speed to be made good by a vector v and the velocity of the current by c. The correct course and speed to follow are represented by the vector s, which is obtained by computing
\[
{\bf s} = {\bf v} - {\bf c} .
\]
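Carrying out this subtraction numerically (a Python sketch under the example's assumptions: course made good 110° at 7 knots, current due south at 3 knots):

```python
import math

# desired course/speed made good: 110 degrees at 42/6 = 7 knots
t = math.radians(110.0)
v = [7.0 * math.sin(t), 7.0 * math.cos(t)]   # [east, north]
c = [0.0, -3.0]                               # current setting due south at 3 knots

s = [a - b for a, b in zip(v, c)]             # course/speed to steer: s = v - c
speed = math.hypot(s[0], s[1])                # ~ 6.61 knots through the water
course = math.degrees(math.atan2(s[0], s[1]))  # ~ 084.7 degrees
```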
Motivated by the two-dimensional case, we extend the dot product to a finite-dimensional vector space over an arbitrary field. However, this operation between two vectors can be implemented only when a basis is chosen for the corresponding vector space, allowing one to work with coordinates.
The dot product of two vectors of the same size
\( {\bf x} = \left[ x_1 , x_2 , \ldots , x_n \right] \) and
\( {\bf y} = \left[ y_1 , y_2 , \ldots , y_n \right] \) (regardless of whether they are columns or rows) is the scalar,
denoted either by \( {\bf x} \cdot {\bf y} \) or \( \left\langle {\bf x} , {\bf y} \right\rangle : \)
\[
{\bf x} \cdot {\bf y} = x_1 y_1 + x_2 y_2 + \cdots + x_n y_n
\]
when entries are real, and
\[
{\bf x} \cdot {\bf y} = x_1^{\ast} y_1 + x_2^{\ast} y_2 + \cdots + x_n^{\ast} y_n
\]
when entries are complex. Here \( (a + {\bf j}\,b )^{\ast} = \overline{a + {\bf j}\,b} = a - {\bf j}\,b \) is the complex conjugate of 𝑎 + jb ∈ ℂ.
Using the definition of the dot product, we can verify the following properties. For x, y, z ∈ ℂⁿ and k ∈ ℂ, we have
\[
\begin{split}
& {\bf x} \cdot {\bf y} = \left( {\bf y} \cdot {\bf x} \right)^{\ast} , \\
& \left( {\bf x} + {\bf y} \right) \cdot {\bf z} = {\bf x} \cdot {\bf z} + {\bf y} \cdot {\bf z} , \qquad {\bf x} \cdot \left( {\bf y} + {\bf z} \right) = {\bf x} \cdot {\bf y} + {\bf x} \cdot {\bf z} , \\
& \left( k\,{\bf x} \right) \cdot {\bf y} = k^{\ast} \left( {\bf x} \cdot {\bf y} \right) , \qquad {\bf x} \cdot \left( k\,{\bf y} \right) = k \left( {\bf x} \cdot {\bf y} \right) , \\
& {\bf x} \cdot {\bf x} \ge 0 , \quad \mbox{with } {\bf x} \cdot {\bf x} = 0 \mbox{ if and only if } {\bf x} = {\bf 0} .
\end{split}
\]
This list of properties suggests that the dot product is a positive-definite bilinear mapping
\[
\mbox{dot product}\,:\, V^{\ast} \times V \,\to\, \mathbb{F} \ \left( = \mathbb{R} \mbox{ or } \mathbb{C} \right) ,
\]
where V* is the dual space.
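These algebraic properties can be spot-checked numerically. The following Python sketch (the helper name `cdot` is ours) implements the convention used in this section, with the complex conjugate taken on the first argument:

```python
# Spot-check of dot-product properties on small complex vectors,
# using Python's built-in complex numbers.
def cdot(x, y):
    # conjugate the first argument, then multiply and sum
    return sum(a.conjugate() * b for a, b in zip(x, y))

x = [1 + 2j, 3 - 1j]
y = [2 - 1j, 0 + 4j]
z = [5 + 0j, 1 + 1j]
k = 2 - 3j

# conjugate symmetry: x . y = (y . x)*
assert cdot(x, y) == cdot(y, x).conjugate()
# additivity in the second argument
assert cdot(x, [a + b for a, b in zip(y, z)]) == cdot(x, y) + cdot(x, z)
# conjugate homogeneity in the first argument: (k x) . y = k* (x . y)
assert cdot([k * a for a in x], y) == k.conjugate() * cdot(x, y)
# positivity: x . x is real and nonnegative
assert cdot(x, x).imag == 0 and cdot(x, x).real > 0
```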
However, in the complex case, the first argument is not multiplied by a complex scalar, but by its complex conjugate; this means that
\[
\left( k\,{\bf x} \right) \cdot {\bf y} = k^{\ast} \left( {\bf x} \cdot {\bf y} \right) , \qquad k \in \mathbb{C} .
\]
An alternative definition, \( \left\langle {\bf x} , {\bf y} \right\rangle = x_1 y_1^{\ast} + x_2 y_2^{\ast} + \cdots + x_n y_n^{\ast} , \) places the complex conjugate on the second argument. This definition is usually used in mathematics, and we mark it as West Coast. Our definition is mostly used in physics and engineering; correspondingly, we mark it as East Coast. Both definitions lead to the same conclusions, and it does not matter which one is applied, as long as you are consistent.
Example 3:
Let us calculate the dot product of the two complex vectors x = [1 + j, −2 + 3j, 3 − j] and y = [2 − j, 4 + 2j, 6 − 2j]:
\[
{\bf x} \cdot {\bf y} = (1 - {\bf j})(2 - {\bf j}) + (-2 - 3{\bf j})(4 + 2{\bf j}) + (3 + {\bf j})(6 - 2{\bf j}) = (1 - 3{\bf j}) + (-2 - 16{\bf j}) + 20 = 19 - 19\,{\bf j} .
\]
However, when we use a standard command, Mathematica provides
Dot[{1 + I, -2 + 3*I, 3 - I}, {2 - I, 4 + 2*I, 6 - 2*I}]
5 - 3 I
because it simply multiplies corresponding components and sums the results, without conjugation. So you need to enter complex-conjugate entries into the first vector:
Dot[{1 - I, -2 - 3*I, 3 + I}, {2 - I, 4 + 2*I, 6 - 2*I}]
19 - 19 I
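The same distinction can be reproduced in Python with its built-in complex numbers (the helper name `cdot` is ours; Python writes the imaginary unit as `j`):

```python
def cdot(x, y):
    # East Coast convention: conjugate the first vector's entries
    return sum(a.conjugate() * b for a, b in zip(x, y))

x = [1 + 1j, -2 + 3j, 3 - 1j]
y = [2 - 1j, 4 + 2j, 6 - 2j]

plain = sum(a * b for a, b in zip(x, y))  # plain componentwise product, like Dot
conj = cdot(x, y)                         # dot product with conjugation
print(plain)  # (5-3j)
print(conj)   # (19-19j)
```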
A unit vector u is a vector whose length equals one: \( {\bf u} \cdot {\bf u} =1 . \) We say that two vectors x and y are perpendicular (or orthogonal) if their dot product is zero.
Josiah Gibbs
The dot product was first introduced by the American physicist and mathematician Josiah Willard Gibbs (1839--1903) in the 1880s.
With the Euclidean norm ‖·‖₂, the dot product formula
\[
\cos\theta = \frac{{\bf x} \cdot {\bf y}}{\| {\bf x} \|_2 \| {\bf y} \|_2}
\]
defines θ, the angle between two vectors.
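The angle between two real vectors can be computed directly from the dot product and the norms; a Python sketch (the function name `angle` is ours):

```python
import math

def angle(x, y):
    """Angle between two real vectors via cos(theta) = x.y / (||x|| ||y||)."""
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    return math.acos(dot / (nx * ny))

# perpendicular vectors have dot product zero, hence angle pi/2:
right = angle([1.0, 0.0], [0.0, 2.0])
# the diagonal of the unit square makes a pi/4 angle with its side:
diag = angle([1.0, 0.0], [1.0, 1.0])
```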
One can use the distributive property of the dot product to show that if \( (a_x , a_y , a_z ) \) and \( (b_x , b_y , b_z ) \) represent the components of a and b along the axes x, y, and z, then
\[
{\bf a} \cdot {\bf b} = a_x b_x + a_y b_y + a_z b_z .
\]
Noting that |b| cos θ is simply the projection of b along a, we conclude that to find the projection of a vector b along another vector a, we take the dot product of b with \( \hat{\bf e}_a , \) the unit vector along a.
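This projection recipe translates directly into code; a Python sketch (the function name `scalar_projection` is ours):

```python
import math

def scalar_projection(b, a):
    """|b| cos(theta): dot b with the unit vector along a."""
    norm_a = math.sqrt(sum(x * x for x in a))
    e_a = [x / norm_a for x in a]          # unit vector along a
    return sum(x * y for x, y in zip(b, e_a))

# Projecting b = (3, 4) onto the direction of a = (2, 0), the x-axis,
# keeps only the x-component of b:
p = scalar_projection([3.0, 4.0], [2.0, 0.0])  # 3.0
```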
The dot product of any two vectors of the same dimension can be computed with Dot[vector1, vector2] or with a period: vector1 . vector2 .
{1,2,3}.{2,4,6}
28
Dot[{1,2,3}, {3,2,1}]
10
Theorem (Cauchy--Bunyakovsky--Schwarz inequality): For any two vectors x, y ∈ ℂⁿ,
\[
\left\vert {\bf x} \cdot {\bf y} \right\vert \le \| {\bf x} \|_2 \| {\bf y} \|_2 .
\]
The inequality for sums was published by the French mathematician and physicist Augustin-Louis Cauchy (1789--1857) in 1821, while the corresponding inequality for integrals was first proved by the Russian mathematician Viktor Yakovlevich Bunyakovsky (1804--1889) in 1859. The modern proof of the integral inequality (which essentially repeats Bunyakovsky's) was given by the German mathematician Hermann Amandus Schwarz (1843--1921) in 1888.
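The inequality for sums, |x·y| ≤ ‖x‖₂‖y‖₂, is easy to check numerically on random vectors; a Python sketch:

```python
import math
import random

# Numerical check of the Cauchy-Bunyakovsky-Schwarz inequality
# |x . y| <= ||x|| ||y|| on random real vectors.
random.seed(1)
for _ in range(100):
    x = [random.uniform(-5, 5) for _ in range(4)]
    y = [random.uniform(-5, 5) for _ in range(4)]
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    assert abs(dot) <= nx * ny + 1e-12
```

Equality holds exactly when one vector is a scalar multiple of the other, which is why the angle formula always yields |cos θ| ≤ 1.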
With the Euclidean norm, we can define the dot product as
\[
{\bf x} \cdot {\bf y} = \| {\bf x} \|_2 \, \| {\bf y} \|_2 \cos\theta ,
\]
where θ is the angle between the two vectors.