To enhance pedagogical effectiveness, the treatment of the dot product is presented in several distinct sections.
Orthogonality
Of all the angles that vectors can make with each other, the two most important are when the vectors are aligned with each other and when they are at right angles to each other. Recall that formula (9) from the dot product section gives the angle θ between two vectors via cos θ:
The standard unit vectors i = (1, 0, 0), j = (0, 1, 0), and k = (0, 0, 1) are pairwise orthogonal: \[ \mathbf{i} \bullet \mathbf{j} = 0, \quad \mathbf{i} \bullet \mathbf{k} = 0, \quad \mathbf{k} \bullet \mathbf{j} = 0. \]
We randomly generate two vectors
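The original Mathematica snippet for this demonstration is not reproduced here; the following NumPy sketch (with the two vectors drawn at random, as described) illustrates the same computation of θ from formula (9):

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.standard_normal(3)       # two random vectors in R^3
v = rng.standard_normal(3)

# Formula (9): cos(theta) = (u . v) / (|u| |v|)
cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))   # angle in radians
```

The `clip` guards against tiny floating-point excursions outside [−1, 1] before `arccos`.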

Sets of orthogonal vectors
The standard coordinate vectors in 𝔽ⁿ always form an orthonormal set. As another example, consider in ℝ³ the vectors v₁ = (1, 1, 0), v₂ = (1, −1, 0), and v₃ = (0, 0, 5), which form an orthogonal set:
- v₁ • v₂ = 1 · 1 + 1 · (−1) + 0 · 0 = 0;
- v₁ • v₃ = 1 · 0 + 1 · 0 + 0 · 5 = 0;
- v₂ • v₃ = 1 · 0 + (−1) · 0 + 0 · 5 = 0.
- ∥v₁∥ = √2;
v1 = {1, 1, 0}; Norm[v1]
(* Sqrt[2] *)
- ∥v₂∥ = √2;
v2 = {1, -1, 0}; Norm[v2]
(* Sqrt[2] *)
- ∥v₃∥ = 5.
v3 = {0, 0, 5}; Norm[v3]
(* 5 *)
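The dot products and norms above can be double-checked numerically; here is a small NumPy sketch mirroring the Mathematica computations:

```python
import numpy as np

v1 = np.array([1, 1, 0])
v2 = np.array([1, -1, 0])
v3 = np.array([0, 0, 5])

# Pairwise dot products all vanish, so the set is orthogonal
dots = [v1 @ v2, v1 @ v3, v2 @ v3]

# Norms: sqrt(2), sqrt(2), and 5
norms = [np.linalg.norm(u) for u in (v1, v2, v3)]
```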
Similarly, dotting with v₂ and v₃ gives c₂ = 0 and c₃ = 0. Thus, the set S is linearly independent. ■
The next couple of innocuous-looking theorems are vital keys to important results in subsequent chapters.
- 0 = u • e₁ = (u₁, u₂, … , un) • (1, 0, … , 0) = u₁,
- 0 = u • e₂ = (u₁, u₂, … , un) • (0, 1, 0, … , 0) = u₂,
- and so on
- 0 = u • en = (u₁, u₂, … , un) • (0, 0, … , 0, 1) = un.
Compute the dot products:
- x • e₁ = a,
- x • e₂ = b,
- x • e₃ = c,
- x • e₄ = d.
We also give a matrix version of Theorem 8.
The standard unit vectors e₁, e₂, … , eₙ form the columns of the identity matrix Iₙ. The condition that vector x is orthogonal to every vector ei means that \[ \mathbf{e}_i \bullet \mathbf{x} = \mathbf{e}_i^{\mathrm T} \mathbf{x} = 0 \qquad \forall i. \] However, the vector of all these dot products is exactly the matrix product \[ \mathbf{I}_n \mathbf{x} = 0 . \] Since Iₙx = x, we conclude x = 0.
The matrix form is important because
- The standard basis vectors are the columns of Iₙ.
- Their transposes are the rows of Iₙ.
- Orthogonality to each ei is equivalent to saying every row of Iₙ annihilates x.
- However, the rows of Iₙ simply extract the coordinates of x.
- So all coordinates must be zero.
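The bullet points above can be illustrated in a few lines of NumPy (the vector x is an arbitrary example of ours):

```python
import numpy as np

n = 5
I = np.eye(n)
x = np.array([3.0, -1.0, 4.0, 1.0, 5.0])   # an arbitrary example vector

# The i-th row of I_n dotted with x extracts the i-th coordinate of x,
# so the vector of all dot products e_i . x is exactly I_n x = x.
# If all those dot products were zero, x itself would have to be zero.
coords = I @ x
```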
We first show that such a set is linearly independent. Consider a linear combination \[ \alpha_1 \mathbf{v}_1 + \alpha_2 \mathbf{v}_2 + \cdots +\alpha_k \mathbf{v}_k = 0, \] where α₁, α₂, … , αk ∈ ℝ. We will show that all αi = 0.
Take the dot product of both sides with vj, for some fixed j ∈ [1..k] = {1, 2, …, k}: \[ \left( \alpha _1 \mathbf{v}_1 +\dots +\alpha _k \mathbf{v}_k\right) \bullet \mathbf{v}_j = \mathbf{0}\bullet \mathbf{v}_j =0. \] Using linearity of the dot product, this becomes \[ \alpha _1 (\mathbf{v}_1 \bullet \mathbf{v}_j ) + \dots +\alpha _k (\mathbf{v}_k\bullet \mathbf{v}_j ) = 0. \] By orthogonality, vi • vj = 0 whenever i ≠ j, and vj • vj = ∥vj∥² = 1. Hence all terms vanish except the i = j term: \[ \alpha _j(\mathbf{v}_j\bullet \mathbf{v}_j)=\alpha _j\cdot 1=\alpha _j. \] Therefore, \[ \alpha _j = 0. \] Since j was arbitrary, this shows that α₁ = α₂ = ⋯ = αk = 0. Thus, the set { v₁, v₂, … , vk } is linearly independent.
Now recall the fundamental fact from linear algebra: in ℝn, any linearly independent set can have at most n vectors. Equivalently, the maximum size of a linearly independent set in ℝn is n, the dimension of the space.
Since { v₁, v₂, … , vk } is linearly independent in ℝn, we must have \[ k\leq n. \] This completes the proof.
- ∥ui∥ = 1 for i = 1, 2, 3, 4;
- ui • uj = 0 for all i ≠ j.
Orthogonality + unit length gives a key identity. Because the columns are orthonormal, we must have: \[ \mathbf{U}^{\mathrm T} \mathbf{U}= \mathbf{I}_4. \] But here is the contradiction:
- UᵀU is a 4 × 4 matrix.
- The rank of UᵀU equals the rank of U.
- But U is a 3 × 4 matrix, so its rank is at most 3.
Why does this support Theorem 9? The same argument works in general:
- If you had k orthogonal unit vectors in ℝn×1 ≌ ℝn, the matrix U with those vectors as columns would satisfy \[ \mathbf{U}^{\mathrm T}\mathbf{U}=\mathbf{I}_k . \]
- But rank(U) ≤ n, so rank(UᵀU) ≤ n.
- Since rank(Ik) = k, we must have k ≤ n.
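The rank argument is easy to see numerically; in this NumPy sketch the 3 × 4 matrix U is randomly generated for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# A 3 x 4 matrix cannot have 4 orthonormal columns: rank(U^T U) = rank(U) <= 3,
# while the identity I_4 has rank 4.
U = rng.standard_normal((3, 4))
G = U.T @ U                        # the 4 x 4 matrix U^T U
rank_G = np.linalg.matrix_rank(G)  # at most 3, so G can never equal I_4
```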
Theorem 9 (Geometric meaning): In ℝn, you cannot fit more than n mutually perpendicular directions. Each orthogonal unit vector represents a new, independent direction in the space.
Let’s visualize this dimension by dimension.
In ℝ²: Only 2 orthogonal unit vectors are possible. Geometrically:
- The first unit vector picks a direction (say, pointing east).
- The second must be perpendicular to it (pointing north).
- There is no third direction in the plane that is perpendicular to both east and north.
In ℝ³: Only 3 orthogonal unit vectors are possible. Think of the familiar axes:
- x-axis or x₁,
- y-axis or x₂,
- z-axis or x₃.
- Any vector in ℝ³ must have some component along at least one of the three axes. If it had zero component along all three axes, it would be the zero vector.
- But the zero vector is not a unit vector.
Generalizing to ℝn ≌ ℝn×1 ≌ ℝ1×n.
Each orthogonal unit vector you add forces the next one to lie in a smaller and smaller “remaining” space. Here’s the geometric picture:
- The first vector picks a direction in ℝn. The remaining space is an (n-1)-dimensional hyperplane perpendicular to it.
- The second vector must lie in that (n-1)-dimensional hyperplane. The remaining space becomes (n-2)-dimensional.
- The third vector must lie in that (n-2)-dimensional space. The remaining space becomes (n-3)-dimensional.
- After choosing k orthogonal unit vectors, the remaining space has dimension n-k.
- Once k=n, the remaining space has dimension 0.
- There is no room for an (n+1)-st perpendicular direction.
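The shrinking of the remaining space can be observed numerically. In this NumPy sketch (with n = 5 chosen only for illustration), we orthonormalize k random vectors and measure the dimension of the space orthogonal to all of them:

```python
import numpy as np

n = 5
rng = np.random.default_rng(2)

dims = []
for k in range(1, n + 1):
    A = rng.standard_normal((n, k))
    Q, _ = np.linalg.qr(A)    # columns of Q: k orthonormal vectors in R^n
    # The "remaining" space is everything orthogonal to those k vectors;
    # its dimension is n - rank(Q^T) = n - k
    dims.append(n - np.linalg.matrix_rank(Q.T))
```

Each added orthonormal vector removes exactly one dimension, and at k = n nothing remains.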
A simple geometric analogy. Imagine trying to place arrows so that each one points in a direction completely independent of the others:
- In 2D, you can place 2 such arrows.
- In 3D, you can place 3.
- In 4D, you can place 4.
- …
- In n-dimensions, you can place n vectors. After that, there is literally no new direction left that is perpendicular to all the previous ones.
- MatrixRank[U] returns 4.
- MatrixRank[G] returns 4, even though G is 6 × 6.
- The Gram matrix cannot be full rank unless k ≤ n.
Orthogonal unit vectors behave like coordinate axes. Each new orthogonal direction consumes one dimension. After n mutually perpendicular directions, no new direction remains.
We generate k random vectors in ℝn, orthonormalize them, and check the rank.
Let U be the n × k matrix whose columns are the vectors ui. The Gram matrix (Gramian) is \[ \mathbf{G}=\mathbf{U}^{\top }\mathbf{U}. \] If the vectors are orthonormal, then G = Ik. However, \[ \operatorname{rank} (G)=\operatorname{rank} (U)\leq n. \] Since rank(Ik) = k, we must have k ≤ n. Mathematica demonstration:
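The Mathematica demonstration itself is not reproduced in this excerpt; an equivalent NumPy sketch of the same check:

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 6, 4

A = rng.standard_normal((n, k))      # k random vectors in R^n (as columns)
U, _ = np.linalg.qr(A)               # orthonormalize the columns
G = U.T @ U                          # Gram matrix; equals I_k for orthonormal columns
rank_G = np.linalg.matrix_rank(G)    # = k, and necessarily <= n
```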
When the set of vectors is orthogonal but not orthonormal, orthogonality alone does not force unit length. However, the same dimension bound applies: you still cannot exceed n vectors. ■
Proof: ( ⟹ ) If x = 0, then for each j, \[ \mathbf{x} \bullet \mathbf{v}_j = 0 \qquad j = 1, 2, \ldots , m, \] so the forward implication is immediate.
( ⟸ ) Now assume \[ \mathbf{x} \bullet \mathbf{v}_j =0\quad \mathrm{for\ all\ }j=1,\dots ,m. \] We must show x = 0.
Let y ∈ ℝn be arbitrary. Since ℝn = span{v₁, v₂, … , vm}, there exist scalars α₁, α₂, … , αm such that \[ \mathbf{y}=\alpha_1 \mathbf{v}_1 +\alpha_2 \mathbf{v}_2 +\cdots +\alpha_m \mathbf{v}_m. \] Using bilinearity of the dot product, we compute \[ \mathbf{x} \bullet \mathbf{y}=\mathbf{x} \bullet \left( \alpha_1 \mathbf{v}_1 +\cdots +\alpha_m \mathbf{v}_m \right) = \alpha_1 \left( \mathbf{x}\bullet \mathbf{v}_1 \right) +\cdots +\alpha_m \left( \mathbf{x}\bullet \mathbf{v}_m\right) . \] By assumption, each x • vj = 0, so every term on the right-hand side is zero; hence, \[ \mathbf{x} \bullet \mathbf{y} =0 \quad \mathrm{for\ all\ }\mathbf{y}\in \mathbb{R}^{n} . \] In particular, taking y = x, we obtain \[ \mathbf{x} \bullet \mathbf{x} = 0. \] But in ℝn, x • x = ∥x∥², and ∥x∥² = 0 if and only if x = 0. Therefore, x = 0.
Combining both directions, we conclude \[ \mathbf{x}=0\quad \Longleftrightarrow \quad \mathbf{x} \bullet \mathbf{v}_j = 0\mathrm{\ for\ all\ }j=1,\ldots ,m. \]
We illustrate Corollary 1 with three examples:
- A 3‑dimensional example (non‑orthogonal spanning set)
- An example where the spanning set has more than n vectors
- A geometric visualization showing why a vector orthogonal to a spanning set must be zero
1. A 3‑Dimensional Example (Non‑Orthogonal Spanning Set)
Let \[ \mathbf{v}_1 =\left( \begin{matrix}1\\ 0\\ 1\end{matrix}\right) ,\quad \mathbf{v}_2 =\left( \begin{matrix}1\\ 1\\ 0\end{matrix}\right) ,\quad \mathbf{v}_3 =\left( \begin{matrix}0\\ 1\\ 1\end{matrix}\right) . \] These are not orthogonal, but the matrix \[ \left( \begin{matrix}1&1&0\\ 0&1&1\\ 1&0&1\end{matrix}\right) \] has determinant 2 ≠ 0, so they span ℝ³.
Corollary 1 says: If x • vj = 0 for all j, then x = 0.
Mathematica code
{{x1 -> 0, x2 -> 0, x3 -> 0}}
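The Mathematica code is not shown above; here is an equivalent NumPy check that the system x • vj = 0 forces x = 0 for this spanning set:

```python
import numpy as np

# Rows are the spanning vectors v1, v2, v3 from the example
M = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 1]], dtype=float)

# x . v_j = 0 for all j  is the linear system  M x = 0;
# det(M) = 2 != 0, so the only solution is x = 0
det = np.linalg.det(M)
x = np.linalg.solve(M, np.zeros(3))
```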
2. A Spanning Set With More Than n Vectors. Let’s work in ℝ³ again, but now use four vectors: \[ \mathbf{v}_1 =\left( \begin{matrix}1\\ 0\\ 0\end{matrix}\right) ,\quad \mathbf{v}_2 =\left( \begin{matrix}0\\ 1\\ 0\end{matrix}\right) ,\quad \mathbf{v}_3 =\left( \begin{matrix}0\\ 0\\ 1\end{matrix}\right) ,\quad \mathbf{v}_4 =\left( \begin{matrix}1\\ 1\\ 1\end{matrix}\right) . \] These clearly span ℝ³ (the first three already do).
Corollary 1 still applies: If x • vj = 0 for all four vectors, then x = 0.
Mathematica code
{{x1 -> 0, x2 -> 0, x3 -> 0}}
{{0, 0, 0}}
3. Geometric Visualization in ℝ². Let’s visualize the 2D example from earlier: \[ \mathbf{v}_1 =\left( \begin{matrix}1\\ 0\end{matrix}\right) ,\quad \mathbf{v}_2 =\left( \begin{matrix}1\\ 1\end{matrix}\right) . \] We will:
- Plot the spanning vectors.
- Plot the lines x • v₁ = 0 and x • v₂ = 0.
- Show that their intersection is only the origin.

- The blue and green lines are the sets of points orthogonal to v₁ and v₂.
- Their intersection is only the origin.
- This visually confirms Corollary 1.
- Accepts a list of vectors {v₁, … , vm}
- Forms symbolic variables for a vector x ∈ ℝⁿ
- Computes all dot products x • vj
- Solves the system x • vj = 0
- Returns True if the only solution is x = 0, and False otherwise
- Also returns the solution itself for inspection
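The function described above can be sketched in Python with NumPy as a rank-based analogue of the symbolic Solve approach (the function name is ours):

```python
import numpy as np

def only_zero_is_orthogonal(vectors):
    """True iff the only x in R^n with x . v_j = 0 for all j is x = 0.

    This holds exactly when the v_j span R^n, i.e. when the matrix
    whose rows are the v_j has rank n.
    """
    M = np.array(vectors, dtype=float)
    n = M.shape[1]
    return bool(np.linalg.matrix_rank(M) == n)

# The 3D spanning set from the example above
print(only_zero_is_orthogonal([[1, 0, 1], [1, 1, 0], [0, 1, 1]]))  # True
# Two vectors cannot span R^3, so nonzero orthogonal x exist
print(only_zero_is_orthogonal([[1, 0, 0], [0, 1, 0]]))             # False
```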
Assuming that the statement is true for sums of up to n − 1 orthogonal vectors, we consider the general case. Denote the sum of the first n − 1 orthogonal vectors by v = u₁ + u₂ + ⋯ + uₙ₋₁. Then the sum of n vectors is written as a sum of two vectors: \[ \mathbf{u}_1 + \cdots + \mathbf{u}_{n-1} + \mathbf{u}_n = \mathbf{v} + \mathbf{u}_n . \] Since each uᵢ with i < n is orthogonal to uₙ, so is their sum v, and the Pythagorean Theorem, already established for two terms, applies.
- u₁ • u₂ = 0;
- u₁ • u₃ = 0;
- u₂ • u₃ = 0.
Step 2: Compute the norm of the sum. First compute the sum: \[ \mathbf{u}_1 +\mathbf{u}_2 +\mathbf{u}_3 =\left( \begin{matrix}3\\ 4\\ 12\end{matrix}\right) . \] Now compute its squared norm: \[ \| \mathbf{u}_1 +\mathbf{u}_2 +\mathbf{u}_3 \| ^2 = 3^2+4^2+12^2=9+16+144 = 169. \]
Step 3: Compute the sum of the individual squared norms \[ \| \mathbf{u}_1\| ^2 =3^2 =9,\qquad \| \mathbf{u}_2\| ^2 =4^2 =16,\qquad \| \mathbf{u}_3\| ^2 =12^2 =144. \] Add them: \[ \| \mathbf{u}_1\| ^2 +\| \mathbf{u}_2\| ^2 +\| \mathbf{u}_3\| ^2 =9+16+144=169. \]
Step 4: Compare both sides. \[ \| \mathbf{u}_1 +\mathbf{u}_2 +\mathbf{u}_3\| ^2 =169\quad \mathrm{and}\quad \| \mathbf{u}_1\|^2 +\| \mathbf{u}_2\|^2 +\| \mathbf{u}_3\|^2 =169. \] They match exactly, confirming the theorem.
Why does this work? Because the vectors are orthogonal, all cross‑terms vanish when expanding the squared norm: \begin{align*} &\| \mathbf{u}_1 +\mathbf{u}_2 +\mathbf{u}_3\|^2 = \left( \mathbf{u}_1 +\mathbf{u}_2 +\mathbf{u}_3\right) \bullet \left( \mathbf{u}_1 +\mathbf{u}_2 +\mathbf{u}_3 \right) \\ &=\| \mathbf{u}_1\|^2 +\| \mathbf{u}_2\|^2 +\| \mathbf{u}_3\|^2\quad \mathrm{(all\ dot\ products\ }\mathbf{u}_i \bullet \mathbf{u}_j =0\mathrm{\ for\ }i\neq j). \end{align*} This is the higher‑dimensional version of the familiar Pythagorean theorem.
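The expansion above is easy to confirm numerically:

```python
import numpy as np

u1 = np.array([3.0, 0.0, 0.0])
u2 = np.array([0.0, 4.0, 0.0])
u3 = np.array([0.0, 0.0, 12.0])

lhs = np.linalg.norm(u1 + u2 + u3) ** 2                    # ||u1+u2+u3||^2 = 169
rhs = sum(np.linalg.norm(u) ** 2 for u in (u1, u2, u3))    # 9 + 16 + 144 = 169
```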
Geometrically:
- u₁ lies along the x‑axis
- u₂ lies along the y‑axis
- u₃ lies along the z‑axis.
Step 1: Visualize the sum as a diagonal. If you place these vectors tail‑to‑tail, they form three perpendicular edges of a rectangular box (a rectangular parallelepiped).
Now imagine walking: 3 units along the x‑axis, then 4 units along the y‑axis, then 12 units along the z‑axis. The resulting displacement vector is \[ \mathbf{u}_1 +\mathbf{u}_2 +\mathbf{u}_3 =\left( \begin{matrix}3\\ 4\\ 12\end{matrix}\right) . \] This is the space diagonal of the rectangular box.
Step 2: Apply the Pythagorean theorem twice. Because the edges are perpendicular, the diagonal length can be computed using Pythagoras:
First in the base (x–y plane): \[ \mathrm{base\ diagonal}^{2} = 3^2+4^2=25. \] Then in 3D: \[ \| \mathbf{u}_1 +\mathbf{u}_2 +\mathbf{u}_3\|^2=(\mathrm{base\ diagonal})^2+12^2=25+144=169. \]
Step 3: Compare with the sum of squared lengths \[ \| \mathbf{u}_1\|^2 =9,\quad \| \mathbf{u}_2\|^2 =16,\quad \| \mathbf{u}_3\|^2 =144. \] Add them: \[ 9+16+144=169. \] Exactly the same.
Geometric Insight: Each vector contributes a “leg” of a right angle:
u₁ is perpendicular to u₂, and both are perpendicular to u₃.
So the sum vector is the hypotenuse of a right triangle in 3D (or more precisely, the diagonal of a rectangular box). Because all angles between the vectors are right angles, no cross‑terms appear—only the squared lengths matter. This is the geometric heart of Theorem 10. ■
The following result provides an explicit formula for the coefficients of a linear combination with respect to an orthogonal set of vectors.
Step 1: Compute the coefficients \( \displaystyle \quad \frac{\mathbf{y} \bullet \mathbf{v}_i}{\| \mathbf{v}_i\|^2} . \quad \) For v₁: \[ \mathbf{y} \bullet \mathbf{v}_1 = 6\cdot 3=18,\qquad \| \mathbf{v}_1\|^2=3^2 =9, \] and \[ \frac{\mathbf{y} \bullet \mathbf{v}_1}{\| \mathbf{v}_1\|^2} =\frac{18}{9} =2. \] For v₂: \[ \mathbf{y}\bullet \mathbf{v}_2 =8\cdot 4=32,\qquad \| \mathbf{v}_2\|^2 =4^2 =16 , \] and \[ \frac{\mathbf{y}\bullet \mathbf{v}_2}{\| \mathbf{v}_2\|^2} =\frac{32}{16}=2. \] For v₃: \[ \mathbf{y} \bullet \mathbf{v}_3 =24\cdot 12 =288,\qquad \| \mathbf{v}_3\|^2 =12^2 =144 , \] and \[ \frac{\mathbf{y} \bullet \mathbf{v}_3}{\| \mathbf{v}_3\|^2} =\frac{288}{144}=2. \]
Step 2: Reconstruct y using the theorem: \[ \mathbf{y} =2\mathbf{v}_1 +2\mathbf{v}_2 +2\mathbf{v}_3 = 2\left( \begin{matrix}3\\ 0\\ 0\end{matrix}\right) +2\left( \begin{matrix}0\\ 4\\ 0\end{matrix}\right) +2\left( \begin{matrix}0\\ 0\\ 12\end{matrix}\right) . \] Compute the sum: \[ \mathbf{y} =\left( \begin{matrix}6\\ 8\\ 24\end{matrix}\right) , \] which matches the original vector exactly.
Why does this example support the theorem?
- The vectors v₁, v₂, v₃ form an orthogonal basis for their span.
- The theorem says that any vector in that span can be decomposed into components along each vi.
- The coefficient of each vi is exactly the projection of y onto vi.
- Because the set is orthogonal, these projections add cleanly with no cross‑terms.
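The coefficient formula can be verified in a few lines (a NumPy sketch of Steps 1–2):

```python
import numpy as np

v = [np.array([3.0, 0.0, 0.0]),
     np.array([0.0, 4.0, 0.0]),
     np.array([0.0, 0.0, 12.0])]
y = np.array([6.0, 8.0, 24.0])

# c_i = (y . v_i) / ||v_i||^2 ; each coefficient equals 2 here
coeffs = [float(y @ vi) / float(vi @ vi) for vi in v]

# Reconstruct y from its orthogonal expansion
y_rebuilt = sum(c * vi for c, vi in zip(coeffs, v))
```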
Normal vectors and equations of a plane
This section uses the dot product to find equations of a plane in 3D and then in the general case. The key is to describe the points in the plane as exactly those at right angles to a certain direction. This direction is perpendicular to the required plane and is called a normal. Let’s start with an example of the idea in 2D. Recall that orthogonal vectors have dot product equal to zero. Thus, the position vector x of every point on the line satisfies the dot product x • (2, −1) = 0. For x = (x, y), we get 2x − y = 0, so the equation of the line is 2x − y = 0.

Another problem: Now we consider a similar problem but asking to find the equation of the line that passes through the point (0, 0.7) instead of the origin. Then it has the displacement vector x − (0, 0.7) that must be orthogonal to (2, −1), as illustrated in the figure below. That is, the equation of the line is (x, y − 0.7) • (2, −1) = 0. Evaluating the dot product gives 0.7 + 2x − y = 0.

Now we are going to find the equation of the plane in ℝ³ that passes through a given point P and is perpendicular to a given vector n, called a normal vector. Suppose that x = (x, y, z) is the position vector of a point X in the plane, and that the point P(x₀, y₀, z₀) and the normal vector n = (a, b, c) are known. Identifying the vector x with the point X on the plane, we claim that \( \displaystyle \quad \vec{PX} \quad \) is perpendicular to the normal vector. Taking the dot product, we obtain the required equation of the plane: \[ \mathbf{n} \bullet \vec{PX} = a\left( x - x_0 \right) + b\left( y - y_0 \right) + c\left( z - z_0 \right) = 0 . \]
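As a numerical sketch of the point-and-normal construction (the point P and normal n below are hypothetical, chosen only for illustration):

```python
import numpy as np

P = np.array([1.0, -2.0, 3.0])   # hypothetical point on the plane
n = np.array([2.0, 1.0, -1.0])   # hypothetical normal vector

a, b, c = n
d = n @ P                        # so the plane is  a x + b y + c z = d

# Any point X with n . (X - P) = 0 lies in the plane; for example
# Q = P + (1, 0, 2), since (1, 0, 2) . n = 2 + 0 - 2 = 0.
Q = P + np.array([1.0, 0.0, 2.0])
```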

2. Mathematica Code: Define the plane
3. Normal vectors in ℝ4. A normal vector n satisfies \[ {\bf n} \bullet \mathbf{v}_1 = 0,\qquad {\bf n} \bullet \mathbf{v}_2 =0. \] Solve for all normals:
4. Projection of a point in ℝ4 onto the plane. Let x ∈ ℝ4 ≌ ℝ4×1. The projection onto the plane spanned by v₁, v₂ is \[ \mathrm{proj}_{\mathrm P}(\mathbf{x}) = \mathbf{V}\left( \mathbf{V}^{\mathrm T}\mathbf{V}\right)^{-1}\mathbf{V}^{\mathrm T}\mathbf{x}, \] where V = [v₁, v₂]. Mathematica code:
{0.456588, 0.8576, 0.401013, 0.}
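The projection formula V(VᵀV)⁻¹Vᵀx can be sketched in NumPy; the vectors v₁, v₂ and the point x below are stand-ins, since the original data are not reproduced here (so the output differs from the one shown above):

```python
import numpy as np

# Stand-in data: v1, v2 span a 2-plane in R^4; x is the point to project
v1 = np.array([1.0, 0.0, 1.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0, 1.0])
x  = np.array([1.0, 2.0, 3.0, 4.0])

V = np.column_stack([v1, v2])                   # the 4 x 2 matrix [v1, v2]
proj = V @ np.linalg.inv(V.T @ V) @ V.T @ x     # proj_P(x)

residual = x - proj                             # orthogonal to the plane
```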
5. 3D Visualization of a 4D Plane (Shadow Projection).
We cannot visualize a 4D plane directly, but we can project it into 3D by dropping the 4th coordinate. Define a 3D shadow:

6. Interactive Manipulation. You can move a point in ℝ4 and see its projection onto the plane (in 3D shadow form):
7. A 4D rotation mapping this plane to a coordinate plane. Objective: map span{v₁, v₂} to span{e₁, e₂}.
- Build an orthonormal basis of ℝ4 whose first two vectors span the plane.
- Use it as a change‑of‑basis matrix.
8. Package‑style functions for arbitrary planes in ℝⁿ. Below is a compact “toolkit” you can drop into a notebook. It constructs an orthogonal matrix R that rotates an arbitrary basis of the plane into the standard basis via the following steps:
- Compute an orthonormal basis of the plane.
- Extend it to an orthonormal basis of ℝn without disturbing the first block.
- Form the orthogonal matrix whose rows are this basis.
{0.707107, 1.22474, 0., 1., 1., 0.}
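The toolkit itself is given in Mathematica in the original; here is a compact NumPy analogue of the three steps (the function name is ours):

```python
import numpy as np

def plane_to_coordinate_rotation(basis):
    """Build an orthogonal matrix R whose first rows form an orthonormal
    basis of span(basis); R maps that plane onto span(e_1, ..., e_k)."""
    B = np.array(basis, dtype=float).T          # n x k, columns span the plane
    n = B.shape[0]
    # QR of [B | I_n]: the first k columns of Q span the plane, and the
    # remaining columns extend them to an orthonormal basis of R^n
    Q, _ = np.linalg.qr(np.hstack([B, np.eye(n)]))
    return Q[:, :n].T                           # rows = the orthonormal basis

R = plane_to_coordinate_rotation([[1, 1, 0, 0], [0, 0, 1, 1]])
image = R @ np.array([1.0, 1.0, 0.0, 0.0])      # lands on the e1-axis (up to sign)
```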
Orthogonal Complement
- The orthogonal complement, W⊥, is a subspace of V.
- W ∩ W⊥ = {0}
- If U ⊂ W, then W⊥ ⊂ U⊥.
- If V is finite dimensional, then dim(W) + dim(W⊥) = dim(V), and V = W ⊕ W⊥.
- If V is finite dimensional, then W⊥⊥ = W.
To show that W⊥ is a subspace, we must check the following three properties:
- Non-emptiness (contains the zero vector). For any w ∈ W, \[ \langle \mathbf{0}, \mathbf{w}\rangle = \mathbf{0} \bullet \mathbf{w} = 0, \] so 0 ∈ W⊥.
- Closed under addition. Let u, v ∈ W⊥. Then for any w ∈ W, \[ \left( \mathbf{u} + \mathbf{v} \right) \bullet \mathbf{w} = \mathbf{u} \bullet \mathbf{w} + \mathbf{v} \bullet \mathbf{w} . \] Since u, v ∈ W⊥, we have u • w = 0 and v • w = 0. Hence, \[ \left( \mathbf{u} + \mathbf{v} \right) \bullet \mathbf{w} =0+0=0 \quad \forall \mathbf{w}\in W, \] so u + v ∈ W⊥.
- Closed under scalar multiplication. Let v ∈ W⊥ and α ∈ ℝ. For any w ∈ W, \[ (\alpha \mathbf{v} ) \bullet \mathbf{w} =\alpha \, \mathbf{v} \bullet \mathbf{w} . \] Since v ∈ W⊥, v • w = 0, so \[ \left( \alpha \mathbf{v} \right) \bullet \mathbf{w} = \alpha \left( \mathbf{v} \bullet \mathbf{w} \right) = \alpha \cdot 0=0 \] for all w ∈ W. Thus, α v ∈ W⊥. Having verified these three properties, we conclude that W⊥ is a subspace of V.
- Note that if x ∈ W ∩ W⊥, then x • x = 0 and hence x = 0.
- Assume U is a subset of W. Let v ∈ W⊥. By definition, this means \[ \mathbf{v} \bullet \mathbf{w} = 0\quad \mathrm{for\ all\ }\mathbf{w}\in W. \] Since U ⊆ W, every u ∈ U is also an element of W. Therefore, for any u ∈ U, \[ \mathbf{v} \bullet \mathbf{u} =0, \] because u is a particular case of w ∈ W. Thus, v is orthogonal to every vector in U, so v ∈ U⊥. Since v ∈ W⊥ was arbitrary, we have shown \[ W^{\perp }\subset U^{\perp }. \]
- Now suppose that V is n-dimensional and W is m-dimensional. Let {e₁, e₂, … , em} be a basis of W. Then x ∈ W⊥ if and only if x • ej = 0 for j = 1, 2, … , m. Let A be the n-by-m matrix with columns e₁, e₂, … , em. We know there is a positive definite Hermitian matrix P such that x • y = y✶Px, so x • ej = ej✶Px. Hence, x ∈ W⊥ if and only if A✶P x = 0, so W⊥ is the null space of A✶P. Since A has rank m and P is nonsingular, the m × n matrix A✶P also has rank m, and thus the null space of A✶P has dimension n − m. This, combined with (2), establishes (4).
- First note that W ⊂ W⊥⊥. Then from (4), we have dim(W⊥⊥) = n − dim(W⊥) = n − (n − m) = m = dim(W), so W = W⊥⊥.
- W and W⊥ are subspaces of ℝ⁴ because any linear combination of two vectors from either W or W⊥ is an element of W or W⊥, respectively.
If x, y ∈ W⊥, then for all w ∈ W: \[ \langle \mathbf{x} +\mathbf{y}, \mathbf{w}\rangle =\langle \mathbf{x}, \mathbf{w} \rangle +\langle \mathbf{y}, \mathbf{w}\rangle = 0, \] and \[ \left( \alpha \mathbf{x} \right) \bullet \mathbf{w} = \alpha \left( \mathbf{x} \bullet \mathbf{w} \right) = 0. \] Thus, W⊥ is closed under addition and scalar multiplication. Mathematica verification:
x = WperpBasis[[1]];
y = WperpBasis[[2]];
(* Check orthogonality to W *)
{x.v1, x.v2, y.v1, y.v2}
(* Check closure: x+y and 3x are still orthogonal to W *)
{(x+y).v1, (x+y).v2, (3 x).v1, (3 x).v2}
All results return 0, confirming subspace closure.
- W ∩ W⊥ = {0}
(* Solve x ∈ W and x ∈ W⊥ *)
sol = Solve[{a v1 + b v2 == c WperpBasis[[1]] + d WperpBasis[[2]]}, {a, b, c, d}]
(* Extract the vector *)
x = a v1 + b v2 /. sol[[1]]
(* {0, 0, 0, 0} *)
- If U ⊂ W, then W⊥ ⊂ U⊥.
Let’s choose: \[ U=\mathrm{span}\{ \mathbf{v}_1\} \subset W. \] If x ∈ W⊥, then x is orthogonal to all of W, hence to all of U ⊆ W.
u1 = v1;
(* Check that every basis vector of W⊥ is orthogonal to u1 *)
WperpBasis.u1
(* {0, 0} *)
- If V is finite dimensional, then dim(W) + dim(W⊥) = dim(V), and V = W ⊕ W⊥.
In finite dimensions: \[ \dim (W)+\dim (W^{\perp })=\dim (V) \] and \[ V=W\oplus W^{\perp }. \]
{Length[Wbasis], Length[WperpBasis], 4}
(* {2, 2, 4} *)
Now check the direct sum:
(* Check that Wbasis ∪ WperpBasis is an orthonormal basis of R^4 *)
Orthogonalize[Join[Wbasis, WperpBasis]] // MatrixForm
You get the identity matrix (up to signs), confirming a full orthonormal basis.
- If V is finite dimensional, then W⊥⊥ = W.
WperpperpBasis = Orthogonalize[NullSpace[WperpBasis]];
(* Compare spans *)
RowReduce[Join[Wbasis, -WperpperpBasis]]
The row reduction leaves only two independent rows, and the remaining rows reduce to zero, meaning the two sets of vectors span the same subspace.
- Which pair of the following three vectors are orthogonal to each other? x = 3i − 2j + k, y = −2i + 4j + 5k, z = i + 3j − 2k. \[ (a) \ \mathbf{x}, \mathbf{y} , \quad (b)\ \mathbf{y}, \mathbf{z} , \quad (c)\ \mbox{ no pair}. \]
- Find the scalar λ such that the vectors a = 2i + 3j + 4k and b = 5i + λj − 3k are at right angles.
- Prove Corollary 1.
