Inner Product

In this section, we consider scalars from either the field of real numbers ℝ or the field of complex numbers ℂ, setting the rational numbers ℚ aside. Roughly speaking, an inner product is a positive-definite sesquilinear (bilinear, in the real case) mapping into the field of real or complex numbers.
An inner product of two vectors of the same size, usually denoted by \( \left\langle {\bf x} , {\bf y} \right\rangle ,\) is a generalization of the dot product if it satisfies the following postulates:
  • \( \left\langle {\bf v}+{\bf u} , {\bf w} \right\rangle = \left\langle {\bf v} , {\bf w} \right\rangle + \left\langle {\bf u} , {\bf w} \right\rangle . \)
  • \( \left\langle {\bf v} , \alpha {\bf u} \right\rangle = \alpha \left\langle {\bf v} , {\bf u} \right\rangle \) for any scalar α.
  • \( \left\langle {\bf v} , {\bf u} \right\rangle = \overline{\left\langle {\bf u} , {\bf v} \right\rangle} = \left\langle {\bf u} , {\bf v} \right\rangle^{\ast} , \) where the overline or asterisk denotes the complex conjugate.
  • \( \left\langle {\bf v} , {\bf v} \right\rangle \ge 0 , \) with equality if and only if \( {\bf v} = {\bf 0} . \)

The fourth condition in the list above is known as the positive-definite condition.

Example 1: We consider ℂ² with the inner product
\[ \langle {\bf u}, {\bf v} \rangle = \langle (u_1 , u_2 ), (v_1 , v_2 ) \rangle = \overline{u_1} v_1 + 4 \overline{u_2} v_2 . \]
Then
\[ \langle {\bf u}, {\bf u} \rangle = \left\vert u_1 \right\vert^2 + 4 \left\vert u_2 \right\vert^2 . \]
For example
\[ \langle (1, 2{\bf j}), ( 1- {\bf j} , 2 -3{\bf j}) \rangle = 1- {\bf j} + 4 \left( -2{\bf j} \right) \left( 2 -3{\bf j} \right) = -23 - 17{\bf j} . \]
1 - I + 4*(-2*I)*(2 - 3*I)
-23 - 17 I
Also
\[ \| (1, 2{\bf j}) \|^2 = \langle (1, 2{\bf j}), (1, 2{\bf j}) \rangle = 1^2 + 4 \left( -2{\bf j} \right) \left( 2{\bf j} \right) = 17 , \]
1 + 4*(-2*I)*(2*I)
17
and
\[ \| ( 1- {\bf j} , 2 -3{\bf j}) \|^2 = \langle ( 1- {\bf j} , 2 -3{\bf j}), (1- {\bf j} , 2 - 3{\bf j}) \rangle = 1^2 + 1^2 + 4 \left( 4 + 9 \right) = 54 . \]
(1 + I)*(1 - I) + 4*(2 + 3*I)*(2 - 3*I)
54
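The computations of this example can be double-checked with a short script. The following Python sketch (illustrative, not part of the tutorial's notebook code; the function name `ip` is ours) implements the same inner product using Python's built-in complex numbers, where the imaginary unit j is written `1j`.

```python
# Illustrative check of the Example 1 inner product on C^2:
#   <u, v> = conj(u1)*v1 + 4*conj(u2)*v2

def ip(u, v):
    """Inner product on C^2 with weight 4 on the second coordinate."""
    return u[0].conjugate() * v[0] + 4 * u[1].conjugate() * v[1]

u = (1 + 0j, 2j)
v = (1 - 1j, 2 - 3j)

print(ip(u, v))        # (-23-17j)
print(ip(u, u).real)   # ||u||^2 = 1 + 4*|2j|^2 = 17.0
print(ip(v, v).real)   # ||v||^2 = |1-j|^2 + 4*|2-3j|^2 = 54.0
```

Note that `ip(u, u)` is always real and nonnegative, as the fourth axiom requires.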
A vector space together with an inner product is called an inner product space. Every inner product space is a normed space, and hence a metric space with distance \( d({\bf u}, {\bf v}) = \| {\bf u} - {\bf v} \| . \) The norm is given by

\[ \| {\bf u} \| = \sqrt{\left\langle {\bf u} , {\bf u} \right\rangle} . \]
The nonzero vectors u and v of the same size are orthogonal (or perpendicular) when their inner product is zero: \( \left\langle {\bf u} , {\bf v} \right\rangle = 0 . \) We abbreviate it as \( {\bf u} \perp {\bf v} . \)

A generalized length function on a vector space can be imposed in many different ways, not necessarily through an inner product. What is important is that this generalized length, called a norm in mathematics, satisfies the following four axioms.

A norm on a vector space V is a nonnegative function \( \| \, \cdot \, \| \, : \, V \to [0, \infty ) \) that satisfies the following axioms for any vectors \( {\bf u}, {\bf v} \in V \) and arbitrary scalar k.
  1. \( \| {\bf u} \| \) is real and nonnegative;
  2. \( \| {\bf u} \| =0 \) if and only if u = 0;
  3. \( \| k\,{\bf u} \| = |k| \, \| {\bf u} \| ;\)
  4. \( \| {\bf u} + {\bf v} \| \le \| {\bf u} \| + \| {\bf v} \| . \)
If A is an n × n positive definite matrix and u and v are n-vectors, then we can define the weighted Euclidean inner product
\[ \left\langle {\bf u} , {\bf v} \right\rangle = {\bf A} {\bf u} \cdot {\bf v} = {\bf u} \cdot {\bf A}^{\ast} {\bf v} \qquad\mbox{and} \qquad {\bf u} \cdot {\bf A} {\bf v} = {\bf A}^{\ast} {\bf u} \cdot {\bf v} . \]
In particular, if w1, w2, ... , wn are positive real numbers, which are called weights, and if u = ( u1, u2, ... , un) and v = ( v1, v2, ... , vn) are vectors in \( \mathbb{R}^n , \) then the formula
\[ \left\langle {\bf u} , {\bf v} \right\rangle = w_1 u_1 v_1 + w_2 u_2 v_2 + \cdots + w_n u_n v_n \]
defines an inner product on \( \mathbb{R}^n , \) that is called the weighted Euclidean inner product with weights w1, w2, ... , wn.
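As a quick illustration (the function name `weighted_ip` and the sample vectors below are ours, chosen only for this sketch), the weighted Euclidean inner product can be computed as:

```python
# Sketch of the weighted Euclidean inner product on R^n:
#   <u, v> = w1*u1*v1 + w2*u2*v2 + ... + wn*un*vn

def weighted_ip(u, v, w):
    """Weighted Euclidean inner product; all weights must be positive."""
    assert all(wk > 0 for wk in w), "weights must be positive"
    return sum(wk * uk * vk for wk, uk, vk in zip(w, u, v))

u = [1.0, 2.0, 3.0]
v = [4.0, -1.0, 2.0]
w = [2.0, 1.0, 0.5]            # positive weights

print(weighted_ip(u, v, w))    # 2*4 + 1*(-2) + 0.5*6 = 9.0
```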
Example 2: The Euclidean inner product and the weighted Euclidean inner product (when \( \left\langle {\bf u} , {\bf v} \right\rangle = \sum_{k=1}^n a_k u_k v_k \) for some positive numbers \( a_k , \ k=1,2,\ldots , n \)) are special cases of a general class of inner products on \( \mathbb{R}^n \) called matrix inner products. Let A be an invertible n-by-n matrix. Then the formula
\[ \left\langle {\bf u} , {\bf v} \right\rangle = {\bf A} {\bf u} \cdot {\bf A} {\bf v} = {\bf v}^{\mathrm T} {\bf A}^{\mathrm T} {\bf A} {\bf u} \]
defines an inner product generated by A.
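A minimal sketch of this construction (plain Python lists; the helper names `matvec`, `dot`, and `matrix_ip` are ours) is:

```python
# Matrix inner product <u, v> = Au . Av generated by an invertible matrix A.

def matvec(A, x):
    """Matrix-vector product A x."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def matrix_ip(A, u, v):
    """<u, v> = (Au) . (Av) = v^T A^T A u."""
    return dot(matvec(A, u), matvec(A, v))

A = [[1, 2],
     [0, 1]]                 # invertible: det A = 1
u, v = [1, 0], [0, 1]

print(matrix_ip(A, u, v))    # Au = (1, 0), Av = (2, 1)  =>  2
print(matrix_ip(A, u, u))    # ||Au||^2 = 1 > 0, as positivity requires
```

Invertibility of A is what guarantees positive-definiteness: if A u = 0 forced u = 0, then ⟨u, u⟩ = ‖Au‖² vanishes only at the zero vector.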
Example 3: In the set of integrable functions on an interval [a,b], we can define the inner product of two functions f and g as
\[ \left\langle f , g \right\rangle = \int_a^b \overline{f} (x)\, g(x) \, {\text d}x \qquad\mbox{or} \qquad \left\langle f , g \right\rangle = \int_a^b f(x)\,\overline{g} (x) \, {\text d}x . \]
Then the norm \( \| f \| \) (also called the 2-norm) becomes the square root of
\[ \| f \|^2 = \left\langle f , f \right\rangle = \int_a^b \left\vert f(x) \right\vert^2 \, {\text d}x . \]
In particular, the 2-norm of the function \( f(x) = 5x^2 +2x -1 \) on the interval [0,1] is
\[ \| 5 x^2 +2x -1 \| = \sqrt{\int_0^1 \left( 5x^2 +2x -1 \right)^2 {\text d}x } = \sqrt{7} . \]
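This value can be confirmed numerically; the following illustrative Python sketch (our own, using composite Simpson's rule) integrates f(x)² over [0, 1] and compares the square root with √7.

```python
# Numerical check of the 2-norm of f(x) = 5x^2 + 2x - 1 on [0, 1].

import math

def simpson(g, a, b, n=1000):
    """Composite Simpson's rule; n must be even."""
    h = (b - a) / n
    s = g(a) + g(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * g(a + k * h)
    return s * h / 3

f = lambda x: 5 * x**2 + 2 * x - 1
norm = math.sqrt(simpson(lambda x: f(x) ** 2, 0.0, 1.0))
print(norm, math.sqrt(7))   # both approximately 2.6457513
```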

Example 4: Consider a set of polynomials of degree up to n, denoted by ℘[x]. If
\[ {\bf p} = p(x) = p_0 + p_1 x + p_2 x^2 + \cdots + p_n x^n \quad\mbox{and} \quad {\bf q} = q(x) = q_0 + q_1 x + q_2 x^2 + \cdots + q_n x^n \]
are two polynomials, and if \( x_0 , x_1 , \ldots , x_n \) are distinct real numbers (called sample points), then the formula
\[ \left\langle {\bf p} , {\bf q} \right\rangle = p(x_0 ) q(x_0 ) + p (x_1 )q(x_1 ) + \cdots + p(x_n ) q(x_n ) \]
defines an inner product, which is called the evaluation inner product at \( x_0 , x_1 , \ldots , x_n . \)
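A short sketch of the evaluation inner product (the names `poly_eval` and `eval_ip` and the sample points are ours, chosen for illustration):

```python
# Evaluation inner product on polynomials of degree <= n:
#   <p, q> = sum_k p(x_k) q(x_k)  at distinct points x_0, ..., x_n.

def poly_eval(coeffs, x):
    """Evaluate c0 + c1*x + c2*x^2 + ... by Horner's rule."""
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

def eval_ip(p, q, points):
    return sum(poly_eval(p, x) * poly_eval(q, x) for x in points)

p = [-1, 2, 5]             # 5x^2 + 2x - 1  (coefficients, low degree first)
q = [1, 0, 1]              # x^2 + 1
pts = [-1.0, 0.0, 1.0]     # three distinct sample points (illustrative)

print(eval_ip(p, q, pts))  # 2*2 + (-1)*1 + 6*2 = 15.0
```

Distinctness of the n + 1 sample points is essential: a nonzero polynomial of degree at most n cannot vanish at all of them, which makes ⟨p, p⟩ strictly positive for p ≠ 0.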

With the dot product, we can assign a length to a vector, which is also called the Euclidean norm or 2-norm:

\[ \| {\bf x} \|_2 = \| {\bf x} \| = \sqrt{ {\bf x}\cdot {\bf x}} = \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2} . \]
This norm can be generalized to arbitrary real \( p \ge 1 \): \[ \| {\bf x} \|_p = \left( |x_1|^p + |x_2|^p + \cdots + |x_n|^p \right)^{1/p} . \]

Example 5: Taking a vector \( {\bf v} = \left( 2, {\bf j} , -2 \right) , \) we calculate norms:
Norm[{2, \[ImaginaryJ], -2}]
Norm[{2, \[ImaginaryJ], -2}, 3/2]
Out[1]= 3
Out[2]= (1 + 4 Sqrt[2])^(2/3)
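The same values can be reproduced outside Mathematica; the Python sketch below (the name `p_norm` is ours) computes the p-norm directly from its definition, using `abs` for the complex modulus.

```python
# p-norm  ||x||_p = (|x1|^p + ... + |xn|^p)^(1/p),  defined for real p >= 1;
# p = 2 recovers the Euclidean norm.

def p_norm(x, p):
    assert p >= 1, "the p-norm requires p >= 1"
    return sum(abs(xk) ** p for xk in x) ** (1.0 / p)

v = [2, 1j, -2]            # the vector (2, j, -2) from Example 5
print(p_norm(v, 2))        # 3.0
print(p_norm(v, 1.5))      # (1 + 4*sqrt(2))**(2/3), about 3.54
```

For 0 < p < 1 the formula still makes sense but fails the triangle inequality, which is why the assertion restricts p to [1, ∞).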
In linear algebra, functional analysis, and related areas of mathematics, a norm is a function that assigns a strictly positive length or size to each vector in a vector space—save for the zero vector, which is assigned a length of zero. On an n-dimensional complex space \( \mathbb{C}^n ,\) the most common norm is
\[ \| {\bf z} \| = \sqrt{ {\bf z}\cdot {\bf z}} = \sqrt{\overline{z_1} \,z_1 + \overline{z_2}\,z_2 + \cdots + \overline{z_n}\,z_n} = \sqrt{|z_1|^2 + |z_2 |^2 + \cdots + |z_n |^2} . \]
A unit vector u is a vector whose length equals one: \( {\bf u} \cdot {\bf u} =1 . \) We say that two vectors x and y are perpendicular if their dot product is zero. Many other norms are known.
Riesz representation theorem: Let H be a finite-dimensional inner product space. For every linear functional L on H there exists a unique vector \( {\bf y} \in H \) such that
\[ L({\bf v}) = \,< {\bf y} \,|\,{\bf v} > . \]
Let us fix an orthonormal basis β = { e1, e2, … , en } in H, and let
\[ \left[ L \right] = \left[ {\bf L}_1 , {\bf L}_2 , \ldots , {\bf L}_n \right] \]
be the (row) matrix of L in this basis, with scalar entries \( {\bf L}_k = L( {\bf e}_k ) . \) Define the vector y by
\[ {\bf y} = \sum_k \overline{\bf L}_k {\bf e}_k = \sum_k {\bf L}_k^{\ast} {\bf e}_k , \]
where the overline or asterisk \( \displaystyle \overline{\bf L}_k = {\bf L}_k^{\ast} \) denotes the complex conjugate of Lk. In the case of a real space, conjugation does nothing and can simply be ignored.

An arbitrary vector from H can be uniquely expanded as

\[ {\bf v} = \sum_k a_k {\bf e}_k \qquad \iff \qquad \vert\,{\bf v} >\, = \left[ a_1 , a_2 , \ldots , a_n \right]^{\mathrm T} . \]
Then
\[ L({\bf v}) = \left[ L \right]\,\vert\,{\bf v} >\, = \sum_k a_k {\bf L}_k . \]
On the other hand,
\[ <\,{\bf y}\,\vert\,{\bf v} >\, = \sum_k {\bf L}_k^{\ast\ast} a_k = \sum_k a_k {\bf L}_k = L({\bf v}) . \]
To show that the vector y is unique, we apply the functional to the elements of the orthonormal basis instead of v:
\[ L({\bf e}_k) = {\bf L}_k = <\,{\bf y}\,\vert\,{\bf e}_k > . \]
Then, using the formula for the decomposition in the orthonormal basis, we get
\[ {\bf y} = \sum_k <\,{\bf y}\,|\,{\bf e}_k >\,{\bf e}_k = \sum_k {\bf L}_k^{\ast} {\bf e}_k , \]
which concludes the proof.
This statement was proved by the Hungarian mathematician Frigyes Riesz (1880--1956) in 1909.

While the proof of the Riesz representation theorem does not require a basis, the proof presented above utilizes an orthonormal basis in H. Although the resulting vector y does not depend on the choice of the basis, this proof gives a formula for computing the representing vector.
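The construction can be illustrated concretely in ℂⁿ; in the sketch below (our own, with a hypothetical functional matrix `L`), the representing vector is obtained by conjugating the entries of the functional's row matrix, exactly as in the proof.

```python
# Finite-dimensional illustration of the Riesz construction: in C^n with
# <y, v> = sum_k conj(y_k) v_k, a functional with row matrix [L_1, ..., L_n]
# is represented by the vector y with y_k = conj(L_k).

L = [2 + 1j, -3j]                     # hypothetical functional matrix

def functional(v):
    """L(v) = L_1*v_1 + L_2*v_2 (action of the row matrix [L] on |v>)."""
    return sum(Lk * vk for Lk, vk in zip(L, v))

def ip(y, v):
    """Standard inner product on C^n, conjugate-linear in the first slot."""
    return sum(yk.conjugate() * vk for yk, vk in zip(y, v))

y = [Lk.conjugate() for Lk in L]      # representing vector

v = [1 - 1j, 4 + 2j]
print(functional(v))                  # (9-13j)
print(ip(y, v))                       # (9-13j) -- the same value
```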

 

  1. Perform the indicated operations
    1. ⟨u,v⟩ = u₁·v₁ + 2u₂·v₂, where u = ( 1 −j, −3+j), v = ( 2 −3j, 2).

 
