# Row Space

With every *m* × *n* matrix **A**, we can associate a vector space that is generated by its rows.

The **row space** of an *m* × *n* matrix **A** is the span (the set of all possible linear combinations) of its row vectors. It is therefore a subspace of ℝ^{n} when **A** has real entries, or of ℂ^{n} when **A** has complex entries.
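
The definition above can be made concrete with a short computation. The sketch below (plain Python with exact `Fraction` arithmetic; the `rref` helper and the example matrix are our own illustration, not part of the text) reduces a matrix to reduced row echelon form; as the theorem later in this section shows, the nonzero rows of the result form a basis for the row space:

```python
from fractions import Fraction

def rref(rows):
    """Reduced row echelon form via Gauss-Jordan elimination (exact arithmetic)."""
    M = [[Fraction(x) for x in r] for r in rows]
    m, n, p = len(M), len(M[0]), 0
    for col in range(n):
        r = next((i for i in range(p, m) if M[i][col] != 0), None)
        if r is None:
            continue                              # no pivot in this column
        M[p], M[r] = M[r], M[p]                   # move the pivot row up
        M[p] = [x / M[p][col] for x in M[p]]      # make the leading entry 1
        for i in range(m):                        # clear the pivot column elsewhere
            f = M[i][col]
            if i != p and f != 0:
                M[i] = [a - f * b for a, b in zip(M[i], M[p])]
        p += 1
    return M

# The second row is twice the first, so the row space is 2-dimensional.
A = [[1, 2, 3], [2, 4, 6], [1, 1, 1]]
basis = [row for row in rref(A) if any(x != 0 for x in row)]
# basis == [[1, 0, -1], [0, 1, 2]]
```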

Recall that a set of vectors β is said to generate or span a vector space *V* if every element from
*V* can be represented as a linear combination of vectors from β.

Lemma: The nonzero rows of a matrix in reduced row echelon form (Gauss-Jordan form) are linearly independent.

Let **A** be an *m* × *n* matrix in the Gauss-Jordan form, and let **r**_{1}, **r**_{2}, ... , **r**_{k} be its nonzero rows. Suppose that there exist scalars *a*_{1}, *a*_{2}, ... , *a*_{k} such that

\( a_1 {\bf r}_1 + a_2 {\bf r}_2 + \cdots + a_k {\bf r}_k = {\bf 0} . \)

Since **A** is in its row echelon form, all entries below the leading entry of **r**_{1} are zeroes. Hence, *a*_{1} = 0, because *a*_{1}**r**_{1} would otherwise contribute a nonzero entry to the zero vector **0**. A similar argument shows that the coefficient *a*_{2} must be zero, and so on. Hence all coefficients are zeroes, and the vectors **r**_{1}, ... , **r**_{k} must be linearly independent.
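
The pivot-column argument in this proof can be checked mechanically. In the sketch below (our own illustration in plain Python, not from the text), `R` holds the nonzero rows of a matrix already in reduced row echelon form; because every other row has a zero in row *i*'s pivot column, the entry of any linear combination at that column is exactly the coefficient *a*_{i}, so a vanishing combination forces every coefficient to vanish:

```python
from fractions import Fraction

# Nonzero rows of a matrix already in reduced row echelon form,
# with pivots (leading 1s) in columns 0 and 1.
R = [[Fraction(1), Fraction(0), Fraction(-1)],
     [Fraction(0), Fraction(1), Fraction(2)]]
pivots = [0, 1]

def combination(coeffs, rows):
    """Linear combination sum_i coeffs[i] * rows[i]."""
    n = len(rows[0])
    return [sum(c * row[j] for c, row in zip(coeffs, rows)) for j in range(n)]

a = [Fraction(3), Fraction(-5)]
v = combination(a, R)

# In RREF, every row other than row i has a zero in row i's pivot column,
# so the combination's entry at that column equals the coefficient a_i.
for i, p in enumerate(pivots):
    assert v[p] == a[i]
# Consequently the only combination equal to the zero vector is the trivial one.
```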

Theorem:
The nonzero rows of the reduced row echelon form **R** of an
*m*×*n* matrix **A** are a basis for the row space of
matrix **A**.

Suppose **A** is an *m* × *n* matrix and **C** is obtained from **A** by an elementary row operation. We are going to show that both matrices have the same row space.

Let **r**_{1}, **r**_{2}, ... , **r**_{k} be the nonzero rows of **A**, and suppose **C** is obtained from **A** by the elementary row operation *R*_{i} ⇾ *s* *R*_{i} (scaling row *i* by a nonzero scalar *s*). Then each vector **x** from the row space of **C** is of the form

\( {\bf x} = a_1 {\bf r}_1 + \cdots + a_i \left( s\, {\bf r}_i \right) + \cdots + a_k {\bf r}_k = a_1 {\bf r}_1 + \cdots + \left( a_i s \right) {\bf r}_i + \cdots + a_k {\bf r}_k , \)

which is a linear combination of the rows of **A**. Conversely, since *s* ≠ 0, each vector **y** from the row space of **A** is of the form

\( {\bf y} = b_1 {\bf r}_1 + \cdots + b_i {\bf r}_i + \cdots + b_k {\bf r}_k = b_1 {\bf r}_1 + \cdots + \frac{b_i}{s} \left( s\, {\bf r}_i \right) + \cdots + b_k {\bf r}_k , \)

which is a linear combination of the rows of **C**. The calculations for the other two types of elementary row operations are analogous.

Hence both matrices **A** and **C** have the same row space. Since the reduced row echelon matrix **R** results from **A** by a finite number of applications of elementary row operations, **R** and **A** have the same row space; the nonzero rows of **R** span it and, by the lemma, are linearly independent, so they form a basis.
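
Because the reduced row echelon form of a matrix is unique, two matrices of the same size have the same row space exactly when their RREFs agree, which gives a quick computational check of this proof. The sketch below (plain Python; the `rref` helper and the matrices are our own illustration) applies one elementary operation of each type and verifies that the RREF is unchanged:

```python
from fractions import Fraction

def rref(rows):
    """Reduced row echelon form via Gauss-Jordan elimination (exact arithmetic)."""
    M = [[Fraction(x) for x in r] for r in rows]
    m, n, p = len(M), len(M[0]), 0
    for col in range(n):
        r = next((i for i in range(p, m) if M[i][col] != 0), None)
        if r is None:
            continue
        M[p], M[r] = M[r], M[p]
        M[p] = [x / M[p][col] for x in M[p]]
        for i in range(m):
            f = M[i][col]
            if i != p and f != 0:
                M[i] = [a - f * b for a, b in zip(M[i], M[p])]
        p += 1
    return M

A = [[1, 2, 3], [0, 1, 1], [1, 3, 4]]   # third row = first row + second row
scaled  = [[2, 4, 6], [0, 1, 1], [1, 3, 4]]   # R1 -> 2 R1
swapped = [[0, 1, 1], [1, 2, 3], [1, 3, 4]]   # R1 <-> R2
added   = [[1, 2, 3], [0, 1, 1], [4, 9, 13]]  # R3 -> R3 + 3 R1

base = rref(A)
# Each elementary row operation leaves the RREF (hence the row space) unchanged.
assert rref(scaled) == base and rref(swapped) == base and rref(added) == base
```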

Corollary: Row-equivalent matrices have the same row spaces.

The row spaces of **A** and **B** are the spans of the columns of their transposes, **A**^{T} and **B**^{T}. For each row operation we perform on a matrix, we can define an analogous operation on the columns. Perhaps we should call these column operations. Instead, we will still call them row operations, but we will apply them to the columns of the transposes.

Refer to the columns of **A**^{T} and **B**^{T} as
**A**_{i} and **B**_{i}, 1 ≤ *i* ≤ *m*. The row operation that switches rows will just switch columns of the transposed matrices, which has no effect on the possible linear combinations formed by the columns; the operations of scaling by a nonzero scalar and of adding a multiple of one column to another likewise leave the span of the columns unchanged.

Theorem:
For any *m*×*n* matrix **A**, its column rank and the row
rank are always equal.

Let **A** be a matrix of size *m* × *n* (with *m* rows and *n* columns). Let the column rank of **A** be *r*, and let *c*_{1}, *c*_{2}, ... , *c*_{r} be any basis for the column space of **A**. Place these as the columns of an *m* × *r* matrix **C**. Every column of **A** can be expressed as a linear combination of the *r* columns of **C**. This means that there is an *r* × *n* matrix **R** such that **A** = **C****R**. Here **R** is the matrix whose *i*-th column is formed from the coefficients giving the *i*-th column of **A** as a linear combination of the *r* columns of **C**. Now, each row of **A** is given by a linear combination of the *r* rows of **R** (the coefficients being the entries of the corresponding row of **C**). Therefore, the rows of **R** form a spanning set of the row space of **A** and, by the Steinitz exchange lemma, the row rank of **A** cannot exceed *r*. This proves that the row rank of **A** is less than or equal to the column rank of **A**. This result can be applied to any matrix, so apply it to the transpose of **A**. Since the row rank of the transpose of **A** is the column rank of **A**, and the column rank of the transpose of **A** is the row rank of **A**, this establishes the reverse inequality, and we obtain the equality of the row rank and the column rank of **A**. (Also see rank factorization.)
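
The equality just proved is easy to confirm numerically for any particular matrix. The sketch below (plain Python with exact arithmetic; the `rref` and `rank` helpers and the matrix are our own illustration) computes the rank as the number of nonzero rows in the RREF and checks that a matrix and its transpose have the same rank:

```python
from fractions import Fraction

def rref(rows):
    """Reduced row echelon form via Gauss-Jordan elimination (exact arithmetic)."""
    M = [[Fraction(x) for x in r] for r in rows]
    m, n, p = len(M), len(M[0]), 0
    for col in range(n):
        r = next((i for i in range(p, m) if M[i][col] != 0), None)
        if r is None:
            continue
        M[p], M[r] = M[r], M[p]
        M[p] = [x / M[p][col] for x in M[p]]
        for i in range(m):
            f = M[i][col]
            if i != p and f != 0:
                M[i] = [a - f * b for a, b in zip(M[i], M[p])]
        p += 1
    return M

def rank(rows):
    """Number of nonzero rows in the RREF, i.e. the (row) rank."""
    return sum(1 for row in rref(rows) if any(x != 0 for x in row))

# Second row = 2 * first row + third row, so the rank is 2.
A = [[1, 2, 0, 3],
     [2, 4, 1, 7],
     [0, 0, 1, 1]]
At = [list(col) for col in zip(*A)]  # transpose of A

assert rank(A) == rank(At)  # row rank equals column rank
```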

**Another proof:**
Let **A** be a matrix of size *m* × *n* (with *m* rows and
*n* columns) with entries from ℝ whose row rank is *r*.
Therefore, the dimension of the row space of **A** is *r*. Let
**x**_{1}, **x**_{2}, ... , **x**_{r}
be a basis of the row space of **A**. We claim that the vectors
**A****x**_{1}, **A****x**_{2}, ... ,
**A****x**_{r} are linearly independent. To see why, consider a linear
homogeneous relation involving these vectors with scalar coefficients
*c*_{1}, *c*_{2}, ... , *c*_{r}:

\( c_1 {\bf A} {\bf x}_1 + c_2 {\bf A} {\bf x}_2 + \cdots + c_r {\bf A} {\bf x}_r = {\bf 0} . \)

By linearity, this relation can be rewritten as **A****v** = **0**, where **v** = *c*_{1}**x**_{1} + *c*_{2}**x**_{2} + ... + *c*_{r}**x**_{r}. We make two observations: (a) **v** is a linear combination of vectors in the row space of **A**, which implies that **v** belongs to the row space of **A**, and (b) since **A****v** = **0**, the vector **v** is orthogonal to every row vector of **A** and, hence, is orthogonal to every vector in the row space of **A**. The facts (a) and (b) together imply that **v** is orthogonal to itself, which proves that **v** = **0** or, by the definition of **v**,

\( c_1 {\bf x}_1 + c_2 {\bf x}_2 + \cdots + c_r {\bf x}_r = {\bf 0} . \)

But the vectors **x**_{i} were chosen as a basis of the row space of **A** and so are linearly independent. This implies that \( c_1 = c_2 = \cdots = c_r = 0 . \) It follows that **A****x**_{1}, **A****x**_{2}, ... , **A****x**_{r} are linearly independent.

Now, each **A****x**_{i} is obviously a vector in the column space of **A**. So **A****x**_{1}, **A****x**_{2}, ... , **A****x**_{r} is a set of *r* linearly independent vectors in the column space of **A** and, hence, the dimension of the column space of **A** (i.e., the column rank of **A**) must be at least as large as *r*. This proves that the row rank of **A** is no larger than the column rank of **A**. Now apply this result to the transpose of **A** to get the reverse inequality and conclude as in the previous proof.
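
The key step of this second proof — that **A****x**_{1}, ... , **A****x**_{r} are linearly independent — can be checked for a concrete matrix. The sketch below (plain Python; the `rref` and `rank` helpers and the example matrix are our own illustration) takes the nonzero RREF rows as a basis of the row space, maps each through **A**, and verifies that the images are independent vectors in the column space:

```python
from fractions import Fraction

def rref(rows):
    """Reduced row echelon form via Gauss-Jordan elimination (exact arithmetic)."""
    M = [[Fraction(x) for x in r] for r in rows]
    m, n, p = len(M), len(M[0]), 0
    for col in range(n):
        r = next((i for i in range(p, m) if M[i][col] != 0), None)
        if r is None:
            continue
        M[p], M[r] = M[r], M[p]
        M[p] = [x / M[p][col] for x in M[p]]
        for i in range(m):
            f = M[i][col]
            if i != p and f != 0:
                M[i] = [a - f * b for a, b in zip(M[i], M[p])]
        p += 1
    return M

def rank(rows):
    """Number of nonzero rows in the RREF."""
    return sum(1 for row in rref(rows) if any(x != 0 for x in row))

# Third row = first row + second row, so the row rank is 2.
A = [[1, 2, 0],
     [2, 4, 1],
     [3, 6, 1]]

# Basis of the row space: the nonzero rows of the RREF of A.
basis = [row for row in rref(A) if any(x != 0 for x in row)]

# Form the image vector A x for each basis vector x.
images = [[sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]
          for x in basis]

assert rank(images) == len(basis)  # A x_1, ..., A x_r are linearly independent
```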

Example: The set of vectors **e**_{1} = (1, 0, ... , 0), **e**_{2} = (0, 1, ... , 0), ... , **e**_{n} = (0, 0, ... , 1) is a basis for ℝ^{n}, the *n*-dimensional real space, and it is called the **standard basis**. Its dimension is *n*.

Example: Let **M**_{i,j} denote the *m* × *n* matrix whose only nonzero entry is a 1 in its *i*-th row and *j*-th column. Then the set \( \left\{ {\bf M}_{i,j} \ : \ 1 \le i \le m , \ 1 \le j \le n \right\} \) is a basis for the vector space of all *m* × *n* real matrices. Its dimension is *mn*.

Example: The set \( \left\{ 1, x, x^2 , \ldots , x^n \right\} \) is a basis for the vector space of polynomials of degree at most *n*. It has dimension *n* + 1. ■

Theorem: Let *V* be a vector space and
\( \beta = \left\{ {\bf u}_1 , {\bf u}_2 , \ldots , {\bf u}_n \right\} \) be a subset of
*V*. Then β is a basis for *V* if and only if each vector *v* in *V* can be uniquely
decomposed into a linear combination of vectors in β, that is, can be uniquely expressed in the form