5.4. Basis#
We have already met the concept of a basis in the chapter on vectors, where we saw that the basis vectors for \(\mathbb{R}^3\) are \(\vec{i} = (1, 0, 0)^\mathsf{T}\), \(\vec{j} = (0, 1, 0)^\mathsf{T}\) and \(\vec{k} = (0, 0, 1)^\mathsf{T}\), and that any vector in \(\mathbb{R}^3\) can be represented as a linear combination of these basis vectors. The concept of a basis is not limited to Euclidean geometry: we can define a basis for any vector space so that every element of the vector space can be expressed as a linear combination of the basis elements.
5.4.1. Spanning sets#
(Spanning set)
Let \(V\) be a vector space over a field \(F\) and let \(S\) be a subset of \(V\). Let \(W\) be the set of all vectors in \(V\) that are expressible as a linear combination of vectors in \(S\), i.e., for each \(u \in W\) there exist \(v_1, \ldots, v_n \in S\) and \(\alpha_1, \ldots, \alpha_n \in F\) such that

\[ u = \alpha_1 v_1 + \alpha_2 v_2 + \cdots + \alpha_n v_n. \]

Then \(W\) is a subspace of \(V\) and \(S\) is a spanning set for \(W\). We write \(W = \operatorname{span}(S)\).
For example, \(\mathbb{C} = \operatorname{span}(\{1, i\})\) over \(\mathbb{R}\) since every element of \(\mathbb{C}\) can be expressed as a linear combination of 1 and \(i\).
(i) Show that \(\{ \vec{v}_1, \vec{v}_2 \}\) where \(\vec{v}_1 = \begin{pmatrix} 2\\ 1 \end{pmatrix}\) and \(\vec{v}_2 = \begin{pmatrix} 4 \\ 3 \end{pmatrix}\) is a spanning set for \(\mathbb{R}^2\).
Solution
We need to show that \(\alpha_1 \begin{pmatrix} 2\\ 1 \end{pmatrix} + \alpha_2 \begin{pmatrix} 4 \\ 3 \end{pmatrix} = \begin{pmatrix} a\\ b \end{pmatrix}\) for any \(a, b \in \mathbb{R}\), i.e., that the following system has a solution for every choice of \(a\) and \(b\)

\[ \begin{pmatrix} 2 & 4 \\ 1 & 3 \end{pmatrix} \begin{pmatrix} \alpha_1 \\ \alpha_2 \end{pmatrix} = \begin{pmatrix} a \\ b \end{pmatrix}. \]

A system of linear equations has a unique solution if the coefficient matrix is non-singular (its determinant is non-zero), and here

\[ \det \begin{pmatrix} 2 & 4 \\ 1 & 3 \end{pmatrix} = 2(3) - 4(1) = 2 \neq 0, \]
so \(\{\vec{v}_1, \vec{v}_2\}\) is a spanning set for \(\mathbb{R}^2\).
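This check can also be carried out numerically. The following sketch (not part of the original text) uses NumPy to confirm the determinant is non-zero and to find the coefficients for one illustrative target vector:

```python
import numpy as np

# Columns are the candidate spanning vectors v1 = (2, 1)^T and v2 = (4, 3)^T
A = np.array([[2, 4],
              [1, 3]])

# A non-zero determinant means A x = (a, b)^T has a solution for every (a, b)
print(np.linalg.det(A))  # 2 (up to floating-point error)

# Example: express the (arbitrarily chosen) vector (1, 5)^T in terms of v1, v2
alpha = np.linalg.solve(A, np.array([1, 5]))
print(alpha)  # the coefficients alpha_1, alpha_2
```

Here `np.linalg.solve` solves the linear system directly rather than forming the inverse, which is the standard numerical practice.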
(ii) Determine a spanning set for \(\mathbb{R}^3\).
Solution
Let's suggest \(\left\{ \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} \right\}\) as a spanning set for \(\mathbb{R}^3\). Any vector \((a, b, c)^\mathsf{T} \in \mathbb{R}^3\) can be written as

\[ a \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} + b \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} + c \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} = \begin{pmatrix} a \\ b \\ c \end{pmatrix}, \]

therefore \(\{ \vec{i}, \vec{j}, \vec{k} \}\) is a spanning set for \(\mathbb{R}^3\). Note that this is just one of many spanning sets for \(\mathbb{R}^3\).
The vectors \(\vec{i}\), \(\vec{j}\) and \(\vec{k}\) were introduced in basis vectors. This leads to the definition of a basis of a vector space.
5.4.2. Basis of a vector space#
(Basis of a vector space)
A basis of a vector space \(V\) over a field \(F\) is a linearly independent subset of \(V\) that spans \(V\). A subset \(W\) is a basis if it satisfies the following:
linear independence property: for every subset \(\{v_1, \ldots , v_n\}\) of \(W\) the equation

\[ \alpha_1 v_1 + \alpha_2 v_2 + \cdots + \alpha_n v_n = 0 \]

only has the trivial solution \(\alpha_1 = \alpha_2 = \cdots = \alpha_n = 0\);

spanning property: every \(u \in V\) can be written as a linear combination of vectors in \(W\), i.e.,

\[ u = \alpha_1 v_1 + \alpha_2 v_2 + \cdots + \alpha_n v_n \]

for some \(v_1, \ldots, v_n \in W\) and \(\alpha_1, \ldots, \alpha_n \in F\).
(Orthogonal basis)
An orthogonal basis of a vector space is one in which each pair of basis vectors is orthogonal (perpendicular) to one another.
(Orthonormal basis)
An orthonormal basis of a vector space is one in which the basis vectors are pairwise orthogonal and each vector is a unit vector.
(Dimension of a vector space)
The dimension of a vector space \(V\), denoted by \(\dim(V)\), is the number of elements in a basis for \(V\) (every basis of \(V\) has the same number of elements, so this is well defined).
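As an aside not in the original text, the dimension of \(\operatorname{span}(S)\) can be computed numerically as the rank of the matrix whose columns are the vectors of \(S\). A minimal sketch using NumPy, with illustrative vectors chosen here:

```python
import numpy as np

# Three vectors in R^3, the third being the sum of the first two,
# so they span only a 2-dimensional subspace
S = np.column_stack([[1, 0, 0],
                     [0, 1, 0],
                     [1, 1, 0]])

# dim(span(S)) equals the rank of the matrix with the vectors of S as columns
print(np.linalg.matrix_rank(S))  # 2
```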
Show that \(\{ \vec{v}_1, \vec{v}_2, \vec{v}_3\}\) where \(\vec{v}_1 = \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}\), \(\vec{v}_2 = \begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix}\) and \(\vec{v}_3 = \begin{pmatrix} 1 \\ -1 \\ -2 \end{pmatrix}\) is a basis for \(\mathbb{R}^3\).
Solution
We need to show that the three vectors are linearly independent, i.e., show that the only solution to the following system is \(\alpha_1 = \alpha_2 = \alpha_3 = 0\)

\[ \alpha_1 \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} + \alpha_2 \begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix} + \alpha_3 \begin{pmatrix} 1 \\ -1 \\ -2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}. \]

Using Gauss-Jordan elimination, the coefficient matrix reduces to the identity matrix, so the only solution is \(\alpha_1 = \alpha_2 = \alpha_3 = 0\).
So \(\vec{v}_1\), \(\vec{v}_2\) and \(\vec{v}_3\) are linearly independent. We also need to show that \(\{ \vec{v}_1, \vec{v}_2, \vec{v}_3 \}\) spans \(\mathbb{R}^3\).
Since the coefficient matrix is non-singular, the system \(\alpha_1 \vec{v}_1 + \alpha_2 \vec{v}_2 + \alpha_3 \vec{v}_3 = (a, b, c)^\mathsf{T}\) has a solution for any \((a, b, c)^\mathsf{T} \in \mathbb{R}^3\), so \(\{ \vec{v}_1, \vec{v}_2, \vec{v}_3 \}\) is a spanning set for \(\mathbb{R}^3\) and is therefore a basis. Furthermore,

\[ \begin{align*} \vec{v}_1 \cdot \vec{v}_2 &= 1(1) + 1(-1) + 0(1) = 0, \\ \vec{v}_1 \cdot \vec{v}_3 &= 1(1) + 1(-1) + 0(-2) = 0, \\ \vec{v}_2 \cdot \vec{v}_3 &= 1(1) + (-1)(-1) + 1(-2) = 0, \end{align*} \]

so \(\{ \vec{v}_1, \vec{v}_2, \vec{v}_3 \}\) is an orthogonal basis. It is not an orthonormal basis since \(|\vec{v}_1| = \sqrt{2} \neq 1\).
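The conclusions of this example can be checked numerically. A quick sketch using NumPy (not part of the original text):

```python
import numpy as np

v1 = np.array([1.0, 1.0, 0.0])
v2 = np.array([1.0, -1.0, 1.0])
v3 = np.array([1.0, -1.0, -2.0])

# Non-zero determinant => the vectors are linearly independent and span R^3
A = np.column_stack([v1, v2, v3])
print(np.linalg.det(A))  # 6 (up to floating-point error)

# All pairwise dot products are zero => the basis is orthogonal
print(v1 @ v2, v1 @ v3, v2 @ v3)  # 0.0 0.0 0.0

# |v1| = sqrt(2) != 1, so the basis is not orthonormal
print(np.linalg.norm(v1))  # 1.4142...
```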
5.4.3. Change of basis#
\(\mathbb{R}^n\) has a particularly nice basis that is easy to write down

\[ \vec{e}_1 = \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}, \quad \vec{e}_2 = \begin{pmatrix} 0 \\ 1 \\ \vdots \\ 0 \end{pmatrix}, \quad \ldots, \quad \vec{e}_n = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{pmatrix}, \]

which is called the standard basis. Note that \(\vec{e}_i\) is column \(i\) of the identity matrix, and for \(\mathbb{R}^3\) we have \(\vec{i} = \vec{e}_1\), \(\vec{j} = \vec{e}_2\) and \(\vec{k} = \vec{e}_3\).
We can represent a vector \(\vec{u} = (u_1, u_2, \ldots, u_n)^\mathsf{T} \in \mathbb{R}^n\) given in the standard basis as a vector with respect to another basis \(W = \{\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n\}\), which is denoted by \([\vec{u}]_W\). Using the standard basis we have

\[ \vec{u} = u_1 \vec{e}_1 + u_2 \vec{e}_2 + \cdots + u_n \vec{e}_n, \]

and to express \(\vec{u}\) with respect to the basis \(W\) we need to solve

\[ w_1 \vec{v}_1 + w_2 \vec{v}_2 + \cdots + w_n \vec{v}_n = \vec{u} \]

for \(w_1, w_2, \ldots, w_n\). This concept is illustrated for \(\mathbb{R}^2\) in Fig. 5.2.
The point with co-ordinates \((u_1, u_2)\) with respect to the standard basis \(\{ \vec{e}_1, \vec{e}_2\}\) can be expressed with respect to the basis \(\{ \vec{w}_1, \vec{w}_2 \}\) by the co-ordinates \((w_1, w_2)\).
Represent the vector \(\vec{u} = \begin{pmatrix} 4 \\ 0 \\ 5 \end{pmatrix}\) with respect to the basis \(W = \left\{ \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix}, \begin{pmatrix} 1 \\ -1 \\ -2 \end{pmatrix} \right\}\).
Solution
We need to solve the system

\[ w_1 \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} + w_2 \begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix} + w_3 \begin{pmatrix} 1 \\ -1 \\ -2 \end{pmatrix} = \begin{pmatrix} 4 \\ 0 \\ 5 \end{pmatrix}, \]

which can be written as the matrix equation

\[ \begin{pmatrix} 1 & 1 & 1 \\ 1 & -1 & -1 \\ 0 & 1 & -2 \end{pmatrix} \begin{pmatrix} w_1 \\ w_2 \\ w_3 \end{pmatrix} = \begin{pmatrix} 4 \\ 0 \\ 5 \end{pmatrix}. \]

Calculating the inverse of the coefficient matrix

\[ \begin{pmatrix} 1 & 1 & 1 \\ 1 & -1 & -1 \\ 0 & 1 & -2 \end{pmatrix}^{-1} = \frac{1}{6} \begin{pmatrix} 3 & 3 & 0 \\ 2 & -2 & 2 \\ 1 & -1 & -2 \end{pmatrix}, \]

so

\[ [\vec{u}]_W = \frac{1}{6} \begin{pmatrix} 3 & 3 & 0 \\ 2 & -2 & 2 \\ 1 & -1 & -2 \end{pmatrix} \begin{pmatrix} 4 \\ 0 \\ 5 \end{pmatrix} = \begin{pmatrix} 2 \\ 3 \\ -1 \end{pmatrix}. \]
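This solution can be verified numerically. A short check using NumPy (not part of the original text):

```python
import numpy as np

# Coefficient matrix whose columns are the basis vectors of W
A = np.array([[1, 1, 1],
              [1, -1, -1],
              [0, 1, -2]], dtype=float)
u = np.array([4.0, 0.0, 5.0])

# Solve A w = u for the coordinates of u with respect to the basis W
w = np.linalg.solve(A, u)
print(w)  # approximately [2, 3, -1]

# Check: w1*v1 + w2*v2 + w3*v3 reproduces u
print(A @ w)  # approximately [4, 0, 5]
```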
Note that in Example 5.8 we can represent any vector \(\vec{u}\) with respect to the basis \(W\) by multiplying by the square matrix in the final equation. This matrix is known as the change of basis matrix.
(Change of basis matrix)
Let \(V\) be a vector space over a field \(F\) and \(u \in V\). If \(E\) and \(W\) are two bases for \(V\) then the change of basis matrix is the matrix \(A_{E \to W}\) such that \([u]_{W} = A_{E \to W} [u]_E\).
So to express the vector \(\vec{u} \in \mathbb{R}^3\) with respect to the basis \(W\) we simply multiply \(\vec{u}\) by the change of basis matrix. Changing from a non-standard basis is a slightly more complicated procedure and will be covered in the more advanced units on linear algebra.
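For changing from the standard basis of \(\mathbb{R}^n\), the change of basis matrix is the inverse of the matrix whose columns are the \(W\) basis vectors. A minimal sketch in NumPy (not part of the original text), reusing the basis from the example above:

```python
import numpy as np

# Matrix whose columns are the basis vectors of W
A = np.array([[1, 1, 1],
              [1, -1, -1],
              [0, 1, -2]], dtype=float)

# Change of basis matrix from the standard basis E to W
A_EtoW = np.linalg.inv(A)

# [u]_W = A_EtoW [u]_E for any u in R^3
print(A_EtoW @ np.array([4.0, 0.0, 5.0]))  # approximately [2, 3, -1]
```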