\(\require{cancel}\newcommand{\domain}[1]{\operatorname{Dom}(#1)} \newcommand{\range}[1]{\operatorname{Range}(#1)} \newcommand{\linearspan}{\operatorname{span}} \newcommand{\abs}[1]{\lvert #1 \rvert} \newcommand{\set}[2]{\left\{ #1 \: \middle\vert \: #2 \right\}} \renewcommand{\vec}[1]{\mathbf{#1}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\Q}{\mathbb{Q}} \newcommand{\R}{\mathbb{R}} \DeclareMathOperator{\Lapl}{\mathcal{L}} \newcommand{\La}[1]{\Lapl \left\{ #1 \right\}} \newcommand{\invLa}[1]{\Lapl^{-1}\left\{ #1 \right\}} \newcommand{\intbyparts}[4]{\begin{tabular}{|rl|rl|}\hline $u$ \amp $#1$ \amp $dv$ \amp $#2$ \\ \hline $du$ \amp $#3$ \amp $v$ \amp $#4$ \\ \hline \end{tabular}} \newcommand{\identity}{\mathrm{id}} \newcommand{\notdivide}{{\not{\mid}}} \newcommand{\notsubset}{\not\subset} \newcommand{\swap}{\mathrm{swap}} \newcommand{\Null}{\operatorname{Null}} \newcommand{\half}{\text{ \nicefrac{1}{2}}} \newcommand{\lt}{<} \newcommand{\gt}{>} \newcommand{\amp}{&} \)

Section 5.3 Linearly Independent Sets of Vectors

Definition 5.3.1

A set of vectors \(\left\{ \vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n \right\}\) is called linearly independent if the only solution to the homogeneous equation \begin{equation} c_1 \vec{v}_1 + c_2 \vec{v}_2 + \cdots + c_n \vec{v}_n = \vec{0} \label{eqn-linear-independence}\tag{5.3.1} \end{equation} is the trivial solution, \(c_1 = c_2 = \cdots = c_n = 0\text{.}\) Otherwise, the set is called linearly dependent.

Remark 5.3.2 Linear Dependence Implies Redundancy

Notice that if a set is linearly dependent, then there exist scalars \(c_1, c_2, \ldots, c_n\text{,}\) not all zero, satisfying equation (5.3.1). In other words, the equation has a nontrivial solution, and we can solve it to express any vector with a nonzero coefficient as a linear combination of the other vectors. For example, as long as \(c_2 \ne 0\text{,}\) we can express \(\vec{v}_2\) as a linear combination of the other vectors like so: \begin{equation*} \vec{v}_2 = -\frac{c_1}{c_2}\vec{v}_1 - \frac{c_3}{c_2}\vec{v}_3 - \cdots - \frac{c_n}{c_2}\vec{v}_n. \end{equation*} In other words, the vector \(\vec{v}_2\) is redundant because it can be expressed as a linear combination of the other vectors. In fact, any vector with a nonzero coefficient can be expressed as a linear combination of the others and can thus be considered redundant or unneeded.

Example 5.3.3 A Set of Linearly Dependent Vectors Has at Least One Redundant Vector

Consider the vectors, \begin{equation*} \vec{v}_1 = \begin{bmatrix} 1 \\ -1 \\ 0 \end{bmatrix}, \quad \vec{v}_2 = \begin{bmatrix} 2 \\ 0 \\ 1 \end{bmatrix}, \quad \vec{v}_3 = \begin{bmatrix} 0 \\ 2 \\ 1 \end{bmatrix} \end{equation*} Notice that \begin{equation} 2\vec{v}_1 - \vec{v}_2 + \vec{v}_3 = \vec{0}, \label{eqn-linearly-dependent-linear-combination}\tag{5.3.2} \end{equation} thus these vectors are linearly dependent. Since none of the coefficients are zero, we can solve equation (5.3.2) for \(\vec{v}_1, \vec{v}_2\) or \(\vec{v}_3\text{.}\) For example, solving for \(\vec{v}_3\) yields: \(\vec{v}_3 = -2\vec{v}_1 + \vec{v}_2\text{.}\) Or equivalently, \begin{equation*} \begin{bmatrix} 0 \\ 2 \\ 1 \end{bmatrix} = -2\begin{bmatrix} 1 \\ -1 \\ 0 \end{bmatrix} + \begin{bmatrix} 2 \\ 0 \\ 1 \end{bmatrix} \end{equation*}
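Dependence relations like (5.3.2) are easy to confirm componentwise. The following is a minimal sketch in Python (the variable names `v1`, `v2`, `v3` are just labels for the vectors above):

```python
# Verify the dependence relation 2*v1 - v2 + v3 = 0 componentwise.
v1 = [1, -1, 0]
v2 = [2, 0, 1]
v3 = [0, 2, 1]

combo = [2 * a - b + c for a, b, c in zip(v1, v2, v3)]
print(combo)  # [0, 0, 0], so the set is linearly dependent

# Solving the relation for v3 gives v3 = -2*v1 + v2:
print([-2 * a + b for a, b in zip(v1, v2)])  # [0, 2, 1], which is v3
```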

So how does one go about determining whether a set is linearly independent or linearly dependent? The answer is rather simple: we use the fact that a linear combination equation can be transformed into an equivalent matrix equation. Once the equation is expressed as a matrix equation, we apply the one algorithm that we employ to answer all questions in Linear Algebra, namely Gaussian elimination or, as need be, Gauss-Jordan elimination.
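The procedure above can be sketched in code: place the vectors as the columns of a matrix (so the linear combination equation becomes \(A\vec{c} = \vec{0}\)), row-reduce, and check whether every column has a pivot. This is a minimal sketch, not the text's own implementation; the helper names `rref` and `independent` are hypothetical, and exact `Fraction` arithmetic is used to avoid floating-point issues:

```python
from fractions import Fraction

def rref(rows):
    """Row-reduce a matrix (a list of rows) via Gauss-Jordan elimination.

    Returns the reduced matrix and the list of pivot columns."""
    m = [[Fraction(x) for x in row] for row in rows]
    pivots = []
    r = 0
    for c in range(len(m[0])):
        # Find a row at or below r with a nonzero entry in column c.
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue  # no pivot in this column
        m[r], m[piv] = m[piv], m[r]          # swap it into position
        m[r] = [x / m[r][c] for x in m[r]]   # scale the pivot to 1
        for i in range(len(m)):              # clear the rest of the column
            if i != r and m[i][c] != 0:
                factor = m[i][c]
                m[i] = [a - factor * b for a, b in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
    return m, pivots

def independent(vectors):
    """Independent iff A*c = 0 has only the trivial solution,
    i.e. iff every column of A (one column per vector) has a pivot."""
    matrix = [list(col) for col in zip(*vectors)]  # vectors as columns
    _, pivots = rref(matrix)
    return len(pivots) == len(vectors)
```

For instance, applied to the vectors of Example 5.3.3:

```python
print(independent([[1, -1, 0], [2, 0, 1], [0, 2, 1]]))  # False (dependent)
print(independent([[1, 1], [1, -1]]))                   # True (independent)
```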

Example 5.3.4

Determine whether the following set of vectors is linearly independent or linearly dependent. If the set is linearly dependent, express the first vector as a linear combination of the other vectors. \begin{equation*} \left\{ \, \begin{bmatrix} 1 \\ 4 \\ 7 \end{bmatrix}, \begin{bmatrix} 2 \\ 5 \\ 8 \end{bmatrix}, \begin{bmatrix} 3 \\ 6 \\ 9 \end{bmatrix} \, \right\} \end{equation*}

Solution
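Without reproducing the text's solution, one candidate dependence relation for this example can be confirmed componentwise (a sketch, assuming the vectors are labeled \(\vec{v}_1, \vec{v}_2, \vec{v}_3\) in the order listed):

```python
# Check a dependence relation for the vectors of Example 5.3.4:
# v1 - 2*v2 + v3 = 0, so the set is linearly dependent and v1 = 2*v2 - v3.
v1 = [1, 4, 7]
v2 = [2, 5, 8]
v3 = [3, 6, 9]

print([a - 2 * b + c for a, b, c in zip(v1, v2, v3)])  # [0, 0, 0]
print([2 * b - c for b, c in zip(v2, v3)])             # [1, 4, 7], which is v1
```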

Subsection Exercises

1

Use the definition to show that the set of vectors, \(\left\{ \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \begin{bmatrix} 1 \\ -1 \end{bmatrix} \right\}\) is linearly independent.

2

Use the definition to show that the set of vectors, \(\left\{ \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \begin{bmatrix} -1 \\ 1 \end{bmatrix}, \begin{bmatrix} 1 \\ 0 \end{bmatrix} \right\}\) is linearly dependent. Write the third vector as a linear combination of the first two.

Hint
3

Is it possible to write the vector \(\vec{w} = (2,-6,3)\) as a linear combination of the vectors \(\vec{v}_1 = (1,-2,-1)\) and \(\vec{v}_2 = (3,-5,4)\text{?}\)

Hint
4

Sometimes you can determine whether a set of vectors is linearly dependent by inspection. That is, sometimes you can find a linear combination of the first two vectors which yields the third just by inspecting the entries of each vector, without solving a system of equations. Find a linear combination of the first two vectors below which yields the third. \begin{equation*} \vec{v}_1 = (1, 0, 3, 0), \quad \vec{v}_2 = (0, -1, 1, 1), \quad \vec{v}_3 = (2, 1, 5, -1) \end{equation*}

Hint
5

Determine whether the following set of vectors from \(\R^4\) is linearly independent or linearly dependent. \begin{equation*} \left\{ \: \begin{bmatrix} 1 \\ 1 \\ 0 \\ 0 \end{bmatrix}, \: \begin{bmatrix} 0 \\ 1 \\ 1 \\ 0 \end{bmatrix}, \: \begin{bmatrix} 0 \\ 0 \\ 1 \\ 1 \end{bmatrix}, \: \begin{bmatrix} 1 \\ 0 \\ 0 \\ 1 \end{bmatrix} \: \right\} \end{equation*}

6

Find a nontrivial parametric solution to: \begin{equation*} c_1 \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} + c_2 \begin{bmatrix} 0 \\ 2 \\ 0 \end{bmatrix} + c_3 \begin{bmatrix} 1 \\ 2 \\ -1 \end{bmatrix} + c_4 \begin{bmatrix} 1 \\ 3 \\ 4 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \end{equation*}

7

Find a nontrivial parametric solution to: \begin{equation*} c_1 \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix} + c_2 \begin{bmatrix} 0 \\ 2 \\ 0 \\ 2 \end{bmatrix} + c_3 \begin{bmatrix} 0 \\ 2 \\ 1 \\ 3 \end{bmatrix} + c_4 \begin{bmatrix} 1 \\ 2 \\ 4 \\ 6 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \end{bmatrix} \end{equation*}

8

Prove the following claim: if a finite set of vectors \(S\) contains the zero vector, then \(S\) is linearly dependent.

9

The set \(\left\{ \vec{v}_1, \vec{v}_2 \right\}\) is known to be a linearly independent set of vectors. Use the definition of linear independence to show that the set \(\left\{ \vec{u}_1,\: \vec{u}_2 \right\}\text{,}\) where \begin{align*} \vec{u}_1 \amp = \vec{v}_1 + \vec{v}_2\\ \vec{u}_2 \amp = \vec{v}_1 - \vec{v}_2 \end{align*} is also linearly independent.

Hint