Most linear systems you will encounter will have exactly one solution. However, it is possible that there are no solutions, or infinitely many. (It is not possible that there are exactly two solutions.)
Let's take a closer look. It never hurts in any investigation to look at the simplest possible case first. So consider again the single equation $ax = b$ in the one unknown $x$. If $a \neq 0$ we can divide by $a$ and obtain the unique solution $x = b/a$. If $a = 0$ there are two possibilities:
$a = 0$ and $b \neq 0$. In this case $ax = b$ turns into $0 = b$. Since $0 = b$ is impossible, there is no solution.
$a = 0$ and $b = 0$. In that case $ax = b$ turns into $0 = 0$, which is true for all numbers $x$. There are infinitely many solutions.
Since the cases above cover all possibilities, it is clear that it is impossible for the equation $ax = b$ to have precisely $2$, or $3$, or any other finite number of solutions greater than one. Compare this, for example, with quadratic equations, which may have $0$, $1$, or $2$, but never any other number of (real) solutions.
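The case analysis above can be sketched in a few lines of code. This is an illustrative snippet, not part of the text: the function name `solve_single` and its return convention are ours, and exact rational arithmetic keeps $b/a$ precise.

```python
from fractions import Fraction

def solve_single(a, b):
    """Solve a*x = b for a single unknown x.

    Returns one of the three outcomes discussed above:
    ("unique", x), ("none", None), or ("infinite", None).
    """
    if a != 0:
        return ("unique", Fraction(b, a))  # exactly one solution, x = b/a
    if b != 0:
        return ("none", None)              # 0 = b is impossible
    return ("infinite", None)              # 0 = 0 holds for every x

print(solve_single(2, 3))  # ('unique', Fraction(3, 2))
print(solve_single(0, 5))  # ('none', None)
print(solve_single(0, 0))  # ('infinite', None)
```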
Next consider two equations in two unknowns, let's say
$$a_1 x + b_1 y = c_1, \qquad a_2 x + b_2 y = c_2.$$
Each of these two equations defines a line in the Cartesian plane $\mathbb{R}^2$. The solutions of the system are the coordinates of the points where the two lines intersect. There are again three possibilities.
Figure 1: Unique Solution of two equations in two unknowns.
The lines intersect in one point. There is a unique solution (i.e., the coordinates of that point). An example is provided by
$$x + y = 2, \qquad x - y = 0,$$
whose only solution is $x = 1$, $y = 1$.
Figure 2: No Solution of two equations in two unknowns.
The lines are parallel but distinct. They never intersect and there is no solution. An example is provided by
$$x + y = 1, \qquad x + y = 2,$$
since $x + y$ cannot equal $1$ and $2$ at the same time.
Figure 3: Infinitely many solutions of two equations in two unknowns.
The lines are identical. Any point on the line provides a solution. A trivial example can be obtained by writing the same equation twice. A less trivial example is
$$x + y = 1, \qquad 2x + 2y = 2,$$
where the second equation is just the first multiplied by $2$.
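The three geometric possibilities can also be told apart computationally: the determinant $a_1 b_2 - a_2 b_1$ decides whether the lines cross, and when it vanishes a proportionality test distinguishes parallel from identical lines. A sketch (the function `classify` and its return convention are ours; each equation is assumed to really define a line, i.e., its $x$ and $y$ coefficients are not both zero):

```python
from fractions import Fraction

def classify(a1, b1, c1, a2, b2, c2):
    """Classify the system a1*x + b1*y = c1, a2*x + b2*y = c2,
    assuming each equation defines a line (its x and y coefficients
    are not both zero)."""
    det = a1 * b2 - a2 * b1
    if det != 0:
        # The lines cross in exactly one point; Cramer's rule gives it.
        x = Fraction(c1 * b2 - c2 * b1, det)
        y = Fraction(a1 * c2 - a2 * c1, det)
        return ("unique", (x, y))
    # det == 0: the lines have the same direction.  They coincide
    # exactly when one equation is a multiple of the other.
    if a1 * c2 == a2 * c1 and b1 * c2 == b2 * c1:
        return ("infinite", None)
    return ("none", None)

print(classify(1, 1, 2, 1, -1, 0))  # x+y=2 and x-y=0 meet at (1, 1)
print(classify(1, 1, 1, 1, 1, 2))   # parallel distinct lines: no solution
print(classify(1, 1, 1, 2, 2, 2))   # identical lines: infinitely many
```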
More than two Equations. Similar considerations apply to systems of more than two equations, but this is a subject beyond the scope of this class. You will learn more when you take a class on Linear Algebra.
Suppose we want to find a quartic polynomial $p$ whose value equals $2^x$ for $x = 0, 1, 2, 3, 4$. The purpose of this exercise might be to approximate $2^x$ by a polynomial on a calculator that cannot evaluate $2^x$ for non-integer $x$ directly. Approximating functions is a huge subject; here we just use this problem as an example of a more complicated linear system.
Let's write our quartic polynomial as
$$p(x) = a + bx + cx^2 + dx^3 + ex^4.$$
We want it to satisfy the five equations
$$p(0) = 1, \qquad p(1) = 2, \qquad p(2) = 4, \qquad p(3) = 8, \qquad p(4) = 16.$$
This is a linear system of five equations in the five unknowns $a$, $b$, $c$, $d$, and $e$.
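The coefficient matrix and right-hand side of this system can be generated mechanically: row $i$ lists the coefficients of $(a, b, c, d, e)$ in the condition $p(i) = 2^i$. A small illustrative snippet, not part of the original text:

```python
# Row i of the matrix holds the coefficients of (a, b, c, d, e) in the
# condition p(i) = 2**i, since p(x) = a + b*x + c*x**2 + d*x**3 + e*x**4.
matrix = [[x**k for k in range(5)] for x in range(5)]
rhs = [2**x for x in range(5)]

for row, value in zip(matrix, rhs):
    print(row, "=", value)
```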
The table below is set up as discussed, except that whenever an entry is $0$ it is left blank to clarify the reduced systems.
Equation $E_1$ is very special: it tells us right away that $a = 1$. We use that equation to eliminate $a$ from the remaining equations, which gives us four equations ($E_2$ through $E_5$) in the four unknowns $b$, $c$, $d$, and $e$.
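Written out, after substituting $a = 1$ into the conditions $p(1) = 2$, $p(2) = 4$, $p(3) = 8$, and $p(4) = 16$ and moving the constants to the right-hand side, the four remaining equations are:

```latex
\begin{align*}
b + c + d + e &= 1\\
2b + 4c + 8d + 16e &= 3\\
3b + 9c + 27d + 81e &= 7\\
4b + 16c + 64d + 256e &= 15
\end{align*}
```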
The last equation of the reduced system is $24e = 1$, which means $e = 1/24$. Substituting the value of $e$ into the equation $6d + 36e = 1$ gives $6d + 3/2 = 1$, which implies $d = -1/12$. Substituting $d$ and $e$ into the equation $2c + 6d + 14e = 1$ gives $2c + 1/12 = 1$, which implies $c = 11/24$. Finally, substituting $c$, $d$, and $e$ into equation $E_2$, $b + c + d + e = 1$, gives $b + 5/12 = 1$, which implies $b = 7/12$.
Putting the underlined results together (and writing everything over the common denominator $24$) gives
$$p(x) = \frac{x^4 - 2x^3 + 11x^2 + 14x + 24}{24}.$$
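As a sanity check (not part of the original calculation), the result can be verified in exact rational arithmetic:

```python
from fractions import Fraction

def p(x):
    """The quartic found above: p(x) = (x**4 - 2*x**3 + 11*x**2 + 14*x + 24)/24."""
    return Fraction(x**4 - 2 * x**3 + 11 * x**2 + 14 * x + 24, 24)

# p reproduces 2**x exactly at the five interpolation points.
for x in range(5):
    assert p(x) == 2**x
print([str(p(x)) for x in range(5)])  # ['1', '2', '4', '8', '16']
```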
Figure 4: A polynomial approximation of $2^x$.
Figure 4 shows the graph of $2^x$ (red) as well as the graph of $p$ (green). It is apparent that $p$ is a good approximation of $2^x$ in the interval from $0$ to $4$. There are other and more effective ways of computing polynomials like $p$. However, this example illustrates how Gaussian Elimination and Backward Substitution can be used to solve a linear system. In this particular linear system we were fortunate in that the elimination proceeded in a straightforward way without fractional arithmetic. It happens frequently that linear systems have a special structure that can be exploited effectively.
A final word on computing the row sums. They may appear to be a waste of effort once the problem is solved. However, when I first computed the entries in the above table I made several mistakes that I discovered immediately because of the row sums. There is a good chance you will save yourself a lot of time and aggravation by carrying them along in your own calculations.
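The row-sum check is easy to automate as well. Below is a sketch of Gaussian Elimination with Backward Substitution that carries a check column along, in the spirit of the hand computation above (the function `solve` is illustrative and assumes the pivots are nonzero, as they are in this example):

```python
from fractions import Fraction

def solve(matrix, rhs):
    """Gaussian elimination with backward substitution, carrying a
    row-sum check column: each row's last entry always equals the sum
    of its other entries, so an arithmetic slip shows up immediately."""
    n = len(rhs)
    # Augment each row with the right-hand side and then the row sum.
    rows = [[Fraction(v) for v in matrix[i]] + [Fraction(rhs[i])] for i in range(n)]
    for row in rows:
        row.append(sum(row))                  # the check column
    for k in range(n):                        # forward elimination
        for i in range(k + 1, n):
            factor = rows[i][k] / rows[k][k]  # assumes a nonzero pivot
            for j in range(k, n + 2):
                rows[i][j] -= factor * rows[k][j]
            # The check column is transformed like every other column,
            # so the invariant must still hold.
            assert sum(rows[i][:n + 1]) == rows[i][n + 1], "row-sum check failed"
    x = [Fraction(0)] * n                     # backward substitution
    for i in range(n - 1, -1, -1):
        s = rows[i][n] - sum(rows[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / rows[i][i]
    return x

# The interpolation system from above: p(x) = 2**x for x = 0, ..., 4.
matrix = [[x**k for k in range(5)] for x in range(5)]
rhs = [2**x for x in range(5)]
print(solve(matrix, rhs))  # the coefficients a, b, c, d, e
```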