Updated S2014. Answer checks: Back of Book is the required check on all odd problems; include a separate answer check for each even problem without a textbook answer. The XEROX NOTES refer to the 2004 PDF file of Chapter 4 exercises from transparencies, which were for the second edition of the textbook. The old textbook edition 2/E problems are the same in sections 4.1 to 4.4, but in 3/E section 4.5 became sections 4.5 and 4.7, with a new 4.6 added. The hybrid edition of F2013 adds section 4.8, which is used for course 3140 but not for course 2250.

=========== Section 4.1 ===========

4.1-3 See xerox notes.

4.1-11 Use the frame sequence method starting with the matrix equation augment(u,v) c = w. Solve for c=(c1,c2).

4.1-16 Apply the determinant test to decide independence (det not zero) or dependence (det zero). See Thm 4, section 4.1.

4.1-17 Reference Theorem 4, section 4.1. The determinant should be computed by hand methods and checked by calculator or maple. Maple: A:=<<1,-1,2>|<3,0,1>|<1,-2,2>>; linalg[det](A);

4.1-18 Reference Theorem 4, section 4.1. The determinant should be computed by hand methods and checked by calculator or maple. Maple: A:=<<1,1,0>|<4,3,1>|<3,-2,-4>>; linalg[det](A);

4.1-20 Let A be the matrix whose columns are the 3 vectors u, v, w. Augment the zero vector onto A to obtain matrix C. Display a frame sequence to rref(C). Solve for the scalar general solution in terms of variable names a, b, c (instead of names x1, x2, x3). Then a u + b v + c w = 0.

4.1-23 Decide independence or dependence using the frame sequence method as in problem 4.1-20. The vector (a,b,c) is the vector general solution of the system. The vectors are independent if the only solution is a=b=c=0; otherwise the general solution contains at least one invented symbol, say t1. Choose t1=1 to display a dependence relation.

4.1-32 Use Thm 2, section 4.2 (called the kernel theorem). This theorem eliminates the need to do proof details.
The expected details are the conversion of the restriction equations z=2x+3y, 0=0, 0=0 into a matrix-vector equation Au=0, where u is the 3-vector with components x,y,z.

4.1-36 Use the Not A Subspace Theorem, which is Thm 1, section 4.2, translated. This theorem has three checkpoints: (a) zero is not in S; (b) there is an example of two vectors x and y in S but x+y is not in S; (c) there is an example of a vector x in S but -x is not in S. The equation xyz=1 satisfies (a), because the zero vector fails the equation. Therefore, S is not a subspace.

=========== Section 4.2 ===========

4.2-4 Use the Not A Subspace Theorem. See 4.1-36 above. The equation x1 + x2 + x3 = 1 is not satisfied by x1=x2=x3=0 (the zero vector), hence (a) is satisfied and S is not a subspace.

4.2-6 Use the Kernel Theorem, Theorem 2, section 4.2. The equations x[1]=3x[3], x[2]=4x[4] (x[2] means x with subscript 2, or x_2) can be written as a homogeneous matrix system Ax=0, where x=Vector([x[1],x[2],x[3],x[4]]) and A=Matrix([[1,0,-3,0],[0,1,0,-4],[0,0,0,0],[0,0,0,0]]). No details of proof are required after this re-write of the equations into matrix form. To finish, apply Theorem 2. The conclusion is that S is a subspace. What to write on paper? Write just the conversion to matrix form, including an answer check of the conversion. Then cite Theorem 2 or use the phrase "The Kernel Theorem" to conclude that S is a subspace.

4.2-17 Reference the frame sequence method, scalar general solution, vector solution, definition of basis. The x5 is a typo - change it to x4, then the book's answer is obtained.

4.2-18 This is a frame sequence calculation of C=aug(A,0) to rref(C), followed by writing the scalar general solution for the system in terms of invented symbols t1, t2. Then write the general solution in vector form. The partials on t1, t2 are vectors which form the basis of solutions. These are u and v. The book answer uses invented symbols s and t, instead of t1 and t2.

4.2-28 Apply Theorem 2, section 4.2 (the kernel theorem).
It says S={ x : Bx=0 } is a subspace for any matrix B. Define B=A-kI and apply the theorem, justified by this trick: Ax=kx ==> Ax-kx=0 ==> Ax-kIx=0 ==> (A-kI)x=0 ==> Bx=0. These steps are reversible, showing S={ x : Bx=0 }. By the kernel theorem, S is a subspace. It is also possible to write a direct proof using Thm 1, section 4.2.

4.2-29 The "if and only if" statement translates to: prove (a) and (b). (a) Given Ax0=b, Ay=0 and x=y+x0, then Ax=b. (b) Given Ax0=b, Ax=b and y=x-x0, then Ay=0. Other translations might use an equivalence based upon reversible steps, but the presentation is usually opaque.

=========== Section 4.3 ===========

4.3-15 Form the augmented matrix C of v1, v2, v3, w. Then C:=<<2,-1,4>|<3,0,1>|<1,2,-1>|<4,5,6>>; Find by the frame sequence method, on paper, the reduced echelon form of C. Answer check (write the details on paper): linalg[rref](C); which returns Matrix([[1, 0, 0, 3], [0, 1, 0, -2], [0, 0, 1, 4]]). Solve for the solution x, thinking of C as the augmented matrix of Ax=w, where A=aug(v1,v2,v3). Then there is a unique solution x (rank(C)=3) given by x=<3,-2,4>. Expand by matrix multiply the equation Ax=w to obtain the answer (3)v1 + (-2)v2 + (4)v3 = w.

4.3-17 Observe that this is the same kind of problem as 4.3-15, except w=(0,0,0). Form the augmented matrix C:=<<2,-1,4>|<3,0,1>|<1,2,-1>|<0,0,0>>; and do the frame sequence steps on paper that show rref(C) is the 3x3 identity matrix augmented with a zero column. Again, there is a unique solution x to Ax=w, which is x=<0,0,0>, therefore the three vectors v1, v2, v3 are linearly independent.

4.3-18 Form the augmented matrix A of the three vectors v1, v2, v3. Let C=aug(A,0). Display a frame seq C to rref(C). Define c1, c2, c3 to be the variable list. Find the scalar general solution in terms of invented symbols t1, t2, .... Then c1 v1 + c2 v2 + c3 v3 = 0. Choose values for symbols t1, t2, ... to obtain a particular solution, matching the book's answer.

4.3-24 The packages v1, v2, u1, u2 do not have known contents and you cannot assume they are fixed vectors.
You must do this problem using only toolkit properties, and use no information about how the packages are constructed (we don't know anything about this). The idea is to form the equation c1 u1 + c2 u2 = 0 and solve for c1, c2. This is done by replacing u1, u2 by expressions in v1, v2. Then use independence of v1, v2 to get scalar equations for c1, c2. Solve those scalar equations. The result is proved by showing the unique solution is c1=c2=0.

4.3-25 Xerox notes from 2004 can be found online. The problem is abstract, in the sense that v1, v2, v3 are elements of an abstract vector space, and you may not assume they have components! Progress on the problem is made by observing that the definition of independence (d1 v1 + d2 v2 + d3 v3 = 0 implies d1 = d2 = d3 = 0) can be used to convert the relation c1 u1 + c2 u2 + c3 u3 = 0 into three scalar equations for c1, c2, c3. These equations contain none of the symbols u1, u2, u3, v1, v2, v3. Solving them gives c1=c2=c3=0, hence u1, u2, u3 are independent.

4.3-26 The vector packages v1, v2, u1, u2 do not have known contents and you cannot assume they are fixed vectors. You must do this problem using only toolkit properties, and use no information about how the packages are constructed. As in 4.3-24, form the equation c1 u1 + c2 u2 = 0, replace u1, u2 by their expressions in v1, v2, and use independence of v1, v2 to get scalar equations for c1, c2. The problem is abstract, in the sense that v1, v2 are elements of an abstract vector space, and you may not assume they have components! The definition of independence (d1 v1 + d2 v2 = 0 implies d1 = d2 = 0) converts the relation c1 u1 + c2 u2 = 0 into two scalar equations for c1, c2. These equations contain none of the symbols u1, u2, v1, v2.
Solving them gives c1=c2=0, hence u1, u2 are independent.

4.3-34 This is a proof and it has no answer check, of course, but some steps will be supplied here so that you can check your logic in constructing the proof details. The way to attack the proof is to specialize the statement to dimension 3. Do the details, then generalize the proof to dimension n. For dimension 3: Let A=aug(v1,v2,v3) be the augmented matrix of 3 independent vectors v1, v2, v3 and let B be any invertible (nonsingular) 3x3 matrix. Prove that AB=aug(w1,w2,w3) has independent columns w1, w2, w3. The idea is to apply Theorem 2, section 4.3, which says that C=AB has independent columns if and only if det(C) is nonzero. By Theorem 2, det(A) is nonzero. Because B is invertible, det(B) is nonzero. The determinant product theorem then implies det(AB)=det(A)det(B) is nonzero. Applying Theorem 2 again implies AB has independent columns (w1, w2, w3 are independent). Your written proof must use dimension n; you may not assume that A, B are 3x3 matrices. A careful look at the details will tell you how to write the proof for dimension n.

=========== Section 4.4 ===========

4.4-6 Use the RANK TEST or the DET TEST to determine independence or dependence. Then use the theorem that n independent vectors in a space of dimension n are necessarily a basis of the space. See Thm 3, section 4.4.

4.4-9 The plane equation x - 2y + 5z = 0 is in reduced echelon form. It may help you to add equations 0=0, 0=0 and consider three equations in three unknowns. There is one lead variable and two free variables. Solve for the parametric solution, write the answer in vector form, then strip off the basis vectors from the vector answer.

4.4-12 The problem is a typical one in which four symbols are given plus homogeneous linear algebraic equations among the 4 symbols. The set S of all solutions to these equations is already a subspace by Theorem 2, section 4.2 (the Kernel Theorem).
The idea is to write the linear algebraic equations as a matrix equation Ax=0, where x is a column vector of the symbols. In the present case, this means that the equation a=b+c+d is written as a-b-c-d=0, then in matrix form Ax=0 by the definitions x:=<a,b,c,d>; A:=<<1,0,0,0>|<-1,0,0,0>|<-1,0,0,0>|<-1,0,0,0>>; Solve the system for x, using the last frame algorithm on the augmented matrix C:=aug(A,0), to obtain the general solution in terms of the LEAD VARIABLE a, FREE VARIABLES b,c,d, and invented symbols t1,t2,t3: a=t1+t2+t3, b=t1, c=t2, d=t3. The basis of solutions for Ax=0 is found by differentiation on the invented symbols, giving the answer v1:=<1,1,0,0>, v2:=<1,0,1,0>, v3:=<1,0,0,1>.

TECHNICAL DETAILS. The reason for writing in the form Ax=0 is to apply Theorem 2, section 4.2 (the Kernel Theorem), which says the set S of the problem statement is a subspace. When we solve the equation Ax=0 we find a basis for the solution space, which is just S!

GENERAL PRINCIPLE. If given variables satisfying homogeneous linear algebraic equations, which define a subspace S, then solve for the variables with a frame sequence and take partial derivatives on the invented symbols to find a basis for the subspace S.

4.4-21 This 3x4 system can be solved with rref methods. Then write the answer in vector form and strip off the basis vectors.

4.4-24 Similar to 4.3-18. Form the augmented matrix C of the system. Display a frame seq C to rref(C). Define x1, x2, x3, x4, x5 to be the variable list. Find the scalar general solution in terms of invented symbols t1, t2, .... Write the scalar solution in vector form. Then the partials on symbols t1, t2, ... form a basis for the solution set of the equations.

=========== Section 4.5 ===========

4.5-6 Let A be the given 3x4 matrix. Display a frame sequence from A to rref(A). A basis of the row space is the set of nonzero rows of rref(A). A basis of the column space is the set of pivot columns of A.
Another basis of the row space is the set of pivot columns of the transpose of A.

4.5-8 Let A be the given 4x4 matrix. Display a frame sequence from A to rref(A). A basis of the row space is the set of nonzero rows of rref(A). A basis of the column space is the set of pivot columns of A. Another basis of the row space is the set of pivot columns of the transpose of A.

4.5-22 The redundant columns are exactly the non-pivot columns of A, according to the pivot theorem. Write the system as a homogeneous 3x4 system Ax=0 and then determine the non-pivot columns of A by a frame sequence from A to rref(A).

4.5-24 The redundant columns are exactly the non-pivot columns of A, according to the pivot theorem. Write the system as a homogeneous 5x3 system Ax=0 and then determine the non-pivot columns of A by a frame sequence from A to rref(A).

4.5-28 The rank of A [= number of pivot columns of A = number of leading ones in rref(A)] must be at least 3, because A has three independent rows. The reasoning is that the row space must have dimension at least three, which implies rref(A) has at least 3 leading ones, because a basis of the row space is obtained from the nonzero rows of rref(A). However, the row rank equals the column rank, rank(A)=rank(A^T), which is no more than three, so the rank is exactly 3. In a frame sequence from C=aug(A,b) to rref(C), the hypothesis says that no signal equation occurs. So there must be two rows of zeros in rref(C). Write out what rref(C) can look like and then argue that the solution is unique, or else argue abstractly that there are zero free variables and therefore there must be a unique solution.

=========== Section 4.6 ===========

4.6-2 Evaluate the three dot products v1.v2, v1.v3, v2.v3 and show that each answer is zero. This means that the three vectors form an orthogonal set. The connection with independence is this: THEOREM. An orthogonal set of nonzero vectors is linearly independent.
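The dot product check in 4.6-2 is easy to script as an answer check. A minimal Python sketch (the course examples use Maple; the vectors below are stand-ins, because the exercise's v1, v2, v3 are not reproduced in these notes):

```python
def dot(u, v):
    """Dot product of two vectors stored as Python lists."""
    return sum(a * b for a, b in zip(u, v))

# Stand-in orthogonal set in R^3 (not the textbook's vectors).
v1 = [1, 1, 0]
v2 = [1, -1, 0]
v3 = [0, 0, 2]

# Orthogonal set: every pairwise dot product is zero.
checks = [dot(v1, v2), dot(v1, v3), dot(v2, v3)]
print(checks)  # [0, 0, 0]
```

By the theorem quoted above, the zero dot products, together with the fact that each vector is nonzero, give linear independence.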
=========== Section 4.7 ===========

4.7-10 The kernel theorem (Thm 2, section 4.2) does not apply, because the vector space V is not R^n. You must use the Subspace Criterion, Theorem 1, section 4.2. The three check-points: (a) zero is in S; (b) u, v in S implies u+v in S; (c) u in S implies cu in S. Any vector (= package of data items) in S is a function p(x) = a2 x^2 + a3 x^3, which is just a linear combination of x^2 and x^3. Then u = b2 x^2 + b3 x^3, v = c2 x^2 + c3 x^3. Test (a), (b), (c) in your proof details using the notation of the vector packages [draw an arrowhead above a symbol to denote a vector package of data items].

======== EXAMPLE. A problem similar to 4.7-10, completely solved ========

Problem. Show that the polynomials p(x) = a1 x + a2 x^2 + a5 x^5 form a subspace S of the vector space V of all polynomials of degree <= 5.

Proof:
(a) The zero vector is in S, because we may select a1=a2=a5=0.
(b) Let p1(x) = b1 x + b2 x^2 + b5 x^5 and p2(x) = c1 x + c2 x^2 + c5 x^5 be any two polynomials in S. Define p = p1 + p2. Then
p(x) = b1 x + b2 x^2 + b5 x^5 + c1 x + c2 x^2 + c5 x^5
p(x) = (b1+c1) x + (b2+c2) x^2 + (b5+c5) x^5
p(x) = a1 x + a2 x^2 + a5 x^5, where a1=b1+c1, a2=b2+c2, a5=b5+c5.
Then p is in S. This proves "p1, p2 in S ==> p1+p2 in S".
(c) Let p1(x) = b1 x + b2 x^2 + b5 x^5 be any polynomial in S and let c be any constant. Define p(x) = (c p1)(x). Then
p(x) = c p1(x) = c (b1 x + b2 x^2 + b5 x^5) = (c b1) x + (c b2) x^2 + (c b5) x^5 = a1 x + a2 x^2 + a5 x^5,
where a1 = c b1, a2 = c b2, a5 = c b5. Then p is in S. This proves "p1 in S, c=constant ==> c p1 is in S".
==== The proof is complete. ====

4.7-17 The definition of dependence for an abstract vector space, a space of functions for this application, is
(1) c1 f1(x) + c2 f2(x) + c3 f3(x) = 0 for all x
has a solution c1, c2, c3 with one of c1, c2, c3 nonzero.
The trig double angle identity cos(2x) = cos(x)cos(x) - sin(x)sin(x) implies that (1) holds for f1=cos(2x), f2=cos^2(x), f3=sin^2(x) and c1=1, c2=-1, c3=1.

4.7-20 Clear the fractions to get a polynomial equation for A, B, C, x. Substitute the roots of the fraction bottom, namely x=0, 1, -1, into this polynomial equation to get 3 equations in the 3 unknowns A, B, C. Solve for A, B, C using what you know about linear equations and frame sequences. You must get a unique solution, according to partial fraction theory.

4.7-21 Clear the fractions to get a polynomial equation for A, B, C, x. Substitute the roots of the fraction bottom, namely x=0, 2i, -2i, into this polynomial equation to get 3 equations in the 3 unknowns A, B, C. Observe that x=-2i does not need to be used, because from x=0, x=2i there are already 3 equations in 3 unknowns [the complex equation from x=2i splits into two real equations]. Solve for A, B, C using what you know about linear equations and frame sequences. You must get a unique solution, according to partial fraction theory. REFERENCE. Xerox notes online [4.5-21 in 2004]. The answer check uses Heaviside's coverup method, for complex roots.

4.7-22 The solution may be done by the sampling method, in which the fractions are cleared, followed by substitution of the roots of the denominator x=-1,-2,-3, to obtain 3 equations in the 3 unknowns A, B, C. Then solve for A, B, C. Also possible is the method of atoms, in which the fractions are cleared, then the equation is expanded and collected on the atoms 1, x, x^2. Independence of the atoms is used to obtain 3 equations in the 3 unknowns A, B, C. Solve for A, B, C.

4.7-23 The equation y'''=0 can be solved by quadrature methods from chapter 1 to give y = c1 + c2 x + c3 x^2/2. Show all steps in the quadrature method. Strip off the basis elements, which are 1, x, x^2/2, to report the basis. Alternatively, take partial derivatives of y on the symbols c1, c2, c3 to find the basis. REFERENCE. The method of taking partials is explained in some detail in exercise 4.4-12 (above).
It applies to this problem because, like 4.4-12, we take partials on the symbols present in the general solution to identify the basis vectors.

4.7-24 The equation y''''=0 can be solved by successive application of the method of quadrature from chapter 1. Your answer will involve 4 symbols c1, c2, c3, c4. Take partials of the general solution y on these four symbols to discover the basis elements, which are functions of x. REFERENCE. The method of taking partials is explained in some detail in exercise 4.4-12 (above). The method assumes y is written as a linear combination of certain functions using symbols c1, c2, etc (these replace the invented symbols t1, t2, t3 in 4.4-12). Taking partials identifies the basis functions.

4.7-26 The expected basis is 1, exp(-10x). Follow the example in the book to present the problem details. Expected is conversion of the second order DE into two first order DEs. Then solve these two DEs by chapter 1 methods [section 1.5]. Finally, obtain the general solution y = c1 (1) + c2 exp(-10x). Take partials on symbols c1, c2 to obtain the basis. REFERENCE. The method used here, taking partials, is explained in some detail in exercise 4.4-12 (above). See also 4.7-24.
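The sampling method of 4.7-20 through 4.7-22 can also be scripted as an answer check. A Python sketch (the course uses Maple; the fraction below is a stand-in, since the exercises' fractions are not reproduced in these notes) for the decomposition 1/((x+1)(x+2)(x+3)) = A/(x+1) + B/(x+2) + C/(x+3):

```python
from fractions import Fraction

# Clearing fractions gives the polynomial equation
#   1 = A(x+2)(x+3) + B(x+1)(x+3) + C(x+1)(x+2).
# Substituting each root of the denominator kills all terms but one,
# so each unknown is isolated (this is the sampling method).
roots = [Fraction(-1), Fraction(-2), Fraction(-3)]

coeffs = []
for r in roots:
    # At x = r the surviving unknown equals 1 / prod(r - s)
    # taken over the other roots s.
    prod = Fraction(1)
    for s in roots:
        if s != r:
            prod *= (r - s)
    coeffs.append(Fraction(1) / prod)

A, B, C = coeffs
print(A, B, C)  # 1/2 -1 1/2

# Answer check at the sample point x=0: both sides equal 1/6.
x = Fraction(0)
lhs = Fraction(1) / ((x + 1) * (x + 2) * (x + 3))
rhs = A / (x + 1) + B / (x + 2) + C / (x + 3)
print(lhs == rhs)  # True
```

Substituting each root to isolate one unknown is exactly Heaviside's coverup method mentioned in 4.7-21; exact rational arithmetic avoids roundoff in the check.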