MATH 2250
# MAPLE PROJECT 3
# November 1998
#
# This project is available at
# http://www.math.utah.edu/~korevaar/2250proj3.txt  Save it to your home
# directory and then open it from Maple as a Maple Text document.
# In this project you will use Maple to do linear algebra
# computations.  Mostly you will be learning the MAPLE command set to do
# the sorts of computations which you are also learning to do by hand.
# Along the way you will also review the linear algebra concepts from
# this part of the course.  The project follows the organization of
# Chapter 5 of Gustafson-Wilcox, and refers to problems and examples in
# the text, so you will want to keep it close by.
# To get started, load the linear algebra package.  To see the
# command set, end the loading command with a semicolon instead of a
# colon.
#
> with(linalg);
# 5.1-5.2: Systems of linear equations; reduced row echelon form
# To find the solution set to #2 on page 283 you enter the coefficient
# matrix row by row,
> A1:=matrix([[1,1,-2],
> [4,0,1],
> [-1,1,0]]);
# or you can use another permissible syntax and say that A is a 3 by 3
# matrix, with successive entries read from left to right and from top
# to bottom:
> A1:=matrix(3,3,[1,1,-2,4,0,1,-1,1,0]);
# Then enter the right-hand side vector
> b:=vector([1,2,5]);
# Make the augmented matrix,
> A1augb:=augment(A1,b);
                              [ 1    1    -2    1]
                              [                  ]
                    A1augb := [ 4    0     1    2]
                              [                  ]
                              [-1    1     0    5]
# and then compute the reduced row echelon form:
> rref(A1augb);
                              [1    0    0    0]
                              [                ]
                              [0    1    0    5]
                              [                ]
                              [0    0    1    2]
# from which we should see that the solution set consists of a single
# point, [x,y,z]=[0,5,2].
# 5.1.1) Do #4,5 on page 283 using Maple as above.  By the way, the
# equations you need to get started are at the top of page 686, if you
# don't want to rederive them for yourself.  Also, remember you can
# enter multi-line commands by using shift-enter to move to successive
# lines.
# 5.2.1) Do #7 on page 292 as above.
#
# 5.3 Vector spaces and subspaces
#
# Nullspace of a matrix:  This is just the set of vectors x for which
# Ax=0, so we can find it by computing rref(A).  Let
> A:=matrix([[1,2,3,0],
> [4,5,6,3],
> [1,1,1,1]]);
                              [1    2    3    0]
                              [                ]
                         A := [4    5    6    3]
                              [                ]
                              [1    1    1    1]
> rref(A);
                             [1    0    -1     2]
                             [                  ]
                             [0    1     2    -1]
                             [                  ]
                             [0    0     0     0]
# If you had first augmented A with a zero column it would have stayed
# all zero as you did the row operations to get rref, so you can imagine
# augmenting the rref(A) with a zero column to read off the nullspace.
# You should verify by hand that if you backsolve in the usual way you
# then get the general solution to the homogeneous equation to be
                   X := s [1, -2, 1, 0] + t [-2, 1, 0, 1]
# so that the nullspace is two-dimensional, with basis
# {[1,-2,1,0],[-2,1,0,1]}.
# By the way, Maple will find a nullspace basis for you if you ask
# nicely:  (Remember, basis vectors aren't unique, but the number of
# basis vectors, i.e. dimension, is.  So count yourself lucky if Maple
# spits out the same two basis vectors as you got above.)
> nullspace(A);
                      {[-2, 1, 0, 1], [1, -2, 1, 0]}
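# (One quick way to convince yourself that a claimed nullspace basis
# really works:  multiply A into each basis vector with evalm and the
# matrix-vector product &*.  Both products below should return the zero
# vector [0, 0, 0].)
> evalm(A &* vector([1,-2,1,0]));
> evalm(A &* vector([-2,1,0,1]));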
# 5.3.1) Find a basis for the nullspace of the matrix B below.  Do it
# first by finding rref(B) and working by hand from there, and second by
# asking Maple directly, as we did above for the matrix A.
#
> B:=matrix([[1,2,-2,-1,1],
> [0,1,1,0,0],
> [2,1,-5,-2,2],
> [-1,-1,3,1,-1]]);
                          [ 1     2    -2    -1     1]
                          [                          ]
                          [ 0     1     1     0     0]
                     B := [                          ]
                          [ 2     1    -5    -2     2]
                          [                          ]
                          [-1    -1     3     1    -1]
#
# Range of a matrix:  One of the most important things to remember about
# the product Ax of a matrix with a vector is that it is the same as
# the linear combination x1*col(A,1) + x2*col(A,2) + ... + xn*col(A,n).
# See #16 on page 313.  Therefore the range of A, which is defined to be
# the set of all possible vectors b=Ax, is exactly the same as the span
# of the columns of A.  Remember, the span of a collection of vectors is
# the set of all possible linear combinations, page 295.
# For the matrix A which you were using above to study nullspace
# questions, it follows that the range is the span of its four column
# vectors.  What subspace of R^3 does this give?  There are two good
# ways to proceed in order to get a meaningful answer to that question.
#
# Method 1:  If you have a set of vectors {v1,v2,...,vm} in a vector
# space, the span is not changed if you do one of three ``elementary
# operations.''  These are:
# (1) replace one vector vi by a non-zero constant multiple c*vi.
# (2) interchange vi and vj.
# (3) replace vi with vi + c*vj.
# Therefore, we may get a better spanning set for the range of A by
# doing elementary column operations to the columns of A.  This is the
# same as doing elementary row operations to the transpose of A, and
# then transposing back.  (The transpose of a matrix is obtained by
# switching rows and columns, see page 319.)
> rcef:=M->transpose(rref(transpose(M)));
> #compute the reduced column echelon form of
> #a matrix
> rcef(A);
                           [  1       0     0    0]
                           [                      ]
                           [  0       1     0    0]
                           [                      ]
                           [-1/3    1/3     0    0]
# Therefore the span of the original four columns of A (which was the
# range of A) is actually the span of the two non-zero columns of
# rcef(A), call them v1 and v2 respectively.  Notice these columns are
# independent, which you can see by looking at their first two entries.
# Therefore we have constructed a basis for range(A), a plane in R^3.
# It is an especially nice basis because we can express any other vector
# in range(A) very easily as a linear combination of these two vectors.
# For example, looking at the first two components of each vector, we
# see that the first column of A must be 1*v1 + 4*v2, the second column
# must be 2*v1 + 5*v2, etc.
#
# Method 2:  If you remove dependent vectors from a set, one by one, you
# do not shrink the span.  You may do this until your collection is
# independent.  In the case of the columns of A, we can figure out
# column dependencies from the nullspace of A.  Since [1,-2,1,0] is a
# null vector, we know that 1*col(A,1) - 2*col(A,2) + 1*col(A,3) = 0:
> evalm(1*col(A,1) - 2*col(A,2) + 1*col(A,3));
> #for scalars times matrices use *, for
> #matrices times matrices use &*
                                 [0, 0, 0]
# Therefore we see that col(A,3) is dependent on col(A,1) and col(A,2)
# and we may remove it from our set without decreasing the span.  Since
# [-2,1,0,1] is also a null vector, col(A,4) is a linear combination of
# col(A,1) and col(A,2), so we may then remove col(A,4) from our
# spanning set.  Hence the range of A is actually the span of col(A,1)
# and col(A,2).  Since those two columns are independent, they are a
# basis for range(A).
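# (If you want to double-check that last dependency the same way:  the
# null vector [-2,1,0,1] says that -2*col(A,1) + col(A,2) + col(A,4) = 0,
# i.e. col(A,4) = 2*col(A,1) - col(A,2).  The product below should
# return the zero vector.)
> evalm(-2*col(A,1) + col(A,2) + col(A,4));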
#
# Finally, Maple will find a basis for the range of A (which it calls
# the column space of A) with the command
> colspace(A);
                       {[0, 1, 1/3], [1, 0, -1/3]}
# 5.3.2) Which of our two methods is Maple using to compute a
# range-space basis?
# 5.3.3) For your matrix B from 5.3.1, find a basis for the range of B
# using each of the two methods explained above.  Then compare your
# answers to what Maple gives you with the colspace command.
#
# General solution to a linear problem:  If L is a linear operator from
# vector space V to vector space W, and if b is in the range of L, then
# the general solution to L(v)=b is of the form v=vp+vH, where vp is a
# particular solution and vH is an arbitrary vector in the nullspace of
# L.  We have used this theorem many times in order to find the general
# solution to linear differential equations.  Let's illustrate its
# validity for matrix maps from R^n to R^m.
# For our matrix A above, f(x)=Ax is a linear map with domain R^4
# and with range a subspace of R^3.  Since the first column of A is a
# range vector, namely f([1,0,0,0]), the general solution to Ax=col(A,1)
# should be of the form [1,0,0,0]+s*[1,-2,1,0]+t*[-2,1,0,1], using our
# nullspace computations from above.  Maple will solve the linear system
# Ax=b with the command ``linsolve'':
> b:=col(A,1);
                               b := [1, 4, 1]
> linsolve(A,b);
          [1 + _t[1] - 2 _t[2], -2 _t[1] + _t[2], _t[1], _t[2]]
# You may rewrite Maple's answer as [1,0,0,0] + _t1*[1,-2,1,0] +
# _t2*[-2,1,0,1], so you see that this is just what we predicted.  By
# the way, if you pick a vector b which is not in the range of A,
# linsolve will give you no answer:
> linsolve(A,[1,1,1]); #[1,1,1] is not in the range of A,
> #as you can see from the method 1 basis of the range
> #space, since if y1=1 and y2=1 then y3 must equal 0
> #in order for [y1,y2,y3] to be in the range
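# (To see the particular solution at work, set both parameters in
# linsolve's answer to zero; that gives the vector [1,0,0,0], and
# multiplying A into it should reproduce b=col(A,1), namely [1, 4, 1].)
> evalm(A &* vector([1,0,0,0]));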
#
# 5.3.4) Find the general solution to Bx=col(B,2) using linsolve, and
# verify that it has the form of a particular solution plus the general
# solution to the homogeneous equation (which you worked out in 5.3.1).
#
#
# 5.4 Matrix Algebra
# The product of an m by n matrix with an n by k matrix is an m by k
# matrix.  Thus if we multiply our A above (which is 3 by 4) with our B
# above (which is 4 by 5), we will get a 3 by 5 matrix:
> evalm(A&*B);
                      [ 7     7    -15     -7     7]
                      [                            ]
                      [13    16    -24    -13    13]
                      [                            ]
                      [ 2     3     -3     -2     2]
# The product BA is not defined:
> evalm(B&*A);
Error, (in linalg[multiply]) non matching dimensions for vector/matrix product
# We can multiply A with transpose(A) in either order, but we don't
# even get matrices of the same size:
> AT:=transpose(A);
> evalm(AT&*A);evalm(A&*AT);
> #the first product is 4 by 4, the second is 3 by 3
#
# 5.4.1) Compute the products of B (from 5.3.1) with its transpose, in
# both orders.  Are the product matrix dimensions what you expect?
#
# Inverse matrices:
# If A is a square matrix and has rref(A) equal to the identity, then an
# algorithm to find its inverse matrix is to augment A with the identity
# matrix and compute the rref of the augmented matrix.  The matrix
# appearing in the right half of your array is then the inverse of A.
# This is not a mysterious algorithm, since you are just solving n
# linear systems simultaneously to find a matrix X so that AX=I.  (Ask
# your instructor if this is not clear!)  The book gives a brief
# explanation on page 311.
# Here's how it works for example 1 on page 311:
> restart;with(linalg):
> A:=matrix([[2,-2,1],
> [2,-3,2],
> [3,-6,4]]);
> Id:=array(identity,1..3,1..3);
> #the 3 by 3 identity matrix
> AaugId:=augment(A,Id);
                        [2    -2    1    1    0    0]
                        [                           ]
              AaugId := [2    -3    2    0    1    0]
                        [                           ]
                        [3    -6    4    0    0    1]
> A1:=rref(AaugId);
                        [1    0    0     0    2    -1]
                        [                            ]
                  A1 := [0    1    0    -2    5    -2]
                        [                            ]
                        [0    0    1    -3    6    -2]
> Ainv:=delcols(A1,1..3);
> #delete cols 1 to 3
                              [ 0    2    -1]
                              [             ]
                      Ainv := [-2    5    -2]
                              [             ]
                              [-3    6    -2]
> evalm(A&*Ainv);evalm(Ainv&*A);
> #should get the identity both times!
# Of course, Maple will compute the inverse matrix directly, if it
# exists:
> inverse(A);
                              [ 0    2    -1]
                              [             ]
                              [-2    5    -2]
                              [             ]
                              [-3    6    -2]
# 5.4.2) Find the inverse of the matrix A in #25 on page 313, using the
# augmenting algorithm as above.  Check your answer with the inverse
# command.
# 5.4.3) Solve Ax=b where A is the matrix in 5.4.2, and b=[1,0,0].
# Solve it by computing evalm(inverse(A)&*b).
# 5.4.4) Solve the same system by augmenting A with the vector [1,0,0]
# and computing rref.
# 5.4.5) Solve the same system using linsolve.
#
#
# 5.5 Orthogonality and The Fundamental Theorem of Linear Algebra
# The dot product:
# The dot product (also called the scalar product and inner product)
# lets one compute the angle between vectors, as well as the magnitude
# of vectors.  In particular, seeing whether the dot product of two
# vectors is zero is the practical way to tell if vectors are
# orthogonal (also called perpendicular).  See page 315.  Maple has such
# commands built in:
#
> u:=vector([1,2,3]);v:=vector([-3,2,1]);
> dotprod(u,v); #compute the dot product.  Is this
> #what you get by hand?
> sqrt(dotprod(u,u)); #should give magnitude sqrt(14)
> norm(u,2); # also gives the magnitude; this is
> #also called the 2-norm of a vector
                                  sqrt(14)
> arccos(dotprod(u,v)/(norm(u,2)*norm(v,2)));
> #this is the formula for the angle between two
> #vectors, page 315
> angle(u,v); #Maple's function to compute angles
> #gives the same thing
> evalf(%); #the angle in decimals, in radians.
> #the command evalf(%) computes the decimal
> #value of the previous expression
                                1.281044625
# 5.5.1) Get a decimal value for the angle in radians between the
# vectors [1,2,3,4] and [1.1,2.2,2.7,3.9].
# 5.5.2) Find the distance between the two points in 5.5.1.
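# (Hint for 5.5.2:  the distance between two points is the magnitude of
# their difference vector.  Here is a sketch of the idea with made-up
# points a and b, not the ones from 5.5.1; the distance should come out
# to sqrt(3).)
> a:=vector([1,1,1]);b:=vector([0,0,0]);
> norm(evalm(a-b),2); #magnitude of the difference vector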
#
# Orthogonality and Gram-Schmidt orthogonalization
# The Gram-Schmidt algorithm is a way to construct an orthogonal basis
# from a non-orthogonal one.  See page 317.  You can do this using the
# dotprod command, or you can use Maple's GramSchmidt command:
> v1:=vector([1,2,3,0]); #a basis for a subspace W of R^4
> v2:=vector([-3,2,1,0]);
> v3:=vector([1,1,1,0]);
> y1:=v1; #The Gram-Schmidt basis
> y2:=v2 - dotprod(v2,y1)/(dotprod(y1,y1))*y1;
> y3:=v3 - dotprod(v3,y1)/(dotprod(y1,y1))*y1
>       - dotprod(v3,y2)/(dotprod(y2,y2))*y2;
> evalm(y1);evalm(y2);evalm(y3); #get the
> #actual entries of the vectors
                               [1, 2, 3, 0]
                         [-23/7, 10/7, 1/7, 0]
                         [2/15, 1/3, -4/15, 0]
> GramSchmidt([v1,v2,v3]); #Maple will do the computation
> #above all at once with this command
      [[1, 2, 3, 0], [-23/7, 10/7, 1/7, 0], [2/15, 1/3, -4/15, 0]]
# 5.5.3) Use Gram-Schmidt to find an orthogonal basis for the span of
# {[3,2,1,0],[1,1,0,1],[2,1,1,0]}.
#
# Projection and orthogonal complements
# In physics you learn how to find the projection of one vector (usually
# the force vector) onto another vector (usually the direction of
# allowed motion).  The formula for the projection of v1 onto u1 is
> proj:=(v1,u1)->dotprod(u1,v1)/dotprod(u1,u1)*u1;
                                  dotprod(u1, v1) u1
              proj := (v1, u1) -> ------------------
                                   dotprod(u1, u1)
# For example, if you take the vectors u and v from the start of the dot
# product section, you can compute the projection of v onto u as
# follows:
> u:=vector([1,2,3]);v:=vector([-3,2,1]);
> proj(v,u);
> evalm(%);
                              u := [1, 2, 3]
                              v := [-3, 2, 1]
                                   2/7 u
                              [2/7, 4/7, 6/7]
# The projection of v onto u is the nearest scalar multiple of u to the
# vector v.  Equivalently, we may write v as the sum of its projection
# onto u together with the vector w equal to v minus the projection
# vector, and w will be orthogonal to u:
> w:=v-proj(v,u);
> dotprod(w,u);
                               w := v - 2/7 u
                                      0
# In other words, the projection vector allows us to decompose a given
# vector into a part parallel to u and a part perpendicular to u.
# It is often useful to project vectors onto higher dimensional
# subspaces.  This is easy to do if you have an orthogonal basis for
# your subspace W.  In this case, the projection of v onto W is just the
# sum of the projections onto each of the orthogonal basis elements.
# For example, consider the subspace W of R^4 spanned by v1,v2,v3 at the
# start of the Gram-Schmidt section above.  Let v be the vector
# [1,2,3,6].  Now, we recognize that W is just the 3-dimensional
# ``plane'' of points for which the fourth coordinate is zero, so the
# projection vector must be [1,2,3,0].  So this is a nice example to
# illustrate the general projection technique:
> v:=vector([1,2,3,6]);
                             v := [1, 2, 3, 6]
> evalm(proj(v,y1)+proj(v,y2)+proj(v,y3));
> #do you get what was predicted?
                               [1, 2, 3, 0]
# 5.5.4) Find the projection of the vector v=[1,2,3,6] onto the subspace
# you considered in 5.5.3.  Show that this gives a way of decomposing v
# into the sum of a vector in your subspace with one in its orthogonal
# complement.
#
# The Fundamental Theorem of Linear Algebra
# This theorem has several parts.  See pages 320-321.  One way of
# stating them is as follows:
# Consider the matrix map f(x)=Ax, where A is an m by n matrix.
# Then f is a function with domain R^n, and it maps vectors into R^m.
# Then the adjoint map is given by g(y)=(transpose(A))y, and it maps R^m
# back to R^n.  The following results hold:
# (1) If the rank of A is defined to be the dimension of the range of
# A, and if the nullity of A is defined to be the dimension of its
# nullspace, then the rank plus the nullity add up to the domain
# dimension n.  Furthermore, the rank of A equals the rank of
# transpose(A).
# (2) The nullspace of A (which is a subspace of R^n) is the
# orthogonal complement to the range of the adjoint map g.  Since the
# range of g is the column space of transpose(A), it is actually the
# rowspace of A (i.e. the span of the rows of A).
# (3) Completely analogously, the nullspace of transpose(A) is the
# orthogonal complement to the range of A.
#
# Let's illustrate the theorem with the matrix from the start of our
# project.  First find bases for the four fundamental subspaces
# discussed in the theorem:
#
> restart;with(linalg):
> A:=matrix([[1,2,3,0],
> [4,5,6,3],
> [1,1,1,1]]);
> nullspace(A);
                      {[-2, 1, 0, 1], [1, -2, 1, 0]}
> colspace(A); #range(A)
                       {[1, 0, -1/3], [0, 1, 1/3]}
> rowspace(A); #range(transpose(A))
                     {[1, 0, -1, 2], [0, 1, 2, -1]}
> nullspace(transpose(A));
                               {[-1, 1, -3]}
# Let's check off the facts of the theorem one by one:
# (1) The rank of A is the dimension of colspace(A), which is 2.  The
# nullity of A is the dimension of nullspace(A), which is also 2, and
# 2+2=4, the number of columns in A.  Furthermore, the rank of
# transpose(A), namely the dimension of rowspace(A), equals the
# dimension 2 of colspace(A).
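# (You can also have Maple check part (1) in one line.  As a sketch,
# using three commands this project hasn't needed before:  linalg's
# rank gives the rank directly, nops counts the nullspace basis
# vectors, and coldim gives the number of columns.  Both outputs below
# should be 4.)
> rank(A) + nops(nullspace(A)); coldim(A);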
# (2) To see that the rowspace and the nullspace of A are orthogonal
# complements in R^4, you just have to check that the respective basis
# elements are orthogonal to each other.  This amounts to computing 4
# different dot products and can be done with one matrix multiplication:
> N1:=convert(nullspace(A),list); #nullspace(A) was a
> #set, we convert it to a list and then a matrix:
                 N1 := [[-2, 1, 0, 1], [1, -2, 1, 0]]
> N2:=matrix(N1);
                           [-2     1    0    1]
                     N2 := [                  ]
                           [ 1    -2    1    0]
> R1:=convert(rowspace(A),list);
                 R1 := [[0, 1, 2, -1], [1, 0, -1, 2]]
> R2:=matrix(R1);
                           [0    1     2    -1]
                     R2 := [                  ]
                           [1    0    -1     2]
> evalm(N2&*transpose(R2)); #this computes all four
> #dot products.  Hopefully you can see why
                                 [0    0]
                                 [      ]
                                 [0    0]
# (3) We only need to compute two dot products to see that the nullspace
# of transpose(A) is orthogonal to the column space of A:
> n1:=nullspace(transpose(A));
                           n1 := {[-1, 1, -3]}
> c1:=colspace(A);
                    c1 := {[1, 0, -1/3], [0, 1, 1/3]}
> dotprod(n1[1],c1[1]);dotprod(n1[1],c1[2]);
                                     0
                                     0
# 5.5.5) For the matrix B from 5.3.1 verify the three parts of the
# Fundamental Theorem, following the ideas above.
#
# 5.6 Determinants
# Maple will compute the determinant of a square matrix.  It also knows
# the adjoint formula for the inverse of a matrix (bottom of page 329).
# This formula is useful when the entries of a matrix are functions
# instead of numbers, but rapidly becomes a mess for large matrices:
> C:=matrix(3,3);
                      C := array(1 .. 3, 1 .. 3, [])
> det(C);
    C[1, 1] C[2, 2] C[3, 3] - C[1, 1] C[2, 3] C[3, 2]
       - C[2, 1] C[1, 2] C[3, 3] + C[2, 1] C[1, 3] C[3, 2]
       + C[3, 1] C[1, 2] C[2, 3] - C[3, 1] C[1, 3] C[2, 2]
> inverse(C); #watch out!
# 5.6.1) Let A be the matrix of problem 31 on page 343 and let B be the
# matrix of problem 32.  Verify that det(AB)=det(A)*det(B).
# 5.6.2) For the same A matrix, verify that det(inverse(A))=1/det(A).
#
# 5.7 Eigenvalues and eigenvectors
# Maple will compute eigenvalues and eigenvectors.  For small matrices
# it will try to compute exactly; for larger matrices it may only
# succeed when you ask for decimal approximations.  Here is the matrix
# from #7 on page 359:
> A:=matrix(3,3,[1,2,-1,2,4,-2,3,6,-3]);
> eigenvals(A);
                                 0, 0, 2
> eigenvects(A);
          [2, 1, {[1/2, 1, 3/2]}], [0, 2, {[-2, 1, 0], [1, 0, 1]}]
# The eigenvals(A) response is telling us that the characteristic
# polynomial of A has zero as a root with multiplicity two (i.e. the
# algebraic multiplicity is two), and has 2 as a root with multiplicity
# 1.  The eigenvects(A) command has output which lists each eigenvalue,
# its algebraic multiplicity, and a basis for the corresponding
# eigenspace.  So, for example, the vector [1/2,1,3/2] should be an
# eigenvector with eigenvalue 2:
> evalm(A&*[1/2,1,3/2]);2*[1/2,1,3/2];
                                [1, 2, 3]
                                [1, 2, 3]
# 5.7.1) Do problem #8 on page 359.  (By the way, Maple uses ``I'' for
# the square root of -1.)
# 5.7.2) Do problem #24 on page 360.
#
# By the way, in your final Maple project you will be doing the
# Earthquake project from section 5.3 of Edwards-Penney, and you will
# need to find eigenvalues and eigenvectors for rather large matrices.
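# (One standard route to decimal approximations is the inert Eigenvals
# command, capital E, wrapped in evalf.  As a sketch, applied to the
# small matrix A above it should return floating-point versions of the
# eigenvalues 0, 0 and 2.)
> evalf(Eigenvals(A));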