# MATH 2270
# PROJECT 3
# October 1998
#
# This project is about the fundamental subspaces associated with
# matrices, about coordinates with respect to different bases, and
# finally about picking bases related to linear maps.
#
# First load our tools:
#
> with(plots): with(linalg):
# Here's the matrix we will play with today. We will keep it moderately
# sized, but of course we could make it much bigger without causing
# Maple any problems.
> A:=matrix([[1, 1, 1, 2, 6],
>            [2, 3,-2, 1,-3],
>            [3, 5,-5, 1,-8],
>            [4, 3, 8, 2, 3]]);

                            [1  1   1  2   6]
                            [               ]
                            [2  3  -2  1  -3]
                       A := [               ]
                            [3  5  -5  1  -8]
                            [               ]
                            [4  3   8  2   3]

# We should be able to figure out the dimensions of the nullspace, row
# space, and column space by looking at rref(A):
> rref(A);
# 1) What are the dimensions of the three spaces, based on rref(A) and
# our general theory? Explain. Also, for the theorem that rank plus
# nullity equals n, what are the particular numbers in this example?
# 2) A basis for the row space is staring at you in the rref(A) matrix.
# What is this basis?
# 3) You can back-solve the homogeneous system using rref(A) to get
# a basis for nullspace(A). Do this by hand on a sheet of paper, and
# write your answer here.
# 4) You can get a basis for the column space of A by taking a subset of
# your original 5 columns. Which columns do you choose? Explain why!
# 5) Another way to get a (better) basis for the column space is to do
# column operations, putting your matrix into reduced column echelon
# form. You can do this in Maple by transposing, computing the rref of
# the transpose, and transposing back:
> rcef:=B->transpose(rref(transpose(B)));
>      # a procedure for computing reduced
>      # column echelon form
> rcef(A);
# So, what is a nice basis for the column space of A? Pick any one of
# the original columns of A and express it as a linear combination of
# this nice basis, to illustrate how easy it is to do. (Do this by
# hand, and type in your answer and explanation.)
# 6) Maple will compute bases for these spaces with single commands.
# Compare the answers you've gotten above with Maple's answers. Since
# you know bases are not unique, you can pretty well guess that Maple is
# using methods close to the ones we used, since its answers should be
# quite similar to some of yours:
> rowspace(A);  # gives a basis for the row space
> nullspace(A); # nullspace basis
> colspace(A);  # nice basis for the column space = range = space spanned
>               # by the column vectors
# 7) As we remarked in class, the theorem that the nullspace is
# perpendicular to the rowspace is kind of clear once you realize that x
# being in the nullspace means that Ax=0, so that each row of A dots with
# x to give zero. In particular, if we take a basis for the rowspace of
# A, and also one for the nullspace of A, then the elements of the first
# basis should all be perpendicular to the elements of the second one.
# Let's verify that this is true:
> r1:=row(rref(A),1);  # r1, r2, r3: basis for rowspace(A)
> r2:=row(rref(A),2);
> r3:=row(rref(A),3);
> S:=convert(nullspace(A),list);  # making a list
>      # keeps track of order in a set
> n1:=S[1]; n2:=S[2];  # nullspace basis
> rowmat:=stackmatrix(r1,r2,r3);  # stack matrices
>      # or vectors one on top of each other. We could
>      # have gotten the same matrix by using the delete
>      # rows command, i.e. delrows(rref(A),4..4)
> colmat:=augment(n1,n2);  # augment matrices or
>      # vectors from left to right
> evalm(rowmat&*colmat);  # compute all six dot products
>      # in one matrix multiplication.
# You should get:

                                  [0  0]
                                  [    ]
                                  [0  0]
                                  [    ]
                                  [0  0]
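# (Aside: if you would rather spot-check a single pairing instead of all
# six dot products at once, the linalg package also has a dotprod
# command. A minimal sketch, using the r1 and n1 defined above:)
> dotprod(r1,n1);  # should also come out to 0, since r1 is perpendicular to n1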
# 8) Since the nullspace is 2-dimensional, the rowspace is 3-dimensional,
# and they are perpendicular to each other in R^5, it seems likely that
# the collection {r1,r2,r3,n1,n2} is a basis for R^5. Use Maple and your
# theoretical knowledge to verify that this is true, by using some test
# which checks whether 5 vectors in R^5 form a basis. You have to think
# of the commands here and from now on.
# 9) Let us call E={e1,e2,e3,e4,e5} the set of standard basis vectors in
# R^5, and let us call our new basis S={r1,r2,r3,n1,n2}. Any vector v in
# R^5 can be uniquely expressed as a linear combination of the vectors in
# the S-basis,
#      v = c1*r1 + c2*r2 + c3*r3 + c4*n1 + c5*n2,
# and the list of coefficients (c1,c2,c3,c4,c5) is what we call the
# coordinates of v with respect to the S-basis.
# Find the coordinates of the following vectors, with respect to
# the S-basis:
# 9a) (0,1,-4,0,-3)
# 9b) (1,0,0,0,0)
#
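# (Aside: as a sketch of what "S-coordinates" means -- using made-up
# coefficients, not the answer to 9a or 9b -- the vector whose
# S-coordinates are (1,0,0,2,0) is just the corresponding linear
# combination of the S-basis vectors:)
> evalm(1*r1 + 0*r2 + 0*r3 + 2*n1 + 0*n2);
>      # going the other way, from a vector back to its
>      # coordinates, is the point of problems 9 and 10.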
# 10) Find the transition matrices P_{E<-S} and P_{S<-E}, which take
# S-coordinates to E-coordinates and vice versa.
#
# Now we will consider the linear map f(x)=Ax, which in our case maps
# R^5 to R^4. What we did in problems 8-9 was find a nice basis S for
# the domain space R^5. Why was this basis nice? It was nice if we want
# to understand the map f(x)=Ax, because part of the basis was a basis
# for nullspace(A) (which gets squashed to zero), and the rest of it was
# perpendicular to nullspace(A).
# What we want to do next is to find a new basis in the range space R^4
# which is also nice for studying the map f(x)=Ax. Notice the column
# space of A is exactly the collection of image vectors Ax in R^4, since
#      Ax = x1*col1(A) + x2*col2(A) + ... + x5*col5(A).
# So part of our good basis for the range space should include a basis
# for the column space. Since the column space is 3-dimensional, that
# leaves us one vector short. Now, the column space of A is the row
# space of transpose(A), so by the theorem we were talking about in
# problem 7 (but applied to transpose(A)), this space is perpendicular
# to the nullspace of transpose(A). So we'll get the rest of our good
# basis by appending a basis for the nullspace of transpose(A) to the
# one for the column space of A.
# Let's say that again: the column space of A and the nullspace of
# transpose(A) are subspaces of the range space R^4, and are orthogonal
# to each other. Furthermore, their dimensions add up to 4. (Why?)
#
# 11) Let F={e1,e2,e3,e4} be the standard basis in R^4. Let
# T={v1,v2,v3,m1}, where {v1,v2,v3} is a nice basis of the column space
# of A and {m1} is a basis for the nullspace of transpose(A). Find good
# vectors v1, v2, v3, and m1 to get your new basis T, and check that it
# is a basis.
#
# 12) Find the transition matrices P_{F<-T} and P_{T<-F}.
#
# 13) Consider the following composition of linear maps. Start with the
# S-coordinates of a vector in R^5; call these coordinates c. Convert
# them to the standard E-coordinates x by multiplying: x = P_{E<-S} c.
# Multiply these standard coordinates by the matrix A to get the image
#      y = f(x) = Ax = A P_{E<-S} c.
# This gives the image vector y in the standard coordinates F. Now take
# these coordinates and convert them to T-coordinates:
#      w = P_{T<-F} y = P_{T<-F} A P_{E<-S} c.
# So the triple product of matrices P_{T<-F} A P_{E<-S} is a matrix
# which takes the S-coordinates of a vector in the domain and maps them
# to the T-coordinates of the image vector f(x)=Ax in the range space.
# In other words, this new matrix (the triple product) describes the
# same linear map from R^5 to R^4 that the matrix A described, just
# using different bases in R^5 and R^4 than the standard ones.
# Use your transition matrices from problems 10 and 12 to compute this
# triple product. The resulting matrix should have lots of zeros in it.
# Can you explain why only the upper left 3 by 3 submatrix is non-zero,
# and also why it is non-singular?
#
# 14) Try to find a different basis for the column space of A (and hence
# for T), so that the triple product matrix above, which maps the
# S-coordinates of a domain vector to the (new) T-coordinates of the
# image vector, is given by

                              [1  0  0  0  0]
                              [             ]
                              [0  1  0  0  0]
                              [             ]
                              [0  0  1  0  0]
                              [             ]
                              [0  0  0  0  0]

# We will continue discussing the ideas related to these problems
# in class.
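# (Appendix-style aside, for when you get to problems 12 and 13: one
# possible way to assemble the triple product in Maple. This is only a
# sketch -- the names v1, v2, v3, m1 stand for whatever vectors you chose
# in problem 11, and there are other ways to build the transition
# matrices.)
> PFT:=augment(v1,v2,v3,m1):     # columns are your T-basis vectors, so this is P_{F<-T}
> PTF:=inverse(PFT):             # P_{T<-F}
> PES:=augment(r1,r2,r3,n1,n2):  # columns are the S-basis vectors, so this is P_{E<-S}
> evalm(PTF &* A &* PES);        # the triple product P_{T<-F} A P_{E<-S}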