Weatherwax Watkins Solutions[1]

<ul><li><p>Solutions to Selected Problems In:</p><p>Fundamentals of Matrix Computations: Second Edition</p><p>by David S. Watkins.</p><p>John L. Weatherwax</p><p>March 13, 2012</p><p>wax@alum.mit.edu</p><p>1</p></li><li><p>Chapter 1 (Gaussian Elimination and Its Variants)</p><p>Exercise 1.1.20 (an example of block partitioning)</p><p>We first compute the product B = AX (ignoring the partitioning for the time being), which is given by</p><p>B = AX = [1 3 2; 2 1 1; 1 0 -1][1 0 1; 2 1 1; -1 2 0] = [5 7 4; 3 3 3; 2 -2 1].</p><p>Now to show that A_{i1}X_{1j} + A_{i2}X_{2j} = B_{ij} for i, j = 1, 2, consider each of the four possible pairs in turn. First consider (i, j) = (1, 1), which is</p><p>A_{11}X_{11} + A_{12}X_{21} = [1; 2][1] + [3 2; 1 1][2; -1] = [1; 2] + [4; 1] = [5; 3],</p><p>which is the same as B_{11}. Now (i, j) = (1, 2) gives</p><p>A_{11}X_{12} + A_{12}X_{22} = [1; 2][0 1] + [3 2; 1 1][1 1; 2 0] = [0 1; 0 2] + [7 3; 3 1] = [7 4; 3 3],</p><p>which is the same as B_{12}. Now (i, j) = (2, 1) gives</p><p>A_{21}X_{11} + A_{22}X_{21} = [1][1] + [0 -1][2; -1] = 1 + 1 = 2,</p><p>which is the same as B_{21}. Finally, considering (i, j) = (2, 2), we have</p><p>A_{21}X_{12} + A_{22}X_{22} = [1][0 1] + [0 -1][1 1; 2 0] = [0 1] + [-2 0] = [-2 1],</p><p>which is the same as B_{22}. Thus we have shown the equivalence requested.</p><p>Exercise 1.1.25</p><p>Let A be n by m and consider A partitioned by columns, so that</p><p>A = [a_1 | a_2 | ... | a_m].</p><p>Here a_i is the i-th column of A, of dimension n by 1. Consider x partitioned into m scalar blocks as</p><p>x = [x_1; x_2; ...; x_m].</p><p>Then</p><p>Ax = [a_1 | a_2 | ... | a_m][x_1; x_2; ...; x_m] = x_1 a_1 + x_2 a_2 + ... + x_m a_m = b, (1)</p><p>showing that b is a linear combination of the columns of A.</p></li><li><p>Exercise 1.2.4</p><p>Let's begin by assuming that there exists a nonzero y such that Ay = 0. 
But applying A^{-1} to both sides gives</p><p>A^{-1}Ay = A^{-1}0 = 0,</p><p>or y = 0, which contradicts our initial assumption. Therefore no nonzero y can exist.</p><p>Exercise 1.2.5</p><p>If A^{-1} exists then we have AA^{-1} = I.</p><p>Taking the determinant of both sides of this expression gives</p><p>|A||A^{-1}| = 1 , (2)</p><p>but since |A^{-1}| = |A|^{-1}, it is not possible to have |A| = 0, or else Equation 2 would be contradicted.</p><p>Exercise 1.2.11</p><p>The equation for the car at x_1 is</p><p>-4x_1 + 1 + 4(x_2 - x_1) = 0,</p><p>or</p><p>8x_1 - 4x_2 = 1,</p><p>which is the same as given in the book. The equation for the car at x_2 is given by</p><p>-4(x_2 - x_1) + 2 + 4(x_3 - x_2) = 0,</p><p>which is the same as in the book. The equation for the car at x_3 is given by</p><p>-4(x_3 - x_2) + 3 - 4x_3 = 0,</p><p>or</p><p>-4x_2 + 8x_3 = 3,</p><p>which is the same as given in the book. Since the determinant of the coefficient matrix A has value det(A) = 256 ≠ 0, A is nonsingular.</p></li><li><p>Exercise 1.3.7</p><p>A pseudo-code implementation would look like the following</p><p>% Find the last element of b that is zero ($b_k=0$ and $b_{k+1} \ne 0$)</p><p>for i=1,2,...,n</p><p>if( b_i \ne 0 ) break</p><p>endfor</p><p>k=i-1 % we now have b_1 = b_2 = ... = b_k = 0</p><p>for i=k+1,...,n</p><p>for j=k+1,...,i-1</p><p>b_i = b_i - g_{ij} b_j</p><p>endfor</p><p>if( g_{ii} = 0 ) set error flag, exit</p><p>b_i = b_i/g_{ii}</p><p>endfor</p><p>Exercise 1.3.14</p><p>Part (a): To count the operations we first consider the inner for loop, which has 2 flops and is executed n - (j+1) + 1 = n - j times. The outer loop is executed once for every j = 1, 2, ..., n, therefore the total flop count is given by</p><p>\sum_{j=1}^{n} \sum_{i=j+1}^{n} 2 = \sum_{j=1}^{n} 2(n - j) = 2 \sum_{j=1}^{n-1} j = 2 \cdot (1/2) n(n-1) = n(n-1),</p><p>the same as row oriented substitution.</p><p>Part (b): Row oriented forward substitution subtracts just the columns of the row we are working on as we get to each row. 
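The contrast between the two loop orderings can be sketched in code. The following is a minimal Python/NumPy illustration (the function names and the random test matrix are my own, not from the text); both orderings perform the same arithmetic and return the same solution of Lx = b:

```python
import numpy as np

def forward_sub_row(L, b):
    # Row oriented: finish all subtractions for row i, then divide.
    x = b.astype(float).copy()
    n = len(b)
    for i in range(n):
        for j in range(i):
            x[i] -= L[i, j] * x[j]
        x[i] /= L[i, i]
    return x

def forward_sub_col(L, b):
    # Column oriented: once x[j] is known, subtract its multiple of
    # column j from all remaining rows before moving to the next unknown.
    x = b.astype(float).copy()
    n = len(b)
    for j in range(n):
        x[j] /= L[j, j]
        for i in range(j + 1, n):
            x[i] -= L[i, j] * x[j]
    return x

rng = np.random.default_rng(1)
L = np.tril(rng.uniform(1, 2, size=(5, 5)))  # nonzero diagonal by construction
b = rng.uniform(-1, 1, size=5)
xr = forward_sub_row(L, b)
xc = forward_sub_col(L, b)
assert np.allclose(xr, xc)
assert np.allclose(L @ xr, b)
```

The only difference is which loop is outermost: the row oriented version finishes all the subtractions for row i before dividing, while the column oriented version pushes each solved unknown down through the remaining rows immediately.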
Column oriented forward substitution subtracts from all rows before moving to the next unknown (row).</p><p>Exercise 1.3.14</p><p>Part (b): In row-oriented substitution we first solve for x_1, then x_2 using the known value of x_1, then x_3 using the known values of x_1 and x_2. This pattern continues until finally we solve for x_n using all of the known values x_1, x_2, down to x_{n-1}. The column subtractions (in column oriented substitution) occur one at a time, i.e. in row-oriented substitution we have</p><p>a_{11}x_1 = b_1, so x_1 = b_1/a_{11}.</p></li><li><p>a_{21}x_1 + a_{22}x_2 = b_2, so x_2 = (b_2 - a_{21}x_1)/a_{22} = (b_2 - a_{21}(b_1/a_{11}))/a_{22},</p><p>while in column oriented substitution we solve for x_1, with</p><p>x_1 = b_1/a_{11},</p><p>and then we get</p><p>a_{22}x_2 = b_2 - a_{21}(b_1/a_{11}),</p><p>and solve for x_2. In summary, using column oriented substitution we do some of the subtractions in</p><p>x_i = (b_i - \sum_{j=1}^{i-1} a_{ij}x_j)/a_{ii}</p><p>each time we go through the loop, while row-oriented substitution does all the subtractions for a given row at once.</p><p>Exercise 1.3.15</p><p>A pseudocode implementation of row-oriented back substitution would look something like the following</p><p>for i=n,n-1,...,2,1 do</p><p>for j=n,n-1,...,i+2,i+1 do</p><p>b(i) = b(i) - A(i,j)*b(j)</p><p>end</p><p>if A(i,i)=0 then</p><p>// return error; system is singular</p><p>end</p><p>b(i) = b(i)/A(i,i)</p><p>end</p><p>Exercise 1.3.16 (column-oriented back substitution)</p><p>A pseudocode implementation of column-oriented back substitution would look something like the following.</p><p>Note that the i loop in the code below could be written backwards as i = j-1, j-2, ..., 2, 1 if this helps maintain consistency.</p></li><li><p>for j=n,n-1,...,2,1 do</p><p>if A(j,j)=0 then</p><p>// return error; system is singular</p><p>end</p><p>b(j) = b(j)/A(j,j)</p><p>for i=1,2,...
,j-1 do</p><p>b(i) = b(i) - A(i,j)*b(j)</p><p>end</p><p>end</p><p>Exercise 1.3.16</p><p>In column oriented substitution we factor our problem Ux = y as</p><p>[\hat{U} h; 0 u_{nn}][\hat{x}; x_n] = [\hat{y}; y_n],</p><p>which written out in each block gives</p><p>\hat{U}\hat{x} + h x_n = \hat{y}</p><p>u_{nn} x_n = y_n.</p><p>We first solve for x_n and then solve for \hat{x} in</p><p>\hat{U}\hat{x} = \hat{y} - h x_n.</p><p>This is an (n-1) by (n-1) upper triangular system in which we make the same substitutions, so we let</p><p>\tilde{y}_1 = y_1 - u_{1n}x_n</p><p>\tilde{y}_2 = y_2 - u_{2n}x_n</p><p>...</p><p>\tilde{y}_{n-1} = y_{n-1} - u_{n-1,n}x_n,</p><p>and store x_n in y_n. The algorithm is given by</p><p>for i=n:-1:1 do</p><p>if u(i,i)=0 set error flag</p><p>y(i) = y(i)/u(i,i)</p><p>for j=i-1:-1:1 do</p><p>y(j) = y(j) - u(j,i)*y(i)</p><p>end</p><p>end</p><p>The values of x are stored in y_1, y_2, ..., y_n. We can check this for the index n. We have</p><p>y_n ← y_n/u_{nn}</p><p>y_{n-1} ← y_{n-1} - u_{n-1,n}x_n</p><p>...</p><p>y_1 ← y_1 - u_{1,n}x_n.</p></li><li><p>Exercise 1.3.17</p><p>Part (a): Performing row oriented back-substitution on</p><p>[3 2 1 0; 0 1 2 3; 0 0 2 -1; 0 0 0 4][x_1; x_2; x_3; x_4] = [-10; 10; -1; 12],</p><p>we have</p><p>x_4 = 3</p><p>x_3 = (-1 + 3)/2 = 1</p><p>x_2 = 10 - 2(1) - 3(3) = -1</p><p>x_1 = (-10 - 2(-1) - 1(1))/3 = (-10 + 2 - 1)/3 = -9/3 = -3.</p><p>Part (b): Column oriented back-substitution would first solve for x_4, giving x_4 = 3, and then reduce the order of the system by one, giving</p><p>[3 2 1; 0 1 2; 0 0 2][x_1; x_2; x_3] = [-10; 10; -1] - 3[0; 3; -1] = [-10; 1; 2].</p><p>This general procedure is then repeated by solving for x_3. The last equation above gives x_3 = 1, and then reducing the order of the system above gives</p><p>[3 2; 0 1][x_1; x_2] = [-10; 1] - 1[1; 2] = [-11; -1].</p><p>Using the last equation to solve for x_2 gives x_2 = -1, and reducing the system one final time gives</p><p>[3]x_1 = [-11] - (-1)[2] = [-9],</p><p>which has as its solution x_1 = -3.</p><p>Exercise 1.3.20 (block column oriented forward substitution)</p><p>The block variant of column oriented forward substitution on the lower block triangular system with coefficient matrix</p><p>[G_{11}; G_{21} G_{22}; G_{31} G_{32} G_{33}
; ...; G_{s1} G_{s2} G_{s3} ... G_{ss}]</p><p>acting on the unknowns [y_1; y_2; y_3; ...; y_s]</p><p>would look like</p></li><li><p>for j=1,2,...,s do</p><p>if G(j,j) is singular then</p><p>// return error; system is singular</p><p>end</p><p>// matrix inverse of G(j,j) taken here</p><p>b(j) = inverse(G(j,j))*b(j)</p><p>for i=j+1,j+2,...,s do</p><p>// matrix multiplication performed here</p><p>b(i) = b(i) - G(i,j)*b(j)</p><p>end</p><p>end</p><p>Exercise 1.3.21 (row and column back substitution)</p><p>The difference between (block) row and column substitution is that in row forward substitution, as we process each row we subtract out multiples of the already computed unknowns. In column substitution we do all the subtractions involving a given unknown, for every row, at one time, and then no longer need to remember that unknown. The difference is simply a question of doing the calculations all at once or each time we access a row.</p><p>Exercise 1.3.23 (the determinant of a triangular matrix)</p><p>The fact that the determinant of a triangular matrix is equal to the product of the diagonal elements can easily be proved by induction. Let's assume without loss of generality that our system is lower triangular (upper triangular systems are transposes of lower triangular systems), and let n = 1; then |G| = g_{11} trivially. 
Now assume that for a triangular system of size n by n the determinant is given by the product of its n diagonal elements, and consider a matrix G of size (n+1) by (n+1) partitioned with a leading matrix G_{11} of size n by n:</p><p>G = [G_{11} 0; h^T g_{n+1,n+1}].</p><p>Now by expanding the determinant of G along its last column we see that</p><p>|G| = g_{n+1,n+1}|G_{11}| = g_{n+1,n+1} \prod_{i=1}^{n} g_{ii} = \prod_{i=1}^{n+1} g_{ii},</p><p>proving by induction that the determinant of a triangular matrix is equal to the product of its diagonal elements.</p></li><li><p>Exercise 1.3.29</p><p>Part (b): Partition the system as</p><p>[\hat{G} 0; h^T g_{nn}][\hat{y}; y_n] = [\hat{b}; b_n].</p><p>Given y_1, y_2, ..., y_{i-1} we compute y_i from</p><p>y_i = (b_i - h^T \hat{y})/g_{ii}.</p><p>As an algorithm we have</p><p>for i=1:n do</p><p>if g(i,i)=0 (set error)</p><p>for j=1:i-1 do</p><p>b(i) = b(i) - g(i,j)*b(j)</p><p>end</p><p>b(i) = b(i)/g(i,i)</p><p>end</p><p>Exercise 1.4.15</p><p>Let A = [4 0; 0 9].</p><p>Part (a): x^T A x = [x_1 x_2][4 0; 0 9][x_1; x_2] = 4x_1^2 + 9x_2^2 ≥ 0, which can be equal to zero only if x = 0.</p><p>Part (b): For A = GG^T we have g_{11} = ±√a_{11}; we take the positive sign, so g_{11} = 2. 
Now</p><p>g_{i1} = a_{i1}/g_{11} for i = 2, ..., n,</p><p>so we have that</p><p>g_{21} = a_{21}/g_{11} = 0/2 = 0.</p><p>Multiplying out GG^T we see that</p><p>a_{11} = g_{11}^2</p><p>a_{12} = g_{11}g_{21}</p><p>a_{22} = g_{21}^2 + g_{22}^2</p><p>a_{i2} = g_{i1}g_{21} + g_{i2}g_{22}.</p><p>Thus a_{22} = 0 + g_{22}^2, so g_{22} = √a_{22} = 3,</p></li><li><p>so</p><p>A = [4 0; 0 9] = [2 0; 0 3][2 0; 0 3] = GG^T,</p><p>so the Cholesky factor of A is [2 0; 0 3].</p><p>Part (c): G_2 = [-2 0; 0 3], G_3 = [2 0; 0 -3], G_4 = [-2 0; 0 -3].</p><p>Part (d): We can change the sign of any of the elements on the diagonal, so there are 2 · 2 · · · 2 = 2^n lower triangular matrices G with GG^T = A, where n is the number of diagonal elements.</p><p>Exercise 1.4.21</p><p>For A = [16 4 8 4; 4 10 8 4; 8 8 12 10; 4 4 10 12] we compute</p><p>g_{11} = √16 = 4</p><p>g_{21} = 4/4 = 1</p><p>g_{31} = 8/4 = 2</p><p>g_{41} = 4/4 = 1</p><p>g_{22} = √(a_{22} - g_{21}^2) = √(10 - 1) = 3</p><p>g_{32} = (a_{32} - g_{31}g_{21})/g_{22} = (8 - 2(1))/3 = 2</p><p>g_{42} = (a_{42} - g_{41}g_{21})/g_{22} = (4 - 1(1))/3 = 1</p><p>g_{33} = √(a_{33} - g_{31}^2 - g_{32}^2) = √(12 - 4 - 4) = 2</p><p>g_{43} = (a_{43} - g_{41}g_{31} - g_{42}g_{32})/g_{33} = (10 - 1(2) - 1(2))/2 = 3</p><p>g_{44} = √(a_{44} - g_{41}^2 - g_{42}^2 - g_{43}^2) = √(12 - 1 - 1 - 9) = 1.</p><p>Thus</p><p>G = [4 0 0 0; 1 3 0 0; 2 2 2 0; 1 1 3 1].</p></li><li><p>Since we want to solve GG^T x = b, we let y = G^T x and first solve Gy = b, which is the system</p><p>[4 0 0 0; 1 3 0 0; 2 2 2 0; 1 1 3 1][y_1; y_2; y_3; y_4] = [32; 26; 38; 30].</p><p>The first equation gives y_1 = 8, which put into the second equation gives 8 + 3y_2 = 26, or y_2 = 6. When we put these two variables into the third equation we get</p><p>16 + 12 + 2y_3 = 38, so y_3 = 5.</p><p>When all of these variables are put into the fourth equation we have</p><p>8 + 6 + 15 + y_4 = 30, so y_4 = 1.</p><p>Using these values of y_i we now want to solve G^T x = y, i.e.</p><p>[4 1 2 1; 0 3 2 1; 0 0 2 3; 0 0 0 1][x_1; x_2; x_3; x_4] = [8; 6; 5; 1].</p><p>The fourth equation gives x_4 = 1. The third equation is then 2x_3 + 3 = 5, so x_3 = 1. 
With both of these values put into the second equation we get</p><p>3x_2 + 2 + 1 = 6, so x_2 = 1.</p><p>With these three values put into the first equation we get</p><p>4x_1 + 1 + 2 + 1 = 8, so x_1 = 1.</p><p>Thus in total we have x = [1; 1; 1; 1].</p><p>Exercise 1.4.52</p><p>Since A is positive definite we must have for all non-zero x the condition x^T A x &gt; 0. Let x = e_i, a vector of all zeros but with a one in the i-th spot. Then x^T A x = e_i^T A e_i = a_{ii} &gt; 0, proving that the diagonal elements of a positive definite matrix must be positive.</p><p>Exercise 1.4.56</p><p>Let v be a nonzero vector and consider</p><p>v^T X^T A X v = (Xv)^T A (Xv) = y^T A y &gt; 0,</p><p>where we have defined y = Xv, and the last inequality follows from the positive definiteness of A. Thus X^T A X is positive definite. Note, if X were singular then X^T A X would be only positive semidefinite, since there would then exist a nonzero v such that Xv = 0, and therefore v^T X^T A X v = 0, in contradiction to the requirement of positive definiteness.</p></li><li><p>Exercise 1.4.58</p><p>Part (a): Consider the expression for \tilde{A}_{22}; we have</p><p>\tilde{A}_{22} = A_{22} - R_{12}^T R_{12}</p><p>= A_{22} - (R_{11}^{-T} A_{12})^T (R_{11}^{-T} A_{12})</p><p>= A_{22} - A_{12}^T R_{11}^{-1} R_{11}^{-T} A_{12}</p><p>= A_{22} - A_{12}^T (R_{11}^T R_{11})^{-1} A_{12}</p><p>= A_{22} - A_{12}^T A_{11}^{-1} A_{12},</p><p>which results from the definition A_{11} = R_{11}^T R_{11}.</p><p>Part (b): Since A is symmetric we must have A_{21} = A_{12}^T, and following the discussion on the previous page A has a decomposition like the following</p><p>A = [A_{11} A_{12}; A_{21} A_{22}]</p><p>= [R_{11}^T 0; A_{12}^T R_{11}^{-1} I][I 0; 0 \tilde{A}_{22}][R_{11} R_{11}^{-T} A_{12}; 0 I]. 
(3)</p><p>This can be checked by expanding the individual products on the right hand side (RHS) as follows</p><p>RHS = [R_{11}^T 0; A_{12}^T R_{11}^{-1} I][R_{11} R_{11}^{-T} A_{12}; 0 \tilde{A}_{22}] = [R_{11}^T R_{11}, A_{12}; A_{12}^T, \tilde{A}_{22} + A_{12}^T R_{11}^{-1} R_{11}^{-T} A_{12}],</p><p>which will equal A when</p><p>\tilde{A}_{22} = A_{22} - A_{12}^T R_{11}^{-1} R_{11}^{-T} A_{12}.</p><p>Part (c): To show that \tilde{A}_{22} is positive definite we note that by first defining X by</p><p>X^T ≡ [R_{11}^T 0; A_{12}^T R_{11}^{-1} I],</p><p>we see that X is nonsingular, and from Eq. 3 we have that</p><p>[I 0; 0 \tilde{A}_{22}] = X^{-T} A X^{-1}.</p><p>Since X^{-1} is invertible and A is positive definite, X^{-T} A X^{-1} is also positive definite, and therefore by the square partitioning of principal submatrices of positive definite matrices (Proposition 1.4.53) the submatrix \tilde{A}_{22} is positive definite.</p><p>Exercise 1.4.60</p><p>We will do this problem using induction on n, the size of our matrix A. If n = 1 then showing that the Cholesky factor is unique is trivial:</p><p>[a_{11}] = [+√a_{11}][+√a_{11}].</p></li><li><p>Assume that the Cholesky factor is unique for all matrices of size less than or equal to n. Let A be a positive definite matrix of size n+1 and assume (to reach a contradiction) that A has two Cholesky factorizations, i.e. A = R^T R and A = S^T S. Partition A and R as follows:</p><p>[A_{11} A_{12}; A_{12}^T a_{n+1,n+1}] = [R_{11}^T 0; R_{12}^T r_{n+1,n+1}][R_{11} R_{12}; 0 r_{n+1,n+1}].</p><p>Here A_{11} is of dimension n by n, A_{12} is n by 1, and the elements of R are partitioned conformably with A. Then by equating terms in the above block matrix equation we have three unique equations (equating terms for A_{12}^T results in a transpose of the equation for A_{12}). The three equations are</p><p>A_{11} = R_{11}^T R_{11}</p><p>A_{12} = R_{11}^T R_{12}</p><p>a_{n+1,n+1} = R_{12}^T R_{12} + r_{n+1,n+1}^2.</p><p>From the induction hypothesis, since A_{11} is n by n its Cholesky factor (i.e. R_{11}) is unique. 
This implies that the column vector R_{12} is unique, since R_{11} (and correspondingly R_{11}^T) is invertible. It then follows, since r_{n+1,n+1} &gt; 0, that r_{n+1,n+1} must be unique, since it must equal</p><p>r_{n+1,n+1} = +√(a_{n+1,n+1} - R_{12}^T R_{12}).</p><p>We know that the expression</p><p>a_{n+1,n+1} - R_{12}^T R_{12} &gt; 0</p><p>by an equivalent result to the Schur complement result discussed in the book. Since every component of our construction of R is unique, we have proven that the Cholesky decomposition is unique for positive definite matrices of size n+1. By induction it must hold for positive definite matrices of any size.</p><p>Exercise 1.4.62</p><p>Since A is positive definite it has a Cholesky factorization given by A = R^T R. Taking the determinant of this expression gives</p><p>|A| = |R^T||R| = |R|^2 = (\prod_{i=1}^{n} r_{ii})^2 &gt; 0,</p><p>showing the desired inequality.</p></li><li><p>Exercise 1.5.9</p><p>We wish to count the number of operations required to solve the banded system Rx = y with semiband width s, i.e.</p><p>R = [r_{1,1} r_{1,2} ... r_{1,s+1}; r_{2,2} r_{2,3} ... r_{2,s+2}; ...; r_{n-1,n-1} r_{n-1,n}; r_{n,n}],</p><p>so at row i we have non-zero elements in columns j = i, i+1, i+2, ..., i+s, assuming that i+s is less than n. Then a pseudocode implementation of row-oriented back substitution would look something like the following</p><p>for i=n,n-1,...,2,1 do</p><p>// the following loop is not executed when i=n</p><p>for j=min(n,i+s),min(n,i+s)-1,...
,i+2,i+1 do</p><p>y(i) = y(i) - R(i,j)*y(j)</p><p>end</p><p>if R(i,i)=0 then</p><p>// return error; system is singular</p><p>end</p><p>y(i) = y(i)/R(i,i)</p><p>end</p><p>Counting the number of flops this requires, we have approximately two flops for every execution of the line y(i) = y(i) - R(i,j)*y(j), giving the following expression for the number of flops</p><p>\sum_{i=1}^{n} [ (\sum_{j=i+1}^{min(n,i+s)} 2) + 1 ].</p><p>Now since</p><p>\sum_{j=i+1}^{min(n,i+s)} 2 = O(2s),</p><p>the above sum simplifies (using order notation) to</p><p>O(n + 2sn) = O(2sn),</p><p>as requested.</p><p>Exercise 1.6.4</p><p>Please see the Matlab file exercise_1_6_4.m for the code that performs the suggested numerical experiments. From those experiments one can see that the minimum degree ordering produces the best ordering as far as remaining non-zero elements and time...</p></li></ul>
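As a numerical cross-check of the Cholesky computations in Exercise 1.4.21, the following Python/NumPy sketch (the helper name cholesky_lower is my own) builds the lower triangular factor G column by column from the same formulas used above, and then solves GG^T x = b with a forward and a back substitution. A dedicated triangular solver (e.g. scipy.linalg.solve_triangular) would exploit the structure better; np.linalg.solve is used here only to keep the sketch self-contained.

```python
import numpy as np

def cholesky_lower(A):
    """Return lower triangular G with A = G @ G.T (A assumed positive definite)."""
    n = A.shape[0]
    G = np.zeros_like(A, dtype=float)
    for j in range(n):
        # g_jj = sqrt(a_jj - sum_k g_jk^2); produces NaN if A is not positive definite
        G[j, j] = np.sqrt(A[j, j] - np.dot(G[j, :j], G[j, :j]))
        for i in range(j + 1, n):
            # g_ij = (a_ij - sum_k g_ik g_jk) / g_jj
            G[i, j] = (A[i, j] - np.dot(G[i, :j], G[j, :j])) / G[j, j]
    return G

# The matrix and right hand side from Exercise 1.4.21.
A = np.array([[16., 4., 8., 4.],
              [4., 10., 8., 4.],
              [8., 8., 12., 10.],
              [4., 4., 10., 12.]])
b = np.array([32., 26., 38., 30.])

G = cholesky_lower(A)
y = np.linalg.solve(G, b)      # forward substitution step: solve G y = b
x = np.linalg.solve(G.T, y)    # back substitution step: solve G^T x = y

assert np.allclose(G @ G.T, A)
assert np.allclose(y, [8., 6., 5., 1.])   # intermediate vector found above
assert np.allclose(x, [1., 1., 1., 1.])   # solution found above
```

The assertions reproduce both the intermediate vector y = (8, 6, 5, 1) and the final solution x = (1, 1, 1, 1) computed by hand in the exercise.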