Computational Physics


Post on 21-Nov-2014







<p>Physics 75.502/487 Computational Physics, Fall/Winter 1998/99<br>
Dean Karlen, Department of Physics, Carleton University</p>

<p>Part I: Introduction<br>
Part II: Numerical Methods<br>
Part III: Monte Carlo Techniques<br>
Part IV: Statistics for Physicists<br>
Part V: Special Topics</p>

<p>Rev. 1.3, 1998/99</p>

<p>Part II: Numerical Methods</p>

<p>Topics: Linear Algebra; Interpolation and Extrapolation; Integration; Root Finding; Minimization or Maximization; Differential Equations.</p>

<p>References: Numerical Recipes (in Fortran or C): The Art of Scientific Computing, Second Edition, W.H. Press, S.A. Teukolsky, W.T. Vetterling, B.P. Flannery, Cambridge University Press, 1992; Numerical Methods for Physics, A.L. Garcia, Prentice Hall, 1994.</p>

<p>Solving Linear Algebraic Equations</p>

<p>General problem: there are \(N\) unknowns \(x_j\) and \(M\) equations,</p>

<p>\[ \sum_{j=1}^{N} a_{ij} x_j = b_i, \qquad i = 1, \ldots, M. \]</p>

<p>If \(N = M\) there can be a unique solution, unless there is row or column degeneracy (i.e. the system is singular). Numerical solutions to this problem can fail in additional ways:</p>

<p>– the equations are so close to being singular that round-off error renders them singular, and the algorithm fails;<br>
– the equations are close to being singular and \(N\) is so large that round-off errors accumulate and swamp the result.</p>

<p>Limits on \(N\), if the system is not close to singular: with 32-bit arithmetic, \(N\) up to around 50; with 64-bit, \(N\) up to a few hundred (CPU limited). If the coefficients are sparse, \(N\) of 1000 or more can be handled by special methods.</p>
<p>Common Mistake</p>

<p>A common mistake when manipulating matrices is that incorrect logical and physical dimensions are passed to a function. In Fortran, for example, one might set up a general-purpose matrix as follows:</p>

<pre>
      PARAMETER (NP=4,MP=6)
      REAL A(NP,MP)
</pre>

<p>If a particular problem deals with 3 equations in 4 unknowns, the logical size of the matrix is (3,4) whereas the physical size is (NP,MP). In order for a function to interpret the matrix properly, it needs to know both the logical and the physical dimensions. Fortran stores the elements of the matrix column by column; the physical memory locations map onto the logical array as follows (– marks unused storage):</p>

<pre>
   Physical memory             Logical array
    1   5   9  13  17  21      a11 a12 a13 a14  –  –
    2   6  10  14  18  22      a21 a22 a23 a24  –  –
    3   7  11  15  19  23      a31 a32 a33 a34  –  –
    4   8  12  16  20  24       –   –   –   –   –  –
</pre>

<p>Typical Linear Algebra Problems</p>

<p>\(Ax = b\), where \(A\) is a known \(N \times N\) matrix and \(b\) is a known vector; the problem is to find the solution vector \(x\).</p>

<p>Given \(A\), find \(A^{-1}\) or find \(\det(A)\).</p>

<p>If \(A\) is an \(N \times M\) matrix with \(M < N\), find the solution space. If \(M > N\), find the "best" result (least squares).</p>

<p>Basic methods:</p>

<p>1. Gauss-Jordan elimination<br>
2. Gaussian elimination with backsubstitution<br>
3. LU decomposition</p>
<p>Gauss-Jordan Elimination</p>

<p>+ an efficient method for inverting \(A\)<br>
− 3 times slower than other methods when \(A^{-1}\) is not needed<br>
− not recommended as a general-purpose method</p>

<p>Method without pivoting: perform operations that transform \(A\) into the identity matrix. Start from</p>

<p>\[ \begin{pmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \\ a_{41} & a_{42} & a_{43} & a_{44} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = \begin{pmatrix} b_1 \\ b_2 \\ b_3 \\ b_4 \end{pmatrix}. \]</p>

<p>Divide the first row by \(a_{11}\):</p>

<p>\[ \begin{pmatrix} 1 & \frac{a_{12}}{a_{11}} & \frac{a_{13}}{a_{11}} & \frac{a_{14}}{a_{11}} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \\ a_{41} & a_{42} & a_{43} & a_{44} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = \begin{pmatrix} \frac{b_1}{a_{11}} \\ b_2 \\ b_3 \\ b_4 \end{pmatrix}. \]</p>

<p>Then subtract \(a_{i1}\) times the new first row from row \(i\), for \(i = 2, 3, 4\):</p>

<p>\[ \begin{pmatrix} 1 & \frac{a_{12}}{a_{11}} & \frac{a_{13}}{a_{11}} & \frac{a_{14}}{a_{11}} \\ 0 & a_{22} - a_{12}\frac{a_{21}}{a_{11}} & a_{23} - a_{13}\frac{a_{21}}{a_{11}} & a_{24} - a_{14}\frac{a_{21}}{a_{11}} \\ 0 & a_{32} - a_{12}\frac{a_{31}}{a_{11}} & a_{33} - a_{13}\frac{a_{31}}{a_{11}} & a_{34} - a_{14}\frac{a_{31}}{a_{11}} \\ 0 & a_{42} - a_{12}\frac{a_{41}}{a_{11}} & a_{43} - a_{13}\frac{a_{41}}{a_{11}} & a_{44} - a_{14}\frac{a_{41}}{a_{11}} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = \begin{pmatrix} \frac{b_1}{a_{11}} \\ b_2 - b_1\frac{a_{21}}{a_{11}} \\ b_3 - b_1\frac{a_{31}}{a_{11}} \\ b_4 - b_1\frac{a_{41}}{a_{11}} \end{pmatrix}. \]</p>

<p>Normalizing the second row then gives elements \(a'_{23}\), \(a'_{24}\) and \(b'_2\), and the elimination proceeds to the second column.</p>
<p>Subtracting the appropriate multiples of the second row clears the second column, and so on through the remaining columns. After continuing this process, one gets the following:</p>

<p>\[ \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = \begin{pmatrix} b'_1 \\ b'_2 \\ b'_3 \\ b'_4 \end{pmatrix}, \]</p>

<p>and hence the solutions are \(x_i = b'_i\).</p>

<p>Note that the same method could have produced \(A^{-1}\). That is, replace \(x\) by a matrix \(Y\) and \(b\) by the identity matrix, \(AY = I\). Then, after performing the same operations as above that transform \(A\) into the identity,</p>

<p>\[ IY = I' = A^{-1}. \]</p>

<p>What if a diagonal element is zero? If \(a_{11} = 0\), or another derived diagonal element (such as \(a_{22} - a_{12} a_{21}/a_{11}\) in the example above) is zero, the algorithm fails. If instead of being exactly zero one of these terms is merely very small, the remaining equations can become identical in the presence of round-off error.</p>

<p>Solution: pivoting. By interchanging rows (partial pivoting) or both rows and columns (full pivoting), this problem can be avoided. To maintain the identity matrix being formed, interchange only rows below and columns to the right of the current diagonal element. If rows are interchanged, one must also interchange the corresponding rows in \(b\). If columns are interchanged, one must also interchange the corresponding rows in \(x\); these rows will have to be restored to the original order at the end. How to decide which rows (or columns) to substitute? Choosing the row with the largest (in magnitude) value works quite well.</p>
<p>Implementation</p>

<p>To minimize storage requirements: use \(b\) to build up the solution; there is no need for a separate array. Similarly, the inverse can be built up in the input matrix. The disadvantage is that the input matrix and right-hand-side vector are destroyed by the operation.</p>

<p>Numerical Recipes:</p>

<pre>
      SUBROUTINE gaussj(a,n,np,b,m,mp)
</pre>

<p>where a is an n × n matrix in an array of physical dimension np × np, and b is an n × m matrix in an array of physical dimension np × mp. Note that a is replaced by its inverse, and b by its solutions.</p>

<p>Gaussian Elimination with Backsubstitution</p>

<p>This method reduces the number of operations, compared with the Gauss-Jordan method (including its inverse calculation), by about a factor of 3 (if the inverse is not required).</p>

<p>Method without pivoting: perform operations that transform \(A\) into an upper triangular matrix:</p>

<p>\[ \begin{pmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \\ a_{41} & a_{42} & a_{43} & a_{44} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = \begin{pmatrix} b_1 \\ b_2 \\ b_3 \\ b_4 \end{pmatrix} \]</p>

<p>\[ \begin{pmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ 0 & a'_{22} & a'_{23} & a'_{24} \\ 0 & a'_{32} & a'_{33} & a'_{34} \\ 0 & a'_{42} & a'_{43} & a'_{44} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = \begin{pmatrix} b_1 \\ b'_2 \\ b'_3 \\ b'_4 \end{pmatrix} \]</p>

<p>\[ \begin{pmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ 0 & a'_{22} & a'_{23} & a'_{24} \\ 0 & 0 & a'_{33} & a'_{34} \\ 0 & 0 & 0 & a'_{44} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = \begin{pmatrix} b_1 \\ b'_2 \\ b'_3 \\ b'_4 \end{pmatrix} \]</p>

<p>Pivoting is important for this method also.</p>
<p>To solve for the \(x_i\), backsubstitute:</p>

<p>\[ x_4 = \frac{b'_4}{a'_{44}}, \qquad x_3 = \frac{1}{a'_{33}} \left( b'_3 - x_4 a'_{34} \right), \qquad \ldots \]</p>

<p>Note that both this method and the Gauss-Jordan method require all right-hand sides to be known in advance.</p>

<p>LU Decomposition</p>

<p>Any matrix \(A\) can be decomposed into the product of a lower triangular matrix \(L\) and an upper triangular matrix \(U\):</p>

<p>\[ Ax = b, \qquad (LU)x = b, \qquad L(Ux) = b. \]</p>

<p>So solve \(Ly = b\) for \(y\), and then solve \(Ux = y\) for \(x\); both are easily solved. Once the LU decomposition is found, one can solve for as many right-hand-side vectors as needed.</p>

<p>How to find \(L\) and \(U\)? Crout's algorithm. Note that</p>

<p>\[ \sum_{k=1}^{N} \ell_{ik} u_{kj} = a_{ij} \]</p>

<p>represents \(N^2\) equations where there are \(N^2 + N\) unknowns. Arbitrarily set the terms \(\ell_{ii} = 1\) to define a unique solution:</p>

<p>\[ \begin{pmatrix} 1 & 0 & 0 & 0 \\ \ell_{21} & 1 & 0 & 0 \\ \ell_{31} & \ell_{32} & 1 & 0 \\ \ell_{41} & \ell_{42} & \ell_{43} & 1 \end{pmatrix} \begin{pmatrix} u_{11} & u_{12} & u_{13} & u_{14} \\ 0 & u_{22} & u_{23} & u_{24} \\ 0 & 0 & u_{33} & u_{34} \\ 0 & 0 & 0 & u_{44} \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \\ a_{41} & a_{42} & a_{43} & a_{44} \end{pmatrix} \]</p>

<p>The terms in \(L\) and \(U\) can be determined as follows:</p>

<p>\[ \begin{aligned} u_{11} &= a_{11} \\ u_{12} &= a_{12} \\ \ell_{21} &= \frac{a_{21}}{u_{11}}, \quad \ell_{31} = \frac{a_{31}}{u_{11}}, \quad \ell_{41} = \frac{a_{41}}{u_{11}} \\ u_{22} &= a_{22} - \ell_{21} u_{12} \\ \ell_{32} &= \frac{1}{u_{22}} (a_{32} - \ell_{31} u_{12}), \quad \ell_{42} = \frac{1}{u_{22}} (a_{42} - \ell_{41} u_{12}) \\ u_{13} &= a_{13} \\ u_{23} &= a_{23} - \ell_{21} u_{13} \\ u_{33} &= a_{33} - \ell_{31} u_{13} - \ell_{32} u_{23} \end{aligned} \]</p>

<p>etc. The order above must be followed so that the \(\ell_{ij}\) and \(u_{ij}\) terms are available when necessary. Each \(a_{ij}\) appears once and only once, when the corresponding \(\ell_{ij}\) or \(u_{ij}\) term is calculated.</p>
<p>In order to save memory, these terms can be stored in the corresponding \(a_{ij}\) locations. Pivoting is essential here too, but only the interchange of rows is efficient.</p>

<p>Numerical Recipes:</p>

<pre>
      SUBROUTINE ludcmp(a,n,np,indx,d)
</pre>

<p>where a is an n × n matrix in an array of physical dimension np × np, and indx and d keep track of the rows permuted by pivoting. Note that a is replaced by the packed factors</p>

<p>\[ \begin{pmatrix} u_{11} & u_{12} & u_{13} & u_{14} \\ \ell_{21} & u_{22} & u_{23} & u_{24} \\ \ell_{31} & \ell_{32} & u_{33} & u_{34} \\ \ell_{41} & \ell_{42} & \ell_{43} & u_{44} \end{pmatrix}. \]</p>

<p>Once the LU decomposition is found, find solutions using backsubstitution:</p>

<pre>
      SUBROUTINE lubksb(a,n,np,indx,b)
</pre>

<p>where a and indx are the results from the call to ludcmp, and b is the right-hand side on input and the solution on output. Note that a and indx are not modified by this routine, so lubksb can be called repeatedly.</p>

<p>To find the inverse, solve</p>

<p>\[ Ax = \begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \end{pmatrix}, \; \begin{pmatrix} 0 \\ 1 \\ 0 \\ 0 \end{pmatrix}, \; \begin{pmatrix} 0 \\ 0 \\ 1 \\ 0 \end{pmatrix}, \; \begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \end{pmatrix} \]</p>

<p>to find the columns of \(A^{-1}\). The determinant is easily found:</p>

<p>\[ \det(A) = \prod_{i=1}^{N} u_{ii}. \]</p>

<p>Iterative Improvement of a Solution</p>

<p>The algorithms presented above sometimes yield solutions with precision less than the machine limit (depending on how close the equations are to being singular). Improved precision can be obtained by an iterative approach. Suppose \(x\) is the exact solution to</p>

<p>\[ Ax = b \]</p>

<p>and the resulting numerical solution is instead \(x + \delta x\). Then \(A(x + \delta x) = b + \delta b\), so</p>

<p>\[ A(\delta x) = A(x + \delta x) - b, \]</p>

<p>and so solve for \(\delta x\) and subtract it from the previous solution to get an improved solution.</p>
<p>Numerical Recipes provides</p>

<pre>
      SUBROUTINE mprove
</pre>

<p>which can be called repeatedly to improve the solution (although once is usually enough).</p>

<p>Singular Value Decomposition</p>

<p>If \(A\) is an \(N \times N\) matrix, it can be decomposed as</p>

<p>\[ A = U W V^{T} \]</p>

<p>where \(U\) and \(V\) are orthogonal (\(U^{-1} = U^{T}\)) and \(W\) is diagonal. The inverse of \(A\) is easily found to be</p>

<p>\[ A^{-1} = V \, \mathrm{diag}(1/w_j) \, U^{T}. \]</p>

<p>If one or more \(w_j\) is zero, then \(A\) is singular. If the ratio \(\min(w_j)/\max(w_j)\) is less than the machine precision, the matrix is ill conditioned; in this case it is often better to set such small \(w_j\) to 0.</p>

<p>Note that if \(A\) is singular: \(Ax = 0\) for some subspace of \(x\); this space is called the nullspace and its dimension the nullity. For \(Ax = b\), the space of all possible \(b\) is called the range and its dimension the rank. Then nullity + rank = \(N\), and the nullity equals the number of zero \(w_i\)'s. The columns of \(U\) with non-zero \(w_i\)'s span the range; the columns of \(V\) with zero \(w_i\)'s span the nullspace.</p>

<p>If \(A\) is singular or ill-conditioned, a whole space of vectors may satisfy \(Ax = b\). If the solution with the smallest \(|x|\) is desired, it can be found by replacing \(1/w_j\) by zero for all \(w_j = 0\)!</p>

<p>Numerical Recipes:</p>

<pre>
      SUBROUTINE svdcmp(a,m,n,mp,np,w,v)
</pre>

<p>Sparse Linear Systems</p>

<p>Systems with many zero matrix elements can be solved with special algorithms that save time and/or space (by not using memory to hold all those zeros). Tridiagonal systems, for example,</p>

<p>\[ \begin{pmatrix} a_{11} & a_{12} & 0 & 0 & 0 \\ a_{21} & a_{22} & a_{23} & 0 & 0 \\ 0 & a_{32} & a_{33} & a_{34} & 0 \\ 0 & 0 & a_{43} & a_{44} & a_{45} \\ 0 & 0 & 0 & a_{54} & a_{55} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \end{pmatrix} = \begin{pmatrix} b_1 \\ b_2 \\ b_3 \\ b_4 \\ b_5 \end{pmatrix} \]</p>

<p>can be LU decomposed much more quickly than by Crout's method; see SUBROUTINE tridag. Other forms of sparse matrices have their own special methods; see Numerical Recipes for details.</p>
<p>Exercise 1</p>

<p>Any resistor divider network can be put in the following form: a voltage source \(V\) is applied between point 1 and point 5 (ground), and a resistor \(R_{ij}\) connects each pair of voltage points. [Figure: network with voltage points \(V_1, \ldots, V_5\) and resistors \(R_{12}, R_{13}, R_{14}, R_{15}, R_{23}, R_{24}, R_{25}, R_{34}, R_{35}, R_{45}\).]</p>

<p>This network has 5 voltage points, \(V_i\). To calculate the total current, apply Kirchhoff's laws:</p>

<p>\[ \begin{aligned} I &= \sum_{i=1}^{5} (V_1 - V_i) \frac{1}{R_{1i}} \qquad (1) \\ 0 &= \sum_{i=1}^{5} (V_2 - V_i) \frac{1}{R_{2i}} \\ 0 &= \sum_{i=1}^{5} (V_3 - V_i) \frac{1}{R_{3i}} \\ 0 &= \sum_{i=1}^{5} (V_4 - V_i) \frac{1}{R_{4i}} \end{aligned} \]</p>

<p>where \(1/R_{ii} = 0\).</p>

<p>In order to solve for the four unknowns (\(I\), \(V_2\), \(V_3\), and \(V_4\)), one can rearrange the last three equations and identify \(V_1 = V\) and \(V_5 = 0\):</p>

<p>\[ \begin{aligned} -\left( \sum_{i=1}^{5} \frac{1}{R_{2i}} \right) V_2 + \frac{1}{R_{23}} V_3 + \frac{1}{R_{24}} V_4 &= -\frac{1}{R_{12}} V \\ \frac{1}{R_{23}} V_2 - \left( \sum_{i=1}^{5} \frac{1}{R_{3i}} \right) V_3 + \frac{1}{R_{34}} V_4 &= -\frac{1}{R_{13}} V \\ \frac{1}{R_{24}} V_2 + \frac{1}{R_{34}} V_3 - \left( \sum_{i=1}^{5} \frac{1}{R_{4i}} \right) V_4 &= -\frac{1}{R_{14}} V \end{aligned} \]</p>

<p>The voltages \(V_2\), \(V_3\), and \(V_4\) can then be found by numerical methods, and substituted into equation (1) in order to find the total current, \(I\).</p>

<p>Write a program that solves this problem for any number of voltage points between 3 and 50. Consider the special case where V=1 Volt, all resistors are present, and the va...</p>

