Elkin Santafé, an engineer from the Industrial University of Santander, gives a short summary of direct methods for the solution of systems of linear equations.


Direct methods for the solution of systems of linear equations

Definition of the problem

The problem is to solve the linear system

$$A x = b$$

where A is the coefficient matrix, x is the vector of unknowns, and b is the vector of independent terms.

Augmented form of the matrix

$$
\begin{bmatrix}
a_{11} & \cdots & a_{1j} & \cdots & a_{1n} \\
a_{21} & \cdots & a_{2j} & \cdots & a_{2n} \\
\vdots &        & \vdots &        & \vdots \\
a_{i1} & \cdots & a_{ij} & \cdots & a_{in} \\
\vdots &        & \vdots &        & \vdots \\
a_{m1} & \cdots & a_{mj} & \cdots & a_{mn}
\end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}
=
\begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{bmatrix}
$$

$$
B = [A \mid b] =
\left[\begin{array}{ccccc|c}
a_{11} & \cdots & a_{1j} & \cdots & a_{1n} & b_1 \\
a_{21} & \cdots & a_{2j} & \cdots & a_{2n} & b_2 \\
\vdots &        & \vdots &        & \vdots & \vdots \\
a_{m1} & \cdots & a_{mj} & \cdots & a_{mn} & b_m
\end{array}\right]
$$

• If b = 0, the system is homogeneous.
• If b ≠ 0, the system is not homogeneous.

Special Systems

Existence and uniqueness

Singular Systems

Ill-Conditioned Systems

TYPES OF DIRECT METHODS

• Gauss
• Gauss with pivoting
• Gauss–Jordan
• Thomas

Gaussian elimination

In linear algebra, Gaussian elimination is an algorithm for solving systems of linear equations, finding the rank of a matrix, and calculating the inverse of an invertible square matrix. Gaussian elimination is named after German mathematician and scientist Carl Friedrich Gauss.

Elementary row operations are used to reduce a matrix to row echelon form. Gauss–Jordan elimination, an extension of this algorithm, reduces the matrix further to reduced row echelon form. Gaussian elimination alone is sufficient for many applications.

Algorithm overview

The process of Gaussian elimination has two parts. The first part (forward elimination) reduces a given system to triangular or echelon form, or produces a degenerate equation, indicating that the system has no solution. This is accomplished through the use of elementary row operations. The second part uses back substitution to find the solution of the triangular system.
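A minimal sketch of the two parts in Python with NumPy (assuming a square, nonsingular system in which no zero pivot turns up; row exchanges are discussed under pivoting below):

```python
import numpy as np

def gauss_solve(A, b):
    """Solve A x = b by forward elimination followed by back substitution.

    Minimal sketch: assumes A is square and nonsingular, and that no
    zero pivot is encountered (no row exchanges are performed).
    """
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)

    # Part 1: forward elimination -> upper triangular system
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]        # elimination multiplier
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]

    # Part 2: back substitution
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x
```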

Stated equivalently for matrices, the first part reduces a matrix to row echelon form using elementary row operations while the second reduces it to reduced row echelon form, or row canonical form.

Another point of view, which turns out to be very useful for analyzing the algorithm, is that Gaussian elimination computes a matrix decomposition. The three elementary row operations used in Gaussian elimination (multiplying rows, switching rows, and adding multiples of rows to other rows) amount to multiplying the original matrix with invertible matrices from the left. The first part of the algorithm computes an LU decomposition, while the second part writes the original matrix as the product of a uniquely determined invertible matrix and a uniquely determined reduced row-echelon matrix.

Other applications

Finding the inverse of a matrix

Suppose A is a square matrix and you need to calculate its inverse. The identity matrix is augmented to the right of A, forming the block matrix B = [A, I]. Through application of elementary row operations and the Gaussian elimination algorithm, the left block of B can be reduced to the identity matrix I, which leaves A⁻¹ in the right block of B.

If the algorithm is unable to reduce A to triangular form, then A is not invertible.
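A sketch of this augmented-block idea, assuming NumPy is available (the helper name gauss_jordan_inverse is illustrative; in practice numpy.linalg.inv does this job):

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Invert A by reducing the augmented block [A | I] so that the
    left block becomes I; the right block is then A^{-1}.
    Sketch only: raises if no usable pivot can be found."""
    n = A.shape[0]
    B = np.hstack([A.astype(float), np.eye(n)])  # B = [A, I]
    for k in range(n):
        # choose the largest pivot candidate in column k (partial pivoting)
        p = k + np.argmax(np.abs(B[k:, k]))
        if np.isclose(B[p, k], 0.0):
            raise ValueError("A is not invertible")
        B[[k, p]] = B[[p, k]]             # swap the pivot row into place
        B[k] /= B[k, k]                   # scale pivot row to a leading 1
        for i in range(n):
            if i != k:
                B[i] -= B[i, k] * B[k]    # zero out the rest of column k
    return B[:, n:]                       # right block now holds A^{-1}
```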

General algorithm to compute ranks and bases

The Gaussian elimination algorithm can be applied to any matrix A. If we get "stuck" in a given column, we move to the next column. In this way, for example, some matrices can be transformed to a matrix in reduced row echelon form such as

$$
T = \begin{bmatrix}
1 & * & 0 & 0 & * & * & 0 & * & 0 \\
0 & 0 & 1 & 0 & * & * & 0 & * & 0 \\
0 & 0 & 0 & 1 & * & * & 0 & * & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & * & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix}
$$

(the *'s are arbitrary entries). This echelon matrix T contains a wealth of information about A: the rank of A is 5, since there are 5 non-zero rows in T; the vector space spanned by the columns of A has a basis consisting of its first, third, fourth, seventh, and ninth columns (the columns containing the leading ones in T); and the *'s tell you how the other columns of A can be written as linear combinations of the basis columns.

Gauss with pivoting

Avoids the problem of division by zero or by values close to zero.

There are two techniques:

1. Keeping the pivot in its position.
2. Reordering the system.
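A sketch of the second technique, reordering (partial pivoting): before each elimination step, the row whose pivot candidate has the largest absolute value is swapped into the pivot position. The function name gauss_pivot_solve is illustrative:

```python
import numpy as np

def gauss_pivot_solve(A, b):
    """Gaussian elimination with partial pivoting: before eliminating
    column k, swap in the row whose entry in column k has the largest
    absolute value, avoiding division by zero or by a tiny pivot."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))  # row of largest pivot candidate
        if p != k:                           # reorder the system
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # back substitution, as in plain Gaussian elimination
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x
```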

Gauss Jordan Elimination Through Pivoting

A system of linear equations can be placed into matrix form. Each equation becomes a row and each variable becomes a column. An additional column is added for the right hand side. A system of linear equations and the resulting matrix are shown.

The system of linear equations ...

3x + 2y - 4z = 3
2x + 3y + 3z = 15
5x - 3y + z = 14

... becomes the augmented matrix ...

 x   y   z | rhs
 3   2  -4 |  3
 2   3   3 | 15
 5  -3   1 | 14

What is pivoting?

The objective of pivoting is to make an element above or below a leading one into a zero.

The "pivot" or "pivot element" is an element on the left hand side of a matrix that you want the elements above and below to be zero.

Normally, this element is a one. Books that mention pivoting will usually tell you that you must pivot on a one. If you restrict yourself to the three elementary row operations, this is a true statement.

However, if you are willing to combine the second and third elementary row operations, you come up with another row operation (not elementary, but still valid).

You can multiply a row by a non-zero constant and add it to a non-zero multiple of another row, replacing that row.

So what? If you are required to pivot on a one, then you must sometimes use the second elementary row operation and divide a row through by the leading element to make it into a one. Division leads to fractions. While fractions are your friends, you're less likely to make a mistake if you don't use them.

What's the catch? If you don't pivot on a one, you are likely to encounter larger numbers. Most people are willing to work with the larger numbers to avoid the fractions.
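As a quick check of the worked example above, a short NumPy snippet confirms the solution of the system (numpy.linalg.solve uses an LU factorization internally rather than Gauss-Jordan, but it yields the same answer):

```python
import numpy as np

# Coefficient matrix and right hand side of the example system
A = np.array([[3.0,  2.0, -4.0],
              [2.0,  3.0,  3.0],
              [5.0, -3.0,  1.0]])
b = np.array([3.0, 15.0, 14.0])

print(np.linalg.solve(A, b))  # -> [3. 1. 2.], i.e. x = 3, y = 1, z = 2
```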

Thomas’ Method

ADVANTAGES

The method applies to tridiagonal systems, so machine memory is reduced by not having to store the zeros: only the three vectors a, b, and c (the sub-, main, and super-diagonals) are stored, using 3n locations instead of n×n (advantageous for n ≥ 50).

It doesn't require pivoting, and it reduces the number of operations.
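A minimal sketch of the Thomas algorithm, storing only the three diagonals; it assumes a system that needs no pivoting (e.g. diagonally dominant), which is the usual setting for this method:

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Thomas algorithm for a tridiagonal system A x = d.

    a: sub-diagonal   (length n; a[0] is unused)
    b: main diagonal  (length n)
    c: super-diagonal (length n; c[n-1] is unused)
    Stores only the three diagonals: 3n locations instead of n*n.
    """
    n = len(d)
    cp = np.zeros(n)   # modified super-diagonal
    dp = np.zeros(n)   # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                  # forward sweep
        denom = b[i] - a[i] * cp[i - 1]
        if i < n - 1:
            cp[i] = c[i] / denom
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):         # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```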

Jacobi Method

Given a square system of n linear equations

$$A x = b,$$

A can be decomposed into a diagonal component D and the remainder R:

$$A = D + R.$$

The system of linear equations may then be rewritten as

$$D x = b - R x,$$

and finally:

$$x = D^{-1}(b - R x).$$

The Jacobi method is an iterative technique that solves the left hand side of this expression for x, using the previous value of x on the right hand side. Analytically, this may be written as

$$x^{(k+1)} = D^{-1}\left(b - R\,x^{(k)}\right).$$

The element-based formula is thus:

$$x_i^{(k+1)} = \frac{1}{a_{ii}} \left( b_i - \sum_{j \neq i} a_{ij}\, x_j^{(k)} \right), \qquad i = 1, \dots, n.$$

Note that the computation of $x_i^{(k+1)}$ requires each element of $x^{(k)}$ except $x_i^{(k)}$ itself. Unlike in the Gauss–Seidel method, we can't overwrite $x_i^{(k)}$ with $x_i^{(k+1)}$, as that value will be needed by the rest of the computation. This is the most meaningful difference between the Jacobi and Gauss–Seidel methods, and it is the reason why the former can be implemented as a parallel algorithm, unlike the latter. The minimum amount of storage is two vectors of size n.

Algorithm

Choose an initial guess x^(0) to the solution
while convergence not reached do
    for i := 1 step until n do
        σ = 0
        for j := 1 step until n do
            if j != i then
                σ = σ + a_ij · x_j^(k)
            end if
        end (j-loop)
        x_i^(k+1) = (b_i - σ) / a_ii
    end (i-loop)
    check if convergence is reached
end (while convergence condition not reached loop)
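A runnable version of this pseudocode (a sketch, assuming a convergence criterion based on the infinity norm and an A for which Jacobi converges, e.g. strictly diagonally dominant):

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
    """Jacobi iteration: x_i^(k+1) = (b_i - sum_{j != i} a_ij x_j^(k)) / a_ii.

    Keeps two vectors of size n (x and x_new), matching the note above:
    x^(k) cannot be overwritten while x^(k+1) is being computed.
    """
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for _ in range(max_iter):
        x_new = np.empty(n)
        for i in range(n):
            # sum over all j != i, using only the previous iterate x
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x_new[i] = (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:  # convergence check
            return x_new
        x = x_new
    return x
```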

Gauss-Seidel Method

The Gauss-Seidel method (called Seidel's method by Jeffreys and Jeffreys 1988, p. 305) is a technique for solving the equations of a linear system one at a time in sequence, using previously computed results as soon as they are available.

There are two important characteristics of the Gauss-Seidel method that should be noted. First, the computations appear to be serial: since each component of the new iterate depends upon all previously computed components, the updates cannot be done simultaneously as in the Jacobi method. Second, the new iterate depends upon the order in which the equations are examined; if this ordering is changed, the components of the new iterate (and not just their order) will also change.
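A sketch of the method under the same assumptions as the Jacobi example above; note that a single vector x suffices because each new component is used as soon as it is computed:

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=500):
    """Gauss-Seidel iteration: like Jacobi, except each newly computed
    component x_i is used immediately within the same sweep, so one
    vector x suffices and the updates are inherently serial.
    Sketch only: converges e.g. for strictly diagonally dominant
    or symmetric positive definite A."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # x[:i] already holds new values, x[i+1:] still holds old ones
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            return x
    return x
```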

Bibliography

Métodos Numéricos en Ingeniería de Petróleos. Elkin Rodolfo Santafé Rangel, Ingeniero de Petróleos, Bucaramanga, Colombia © 2008

http://en.wikipedia.org/wiki/Gaussian_elimination

http://people.richland.edu/james/lecture/m116/matrices/pivot.html

http://en.wikipedia.org/wiki/Jacobi_method