
Page 1: Week_1_KNS_2723

1

NUMERICAL METHODS

Part of the Numerical Methods and Statistics Course KNS 2723

by Prof. Dr. F. J. Putuhena

February 2015 & 2016
Department of Civil Engineering
Faculty of Engineering, University Malaysia Sarawak

Page 2: Week_1_KNS_2723

2

NUMERICAL METHODS

Table of Contents:
1. Introduction
2. Linear Algebraic Equations
3. Non-Linear Equations
4. Interpolation and Extrapolation (Curve Fitting)
5. Numerical Integration
6. The Numerical Solution of Ordinary Differential Equations
7. Applications

Page 3: Week_1_KNS_2723

3

NUMERICAL METHODS
1. Introduction

WHAT ARE NUMERICAL METHODS?
• Numerical methods are a class of methods for solving a wide variety of mathematical problems.
• This class of methods is unusual in that only arithmetic operations and logic are employed.
• The methods came of age with the introduction of electronic digital computers.

Page 4: Week_1_KNS_2723

4

NUMERICAL METHODS
1. Introduction

Introduction to the course topics, which include:

Course Plan KNS 2723_Old version
Course Plan KNS 2723_New version

Page 5: Week_1_KNS_2723

5

NUMERICAL METHODS

2. Linear Algebraic Equations

Introduction
Solutions of a Set of Linear Equations
• Direct Solution (Gaussian Elimination, Pivoting)
  – Basic Matrix Terminology and Operations
  – Matrix Representation and Formal Solution of a Set of Linear Equations
  – Gauss Elimination and Gauss-Jordan Elimination
  – Matrix Inversion by Gauss-Jordan Elimination
  – Ill-Conditioned Matrices and Sets of Equations
• Gauss-Seidel Method (Iteration) and Concepts of Relaxation

Page 6: Week_1_KNS_2723

6

2. Linear Algebraic Equations
BASIC MATRIX TERMINOLOGY AND OPERATIONS

Matrix Terminology
• Matrix size: n × m; n = number of rows and m = number of columns
• Element: a_ij; i = row index; j = column index
• Vector: a matrix with a single row or a single column
– Identity or Unit matrix:

1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 1

Page 7: Week_1_KNS_2723

7

2. Linear Algebraic Equations
BASIC MATRIX TERMINOLOGY AND OPERATIONS

– Banded matrix (only the elements in a band about the diagonal are non-zero):

a11 a12
a21 a22 a23
    a32 a33 a34
        a43 a44 a45
            a54 a55

– Symmetric matrix: a_ij = a_ji

– Triangular matrix:
• Upper triangular matrix (all elements below the diagonal are zero)
• Lower triangular matrix (all elements above the diagonal are zero)

A short NumPy illustration of these special matrices follows below.
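As a small illustration of the terminology on these two slides, here is a minimal NumPy sketch (the numerical values and variable names are illustrative only, not from the slides):

```python
import numpy as np

# Identity (unit) matrix of size 4 x 4
I = np.eye(4)

# Tridiagonal (banded) matrix: non-zero entries only on the main
# diagonal and the diagonals directly above and below it
A = (np.diag([2.0, 2.0, 2.0, 2.0, 2.0])
     + np.diag([-1.0, -1.0, -1.0, -1.0], 1)
     + np.diag([-1.0, -1.0, -1.0, -1.0], -1))

# Symmetric matrix: a_ij = a_ji, i.e. A == A.T
S = np.array([[4.0, 1.0, 2.0],
              [1.0, 3.0, 0.0],
              [2.0, 0.0, 5.0]])
print(np.allclose(S, S.T))          # True

# Upper and lower triangular parts (zeros below / above the diagonal)
U = np.triu(S)
L = np.tril(S)

# Matrix size n x m and element a_ij (NumPy indices start at 0)
n, m = A.shape
a_23 = A[1, 2]                      # element in row 2, column 3
```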

Page 8: Week_1_KNS_2723

8

2. Linear Algebraic Equations BASIC MATRIX TERMINOLOGY AND OPERATIONS

Matrix Operations (see the sketch after this list):

Addition & subtraction: S = A ± B  =>  s_ij = a_ij ± b_ij
(A + B) + C = A + (B + C)
A + B = B + A; A and B must have the same size n × m

Multiplication: P = AB  =>  p_ij = Σ (k = 1 to n) a_ik b_kj
If AB exists, A and B are conformable; in general AB ≠ BA

Scalar multiplication: kA  =>  k a_ij (every element is multiplied by the scalar k)

Inverse: I = AA^-1; I = identity matrix; A^-1 = inverse of A
If A^-1 is defined, then (A^-1)^-1 = A
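A minimal NumPy sketch of these operations (the matrices A and B below are illustrative values, not from the slides):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])

# Addition / subtraction: s_ij = a_ij +/- b_ij (same n x m required)
S = A + B
D = A - B

# Multiplication: p_ij = sum_k a_ik * b_kj; in general AB != BA
P = A @ B
print(np.allclose(A @ B, B @ A))    # False for this pair

# Scalar multiplication: every element is multiplied by the scalar
kA = 2.5 * A

# Inverse: A A^{-1} = I, and (A^{-1})^{-1} = A (when the inverse exists)
A_inv = np.linalg.inv(A)
print(np.allclose(A @ A_inv, np.eye(2)))        # True
print(np.allclose(np.linalg.inv(A_inv), A))     # True
```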

Page 9: Week_1_KNS_2723

9

2. Linear Algebraic Equations BASIC MATRIX TERMINOLOGY AND OPERATIONS

Transpose: A^T
A (n × m)  =>  A^T (m × n); (A^T)_ij = a_ji, i.e. exchange rows and columns

Symmetric: A = A^T

Orthogonal: A^T = A^-1

Determinant:
– The determinant of a matrix A is a scalar quantity, denoted Det(A) or |A|;
– The determinant of a 1 × 1 matrix is the number within the matrix itself;
– To each element a_ij of a matrix with higher dimensions, we assign a number called its minor, denoted M_ij;

Page 10: Week_1_KNS_2723

10

2. Linear Algebraic Equations BASIC MATRIX TERMINOLOGY AND OPERATIONS

– M_ij is the determinant of the matrix that arises by eliminating the elements in the ith row and jth column of matrix A; the result is a reduced matrix with dimensions (n-1) × (m-1);

– The corresponding cofactor is defined as C_ij = (-1)^(i+j) M_ij

– The determinant of A is defined as:
Det(A) = Σ (j = 1 to n) a_ij C_ij = Σ (i = 1 to n) a_ik C_ik,
where the choice of row i (first sum) or column k (second sum) is arbitrary.

– The determinant is equal to the sum of the products of the elements of the matrix with the associated cofactors, where the sum is computed across an arbitrary row or column.
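The cofactor definition above can be coded directly. The sketch below (function names are illustrative) expands recursively along the first row; note that this definition costs on the order of n! operations, so it is shown only as an illustration of the formula:

```python
import numpy as np

def minor(A, i, j):
    """Matrix obtained by deleting row i and column j of A."""
    return np.delete(np.delete(A, i, axis=0), j, axis=1)

def det_cofactor(A):
    """Determinant by cofactor expansion along the first row (i = 0)."""
    n = A.shape[0]
    if n == 1:                       # determinant of a 1 x 1 matrix
        return A[0, 0]
    det = 0.0
    for j in range(n):
        C = (-1) ** j * det_cofactor(minor(A, 0, j))   # cofactor C_0j
        det += A[0, j] * C
    return det

A = np.array([[2.0, 1.0, 3.0],
              [0.0, 4.0, 1.0],
              [5.0, 2.0, 0.0]])
print(det_cofactor(A), np.linalg.det(A))   # both values should agree
```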

Page 11: Week_1_KNS_2723

11

2. Linear Algebraic Equations BASIC MATRIX TERMINOLOGY AND OPERATIONS

Properties of determinants (a numerical check follows below):
– Det = 0 if all elements of any row or column are equal to zero;
– The magnitude of the determinant is unchanged if any 2 rows (or columns) are interchanged, but its sign changes;
– Det(A^T) = Det(A);
– If all elements in a row (or column) are multiplied by a scalar k, the determinant is multiplied by k;
– If 2 rows (or columns) are equal or proportional, the determinant is equal to zero; and
– Etc.
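A quick numerical check of some of these properties, assuming NumPy and an arbitrary illustrative matrix:

```python
import numpy as np

A = np.array([[3.0, 1.0, 2.0],
              [0.0, 4.0, 5.0],
              [6.0, 2.0, 1.0]])

# Det(A^T) = Det(A)
print(np.isclose(np.linalg.det(A.T), np.linalg.det(A)))        # True

# Interchanging two rows changes the sign of the determinant
B = A[[1, 0, 2], :]
print(np.isclose(np.linalg.det(B), -np.linalg.det(A)))         # True

# Multiplying one row by a scalar k multiplies the determinant by k
C = A.copy(); C[0, :] *= 3.0
print(np.isclose(np.linalg.det(C), 3.0 * np.linalg.det(A)))    # True

# Two proportional rows give a zero determinant
D = A.copy(); D[2, :] = 2.0 * D[0, :]
print(np.isclose(np.linalg.det(D), 0.0))                       # True
```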

Page 12: Week_1_KNS_2723

12

2. Linear Algebraic Equations Matrix Representation and Formal Solution of a set of Linear Equations

A set of linear equations:

C11 x1 + C12 x2 + C13 x3 + C14 x4 = r1
C21 x1 + C22 x2 + C23 x3 + C24 x4 = r2
C31 x1 + C32 x2 + C33 x3 + C34 x4 = r3
C41 x1 + C42 x2 + C43 x3 + C44 x4 = r4

An equivalent representation in matrix form is:

| C11 C12 C13 C14 | | x1 |   | r1 |
| C21 C22 C23 C24 | | x2 | = | r2 |
| C31 C32 C33 C34 | | x3 |   | r3 |
| C41 C42 C43 C44 | | x4 |   | r4 |

or CX = R

Page 13: Week_1_KNS_2723

13

2. Linear Algebraic Equations
Matrix Representation and Formal Solution of a Set of Linear Equations

Cramer’s Rule:

x_k = det C_k / det C

where C_k is the matrix C with its kth column replaced by R:

C_k = | C_i1  C_i2  ...  C_i,k-1  r_i  C_i,k+1  ...  C_in |   (ith row shown)

If det C = 0, no unique solution can be obtained; in this event the matrix C is termed singular. (See Example)
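A minimal sketch of Cramer's rule following the definition above (the function name and test system are illustrative, not from the slides):

```python
import numpy as np

def cramer(C, R):
    """Solve C x = R by Cramer's rule; returns None if C is singular."""
    det_C = np.linalg.det(C)
    if np.isclose(det_C, 0.0):       # singular matrix: no unique solution
        return None
    n = C.shape[0]
    x = np.empty(n)
    for k in range(n):
        Ck = C.copy()
        Ck[:, k] = R                 # replace the kth column by R
        x[k] = np.linalg.det(Ck) / det_C
    return x

C = np.array([[2.0, 1.0], [1.0, 3.0]])
R = np.array([3.0, 5.0])
print(cramer(C, R))                  # [0.8, 1.4]
```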

Page 14: Week_1_KNS_2723

14

2. Linear Algebraic Equations Matrix Representation and Formal Solution of a set of Linear Equations

CX = R can also be solved formally as X = C^-1 R; however, most of the time the solution is found by computing X directly, without solving for C^-1 as an intermediate step, since this is usually the more efficient approach.

In certain situations, solutions must be obtained for many different sets of equations in which C stays the same and only R is changed. Once C^-1 is found, each new solution vector X can then be obtained with little additional work.
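A short NumPy illustration of the two points above: solving for X directly versus computing C^-1 once and reusing it for several right-hand sides (the matrix and vectors are illustrative; in practice a factorization of C is usually reused rather than an explicit inverse):

```python
import numpy as np

C = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
R1 = np.array([5.0, 5.0, 3.0])
R2 = np.array([1.0, 0.0, 4.0])

# Usual approach: solve for X directly, without forming C^{-1}
x1 = np.linalg.solve(C, R1)

# When many right-hand sides share the same C, the inverse (or an LU
# factorization) can be computed once and reused for each new R
C_inv = np.linalg.inv(C)
x1_again = C_inv @ R1
x2 = C_inv @ R2

print(np.allclose(x1, x1_again))     # True
```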

Two approaches or methods:

Direct
– small, densely packed matrices
– large banded matrices
– “exact” solution (round-off error only)
– Cramer’s rule
– Gauss and Gauss-Jordan elimination

Indirect (iterative)
– large, sparse matrices
– accuracy is a function of the number of iterations
– Gauss-Seidel (see the sketch below)
– Relaxation
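A minimal sketch of the Gauss-Seidel iteration listed under the indirect methods (the function name, tolerance, and test system are illustrative; the example matrix is diagonally dominant so the iteration converges):

```python
import numpy as np

def gauss_seidel(C, R, x0=None, tol=1e-10, max_iter=200):
    """Iteratively solve C x = R; newest values are used immediately."""
    n = len(R)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = C[i, :] @ x - C[i, i] * x[i]     # sum of off-diagonal terms
            x[i] = (R[i] - s) / C[i, i]
        if np.max(np.abs(x - x_old)) < tol:      # convergence check
            break
    return x

# Diagonally dominant system, so the iteration converges
C = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
R = np.array([5.0, 5.0, 3.0])
print(gauss_seidel(C, R), np.linalg.solve(C, R))
```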

Page 15: Week_1_KNS_2723

15

2. Linear Algebraic Equations
Gauss Elimination and Gauss-Jordan Elimination

The Gauss elimination approach is to reduce the system to a triangular matrix.

CX = R, assume |C| ≠ 0:

| C11 C12 ... C1n | | x1 |   | r1 |
| C21 C22 ... C2n | | x2 | = | r2 |
| ...             | | .. |   | .. |
| Cm1 Cm2 ... Cmn | | xm |   | rm |

Normalize the first row and eliminate the first column:

| 1  C12/C11              ...  C1n/C11 | | x1 |   | r1/C11 |
| 0  [C22 - C21 C12/C11]  ...          | | x2 | = | etc.   |

Page 16: Week_1_KNS_2723

16

2. Linear Algebraic Equations
Gauss Elimination and Gauss-Jordan Elimination

Continuing, Gauss elimination produces an upper triangular matrix with 1s on the diagonal:

| 1  C'12 ... C'1m | | x1 |   | r'1 |
| 0   1   ... C'2m | | x2 | = | r'2 |
| 0   0   ...  1   | | xm |   | r'm |

In terms of computation, Gauss elimination reduces Cramer’s rule cost from O(n^4) to O(n^3).

Back substitution then gives the unknowns from the bottom up:

x_m = r'_m
x_{m-1} + C'_{m-1,n} x_m = r'_{m-1}  =>  x_{m-1} = r'_{m-1} - C'_{m-1,n} x_m, etc.
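A minimal sketch of Gauss elimination with back substitution as outlined above (no pivoting, so non-zero pivots are assumed; the test system is illustrative):

```python
import numpy as np

def gauss_eliminate(C, R):
    """Forward elimination to an upper triangular system with unit
    diagonal, then back substitution. Assumes |C| != 0 and non-zero
    pivots (no pivoting)."""
    A = C.astype(float).copy()
    r = R.astype(float).copy()
    n = len(r)
    for k in range(n):
        # Normalize the pivot row (divide by the original pivot first)
        r[k] /= A[k, k]
        A[k, k:] /= A[k, k]
        # Eliminate the entries below the pivot
        for i in range(k + 1, n):
            factor = A[i, k]
            A[i, k:] -= factor * A[k, k:]
            r[i] -= factor * r[k]
    # Back substitution: x_m = r'_m, then x_{m-1} = r'_{m-1} - C'_{m-1,m} x_m, ...
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = r[i] - A[i, i + 1:] @ x[i + 1:]
    return x

C = np.array([[2.0, 1.0, -1.0],
              [-3.0, -1.0, 2.0],
              [-2.0, 1.0, 2.0]])
R = np.array([8.0, -11.0, -3.0])
print(gauss_eliminate(C, R))         # expected: [2., 3., -1.]
```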

Page 17: Week_1_KNS_2723

17

2. Linear Algebraic Equations
Gauss Elimination and Gauss-Jordan Elimination

Gauss-Jordan elimination goes one step further than Gauss elimination by also reducing the above-diagonal elements C12, C13, C14, C23, C24, C34 to zero:

| 1 0 0 0 | | x1 |   | r'1 |
| 0 1 0 0 | | x2 | = | r'2 |
| 0 0 1 0 | | x3 |   | r'3 |
| 0 0 0 1 | | x4 |   | r'4 |

so the solution is read off directly: x1 = r'1, x2 = r'2, x3 = r'3, x4 = r'4.

In terms of computation, Gauss-Jordan elimination is O(n^3), while Cramer’s rule is O(n^4).

See Illustrative Problem.
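A corresponding Gauss-Jordan sketch, which reduces the coefficient matrix all the way to the identity so that no back substitution is needed (again no pivoting; the test system is illustrative and is not the course's own illustrative problem):

```python
import numpy as np

def gauss_jordan(C, R):
    """Reduce the augmented matrix [C | R] to [I | x].
    Assumes |C| != 0 and non-zero pivots (no pivoting)."""
    A = np.hstack([C.astype(float), R.astype(float).reshape(-1, 1)])
    n = C.shape[0]
    for k in range(n):
        A[k, :] /= A[k, k]                   # normalize the pivot row
        for i in range(n):
            if i != k:                       # eliminate above AND below the pivot
                A[i, :] -= A[i, k] * A[k, :]
    return A[:, -1]                          # last column now holds x

C = np.array([[2.0, 1.0, -1.0],
              [-3.0, -1.0, 2.0],
              [-2.0, 1.0, 2.0]])
R = np.array([8.0, -11.0, -3.0])
print(gauss_jordan(C, R))                    # expected: [2., 3., -1.]
```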

Page 18: Week_1_KNS_2723

18

2. Linear Algebraic Equations
Gauss Elimination and Gauss-Jordan Elimination

Gauss Elimination for a Banded Matrix

| b1 c1 0  0  | | x1 |   | r1 |
| a2 b2 c2 0  | | x2 | = | r2 |
| 0  a3 b3 c3 | | x3 |   | r3 |
| 0  0  a4 b4 | | x4 |   | r4 |

After normalizing the first row and eliminating a2:

| 1  c1/b1            0  0  | | x1 |   | r1/b1         |
| 0  [b2 - a2 c1/b1]  c2 0  | | x2 | = | r2 - a2 r1/b1 |
| 0  a3               b3 c3 | | x3 |   | r3            |
| 0  0                a4 b4 | | x4 |   | r4            |

Page 19: Week_1_KNS_2723

19

2. Linear Algebraic Equations
Gauss Elimination and Gauss-Jordan Elimination

Gauss Elimination for a Banded Matrix (continued): repeating the process for each row gives

| 1  c1/b1  0       0      | | x1 |   | r1/b1               |
| 0  1      c2/[ ]  0      | | x2 | = | (r2 - a2 r1/b1)/[ ] |
| 0  0      1       c3/[ ] | | x3 |   | ( )/[ ]             |
| 0  0      0       1      | | x4 |   | ( )/[ ]             |

where [ ] denotes the modified diagonal element of each row.
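Specialised to a tridiagonal banded system, this elimination is commonly implemented as the Thomas algorithm. Below is a minimal sketch (names are illustrative; no pivoting is performed):

```python
import numpy as np

def tridiag_solve(a, b, c, r):
    """Solve a tridiagonal system: a = sub-diagonal (a[0] unused),
    b = main diagonal, c = super-diagonal (c[-1] unused), r = RHS."""
    n = len(b)
    cp = np.zeros(n)
    rp = np.zeros(n)
    # Forward sweep, following the normalization on the slide:
    # row 1 becomes (1, c1/b1 | r1/b1); later rows divide by the [ ] term
    cp[0] = c[0] / b[0]
    rp[0] = r[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]          # the bracketed [ ] term
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        rp[i] = (r[i] - a[i] * rp[i - 1]) / denom
    # Back substitution
    x = np.zeros(n)
    x[-1] = rp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = rp[i] - cp[i] * x[i + 1]
    return x

# Small check against a full solve
a = np.array([0.0, -1.0, -1.0, -1.0])
b = np.array([2.0, 2.0, 2.0, 2.0])
c = np.array([-1.0, -1.0, -1.0, 0.0])
r = np.array([1.0, 0.0, 0.0, 1.0])
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
print(np.allclose(tridiag_solve(a, b, c, r), np.linalg.solve(A, r)))  # True
```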

Page 20: Week_1_KNS_2723

20

2. Linear Algebraic Equations
Pivoting

• By the time any given row becomes the pivot row, the diagonal element in that row will have been modified from its original value, with the elements in the lower rows being recomputed the most times.

• Under certain circumstances, the diagonal elements can become very small in magnitude compared to the rest of the elements in the pivot row.

• For various reasons, this can create a very unfavorable situation in terms of round-off error, which can result in an inaccurate solution vector.

• The problem can be effectively treated by interchanging columns or rows of the matrix to shift the largest element in the pivot row to the diagonal position.

• Strategies to maximize pivot elements are sometimes called “positioning for size” or “pivoting”; a sketch with partial (row) pivoting follows below.
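A minimal sketch of Gauss elimination with partial (row) pivoting: before eliminating column k, the row with the largest magnitude entry in that column is interchanged into the pivot position (names and the test system are illustrative; the tiny first pivot shows why pivoting helps):

```python
import numpy as np

def gauss_pivot(C, R):
    """Gauss elimination with partial pivoting (row interchanges)."""
    A = np.hstack([C.astype(float), R.astype(float).reshape(-1, 1)])
    n = C.shape[0]
    for k in range(n):
        # Positioning for size: bring the largest |A[i, k]|, i >= k,
        # up to the pivot (diagonal) position by a row interchange
        p = k + np.argmax(np.abs(A[k:, k]))
        if p != k:
            A[[k, p], :] = A[[p, k], :]
        for i in range(k + 1, n):
            factor = A[i, k] / A[k, k]
            A[i, k:] -= factor * A[k, k:]
    # Back substitution on the upper triangular system
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (A[i, -1] - A[i, i + 1:n] @ x[i + 1:]) / A[i, i]
    return x

# A system whose tiny first pivot can make plain elimination inaccurate
C = np.array([[1.0e-12, 1.0], [1.0, 1.0]])
R = np.array([1.0, 2.0])
print(gauss_pivot(C, R), np.linalg.solve(C, R))
```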

Page 21: Week_1_KNS_2723

21

2. Linear Algebraic Equations
Pivoting

Row operation for pivoting (interchange rows 1 and 3):

| 3 6 9 |    | 1 4 3 |
| 6 2 1 | => | 6 2 1 |
| 1 4 3 |    | 3 6 9 |

Shift columns for pivoting (the corresponding unknowns change place):

| 9  6 3 |   x3
| 10 4 1 |   x2
| 1  2 6 |   x1

Page 22: Week_1_KNS_2723

2. Linear Algebraic Equations
Pivoting

• Solve the following set of equations with Gauss-Jordan elimination:
a) without maximization
b) with maximization

| 4   3  -1 | | x1 |   | 6 |
| 7  -2   3 | | x2 | = | 9 |
| 5 -18  13 | | x3 |   | 3 |

Solution