7/29/2019 AppendixA.MatrixMethods
Appendix A
Matrix Methods
A.1 REVIEW SUMMARY
A.1.1 Sets
A set of points is denoted by
S = \{x_1, x_2, x_3\}    (A.1)
This shows a set of three points, x1, x2, and x3. Some properties may be assigned to
the set, i.e.,
S = \{x_1, x_2, x_3 \mid x_3 = 0\}    (A.2)
Equation (A.2) indicates that the last component of the set, x_3, is equal to zero. Members of a set
are called elements of the set. If a point x, denoted by \bar{x}, is a member of the
set, it is written as

\bar{x} \in S    (A.3)
If we write:
"xx =2 S A:4
then point x is not an element of set S. If all the elements of a set S are also the
elements of another set T, then S is said to be a subset of T, or S is contained in T:
S \subseteq T    (A.5)
Alternatively, this is written as
T \supseteq S    (A.6)

The intersection of two sets S_1 and S_2 is the set of all points \bar{x} such that \bar{x} is an
element of both S_1 and S_2. If the intersection is denoted by T, we write:

T = S_1 \cap S_2    (A.7)
© 2002 by Marcel Dekker, Inc. All Rights Reserved.
The intersection of n sets is
T = S_1 \cap S_2 \cap \cdots \cap S_n = \bigcap_{i=1}^{n} S_i    (A.8)
The union of two sets S1 and S2 is the set of all points "xx such that "xx is an element of
either S1 or S2. If the union is denoted by P, we write:
P = S_1 \cup S_2    (A.9)
The union of n sets is written as:
P = S_1 \cup S_2 \cup \cdots \cup S_n = \bigcup_{i=1}^{n} S_i    (A.10)
A.1.2 Vectors
A vector is an ordered set of numbers, real or complex. A matrix containing only one
row or column may be called a vector:
"xx
x1x2
xn
A:11
where x1, x2, . . ., xn are called the constituents of the vector. The transposed form is
"xx 0 jx1; x2; . . . ; xnj A:12
Sometimes the transpose is indicated by a superscript letter t. A null vector \bar{0} has all
its components equal to zero and a sum vector \bar{1} has all its components equal to 1.
The following properties are applicable to vectors:

\bar{x} + \bar{y} = \bar{y} + \bar{x}
(\bar{x} + \bar{y}) + \bar{z} = \bar{x} + (\bar{y} + \bar{z})
\alpha_1(\alpha_2 \bar{x}) = (\alpha_1 \alpha_2)\bar{x}
(\alpha_1 + \alpha_2)\bar{x} = \alpha_1\bar{x} + \alpha_2\bar{x}
0 \cdot \bar{x} = \bar{0}
    (A.13)
Multiplication of two vectors of the same dimensions results in an inner or scalar
product:

\bar{x}'\bar{y} = \sum_{i=1}^{n} x_i y_i = \bar{y}'\bar{x}
\bar{x}'\bar{x} = |\bar{x}|^2
\cos\theta = \frac{\bar{x}'\bar{y}}{|\bar{x}|\,|\bar{y}|}
    (A.14)

where \theta is the angle between the vectors and |\bar{x}| and |\bar{y}| are their geometric lengths. Two
vectors \bar{x}_1 and \bar{x}_2 are orthogonal if:

\bar{x}_1' \bar{x}_2 = 0    (A.15)
A.1.3 Matrices
1. A matrix is a rectangular array of numbers subject to certain rules of
operation, and usually denoted by a capital letter within brackets [A], a capital letter
in bold, or a capital letter with an overbar. The last convention is followed in this
book. The dimensions of a matrix indicate the total number of rows and columns.
An element a_{ij} lies at the intersection of row i and column j.
2. A matrix containing only one row or column is called a vector.
3. A matrix in which the number of rows is equal to the number of columns is
a square matrix.
4. A square matrix is a diagonal matrix if all off-diagonal elements are zero.
5. A unit or identity matrix \bar{I} is a square matrix with all diagonal elements
equal to 1 and all off-diagonal elements equal to 0.
6. A matrix is symmetric if, for all values of i and j, a_{ij} = a_{ji}.
7. A square matrix is a skew symmetric matrix if a_{ij} = -a_{ji} for all values of i
and j.
8. A square matrix whose elements below the leading diagonal are zero is
called an upper triangular matrix. A square matrix whose elements above the leading
diagonal are zero is called a lower triangular matrix.
9. If in a given matrix the rows and columns are interchanged, the new matrix
obtained is the transpose of the original matrix, denoted by \bar{A}'.
10. A square matrix \bar{A} is an orthogonal matrix if its product with its transpose
is an identity matrix:

\bar{A}\bar{A}' = \bar{I}    (A.16)
11. The conjugate of a matrix is obtained by changing all its complex elements
to their conjugates, i.e., if

\bar{A} = \begin{bmatrix} 1+i & 3+4i & 5 \\ 7+2i & i & 4+3i \end{bmatrix}    (A.17)

then its conjugate is

\bar{A}^* = \begin{bmatrix} 1-i & 3-4i & 5 \\ 7-2i & -i & 4-3i \end{bmatrix}    (A.18)

A square matrix is a unitary matrix if the product of the transpose of its conjugate
and the original matrix is an identity matrix:

(\bar{A}^*)'\bar{A} = \bar{I}    (A.19)
12. A square matrix is called a Hermitian matrix if every ij-th element is equal
to the complex conjugate of the ji-th element, i.e.,

\bar{A} = (\bar{A}^*)'    (A.20)
13. A matrix, such that:
"AA2 "AA A:21
is called an idempotent matrix.
14. A matrix is periodic if

\bar{A}^{k+1} = \bar{A}    (A.22)

15. A matrix is called nilpotent if

\bar{A}^k = 0    (A.23)

where k is a positive integer. If k is the least such positive integer, then k is called the index
of the nilpotent matrix.
16. Addition of matrices follows a commutative law:
"AA "BB "BB "AA A:24
17. A scalar multiple is obtained by multiplying each element of the matrix
by the scalar. The product of two matrices \bar{A} and \bar{B} is only possible if the number of
columns in \bar{A} equals the number of rows in \bar{B}.
If \bar{A} is an m \times n matrix and \bar{B} is an n \times p matrix, the product \bar{A}\bar{B} is an m \times p
matrix where

c_{ij} = a_{i1}b_{1j} + a_{i2}b_{2j} + \cdots + a_{in}b_{nj}    (A.25)

Multiplication is not commutative:

\bar{A}\bar{B} \neq \bar{B}\bar{A}    (A.26)

Multiplication is associative if conformability is assured:

\bar{A}(\bar{B}\bar{C}) = (\bar{A}\bar{B})\bar{C}    (A.27)

It is distributive with respect to addition:

\bar{A}(\bar{B} + \bar{C}) = \bar{A}\bar{B} + \bar{A}\bar{C}    (A.28)

The multiplicative inverse exists if |A| \neq 0. Also,

(\bar{A}\bar{B})' = \bar{B}'\bar{A}'    (A.29)
18. The transpose of the matrix of cofactors of a matrix is called its adjoint.
The product of a matrix \bar{A} and its adjoint is equal to the unit matrix multiplied
by the determinant of \bar{A}:

\bar{A}\bar{A}_{adj} = \bar{I}|A|    (A.30)
This property can be used to find the inverse of a matrix (see Example A.4).
19. By performing elementary transformations, any nonzero matrix can be
reduced to one of the following forms, called the normal forms:

[I_r] \quad [I_r \; 0] \quad \begin{bmatrix} I_r \\ 0 \end{bmatrix} \quad \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix}    (A.31)

The number r is called the rank of matrix \bar{A}. The form

\begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix}    (A.32)

is called the first canonical form of \bar{A}. Both row and column transformations can be
used here. The rank of a matrix is said to be r if (1) it has at least one nonzero minor
of order r, and (2) every minor of \bar{A} of order higher than r is zero. The rank also equals
the number of nonzero rows (rows that do not have all elements equal to zero) in the
upper triangular matrix obtained by elementary row operations.
Example A.1
Find the rank of the matrix:
"AA
1 4 5
2 6 8
3 7 22
This matrix can be reduced to an upper triangular matrix by elementary row opera-
tions (see below):
"AA
1 4 5
0 1 1
0 0 12
The rank of the matrix is 3.
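NumPy computes the rank from the singular values rather than by row reduction; applied to the matrix of Example A.1, it gives the same answer:

```python
import numpy as np

A = np.array([[1.0, 4.0, 5.0],
              [2.0, 6.0, 8.0],
              [3.0, 7.0, 22.0]])

# rank estimated from the singular-value decomposition
print(np.linalg.matrix_rank(A))   # 3
```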
A.2 CHARACTERISTIC ROOTS, EIGENVALUES, AND EIGENVECTORS
For a square matrix \bar{A}, the matrix \bar{A} - \lambda\bar{I} is called the characteristic matrix; \lambda is
a scalar and \bar{I} is a unit matrix. The determinant |A - \lambda I|, when expanded, gives
a polynomial, which is called the characteristic polynomial of \bar{A}, and the equation
|A - \lambda I| = 0 is called the characteristic equation of matrix \bar{A}. The roots of the
characteristic equation are called the characteristic roots or eigenvalues.
Some properties of eigenvalues are:
. Any square matrix "AA and its transpose "AA 0 have the same eigenvalues.
. The sum of the eigenvalues of a matrix is equal to the trace of the matrix
(the sum of the elements on the principal diagonal is called the trace of the
matrix).
. The product of the eigenvalues of the matrix is equal to the determinant of
the matrix. If

\lambda_1, \lambda_2, \ldots, \lambda_n

are the eigenvalues of \bar{A}, then the eigenvalues of

k\bar{A} are k\lambda_1, k\lambda_2, \ldots, k\lambda_n
\bar{A}^m are \lambda_1^m, \lambda_2^m, \ldots, \lambda_n^m
\bar{A}^{-1} are 1/\lambda_1, 1/\lambda_2, \ldots, 1/\lambda_n
    (A.33)

. Zero is a characteristic root of a matrix only if the matrix is singular.
. The characteristic roots of a triangular matrix are the diagonal elements of the matrix.
. The characteristic roots of a Hermitian matrix are all real.
. The characteristic roots of a real symmetric matrix are all real, as a real
symmetric matrix is Hermitian.
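The trace and determinant properties are easy to confirm numerically; the 2 x 2 matrix here is an arbitrary illustration, not from the text:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

lam = np.linalg.eigvals(A)   # eigenvalues 5 and 2

print(np.isclose(lam.sum().real, np.trace(A)))         # True: sum equals the trace
print(np.isclose(lam.prod().real, np.linalg.det(A)))   # True: product equals the determinant

# A and its transpose have the same eigenvalues
lam_t = np.linalg.eigvals(A.T)
print(np.allclose(np.sort(lam_t.real), np.sort(lam.real)))   # True
```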
A.2.1 Cayley-Hamilton Theorem
Every square matrix satisfies its own characteristic equation. If

|\bar{A} - \lambda\bar{I}| = (-1)^n (\lambda^n + a_1\lambda^{n-1} + a_2\lambda^{n-2} + \cdots + a_n)    (A.34)

is the characteristic polynomial of an n \times n matrix, then the matrix equation

\bar{X}^n + a_1\bar{X}^{n-1} + a_2\bar{X}^{n-2} + \cdots + a_n\bar{I} = 0

is satisfied by \bar{X} = \bar{A}:

\bar{A}^n + a_1\bar{A}^{n-1} + a_2\bar{A}^{n-2} + \cdots + a_n\bar{I} = 0    (A.35)
This property can be used to find the inverse of a matrix.
Example A.2
Find the characteristic equation of the matrix:

\bar{A} = \begin{bmatrix} 1 & 4 & -2 \\ 3 & 2 & 2 \\ -1 & 1 & 2 \end{bmatrix}

and then the inverse of the matrix.
The characteristic equation is given by

\begin{vmatrix} 1-\lambda & 4 & -2 \\ 3 & 2-\lambda & 2 \\ -1 & 1 & 2-\lambda \end{vmatrix} = 0

Expanding, the characteristic equation is

\lambda^3 - 5\lambda^2 - 8\lambda + 40 = 0

Then, by the Cayley-Hamilton theorem:

\bar{A}^2 - 5\bar{A} - 8\bar{I} + 40\bar{A}^{-1} = 0
40\bar{A}^{-1} = -(\bar{A}^2 - 5\bar{A} - 8\bar{I})

We can write:

40\bar{A}^{-1} = -\left( \begin{bmatrix} 1 & 4 & -2 \\ 3 & 2 & 2 \\ -1 & 1 & 2 \end{bmatrix}^2 - 5\begin{bmatrix} 1 & 4 & -2 \\ 3 & 2 & 2 \\ -1 & 1 & 2 \end{bmatrix} - 8\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \right)

The inverse is

\bar{A}^{-1} = \begin{bmatrix} -0.05 & 0.25 & -0.3 \\ 0.2 & 0 & 0.2 \\ -0.125 & 0.125 & 0.25 \end{bmatrix}
This is not an effective method of finding the inverse for matrices of large dimensions.
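The result of Example A.2 can be verified with NumPy (entries as given in the example):

```python
import numpy as np

A = np.array([[ 1.0, 4.0, -2.0],
              [ 3.0, 2.0,  2.0],
              [-1.0, 1.0,  2.0]])
I = np.eye(3)

# Cayley-Hamilton: A^3 - 5A^2 - 8A + 40I = 0, hence A^-1 = -(A^2 - 5A - 8I)/40
A_inv = -(A @ A - 5*A - 8*I) / 40

print(np.allclose(A @ A_inv, I))             # True
print(np.allclose(A_inv, np.linalg.inv(A)))  # True
```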
A.2.2 Characteristic Vectors
Each characteristic root \lambda has a corresponding nonzero vector \bar{x} which satisfies the
equation (\bar{A} - \lambda\bar{I})\bar{x} = 0. The nonzero vector \bar{x} is called the characteristic vector or
eigenvector. Since any nonzero scalar multiple of an eigenvector also satisfies this
equation, the eigenvector is not unique.
A.3 DIAGONALIZATION OF A MATRIX
If a square matrix \bar{A} of n \times n has n linearly independent eigenvectors, then a matrix \bar{P}
can be found so that

\bar{P}^{-1}\bar{A}\bar{P}    (A.36)

is a diagonal matrix. The matrix \bar{P} is found by grouping the eigenvectors of \bar{A} into a
square matrix; the diagonal matrix \bar{P}^{-1}\bar{A}\bar{P} has the eigenvalues of \bar{A} as its diagonal
elements.
A.3.1 Similarity Transformation
The transformation of matrix \bar{A} into \bar{P}^{-1}\bar{A}\bar{P} is called a similarity transformation.
Diagonalization is a special case of similarity transformation.
Example A.3
Let "AA
2 2 3
2 1 6
1 2 0
Its characteristics equation is
3 2 21 45 0
5 3 3 0
The eigenvector is found by substituting the eigenvalues:
7 2 3
2 4 6
1 2 3
x
y
z
0
0
0
As eigenvectors are not unique, by assuming that z 1, and solving, one eigenvector
is
1; 2; 1t
Similarly, other eigenvectors can be found. A matrix formed of these vectors is
"PP
1 2 3
2 1 0
1 0 1
and the diagonalization is obtained:
"PP1 "AA "PP 5 0 00 3 0
0 0 3
This contains the eigenvalues as the diagonal elements.
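Example A.3 can be reproduced numerically (matrix and eigenvector columns as in the example):

```python
import numpy as np

A = np.array([[-2.0,  2.0, -3.0],
              [ 2.0,  1.0, -6.0],
              [-1.0, -2.0,  0.0]])

P = np.array([[ 1.0,  2.0, 3.0],   # eigenvector columns for eigenvalues 5, -3, -3
              [ 2.0, -1.0, 0.0],
              [-1.0,  0.0, 1.0]])

D = np.linalg.inv(P) @ A @ P       # similarity transformation
print(np.diag(D).round(3))         # [ 5. -3. -3.]
```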
A.4 LINEAR INDEPENDENCE OR DEPENDENCE OF VECTORS
Vectors "xx1; "xx2; . . . ; "xxn are dependent if all vectors (row or column matrices) are of the
same order, and n scalars 1; 2; . . . ; n (not all zeros) exist such that:
1 "xx1 2 "xx2 3 "xx3 n "xxn 0 A:37
Otherwise they are linearly independent. In other words, if vector "xxK 1 can bewritten as a linear combination of vectors x1; "xx2; . . . ; "xxn, then it is linearly depen-
dent, otherwise it is linearly independent. Consider the vectors:
"xx3
4
2
5
"xx1
1
0:5
0
"xx2
0
0
1
then
"xx3 4 "xx1 5 "xx2
Therefore, "xx3 is linearily dependent on "xx1 and "xx2.
A.4.1 Vector Spaces
If "xx is any vector from all possible collections of vectors of dimension n, then for any
scalar , the vector "xx is also of dimension n. For any other n-vector "yy, the vector
"xx "yy is also of dimension n. The set of all n-dimensional vectors are said to form a
linear vector space En. Transformation of a vector by a matrix is a linear transfor-
mation:
"AA "xx "yy "AA "xx "AA "yy A:38
One property of interest is

\bar{A}\bar{x} = 0    (A.39)

i.e., whether any nonzero vector \bar{x} exists which is transformed by matrix \bar{A} into a
zero vector. Equation (A.39) can only be satisfied by a nonzero \bar{x} if the columns of \bar{A} are linearly
dependent. A square matrix whose columns are linearly dependent is called a singular
matrix, and a square matrix whose columns are linearly independent is called a
nonsingular matrix. In Eq. (A.39), if \bar{x} = \bar{0} is the only solution, then the columns of \bar{A} must be linearly
independent. The determinant of a singular matrix is zero and its inverse does not
exist.
A.5 QUADRATIC FORM EXPRESSED AS A PRODUCT OF
MATRICES
The quadratic form can be expressed as a product of matrices:

Quadratic form = \bar{x}'\bar{A}\bar{x}    (A.40)

where

\bar{x} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} \quad \bar{A} = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}    (A.41)
Therefore, for a symmetric matrix \bar{A},

\bar{x}'\bar{A}\bar{x} = \begin{bmatrix} x_1 & x_2 & x_3 \end{bmatrix} \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}
= a_{11}x_1^2 + a_{22}x_2^2 + a_{33}x_3^2 + 2a_{12}x_1x_2 + 2a_{23}x_2x_3 + 2a_{13}x_1x_3
    (A.42)
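A numerical check of Eq. (A.42); the symmetric matrix and the vector are arbitrary illustrative values:

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [2.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])   # symmetric, so a12 = a21 etc.
x = np.array([1.0, 2.0, 3.0])

q = x @ A @ x                     # the quadratic form x'Ax of Eq. (A.40)

# expanded form of Eq. (A.42)
q_expanded = (A[0,0]*x[0]**2 + A[1,1]*x[1]**2 + A[2,2]*x[2]**2
              + 2*A[0,1]*x[0]*x[1] + 2*A[1,2]*x[1]*x[2] + 2*A[0,2]*x[0]*x[2])

print(q == q_expanded)   # True
```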
A.6 DERIVATIVES OF SCALAR AND VECTOR FUNCTIONS
A scalar function is defined as

y = f(x_1, x_2, \ldots, x_n)    (A.43)

where x_1, x_2, \ldots, x_n are n variables. It can be written as a scalar function of an n-
dimensional vector, i.e., y = f(\bar{x}), where \bar{x} is an n-dimensional vector:

\bar{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}    (A.44)

In general, a scalar function could be a function of several vector variables, i.e.,
y = f(\bar{x}, \bar{u}, \bar{p}), where \bar{x}, \bar{u}, and \bar{p} are vectors of various dimensions. A vector function
is a function of several vector variables, i.e., \bar{y} = \bar{f}(\bar{x}, \bar{u}, \bar{p}).
A derivative of a scalar function with respect to a vector variable is defined as

\frac{\partial f}{\partial \bar{x}} = \begin{bmatrix} \partial f/\partial x_1 \\ \partial f/\partial x_2 \\ \vdots \\ \partial f/\partial x_n \end{bmatrix}    (A.45)

The derivative of a scalar function with respect to a vector of n dimensions is a vector
of the same dimension. The derivative of a vector function \bar{f} of dimension m with respect to a vector
variable \bar{x} is defined as

\frac{\partial \bar{f}}{\partial \bar{x}} = \begin{bmatrix} \partial f_1/\partial x_1 & \partial f_1/\partial x_2 & \cdots & \partial f_1/\partial x_n \\ \partial f_2/\partial x_1 & \partial f_2/\partial x_2 & \cdots & \partial f_2/\partial x_n \\ \vdots & & & \vdots \\ \partial f_m/\partial x_1 & \partial f_m/\partial x_2 & \cdots & \partial f_m/\partial x_n \end{bmatrix} = \begin{bmatrix} (\partial f_1/\partial \bar{x})^T \\ (\partial f_2/\partial \bar{x})^T \\ \vdots \\ (\partial f_m/\partial \bar{x})^T \end{bmatrix}    (A.46)
If a scalar function is defined as

s = \bar{\lambda}^T \bar{f}(\bar{x}, \bar{u}, \bar{p}) = \lambda_1 f_1(\bar{x}, \bar{u}, \bar{p}) + \lambda_2 f_2(\bar{x}, \bar{u}, \bar{p}) + \cdots + \lambda_m f_m(\bar{x}, \bar{u}, \bar{p})    (A.47)

then \partial s/\partial \bar{\lambda} is

\frac{\partial s}{\partial \bar{\lambda}} = \begin{bmatrix} f_1(\bar{x}, \bar{u}, \bar{p}) \\ f_2(\bar{x}, \bar{u}, \bar{p}) \\ \vdots \\ f_m(\bar{x}, \bar{u}, \bar{p}) \end{bmatrix} = \bar{f}(\bar{x}, \bar{u}, \bar{p})    (A.48)
and \partial s/\partial \bar{x} is

\frac{\partial s}{\partial \bar{x}} = \begin{bmatrix} \lambda_1 \partial f_1/\partial x_1 + \lambda_2 \partial f_2/\partial x_1 + \cdots + \lambda_m \partial f_m/\partial x_1 \\ \lambda_1 \partial f_1/\partial x_2 + \lambda_2 \partial f_2/\partial x_2 + \cdots + \lambda_m \partial f_m/\partial x_2 \\ \vdots \\ \lambda_1 \partial f_1/\partial x_n + \lambda_2 \partial f_2/\partial x_n + \cdots + \lambda_m \partial f_m/\partial x_n \end{bmatrix}
= \begin{bmatrix} \partial f_1/\partial x_1 & \partial f_2/\partial x_1 & \cdots & \partial f_m/\partial x_1 \\ \partial f_1/\partial x_2 & \partial f_2/\partial x_2 & \cdots & \partial f_m/\partial x_2 \\ \vdots \\ \partial f_1/\partial x_n & \partial f_2/\partial x_n & \cdots & \partial f_m/\partial x_n \end{bmatrix} \begin{bmatrix} \lambda_1 \\ \lambda_2 \\ \vdots \\ \lambda_m \end{bmatrix}    (A.49)

Therefore,

\frac{\partial s}{\partial \bar{x}} = \left( \frac{\partial \bar{f}}{\partial \bar{x}} \right)^T \bar{\lambda}    (A.50)
A.7 INVERSE OF A MATRIX
The inverse of a matrix is often required in power system calculations, though it
is rarely calculated directly. The inverse of a square matrix \bar{A} is defined so that

\bar{A}^{-1}\bar{A} = \bar{A}\bar{A}^{-1} = \bar{I}    (A.51)
The inverse can be evaluated in many ways.
A.7.1 By Calculating the Adjoint and Determinant of the Matrix
"AA1 "AAadj
jAjA:52
Example A.4
Consider the matrix:

\bar{A} = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 3 & 1 & 2 \end{bmatrix}

Its adjoint is

\bar{A}_{adj} = \begin{bmatrix} 4 & -1 & -3 \\ 10 & -7 & 6 \\ -11 & 5 & -3 \end{bmatrix}

and the determinant of \bar{A} is equal to -9.
Thus, the inverse of \bar{A} is

\bar{A}^{-1} = \begin{bmatrix} -\frac{4}{9} & \frac{1}{9} & \frac{1}{3} \\ -\frac{10}{9} & \frac{7}{9} & -\frac{2}{3} \\ \frac{11}{9} & -\frac{5}{9} & \frac{1}{3} \end{bmatrix}
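The adjoint method of Example A.4 can be cross-checked with NumPy. NumPy has no adjoint routine, so the adjoint is recovered here from Eq. (A.52) rearranged as A_adj = |A| A^{-1}:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [3.0, 1.0, 2.0]])

det = np.linalg.det(A)
print(round(det))                 # -9

A_adj = det * np.linalg.inv(A)    # Eq. (A.52) rearranged

# Eq. (A.30): A times its adjoint equals det(A) times the identity
print(np.allclose(A @ A_adj, det * np.eye(3)))   # True
```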
A.7.2 By Elementary Row Operations
The inverse can also be calculated by elementary row operations. The procedure is as
follows:

1. A unit matrix of n \times n is first attached to the right side of the n \times n matrix
whose inverse is required.
2. Elementary row operations are used on the augmented matrix so
that the matrix whose inverse is required becomes a unit matrix.
Example A.5
Consider a matrix:

\bar{A} = \begin{bmatrix} 2 & 6 \\ 3 & 4 \end{bmatrix}

It is required to find its inverse.
Attach a unit matrix of 2 \times 2 and perform the operations as shown:

\left[ \begin{array}{cc|cc} 2 & 6 & 1 & 0 \\ 3 & 4 & 0 & 1 \end{array} \right]
\xrightarrow{R_1/2}
\left[ \begin{array}{cc|cc} 1 & 3 & \frac{1}{2} & 0 \\ 3 & 4 & 0 & 1 \end{array} \right]
\xrightarrow{R_2 - 3R_1}
\left[ \begin{array}{cc|cc} 1 & 3 & \frac{1}{2} & 0 \\ 0 & -5 & -\frac{3}{2} & 1 \end{array} \right]

\xrightarrow{R_1 + \frac{3}{5}R_2}
\left[ \begin{array}{cc|cc} 1 & 0 & -\frac{2}{5} & \frac{3}{5} \\ 0 & -5 & -\frac{3}{2} & 1 \end{array} \right]
\xrightarrow{R_2 \times (-\frac{1}{5})}
\left[ \begin{array}{cc|cc} 1 & 0 & -\frac{2}{5} & \frac{3}{5} \\ 0 & 1 & \frac{3}{10} & -\frac{1}{5} \end{array} \right]

Thus, the inverse is

\bar{A}^{-1} = \begin{bmatrix} -\frac{2}{5} & \frac{3}{5} \\ \frac{3}{10} & -\frac{1}{5} \end{bmatrix}
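The same elementary row operations can be scripted; a short Gauss-Jordan sketch applied to the matrix of Example A.5:

```python
import numpy as np

A = np.array([[2.0, 6.0],
              [3.0, 4.0]])

# reduce [A | I] until the left block becomes the identity
M = np.hstack([A, np.eye(2)])
M[0] /= M[0, 0]            # R1 / 2
M[1] -= M[1, 0] * M[0]     # R2 - 3*R1
M[1] /= M[1, 1]            # R2 / (-5)
M[0] -= M[0, 1] * M[1]     # R1 - 3*R2

A_inv = M[:, 2:]           # right block is now A^-1
print(A_inv)               # [[-0.4  0.6] [ 0.3 -0.2]]
```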
Some useful properties of inverse matrices are:
The inverse of a matrix product is the product of the matrix inverses taken in
reverse order, i.e.,

(\bar{A}\bar{B}\bar{C})^{-1} = \bar{C}^{-1}\bar{B}^{-1}\bar{A}^{-1}    (A.53)
The inverse of a diagonal matrix is a diagonal matrix whose elements are the respective inverses of the elements of the original matrix:
\begin{bmatrix} A_{11} & & \\ & B_{22} & \\ & & C_{33} \end{bmatrix}^{-1} = \begin{bmatrix} A_{11}^{-1} & & \\ & B_{22}^{-1} & \\ & & C_{33}^{-1} \end{bmatrix}    (A.54)

A square matrix composed of diagonal blocks can be inverted by taking the inverse
of the respective submatrices of the diagonal blocks:

\begin{bmatrix} \text{block } A & & \\ & \text{block } B & \\ & & \text{block } C \end{bmatrix}^{-1} = \begin{bmatrix} (\text{block } A)^{-1} & & \\ & (\text{block } B)^{-1} & \\ & & (\text{block } C)^{-1} \end{bmatrix}    (A.55)
A.7.3 Inverse by Partitioning

Matrices can be partitioned horizontally and vertically, and the resulting submatrices
may contain only one element. Thus, a matrix \bar{A} can be partitioned as shown:

\bar{A} = \begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \\ a_{41} & a_{42} & a_{43} & a_{44} \end{bmatrix} = \begin{bmatrix} \bar{A}_1 & \bar{A}_2 \\ \bar{A}_3 & \bar{A}_4 \end{bmatrix}    (A.56)

where

\bar{A}_1 = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}    (A.57)

\bar{A}_2 = \begin{bmatrix} a_{14} \\ a_{24} \\ a_{34} \end{bmatrix} \quad \bar{A}_3 = \begin{bmatrix} a_{41} & a_{42} & a_{43} \end{bmatrix} \quad \bar{A}_4 = [a_{44}]    (A.58)
Partitioned matrices follow the rules of matrix addition and subtraction. Partitioned
matrices \bar{A} and \bar{B} can be multiplied if they are conformable and the columns of \bar{A} and
rows of \bar{B} are partitioned in exactly the same manner:

\begin{bmatrix} \bar{A}_{11} & \bar{A}_{12} \\ \bar{A}_{21} & \bar{A}_{22} \end{bmatrix} \begin{bmatrix} \bar{B}_{11} & \bar{B}_{12} \\ \bar{B}_{21} & \bar{B}_{22} \end{bmatrix} = \begin{bmatrix} \bar{A}_{11}\bar{B}_{11} + \bar{A}_{12}\bar{B}_{21} & \bar{A}_{11}\bar{B}_{12} + \bar{A}_{12}\bar{B}_{22} \\ \bar{A}_{21}\bar{B}_{11} + \bar{A}_{22}\bar{B}_{21} & \bar{A}_{21}\bar{B}_{12} + \bar{A}_{22}\bar{B}_{22} \end{bmatrix}    (A.59)
Example A.6
Find the product of two matrices A and B by partitioning:
"AA
1 2 3
2 0 1
1 3 6
"BB
1 2 1 0
2 3 5 1
4 6 1 2
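The block products of Eq. (A.59) can be checked numerically; entries of \bar{A} and \bar{B} are taken as printed above, partitioning after the second row and column of \bar{A} and the second row of \bar{B} so that the blocks are conformable:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 0.0, 1.0],
              [1.0, 3.0, 6.0]])
B = np.array([[1.0, 2.0, 1.0, 0.0],
              [2.0, 3.0, 5.0, 1.0],
              [4.0, 6.0, 1.0, 2.0]])

A11, A12 = A[:2, :2], A[:2, 2:]
A21, A22 = A[2:, :2], A[2:, 2:]
B11, B12 = B[:2, :2], B[:2, 2:]
B21, B22 = B[2:, :2], B[2:, 2:]

# assemble the product block by block, Eq. (A.59)
top = np.hstack([A11 @ B11 + A12 @ B21, A11 @ B12 + A12 @ B22])
bottom = np.hstack([A21 @ B11 + A22 @ B21, A21 @ B12 + A22 @ B22])
C = np.vstack([top, bottom])

print(np.allclose(C, A @ B))   # True
```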
"AA 003 1
7
!1 2 1 3
1 2
17 17
"AA 004 1 2 1 3
1 2 0
3 4 !1
1
7
"AA1
2
7
12
7
9
71
7
8
7
6
7
1
7
1
7
1
7
A.8 SOLUTION OF LARGE SIMULTANEOUS EQUATIONS

The application of matrices to the solution of large sets of simultaneous equations
constitutes one important application in power systems. Mostly, these are sparse
equations with many coefficients equal to zero. A large power system may have
more than 3000 simultaneous equations to be solved.
A.8.1 Consistent Equations
A system of equations is consistent if it has one or more solutions.
A.8.2 Inconsistent Equations
A system of equations that has no solution is called inconsistent, e.g., the following
two equations are inconsistent:

x + 2y = 4
3x + 6y = 5
A.8.3 Test for Consistency and Inconsistency of Equations
Consider a system of n linear equations:

a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n = b_2
\vdots
a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n = b_n
    (A.64)

Form an augmented matrix \bar{C}:

\bar{C} = [\bar{A}, \bar{B}] = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} & b_1 \\ a_{21} & a_{22} & \cdots & a_{2n} & b_2 \\ \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} & b_n \end{bmatrix}    (A.65)
The following holds for the test of consistency and inconsistency:

. A unique solution of the equations exists if: rank of \bar{A} = rank of \bar{C} = n,
where n is the number of unknowns.
. There are infinite solutions to the set of equations if: rank of \bar{A} = rank of
\bar{C} = r, r < n.
. The equations are inconsistent if the rank of \bar{A} is not equal to the rank of \bar{C}.
Example A.8
Show that the equations:

2x + 6y = -11
6x + 20y + 6z = -3
6y + 18z = -1

are inconsistent.
The augmented matrix is

\bar{C} = [\bar{A} \; \bar{B}] = \begin{bmatrix} 2 & 6 & 0 & -11 \\ 6 & 20 & 6 & -3 \\ 0 & 6 & 18 & -1 \end{bmatrix}

It can be reduced by elementary row operations to the following matrix:

\begin{bmatrix} 2 & 6 & 0 & -11 \\ 0 & 2 & 6 & 30 \\ 0 & 0 & 0 & -91 \end{bmatrix}

The rank of \bar{A} is 2 and that of \bar{C} is 3. The equations are not consistent.
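The rank test of Example A.8 can be reproduced with NumPy (coefficient signs as in the example above):

```python
import numpy as np

A = np.array([[2.0,  6.0,  0.0],
              [6.0, 20.0,  6.0],
              [0.0,  6.0, 18.0]])
b = np.array([-11.0, -3.0, -1.0])

C = np.hstack([A, b[:, None]])    # augmented matrix [A | b]

print(np.linalg.matrix_rank(A))   # 2
print(np.linalg.matrix_rank(C))   # 3 -> ranks differ, so the system is inconsistent
```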
The equations (A.64) can be written as

\bar{A}\bar{x} = \bar{b}    (A.66)

where \bar{A} is a square coefficient matrix, \bar{b} is a vector of constants, and \bar{x} is a vector of
unknown terms. If \bar{A} is nonsingular, the unknown vector \bar{x} can be found by

\bar{x} = \bar{A}^{-1}\bar{b}    (A.67)

This requires calculation of the inverse of matrix \bar{A}. Large system equations are not
solved by direct inversion, but by sparse matrix techniques.
Example A.9
This example illustrates the solution by transforming the coefficient matrix to an
upper triangular form (backward substitution). The equations:

\begin{bmatrix} 1 & 4 & 6 \\ 2 & 6 & 3 \\ 5 & 3 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 2 \\ 1 \\ 5 \end{bmatrix}
can be solved by row manipulations on the augmented matrix, as follows:

\left[ \begin{array}{ccc|c} 1 & 4 & 6 & 2 \\ 2 & 6 & 3 & 1 \\ 5 & 3 & 1 & 5 \end{array} \right]
\xrightarrow{R_2 - 2R_1}
\left[ \begin{array}{ccc|c} 1 & 4 & 6 & 2 \\ 0 & -2 & -9 & -3 \\ 5 & 3 & 1 & 5 \end{array} \right]
\xrightarrow{R_3 - 5R_1}
\left[ \begin{array}{ccc|c} 1 & 4 & 6 & 2 \\ 0 & -2 & -9 & -3 \\ 0 & -17 & -29 & -5 \end{array} \right]

\xrightarrow{R_3 - \frac{17}{2}R_2}
\left[ \begin{array}{ccc|c} 1 & 4 & 6 & 2 \\ 0 & -2 & -9 & -3 \\ 0 & 0 & 47.5 & 20.5 \end{array} \right]

Thus,

47.5x_3 = 20.5
-2x_2 - 9x_3 = -3
x_1 + 4x_2 + 6x_3 = 2

which gives

\bar{x} = \begin{bmatrix} 1.179 \\ -0.442 \\ 0.432 \end{bmatrix}
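Example A.9 can be verified directly with numpy.linalg.solve:

```python
import numpy as np

A = np.array([[1.0, 4.0, 6.0],
              [2.0, 6.0, 3.0],
              [5.0, 3.0, 1.0]])
b = np.array([2.0, 1.0, 5.0])

x = np.linalg.solve(A, b)
print(x.round(3))   # [ 1.179 -0.442  0.432]
```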
A set of simultaneous equations can also be solved by partitioning:

\begin{bmatrix} a_{11} & \cdots & a_{1k} & a_{1m} & \cdots & a_{1n} \\ \vdots & & & & & \vdots \\ a_{k1} & \cdots & a_{kk} & a_{km} & \cdots & a_{kn} \\ a_{m1} & \cdots & a_{mk} & a_{mm} & \cdots & a_{mn} \\ \vdots & & & & & \vdots \\ a_{n1} & \cdots & a_{nk} & a_{nm} & \cdots & a_{nn} \end{bmatrix} \begin{bmatrix} x_1 \\ \vdots \\ x_k \\ x_m \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} b_1 \\ \vdots \\ b_k \\ b_m \\ \vdots \\ b_n \end{bmatrix}    (A.68)

Equation (A.68) is horizontally partitioned and rewritten as

\begin{bmatrix} \bar{A}_1 & \bar{A}_2 \\ \bar{A}_3 & \bar{A}_4 \end{bmatrix} \begin{bmatrix} \bar{X}_1 \\ \bar{X}_2 \end{bmatrix} = \begin{bmatrix} \bar{B}_1 \\ \bar{B}_2 \end{bmatrix}    (A.69)

Vectors \bar{X}_1 and \bar{X}_2 are given by

\bar{X}_1 = (\bar{A}_1 - \bar{A}_2\bar{A}_4^{-1}\bar{A}_3)^{-1}(\bar{B}_1 - \bar{A}_2\bar{A}_4^{-1}\bar{B}_2)    (A.70)

\bar{X}_2 = \bar{A}_4^{-1}(\bar{B}_2 - \bar{A}_3\bar{X}_1)    (A.71)
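Equations (A.70) and (A.71) can be exercised on the system of Example A.9, partitioning after the second row and column so that \bar{A}_4 is the single element a_{33}:

```python
import numpy as np

A = np.array([[1.0, 4.0, 6.0],
              [2.0, 6.0, 3.0],
              [5.0, 3.0, 1.0]])
b = np.array([2.0, 1.0, 5.0])

A1, A2 = A[:2, :2], A[:2, 2:]
A3, A4 = A[2:, :2], A[2:, 2:]
B1, B2 = b[:2], b[2:]

A4_inv = np.linalg.inv(A4)
# Eq. (A.70): X1 = (A1 - A2 A4^-1 A3)^-1 (B1 - A2 A4^-1 B2)
X1 = np.linalg.inv(A1 - A2 @ A4_inv @ A3) @ (B1 - A2 @ A4_inv @ B2)
# Eq. (A.71): X2 = A4^-1 (B2 - A3 X1)
X2 = A4_inv @ (B2 - A3 @ X1)

x = np.concatenate([X1, X2])
print(np.allclose(x, np.linalg.solve(A, b)))   # True
```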
A.9 CROUT'S TRANSFORMATION

A matrix can be resolved into the product of a lower triangular matrix \bar{L} and an
upper unit triangular matrix \bar{U}, i.e.,

\begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \\ a_{41} & a_{42} & a_{43} & a_{44} \end{bmatrix} = \begin{bmatrix} l_{11} & 0 & 0 & 0 \\ l_{21} & l_{22} & 0 & 0 \\ l_{31} & l_{32} & l_{33} & 0 \\ l_{41} & l_{42} & l_{43} & l_{44} \end{bmatrix} \begin{bmatrix} 1 & u_{12} & u_{13} & u_{14} \\ 0 & 1 & u_{23} & u_{24} \\ 0 & 0 & 1 & u_{34} \\ 0 & 0 & 0 & 1 \end{bmatrix}    (A.72)
The elements of \bar{U} and \bar{L} can be found by multiplication:

l_{11} = a_{11}
l_{21} = a_{21}
l_{22} = a_{22} - l_{21}u_{12}
l_{31} = a_{31}
l_{32} = a_{32} - l_{31}u_{12}
l_{33} = a_{33} - l_{31}u_{13} - l_{32}u_{23}
l_{41} = a_{41}
l_{42} = a_{42} - l_{41}u_{12}
l_{43} = a_{43} - l_{41}u_{13} - l_{42}u_{23}
l_{44} = a_{44} - l_{41}u_{14} - l_{42}u_{24} - l_{43}u_{34}
    (A.73)

and

u_{12} = a_{12}/l_{11}
u_{13} = a_{13}/l_{11}
u_{14} = a_{14}/l_{11}
u_{23} = (a_{23} - l_{21}u_{13})/l_{22}
u_{24} = (a_{24} - l_{21}u_{14})/l_{22}
u_{34} = (a_{34} - l_{31}u_{14} - l_{32}u_{24})/l_{33}
    (A.74)

In general:

l_{ij} = a_{ij} - \sum_{k=1}^{j-1} l_{ik}u_{kj} \quad i \geq j    (A.75)

u_{ij} = \frac{1}{l_{ii}} \left( a_{ij} - \sum_{k=1}^{i-1} l_{ik}u_{kj} \right) \quad i < j    (A.76)
Example A.10
Transform the following matrix into LU form:

\begin{bmatrix} 1 & 2 & 1 & 0 \\ 0 & 3 & 3 & 1 \\ 2 & 0 & 2 & 0 \\ 1 & 0 & 0 & 2 \end{bmatrix}

From Eqs. (A.75) and (A.76):

\begin{bmatrix} 1 & 2 & 1 & 0 \\ 0 & 3 & 3 & 1 \\ 2 & 0 & 2 & 0 \\ 1 & 0 & 0 & 2 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 3 & 0 & 0 \\ 2 & -4 & 4 & 0 \\ 1 & -2 & 1 & 2.33 \end{bmatrix} \begin{bmatrix} 1 & 2 & 1 & 0 \\ 0 & 1 & 1 & 0.33 \\ 0 & 0 & 1 & 0.33 \\ 0 & 0 & 0 & 1 \end{bmatrix}
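Equations (A.75) and (A.76) translate directly into code. A short Python sketch of Crout's method (no pivoting, so it assumes the leading pivots are nonzero), applied to the matrix of Example A.10:

```python
import numpy as np

def crout_lu(A):
    """Crout factorization per Eqs. (A.75)-(A.76): A = L U with unit upper U."""
    n = A.shape[0]
    L = np.zeros((n, n))
    U = np.eye(n)
    for j in range(n):
        for i in range(j, n):                      # column j of L, i >= j
            L[i, j] = A[i, j] - L[i, :j] @ U[:j, j]
        for k in range(j + 1, n):                  # row j of U, k > j
            U[j, k] = (A[j, k] - L[j, :j] @ U[:j, k]) / L[j, j]
    return L, U

A = np.array([[1.0, 2.0, 1.0, 0.0],
              [0.0, 3.0, 3.0, 1.0],
              [2.0, 0.0, 2.0, 0.0],
              [1.0, 0.0, 0.0, 2.0]])
L, U = crout_lu(A)
print(np.allclose(L @ U, A))   # True
```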
The original matrix has been converted into a product of lower and upper triangular
matrices.
A.10 GAUSSIAN ELIMINATION
Gaussian elimination provides a natural means to determine the LU pair:
\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix}    (A.77)

First, form an augmented matrix:

\begin{bmatrix} a_{11} & a_{12} & a_{13} & b_1 \\ a_{21} & a_{22} & a_{23} & b_2 \\ a_{31} & a_{32} & a_{33} & b_3 \end{bmatrix}    (A.78)
1. Divide the first row by a_{11}. This is the only operation to be carried out on
this row. Thus, the new row is

[1 \;\; a'_{12} \;\; a'_{13} \;\; b'_1] \qquad a'_{12} = a_{12}/a_{11}, \; a'_{13} = a_{13}/a_{11}, \; b'_1 = b_1/a_{11}    (A.79)

This gives

l_{11} = a_{11}, \quad u_{11} = 1, \quad u_{12} = a'_{12}, \quad u_{13} = a'_{13}    (A.80)

2. Multiply new row 1 by a_{21} and subtract from row 2. Thus, a_{21} becomes zero:

[0 \;\; a'_{22} \;\; a'_{23} \;\; b'_2] \qquad a'_{22} = a_{22} - a_{21}a'_{12}, \; a'_{23} = a_{23} - a_{21}a'_{13}, \; b'_2 = b_2 - a_{21}b'_1    (A.81)

Divide new row 2 by a'_{22}. Row 2 becomes

[0 \;\; 1 \;\; a''_{23} \;\; b''_2] \qquad a''_{23} = a'_{23}/a'_{22}, \; b''_2 = b'_2/a'_{22}    (A.82)

This gives

l_{21} = a_{21}, \quad l_{22} = a'_{22}, \quad u_{22} = 1, \quad u_{23} = a''_{23}    (A.83)

3. Multiply new row 1 by a_{31} and subtract from row 3. Thus, row 3 becomes:

[0 \;\; a'_{32} \;\; a'_{33} \;\; b'_3] \qquad a'_{32} = a_{32} - a_{31}a'_{12}, \; a'_{33} = a_{33} - a_{31}a'_{13}    (A.84)

Multiply new row 2 by a'_{32} and subtract from row 3. This row now becomes

[0 \;\; 0 \;\; a''_{33} \;\; b''_3]    (A.85)
Divide new row 3 by a''_{33}. This gives

[0 \;\; 0 \;\; 1 \;\; b'''_3] \qquad b'''_3 = b''_3/a''_{33}    (A.86)

From these relations:

l_{31} = a_{31}, \quad l_{32} = a'_{32}, \quad l_{33} = a''_{33}, \quad u_{33} = 1    (A.87)

Thus, all the elements of \bar{L} and \bar{U} have been calculated, and the process of forward
substitution has been implemented on vector \bar{b}.
A.11 FORWARD-BACKWARD SUBSTITUTION METHOD
The set of sparse linear equations:

\bar{A}\bar{x} = \bar{b}    (A.88)

can be written as

\bar{L}\bar{U}\bar{x} = \bar{b}    (A.89)

or

\bar{L}\bar{y} = \bar{b}    (A.90)

where

\bar{y} = \bar{U}\bar{x}    (A.91)

\bar{L}\bar{y} = \bar{b} is solved for \bar{y} by forward substitution. Thus, \bar{y} is known. Then \bar{U}\bar{x} = \bar{y} is
solved by backward substitution.
Solve "LL "yy "bb by forward substitution:
l11 0 0 0
l21 l22 0 0
l31 l32 l33 0
l41 l42 l43 l44
y1y2y3y4
b1b2b3b4
A:92
Thus,
y1 b1=l11
y2 b2 l21y1=l22
y3 b3 l31y1 l32y2=l33
y4 b4 l41y1 l42y2 l43y3=l44
A:93
Now solve "UU"xx "yy by backward substitution:
1 u12 u13 u140 1 u23 u240 0 1 u340 0 0 1
x1x2x3x4
y1y2y3y4
A:94
Thus,

x_4 = y_4
x_3 = y_3 - u_{34}x_4
x_2 = y_2 - u_{23}x_3 - u_{24}x_4
x_1 = y_1 - u_{12}x_2 - u_{13}x_3 - u_{14}x_4
    (A.95)
The forward-backward solution is generalized by the following equation:

\bar{A} = \bar{L}\bar{U} = (\bar{L}_d + \bar{L}_l)(\bar{I} + \bar{U}_u)    (A.96)

where \bar{L}_d is the diagonal part of \bar{L}, \bar{L}_l is the strictly lower triangular part of \bar{L}, \bar{I} is the identity
matrix, and \bar{U}_u is the strictly upper triangular part of \bar{U}.
Forward substitution becomes

\bar{L}\bar{y} = \bar{b}
(\bar{L}_d + \bar{L}_l)\bar{y} = \bar{b}
\bar{L}_d\bar{y} = \bar{b} - \bar{L}_l\bar{y}
\bar{y} = \bar{L}_d^{-1}(\bar{b} - \bar{L}_l\bar{y})
    (A.97)
i.e.,

\begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \end{bmatrix} = \begin{bmatrix} 1/l_{11} & 0 & 0 & 0 \\ 0 & 1/l_{22} & 0 & 0 \\ 0 & 0 & 1/l_{33} & 0 \\ 0 & 0 & 0 & 1/l_{44} \end{bmatrix} \times \left( \begin{bmatrix} b_1 \\ b_2 \\ b_3 \\ b_4 \end{bmatrix} - \begin{bmatrix} 0 & 0 & 0 & 0 \\ l_{21} & 0 & 0 & 0 \\ l_{31} & l_{32} & 0 & 0 \\ l_{41} & l_{42} & l_{43} & 0 \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \end{bmatrix} \right)    (A.98)
Backward substitution becomes

(\bar{I} + \bar{U}_u)\bar{x} = \bar{y}
\bar{x} = \bar{y} - \bar{U}_u\bar{x}
    (A.99)

i.e.,

\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \end{bmatrix} - \begin{bmatrix} 0 & u_{12} & u_{13} & u_{14} \\ 0 & 0 & u_{23} & u_{24} \\ 0 & 0 & 0 & u_{34} \\ 0 & 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix}    (A.100)
A.11.1 Bifactorization
A matrix can also be split into LU form by sequential operation on the columns and
rows. The general equations of the bifactorization method are

l_{ip} = a_{ip} \quad \text{for } i \geq p
u_{pj} = \frac{a_{pj}}{a_{pp}} \quad \text{for } j > p
a_{ij} \leftarrow a_{ij} - l_{ip}u_{pj} \quad \text{for } i > p, \; j > p
    (A.101)

Here, the letter p denotes the pass (the pivot step). This will be illustrated with an
example.
Example A.11
Consider the matrix:

\bar{A} = \begin{bmatrix} 1 & 2 & 1 & 0 \\ 0 & 3 & 3 & 1 \\ 2 & 0 & 2 & 0 \\ 1 & 0 & 0 & 2 \end{bmatrix}

It is required to convert it into LU form. This is the same matrix as in Example A.10.
Attach an identity matrix, which will ultimately be converted into the U matrix while
the \bar{A} matrix is converted into an L matrix:

\left[ \begin{array}{cccc|cccc} 1 & 2 & 1 & 0 & 1 & 0 & 0 & 0 \\ 0 & 3 & 3 & 1 & 0 & 1 & 0 & 0 \\ 2 & 0 & 2 & 0 & 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 2 & 0 & 0 & 0 & 1 \end{array} \right]
First step, p = 1:
The pivot column and row are entered into the L and U matrices, and the remaining
elements of \bar{A} are modified using Eq. (A.101), e.g.,

a_{32} \leftarrow a_{32} - l_{31}u_{12} = 0 - (2)(2) = -4
a_{33} \leftarrow a_{33} - l_{31}u_{13} = 2 - (2)(1) = 0

\begin{bmatrix} 1 & & & \\ 0 & 3 & 3 & 1 \\ 2 & -4 & 0 & 0 \\ 1 & -2 & -1 & 2 \end{bmatrix} \quad \begin{bmatrix} 1 & 2 & 1 & 0 \\ & 1 & & \\ & & 1 & \\ & & & 1 \end{bmatrix}

Step 2, pivot column 2, p = 2:

\begin{bmatrix} 1 & & & \\ 0 & 3 & & \\ 2 & -4 & 4 & 1.33 \\ 1 & -2 & 1 & 2.66 \end{bmatrix} \quad \begin{bmatrix} 1 & 2 & 1 & 0 \\ & 1 & 1 & 0.33 \\ & & 1 & \\ & & & 1 \end{bmatrix}
Third step, pivot column 3, p = 3:

\begin{bmatrix} 1 & & & \\ 0 & 3 & & \\ 2 & -4 & 4 & \\ 1 & -2 & 1 & 2.33 \end{bmatrix} \quad \begin{bmatrix} 1 & 2 & 1 & 0 \\ & 1 & 1 & 0.33 \\ & & 1 & 0.33 \\ & & & 1 \end{bmatrix}

This is the same result as derived before in Example A.10.
A.12 LDU (PRODUCT FORM, CASCADE, OR CHOLESKI FORM)
The individual terms of L, D, and U can be found by direct multiplication. Again,
consider a 4 \times 4 matrix:

\begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \\ a_{41} & a_{42} & a_{43} & a_{44} \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ l_{21} & 1 & 0 & 0 \\ l_{31} & l_{32} & 1 & 0 \\ l_{41} & l_{42} & l_{43} & 1 \end{bmatrix} \begin{bmatrix} d_{11} & 0 & 0 & 0 \\ 0 & d_{22} & 0 & 0 \\ 0 & 0 & d_{33} & 0 \\ 0 & 0 & 0 & d_{44} \end{bmatrix} \begin{bmatrix} 1 & u_{12} & u_{13} & u_{14} \\ 0 & 1 & u_{23} & u_{24} \\ 0 & 0 & 1 & u_{34} \\ 0 & 0 & 0 & 1 \end{bmatrix}    (A.102)
The following relations exist:

d_{11} = a_{11}
d_{22} = a_{22} - l_{21}d_{11}u_{12}
d_{33} = a_{33} - l_{31}d_{11}u_{13} - l_{32}d_{22}u_{23}
d_{44} = a_{44} - l_{41}d_{11}u_{14} - l_{42}d_{22}u_{24} - l_{43}d_{33}u_{34}

u_{12} = a_{12}/d_{11}
u_{13} = a_{13}/d_{11}
u_{14} = a_{14}/d_{11}
u_{23} = (a_{23} - l_{21}d_{11}u_{13})/d_{22}
u_{24} = (a_{24} - l_{21}d_{11}u_{14})/d_{22}
u_{34} = (a_{34} - l_{31}d_{11}u_{14} - l_{32}d_{22}u_{24})/d_{33}

l_{21} = a_{21}/d_{11}
l_{31} = a_{31}/d_{11}
l_{32} = (a_{32} - l_{31}d_{11}u_{12})/d_{22}
l_{41} = a_{41}/d_{11}
l_{42} = (a_{42} - l_{41}d_{11}u_{12})/d_{22}
l_{43} = (a_{43} - l_{41}d_{11}u_{13} - l_{42}d_{22}u_{23})/d_{33}
    (A.103)
In general:

d_{ii} = a_{ii} - \sum_{j=1}^{i-1} l_{ij}d_{jj}u_{ji} \quad i = 1, 2, \ldots, n

u_{ik} = \left( a_{ik} - \sum_{j=1}^{i-1} l_{ij}d_{jj}u_{jk} \right) / d_{ii} \quad k = i+1, \ldots, n; \; i = 1, 2, \ldots, n

l_{ki} = \left( a_{ki} - \sum_{j=1}^{i-1} l_{kj}d_{jj}u_{ji} \right) / d_{ii} \quad k = i+1, \ldots, n; \; i = 1, 2, \ldots, n
    (A.104)
Another scheme is to consider \bar{A} as a product of sequential lower and upper matrices
as follows:

\bar{A} = \bar{L}_1\bar{L}_2 \cdots \bar{L}_n\bar{U}_n \cdots \bar{U}_2\bar{U}_1    (A.105)

\begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \\ a_{41} & a_{42} & a_{43} & a_{44} \end{bmatrix} = \begin{bmatrix} l_{11} & 0 & 0 & 0 \\ l_{21} & 1 & 0 & 0 \\ l_{31} & 0 & 1 & 0 \\ l_{41} & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & a_{22_2} & a_{23_2} & a_{24_2} \\ 0 & a_{32_2} & a_{33_2} & a_{34_2} \\ 0 & a_{42_2} & a_{43_2} & a_{44_2} \end{bmatrix} \begin{bmatrix} 1 & u_{12} & u_{13} & u_{14} \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}    (A.106)

Here, the second-step elements are denoted by an additional subscript 2:

l_{21} = a_{21} \quad l_{31} = a_{31} \quad l_{41} = a_{41}
u_{12} = a_{12}/l_{11} \quad u_{13} = a_{13}/l_{11} \quad u_{14} = a_{14}/l_{11}
a_{ij_2} = a_{ij} - l_{i1}u_{1j} \quad i, j = 2, 3, 4
    (A.107)

All elements correspond to step 1, unless indicated by subscript 2.
In general, for the kth step:

d_{kk}^{(k)} = a_{kk}^{(k)}
l_{ik}^{(k)} = a_{ik}^{(k)}/a_{kk}^{(k)}
u_{kj}^{(k)} = a_{kj}^{(k)}/a_{kk}^{(k)}
a_{ij}^{(k+1)} = a_{ij}^{(k)} - a_{ik}^{(k)}a_{kj}^{(k)}/a_{kk}^{(k)}
    (A.108)

for k = 1, 2, \ldots, n-1 and i, j = k+1, \ldots, n.
Example A.12
Convert the matrix of Example A.10 into LDU form:

\begin{bmatrix} 1 & 2 & 1 & 0 \\ 0 & 3 & 3 & 1 \\ 2 & 0 & 2 & 0 \\ 1 & 0 & 0 & 2 \end{bmatrix} = l_1 l_2 l_3 \, D \, u_3 u_2 u_1

The lower matrices are

l_1 = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 2 & 0 & 1 & 0 \\ 1 & 0 & 0 & 1 \end{bmatrix} \quad l_2 = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & -4/3 & 1 & 0 \\ 0 & -2/3 & 0 & 1 \end{bmatrix} \quad l_3 = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 1/4 & 1 \end{bmatrix}
The upper matrices are

u_3 = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 1/3 \\ 0 & 0 & 1 & 1/3 \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad u_2 = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad u_1 = \begin{bmatrix} 1 & 2 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}

The matrix D is

D = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 3 & 0 & 0 \\ 0 & 0 & 4 & 0 \\ 0 & 0 & 0 & 7/3 \end{bmatrix}

Thus, the LDU form of the original matrix is

\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 2 & -4/3 & 1 & 0 \\ 1 & -2/3 & 1/4 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 3 & 0 & 0 \\ 0 & 0 & 4 & 0 \\ 0 & 0 & 0 & 7/3 \end{bmatrix} \begin{bmatrix} 1 & 2 & 1 & 0 \\ 0 & 1 & 1 & 1/3 \\ 0 & 0 & 1 & 1/3 \\ 0 & 0 & 0 & 1 \end{bmatrix}
If the coefficient matrix is symmetrical (for a linear bilateral network), then

\bar{L} = \bar{U}^t    (A.109)

because

l_{ip} = a_{ip}/a_{pp}, \quad u_{pi} = a_{pi}/a_{pp}, \quad a_{ip} = a_{pi}    (A.110)
The LU and LDU forms are extensively used in power systems.
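The factors of Example A.12 can be multiplied back together numerically as a check (entries as derived above, with 7/3 kept exact):

```python
import numpy as np

A = np.array([[1.0, 2.0, 1.0, 0.0],
              [0.0, 3.0, 3.0, 1.0],
              [2.0, 0.0, 2.0, 0.0],
              [1.0, 0.0, 0.0, 2.0]])

L = np.array([[1.0,  0.0,  0.0,  0.0],
              [0.0,  1.0,  0.0,  0.0],
              [2.0, -4/3,  1.0,  0.0],
              [1.0, -2/3,  0.25, 1.0]])
D = np.diag([1.0, 3.0, 4.0, 7/3])
U = np.array([[1.0, 2.0, 1.0, 0.0],
              [0.0, 1.0, 1.0, 1/3],
              [0.0, 0.0, 1.0, 1/3],
              [0.0, 0.0, 0.0, 1.0]])

print(np.allclose(L @ D @ U, A))   # True: the LDU factors reproduce A
```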
BIBLIOGRAPHY
1. P Le Corbeiller. Matrix Analysis of Electrical Networks. Cambridge, MA: Harvard
University Press, 1950.
2. WE Lewis, DG Pryce. The Application of Matrix Theory to Electrical Engineering.
London: E&F N Spon, 1965.
3. HE Brown. Solution of Large Networks by Matrix Methods. New York: Wiley
Interscience, 1975.
4. SA Stignant. Matrix and Tensor Analysis in Electrical Network Theory. London:
Macdonald, 1964.
5. RB Shipley. Introduction to Matrices and Power Systems. New York: Wiley, 1976.