Applied Mathematics and Computation 214 (2009) 246–258
Extensions of Faddeev's algorithms to polynomial matrices

Predrag S. Stanimirovic (a), Milan B. Tasic (a,*), Ky M. Vu (b)
(a) University of Niš, Faculty of Science and Mathematics, 18000 Niš, Serbia
(b) AuLac Technol. Inc., Ottawa, Ontario, Canada
Keywords: Leverrier–Faddeev algorithm; Moore–Penrose inverse; Drazin inverse; Polynomial matrices; MATHEMATICA
doi:10.1016/j.amc.2009.03.076
The authors gratefully acknowledge support from the research project 144011 of the Serbian Ministry of Science.
* Corresponding author.
E-mail addresses: [email protected] (P.S. Stanimirovic), [email protected] (M.B. Tasic), [email protected] (K.M. Vu).
Starting from the algorithms introduced in [Ky M. Vu, An extension of the Faddeev's algorithms, in: Proceedings of the IEEE Multi-conference on Systems and Control, September 3–5, 2008, San Antonio, TX], which are applicable to one-variable regular polynomial matrices, we introduce two dual extensions of the Faddeev's algorithm to one-variable rectangular or singular matrices. Corresponding algorithms for symbolic computation of the Drazin and the Moore–Penrose inverse are introduced. These algorithms are alternatives to previous representations of the Moore–Penrose and the Drazin inverse of one-variable polynomial matrices based on the Leverrier–Faddeev algorithm. A complexity analysis is performed. The algorithms are implemented in the symbolic computational package MATHEMATICA and illustrative test examples are presented.
© 2009 Elsevier Inc. All rights reserved.
1. Introduction
As usual, C^{m×n} denotes the set of m×n complex matrices. Similarly, R[s] (resp. R(s)) denotes the set of polynomials (resp. rational functions) with real coefficients in the indeterminate s. The set of m×n matrices with elements in R[s] (resp. R(s)) is denoted by R[s]^{m×n} (resp. R(s)^{m×n}). By I we denote an identity matrix of appropriate order, O denotes a zero matrix of adequate dimensions, and 0 denotes the zero polynomial. The trace of a given square matrix A is denoted by Tr(A).
For any matrix A ∈ C^{m×n}, the system of matrix equations

(1) AXA = A,  (2) XAX = X,  (3) (AX)^T = AX,  (4) (XA)^T = XA

has a unique solution X ∈ C^{n×m}, known as the Moore–Penrose generalized inverse of A and denoted by A†.
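As a quick numerical illustration (not from the paper), the four Penrose equations can be checked directly against NumPy's built-in pseudoinverse; the rectangular matrix A below is a hypothetical example:

```python
import numpy as np

# Hypothetical rectangular matrix; np.linalg.pinv returns its
# Moore-Penrose inverse X, which we verify against the four
# Penrose equations (1)-(4).
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
X = np.linalg.pinv(A)

assert np.allclose(A @ X @ A, A)          # (1) AXA = A
assert np.allclose(X @ A @ X, X)          # (2) XAX = X
assert np.allclose((A @ X).T, A @ X)      # (3) (AX)^T = AX
assert np.allclose((X @ A).T, X @ A)      # (4) (XA)^T = XA
```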
Let A ∈ R^{n×n} be an arbitrary matrix and let k = ind(A). Then the system of matrix equations

(1^k) A^k X A = A^k,  (2) XAX = X,  (5) AX = XA

has a unique solution. This solution is called the Drazin inverse of A and is denoted by A^D.

Algorithms to calculate the determinant and the adjoint polynomials of the matrix inverse (sI − A)^{−1}, known as the resolvent of A, are discussed in [3,4,9,25], for example. In Kailath [9], the author gave the corresponding algorithms, calling the underlying formulas the Leverrier–Souriau–Faddeeva–Frame formulas.
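The defining equations (1^k), (2), (5) of the Drazin inverse can be checked on a minimal example. For a hypothetical diagonal singular matrix the Drazin inverse simply inverts the nonzero eigenvalues:

```python
import numpy as np

# For A = diag(2, 0) the index is k = ind(A) = 1 and the Drazin
# inverse is A^D = diag(1/2, 0); we verify equations (1^k), (2), (5).
A = np.diag([2.0, 0.0])
AD = np.diag([0.5, 0.0])
k = 1

Ak = np.linalg.matrix_power(A, k)
assert np.allclose(Ak @ AD @ A, Ak)    # (1^k) A^k X A = A^k
assert np.allclose(AD @ A @ AD, AD)    # (2)   XAX = X
assert np.allclose(A @ AD, AD @ A)     # (5)   AX = XA
```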
An extension of the Leverrier–Faddeev algorithm which computes the Moore–Penrose inverse of a constant rectangular matrix A ∈ C^{m×n} is given in [2]. An analogous algorithm for computing the Drazin inverse of a constant, square, possibly singular matrix A ∈ C^{n×n} is introduced in [6].
Computation of the Moore–Penrose inverse of one-variable polynomial and/or rational matrices, based on the Leverrier–Faddeev algorithm, is investigated in [5,8,11,15,23]. An implementation of this algorithm in the symbolic computational language MAPLE is described in [8]. An algorithm for computing the Moore–Penrose inverse of two-variable rational and polynomial matrices is introduced in [16]. An effective (quicker and less memory-expensive) algorithm for computing the Moore–Penrose inverse of one-variable and two-variable sparse polynomial matrices, with respect to those introduced in [11], is presented in [13]. This algorithm is efficient when the elements of the input matrix are sparse polynomials with only a few nonzero terms.
Representations and corresponding algorithms for computing the Drazin inverse of a nonregular polynomial matrix of arbitrary degree are introduced in [7,21,23]. These algorithms are also extensions of the Leverrier–Faddeev algorithm. Bu and Wei in [1] proposed a finite algorithm for symbolic computation of the Drazin inverse of two-variable rational and polynomial matrices. A more effective three-dimensional version of this algorithm, together with an implementation in the programming language MATLAB, is also presented in [1].
The algorithm introduced in [22] generalizes the Leverrier–Faddeev algorithm and generates the class of outer inverses of a rational or polynomial matrix.
An interpolation algorithm for computing the Moore–Penrose inverse of a given one-variable polynomial matrix, based on the Leverrier–Faddeev method, is presented in [17]. Corresponding algorithms, based on the interpolation and Leverrier–Faddeev algorithms, for computing the Drazin inverse and outer inverses of one-variable polynomial matrices are introduced in [18,19], respectively. Algorithms for computing the Moore–Penrose and the Drazin inverse of one-variable polynomial matrices based on the evaluation–interpolation technique and the discrete Fourier transform (DFT) are introduced in [14]. Corresponding algorithms for two-variable polynomial matrices are introduced in [24].
We are directly motivated by an independent approach for computing the usual inverse, which also starts from the Leverrier–Faddeev algorithm but is applicable only to square invertible one-variable polynomial matrices. This approach, initiated by Vu in the papers [26,27], uses derivatives of the matrix powers.
Guided by this motivation, we are going to accomplish the following goals:
1. To extend the algorithms introduced in [26,27] to the set of rectangular or singular polynomial matrices. In this way we derive two similar algorithms for computing the Moore–Penrose and the Drazin inverse, respectively;
2. To compare computational complexity and memory space requirements of two different approaches.
In the present paper we derive an algorithm to calculate the Moore–Penrose inverse and an analogous representation of the Drazin inverse of a one-variable polynomial matrix. These algorithms are alternatives to known algorithms for computing the Moore–Penrose inverse [8,10,11,15] and the Drazin inverse of polynomial matrices [7,12,21,23]. On the other hand, these algorithms generalize the algorithms for computing the usual inverse of polynomial matrices introduced in [26,27].
The paper is organized as follows. The Faddeev algorithms for computing the Moore–Penrose inverse and the Drazin inverse of rational matrices are reviewed in Section 2. Their extensions to one-variable rectangular or singular polynomial matrices are derived in Section 3, where two similar algorithms for computing the Drazin inverse and the Moore–Penrose inverse of polynomial matrices are introduced; this accomplishes our first goal. In Section 4 we perform a complexity analysis of the known and the introduced algorithms and compare them; this accomplishes our second goal. Finally, test examples from [30] are worked out in Section 5 to additionally verify the correctness of the introduced algorithms.
2. Faddeev’s algorithms for rational matrices
Consider a constant square matrix A ∈ R^{n×n}. Assume that the characteristic polynomial of A is equal to

a(z) = det[zI_n − A] = a_0 z^n + a_1 z^{n−1} + ... + a_{n−1} z + a_n,  a_0 = 1.
A representation of the Drazin inverse A^D based on the characteristic polynomial a(z) of the matrix A is introduced in [6].

The following representation of the Drazin inverse is valid for both rational and polynomial square matrices [7,12,21,23]; it is derived as a natural extension of the corresponding representation from [6], applicable to constant square matrices.
Lemma 2.1. Consider a nonregular one-variable n×n rational matrix A(s). Assume that

a(z, s) = det[zI_n − A(s)] = a_0(A) z^n + a_1(A) z^{n−1} + ... + a_{n−1}(A) z + a_n(A),  a_0(A) ≡ 1,  a_i(A) ∈ R[s],  z ∈ R  (2.1)

is the characteristic polynomial of A(s). Also, consider the following sequence of n×n polynomial matrices

B_i(A) = a_0(A) A(s)^i + a_1(A) A(s)^{i−1} + ... + a_{i−1}(A) A(s) + a_i(A) I_n,  a_0(A) = 1,  i = 0, ..., n.  (2.2)
248 P.S. Stanimirovic et al. / Applied Mathematics and Computation 214 (2009) 246–258
Let

a_n(A) ≡ 0, ..., a_{t+1}(A) ≡ 0, a_t(A) ≠ 0;  B_n(A) ≡ O, ..., B_r(A) ≡ O, B_{r−1}(A) ≠ O,

and let k = r − t. The Drazin inverse of A(s) is given by

A(s)^D = (−1)^{k+1} a_t(A)^{−k−1} A(s)^k B_{t−1}(A)^{k+1}.  (2.3)
Algorithm 2.1 for computing the Drazin inverse follows directly from Lemma 2.1.
Algorithm 2.1. Leverrier–Faddeev method for computing the Drazin inverse A(s)^D

Require: Matrix A(s) ∈ R(s)^{n×n}.
1: a_0(A) := 1
2: A_0(A) := O
3: B_0(A) := I_n
4: for i := 1 to n do
5:   A_i(A) := A(s) B_{i−1}(A)
6:   a_i(A) := −Tr(A_i(A))/i
7:   B_i(A) := A_i(A) + a_i(A) I_n
8: end for
9: t := max{i | a_i(A) ≠ 0, i = 0, ..., n}
10: r := min{i | B_i(A) = O, i = 0, ..., n}
11: k := r − t
12: return A(s)^D := (−1)^{k+1} a_t(A)^{−k−1} A(s)^k B_{t−1}(A)^{k+1}.
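A minimal sketch of this iteration, specialised to a constant square matrix (so the polynomial bookkeeping in s disappears), can be written in a few lines; the example matrix and the tolerance are assumptions for illustration, and t ≥ 1 (a non-nilpotent input) is assumed:

```python
import numpy as np

def drazin_lf(A, tol=1e-12):
    """Sketch of Algorithm 2.1 (Leverrier-Faddeev) for a constant
    square non-nilpotent matrix: builds a_i and B_i, then applies (2.3)."""
    n = A.shape[0]
    a = [1.0]
    B = [np.eye(n)]
    for i in range(1, n + 1):
        Ai = A @ B[-1]                    # step 5
        ai = -np.trace(Ai) / i            # step 6
        a.append(ai)
        B.append(Ai + ai * np.eye(n))     # step 7
    t = max(i for i in range(n + 1) if abs(a[i]) > tol)
    r = min(i for i in range(n + 1) if np.allclose(B[i], 0))
    k = r - t
    return ((-1) ** (k + 1) * a[t] ** (-k - 1)
            * np.linalg.matrix_power(A, k)
            @ np.linalg.matrix_power(B[t - 1], k + 1))

# diag(2, 0) has Drazin inverse diag(1/2, 0)
assert np.allclose(drazin_lf(np.diag([2.0, 0.0])), np.diag([0.5, 0.0]))
```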
A representation of the Moore–Penrose inverse of a constant n×m matrix based on the Leverrier–Faddeev algorithm is given by Decell [2]. A new proof of Decell's finite algorithm for the generalized inverse is given in [10]. Since this representation remains valid when A ∈ R^{n×m} is replaced by a rational matrix A(s) ∈ R(s)^{n×m}, Karampetakis in [11] introduced the following representation of the Moore–Penrose inverse A(s)†.
Lemma 2.2 [11]. Let A(s) ∈ R(s)^{n×m} be an arbitrary rational matrix and let

a(z, s) = det[zI_n − A(s)A(s)^T] = a_0(H) z^n + a_1(H) z^{n−1} + ... + a_{n−1}(H) z + a_n(H),  a_0(H) = 1

be the characteristic polynomial of H(s) = A(s)A(s)^T. Let k be the maximal index such that a_k(H) ≠ 0 (i.e. a_k(H) ≠ 0 and a_{k+1}(H) = ... = a_n(H) = 0). If k > 0, then the Moore–Penrose inverse of A(s) is given by

A(s)† = −a_k(H)^{−1} A(s)^T [H(s)^{k−1} + a_1(H) H(s)^{k−2} + ... + a_{k−1}(H) I_n] = −a_k(H)^{−1} A(s)^T B_{k−1}(H).  (2.4)

Otherwise, if k = 0, then A(s)† = O.
Using this representation, Karampetakis in [11] introduced the following algorithm for computing the Moore–Penrose inverse A(s)† ∈ R(s)^{m×n} (Algorithm 2.2).
Algorithm 2.2. Leverrier–Faddeev method for computing the Moore–Penrose inverse A(s)†

Require: Matrix A(s) ∈ R(s)^{n×m}.
1: a_0(H) := 1
2: A_0(H) := O
3: B_0(H) := I_n
4: for i := 1 to n do
5:   A_i(H) := A(s)A(s)^T B_{i−1}(H)
6:   a_i(H) := −Tr(A_i(H))/i
7:   B_i(H) := A_i(H) + a_i(H) I_n
8: end for
9: k := max{i | a_i(H) ≠ 0, i = 0, ..., n}
10: if k = 0 then
11:   return A(s)† := O
12: else
13:   return A(s)† := −a_k(H)^{−1} A(s)^T B_{k−1}(H)
14: end if
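The same iteration, sketched for a constant rectangular matrix (the rational case with s absent), can be checked against NumPy's pseudoinverse; the example matrix is an assumption for illustration:

```python
import numpy as np

def pinv_lf(A, tol=1e-12):
    """Sketch of Algorithm 2.2 (Decell / Leverrier-Faddeev) for a
    constant rectangular matrix: iterate on H = A A^T, then apply (2.4)."""
    n = A.shape[0]
    H = A @ A.T
    a = [1.0]
    B = [np.eye(n)]
    for i in range(1, n + 1):
        Ai = H @ B[-1]
        ai = -np.trace(Ai) / i
        a.append(ai)
        B.append(Ai + ai * np.eye(n))
    k = max((i for i in range(n + 1) if abs(a[i]) > tol), default=0)
    if k == 0:
        return np.zeros(A.T.shape)
    return -A.T @ B[k - 1] / a[k]

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
assert np.allclose(pinv_lf(A), np.linalg.pinv(A))
```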
3. Faddeev’s algorithms for polynomial matrices
Consider a square or rectangular matrix A = A(s) given in the matrix polynomial form with respect to the unknown s:

A(s) = A_0 + A_1 s + ... + A_q s^q = sum_{i=0}^{q} A_i s^i,  (3.1)

where A_i, i = 0, ..., q, are constant matrices of appropriate dimensions. If A_q is not a zero matrix, then the number q is called the degree of A(s) and is denoted by q = deg(A(s)).
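In code, this representation is simply the list of constant coefficient matrices {A_0, ..., A_q}; a small helper (an illustrative assumption, not from the paper) evaluates A(s) at a given point by Horner's scheme:

```python
import numpy as np

def eval_poly_matrix(coeffs, s):
    """Evaluate A(s) = sum_i coeffs[i] * s**i by Horner's scheme,
    where coeffs = [A0, A1, ..., Aq] are constant matrices."""
    result = np.zeros_like(coeffs[0], dtype=float)
    for Ai in reversed(coeffs):
        result = result * s + Ai
    return result

A0 = np.array([[1.0, 0.0], [0.0, 1.0]])
A1 = np.array([[0.0, 1.0], [1.0, 0.0]])
# degree q = 1 here since A1 is not the zero matrix
assert np.allclose(eval_poly_matrix([A0, A1], 2.0), A0 + 2.0 * A1)
```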
3.1. Computing the Drazin inverse
A computational procedure for computing the Drazin inverse of a one-variable polynomial matrix A(s) ∈ R[s]^{n×n} is derived from Lemma 2.1 [7,12,21,23]. We restate this algorithm for the sake of completeness (Algorithm 3.1). It is not difficult to verify, using the principle of mathematical induction, that the degrees of the scalar polynomial a_p(A), p = 1, ..., n, and of the matrix polynomial B_p(A), p = 1, ..., n (defined in Algorithm 2.1), are both at most equal to pq. In accordance with these facts, Algorithm 3.1 is derived by applying the following representations in Algorithm 2.1:
a_p(A) = sum_{l=0}^{pq} a_{p,l} s^l,  a_{p,l} ∈ R,  p = 1, ..., n,  (3.2)

B_p(A) = sum_{l=0}^{pq} B_{p,l} s^l,  B_{p,l} ∈ R^{n×n},  p = 1, ..., n.  (3.3)
Algorithm 3.1. Computing the Drazin inverse A(s)^D using the algorithm from [7,12,21,23].

Require: The sequence of n×n constant matrices {A_0, A_1, ..., A_q}.
1: B_{0,0} := I_n; B_{0,j} := O for all j ∈ N
2: B_{i,j} := O, i = 0, ..., n−1, j = iq+1, ..., (n−1)q
3: A_j := O, j = q+1, ..., nq
4: for i := 0 to n−1 do
5:   for j := 0 to (i+1)q do
6:     a_{i+1,j} := −(1/(i+1)) Tr( sum_{l=0}^{j} A_{j−l} B_{i,l} )
7:     B_{i+1,j} := sum_{l=0}^{j} A_{j−l} B_{i,l} + a_{i+1,j} I_n
8:   end for
9: end for
10: t := max{i | (∃ j ∈ {0, 1, ..., nq}) a_{i,j} ≠ 0}
11: r := min{i | B_{i,j} = O, i = 0, ..., n, j = 0, 1, ..., nq}
12: k := r − t
13: return A(s)^D := (−1)^{k+1} ( sum_{j=0}^{tq} a_{t,j} s^j )^{−k−1} ( sum_{i=0}^{q} A_i s^i )^k ( sum_{l=0}^{(t−1)q} B_{t−1,l} s^l )^{k+1}.
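Algorithm 3.1 realises, coefficient-by-coefficient, the symbolic iteration of Algorithm 2.1; the following sketch lets SymPy carry the polynomials in s directly, which is an easy way to cross-check the coefficient bookkeeping (the 2×2 input matrix is a hypothetical example):

```python
import sympy as sp

# Symbolic Leverrier-Faddeev iteration on A(s) = [[s, 0], [0, 0]],
# whose Drazin inverse is [[1/s, 0], [0, 0]].
s = sp.symbols('s')
A = sp.Matrix([[s, 0], [0, 0]])
n = A.shape[0]

a = [sp.Integer(1)]
B = [sp.eye(n)]
for i in range(1, n + 1):
    Ai = (A * B[-1]).expand()
    ai = sp.expand(-Ai.trace() / i)
    a.append(ai)
    B.append((Ai + ai * sp.eye(n)).expand())

t = max(i for i in range(n + 1) if a[i] != 0)
r = min(i for i in range(n + 1) if B[i].is_zero_matrix)
k = r - t
AD = sp.simplify((-1) ** (k + 1) * a[t] ** (-k - 1) * A ** k * B[t - 1] ** (k + 1))
assert sp.simplify(AD - sp.Matrix([[1 / s, 0], [0, 0]])).is_zero_matrix
```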
In the following theorem we introduce an alternative representation of the Drazin inverse of a one-variable polynomial matrix. This representation is motivated by the algorithm introduced in [26], whose domain is the set of one-variable regular polynomial matrices.
Theorem 3.1. Consider a singular one-variable square polynomial matrix A(s) ∈ R[s]^{n×n} in the form (3.1). Consider the sequence of n×n constant matrices {A_{0,0}, A_{0,1}, ..., A_{nq,n}} defined by the following two recurrence relations:

A_{j,i} = sum_{b=1}^{i} sum_{a=1}^{j} A_{j−a,i−b} · A_a A_0^{b−1},  (3.4)

A_{0,i} = A_0^i,  i = 1, ..., n,  j = 0, ..., nq.  (3.5)
Let us consider the following sequence of real numbers

a_{p,l} = −(1/p) sum_{i=1}^{p} sum_{j=0}^{l} a_{p−i,l−j} · Tr(A_{j,i}),  a_{0,0} = 1,  (3.6)
and the sequence of real polynomials a_p(A) defined as in (3.2). Assume that the integer t satisfies

t = max{p | a_p ≠ 0} = max{p | (∃ l) a_{p,l} ≠ 0}.
Also, consider the following sequence of n×n constant matrices

B_{p,l} = sum_{i=0}^{p} sum_{j=0}^{l} a_{p−i,l−j} · A_{j,i},  p = 0, ..., t−1,  l = 0, ..., (t−1)q,  (3.7)
the set of matrix polynomials B_p(A) defined as in (3.3), as well as the integer r satisfying

r = min{p | B_p ≡ O} = min{p | (∀ l) B_{p,l} ≡ O}

and k = r − t. Then the Drazin inverse of A(s) is the following polynomial matrix:

A(s)^D = (−1)^{k+1} ( sum_{j=0}^{tq} a_{t,j} s^j )^{−k−1} ( sum_{j=0}^{q} A_j s^j )^k ( sum_{j=0}^{(t−1)q} B_{t−1,j} s^j )^{k+1}.  (3.8)
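The trace recursion (3.6) can be verified independently: since A_{j,i} is the s^j Taylor coefficient of A(s)^i, the numbers a_{p,l} it produces must match the coefficients of det(zI − A(s)) expanded in z and s. A sketch with a small hypothetical 2×2 example:

```python
import sympy as sp

s, z = sp.symbols('s z')
A = sp.Matrix([[s, 1], [0, 1 + s]])   # hypothetical polynomial matrix
n, q = 2, 1

def Aji(j, i):
    # A_{j,i}: the coefficient of s^j in A(s)^i (equivalently the scaled
    # j-th derivative at s = 0, definition (3.13))
    return (A ** i).applyfunc(lambda e: sp.expand(e).coeff(s, j))

# recursion (3.6) for the coefficients a_{p,l} of a_p(A) = sum_l a_{p,l} s^l
a = {(0, 0): sp.Integer(1)}
for p in range(1, n + 1):
    for l in range(0, p * q + 1):
        a[(p, l)] = -sp.Rational(1, p) * sum(
            a.get((p - i, l - j), 0) * Aji(j, i).trace()
            for i in range(1, p + 1) for j in range(0, l + 1))

# compare against the characteristic polynomial det(z I - A(s))
char = sp.expand((z * sp.eye(n) - A).det())
for p in range(1, n + 1):
    ap = sp.expand(char.coeff(z, n - p))
    for l in range(0, p * q + 1):
        assert ap.coeff(s, l) == sp.expand(a[(p, l)])
```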
Proof. The characteristic coefficients a_i(A), i = 0, 1, ..., n, in (2.1) are given recursively as (see, for example, [20]):

a_p(A) = −(1/p) sum_{i=1}^{p} a_{p−i}(A) a_1(A^i).  (3.9)
The coefficients a_1(A^j) are the Newton sums of the function a(z, s), and therefore a_1(A^j) = Tr(A^j). On the other hand, the characteristic coefficient a_p(A) is a polynomial of degree pq in the parameter s. Therefore, we can alternatively write a_p(A) in the form (3.2). Then the coefficients a_{p,l} can be calculated as

a_{p,l} = (1/l!) [d^l a_p(A)/ds^l] |_{s=0},  l = 0, ..., pq.  (3.10)

It is clear that the initial values for these coefficients are given by

a_{p,0} = a_p(A_0),  a_{0,0} = 1.
In the next step of the proof we show that the remaining coefficients a_{p,l} can be calculated recursively as in (3.6), using representations (3.9) and (3.10). We follow the technique used in [26].
Using (3.9), (3.10) and the rule for the derivative of a product of two polynomials, after some algebraic transformations we can write

a_{p,l} = −(1/(l! p)) sum_{i=1}^{p} sum_{j=0}^{l} C(l, j) [d^{l−j} a_{p−i}(A)/ds^{l−j}] [d^j a_1(A^i)/ds^j] |_{s=0}
        = −(1/p) sum_{i=1}^{p} sum_{j=0}^{l} [d^{l−j} a_{p−i}(A) / ((l−j)! ds^{l−j})] [d^j a_1(A^i) / (j! ds^j)] |_{s=0}
        = −(1/p) sum_{i=1}^{p} sum_{j=0}^{l} a_{p−i,l−j} [d^j a_1(A^i) / (j! ds^j)] |_{s=0}.  (3.11)
By interchanging the order of the trace and derivative operators, we write the derivatives of the characteristic coefficient a_1(A^i) as

d^j a_1(A^i) / (j! ds^j) = d^j Tr(A^i) / (j! ds^j) = Tr( d^j A^i / (j! ds^j) )
and then obtain from (3.11)

a_{p,l}(A) = −(1/p) sum_{i=1}^{p} sum_{j=0}^{l} a_{p−i,l−j} · Tr( d^j A^i / (j! ds^j) ) |_{s=0}.  (3.12)
Denote

A_{j,i} = [d^j A^i / (j! ds^j)] |_{s=0}.  (3.13)
With constant matrices Aj;i obtained, we can take their traces and calculate, using Eq. (3.12), all the coefficients ap;l of thecharacteristic coefficient apðAÞ recursively, as in (3.6).
The derivatives of the matrix power A^i can be calculated recursively, as in [26]. We use A^i = A^{i−1} A and then differentiate both sides of this identity with respect to s to obtain

d^j A^i / ds^j = sum_{a=0}^{j} C(j, a) [d^{j−a} A^{i−1}/ds^{j−a}] [d^a A/ds^a]
             = [d^j A^{i−1}/ds^j] A + sum_{a=1}^{j} C(j, a) [d^{j−a} A^{i−1}/ds^{j−a}] [d^a A/ds^a]
             = [d^j A^{i−2}/ds^j] A^2 + sum_{b=1}^{2} sum_{a=1}^{j} C(j, a) [d^{j−a} A^{i−b}/ds^{j−a}] [d^a A/ds^a] A^{b−1}.  (3.14)
We continue these substitutions, replacing the remaining power A^{i−2} by lower powers, until the jth derivative of the remaining factor vanishes.
One can verify the following identity by means of mathematical induction:

d^j A^i / ds^j = sum_{b=1}^{i} sum_{a=1}^{j} C(j, a) [d^{j−a} A^{i−b}/ds^{j−a}] [d^a A/ds^a] A^{b−1}.  (3.15)
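Identity (3.15) is easy to confirm symbolically: the right-hand side, built from derivatives of lower powers, must equal the direct jth derivative of A(s)^i. A sketch with a small hypothetical polynomial matrix:

```python
import sympy as sp

s = sp.symbols('s')
A = sp.Matrix([[1 + s, s], [s**2, 2]])   # hypothetical example
i, j = 3, 2

def dmat(M, order):
    # order-th derivative of a matrix in s (order 0 returns M itself)
    return M if order == 0 else sp.diff(M, s, order)

# right-hand side of (3.15)
rhs = sp.zeros(2, 2)
for b in range(1, i + 1):
    for alpha in range(1, j + 1):
        rhs += (sp.binomial(j, alpha)
                * dmat(A ** (i - b), j - alpha)
                * dmat(A, alpha)
                * A ** (b - 1))

lhs = sp.diff(A ** i, s, j)              # direct j-th derivative of A^i
assert (lhs - rhs).expand().is_zero_matrix
```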
The ath derivative of the matrix A can be evaluated as

d^a A / ds^a = d^a/ds^a [ sum_{c=0}^{q} A_c s^c ] = sum_{c=a}^{q} c(c−1)···(c−a+1) A_c s^{c−a}.  (3.16)

Now, setting s = 0 in Eq. (3.16), only the term with c = a survives, and we obtain

[d^a A / ds^a] |_{s=0} = sum_{c=a}^{q} c(c−1)···(c−a+1) A_c s^{c−a} |_{s=0} = a! A_a.
Therefore, by setting s = 0 in Eq. (3.15), in view of the last equality, we can calculate the matrices A_{j,i} recursively as

A_{j,i} = sum_{b=1}^{i} sum_{a=1}^{j} [d^{j−a} A^{i−b} / ((j−a)! ds^{j−a})] |_{s=0} · A_a A_0^{b−1},
which is equivalent to (3.4). The initial condition (3.5) for the sequence A_{j,i} is now evident.

According to (2.2) we have

B_p = sum_{i=0}^{p} a_{p−i}(A) A(s)^i,  p = 0, ..., n.
If we write the matrix B_p in the form (3.3), then it immediately follows that

sum_{i=0}^{p} a_{p−i}(A) A(s)^i = sum_{l=0}^{pq} B_{p,l} s^l.
By differentiating both sides l times, dividing both sides by l! and setting s = 0, we get

B_{p,l} = sum_{i=0}^{p} (1/l!) d^l/ds^l [a_{p−i}(A) A(s)^i] |_{s=0}
        = sum_{i=0}^{p} sum_{j=0}^{l} (1/l!) C(l, j) [d^{l−j} a_{p−i}(A)/ds^{l−j}] [d^j A(s)^i/ds^j] |_{s=0}
        = sum_{i=0}^{p} sum_{j=0}^{l} a_{p−i,l−j} [d^j A(s)^i / (j! ds^j)] |_{s=0}.
According to the known representation of the Drazin inverse from [7], the Drazin inverse of A(s) is equal to

A(s)^D = A^k [q(A)]^{k+1},  q(A) = (−a_t(A))^{−1} B_{t−1}.

Therefore, we have

A(s)^D = (−1)^{k+1} (a_t(A))^{−k−1} A^k B_{t−1}^{k+1}.  (3.17)

Finally, by applying Eqs. (3.1), (3.2) and (3.17), we immediately obtain (3.8). □
In view of Theorem 3.1 we state the following algorithm for computing the Drazin inverse of a one-variable polynomial matrix (Algorithm 3.2). The polynomial matrix A(s) ∈ R[s]^{n×n} is represented as the three-dimensional list {A_0, A_1, ..., A_{nq}}, where A_0, ..., A_q are defined in (3.1) and A_{q+1} = ... = A_{nq} = O.
Algorithm 3.2. Computing the Drazin inverse A(s)^D.

Require: Polynomial matrix A(s) ∈ R[s]^{n×n} in the form {A_0, A_1, ..., A_{nq}}.
1: a_{0,0} := 1; A_{0,i} := A_0^i, i = 1, ..., n
2: for p := 0 to n−1 do
3:   for l := 0 to (p+1)q do
4:     A_{l,p+1} := sum_{b=1}^{p+1} sum_{a=1}^{l} A_{l−a,p+1−b} · A_a A_0^{b−1}
5:     a_{p+1,l} := −(1/(p+1)) sum_{i=1}^{p+1} sum_{j=0}^{l} a_{p+1−i,l−j} · Tr(A_{j,i})
6:     B_{p,l} := sum_{i=0}^{p} sum_{j=0}^{l} a_{p−i,l−j} · A_{j,i}
7:   end for
8: end for
9: t := max{p | (∃ l) a_{p,l} ≠ 0}
10: r := min{i | B_{i,j} = O, i = 0, ..., t−1, j = 0, 1, ..., (t−1)q}
11: k := r − t
12: return A(s)^D := (−1)^{k+1} ( sum_{j=0}^{tq} a_{t,j} s^j )^{−k−1} ( sum_{i=0}^{q} A_i s^i )^k ( sum_{l=0}^{(t−1)q} B_{t−1,l} s^l )^{k+1}.
3.2. Computing the Moore–Penrose inverse
A representation and corresponding algorithm for computing the Moore–Penrose inverse of a given polynomial matrix A(s) ∈ R[s]^{n×m} is given by Karampetakis in [11]. For the sake of completeness, we restate the known algorithm from [11] (Algorithm 3.3).
Algorithm 3.3 is derived from Algorithm 2.2 using the following polynomial representations:
a_p(AA^T) = sum_{l=0}^{2pq} a_{p,l} s^l,  a_{p,l} ∈ R,  p = 1, ..., n,  (3.18)

B_p(AA^T) = sum_{l=0}^{2pq} B_{p,l} s^l,  B_{p,l} ∈ R^{n×n},  p = 1, ..., n.  (3.19)
Algorithm 3.3. Computing the Moore–Penrose inverse A(s)† using the algorithm from [11].

Require: Polynomial matrix A(s) ∈ R[s]^{n×m} in the form {A_0, A_1, ..., A_q}.
1: B_{0,0} := I_n; A_k := O, k = q+1, ..., nq
2: B_{0,j} := O for all j > 0
3: B_{i,j} := O, i = 0, ..., n−1, j = 2iq+1, ..., 2(n−1)q
4: for i := 0 to n−1 do
5:   for j := 0 to 2(i+1)q do
6:     a_{i+1,j} := −(1/(i+1)) Tr( sum_{k=0}^{j} ( sum_{l=0}^{j−k} A_{j−k−l} A_l^T ) B_{i,k} )
7:     B_{i+1,j} := sum_{k=0}^{j} ( sum_{l=0}^{j−k} A_{j−k−l} A_l^T ) B_{i,k} + a_{i+1,j} I_n
8:   end for
9: end for
10: k := max{i | (∃ j) a_{i,j} ≠ 0, i = 0, ..., n}
11: return A(s)† := −( sum_{j=0}^{2kq} a_{k,j} s^j )^{−1} sum_{j=0}^{(2k−1)q} sum_{l=0}^{j} (A_{j−l}^T B_{k−1,l}) s^j.
In the following theorem we introduce an alternative algorithm for computing the Moore–Penrose inverse of a one-variable polynomial matrix A(s) ∈ R[s]^{n×m}.
Theorem 3.2. Consider a singular one-variable polynomial matrix A(s) ∈ R[s]^{n×m} in the form (3.1). Denote

H(s) = A(s)A(s)^T = sum_{j=0}^{2q} H_j s^j,  H_j = sum_{l=0}^{j} A_{j−l} A_l^T.  (3.20)
Assume that

a(z, s) = det[zI_n − H(s)] = z^n + a_1(H) z^{n−1} + ... + a_{n−1}(H) z + a_n(H),  a_i(H) ∈ R[s],  z ∈ R  (3.21)
is the characteristic polynomial of H(s). Consider the list of constant n×n matrices {A_{0,0}, A_{0,1}, ..., A_{nq,n}} defined as follows:

A_{j,i} = sum_{b=1}^{i} sum_{a=1}^{j} A_{j−a,i−b} · H_a H_0^{b−1},  (3.22)

A_{0,i} = H_0^i,  H_0 = A_0 A_0^T,  i = 1, ..., n,  j = 0, ..., nq,  (3.23)
as well as the sequence of polynomials a_p(H) and real numbers a_{p,l} defined as in (3.6). Assume that the integer k satisfies

k = max{p | a_p ≠ 0} = max{p | (∃ l) a_{p,l} ≠ 0}.
Let us define the following sequence of n×n constant matrices:

B_{k−1,l} = sum_{i=0}^{k−1} sum_{j=0}^{l} a_{k−1−i,l−j} · A_{j,i},  l = 0, ..., 2(k−1)q.  (3.24)
Then the Moore–Penrose inverse of A(s) is the following matrix polynomial:

A(s)† = −( sum_{j=0}^{2kq} a_{k,j} s^j )^{−1} sum_{j=0}^{(2k−1)q} sum_{l=0}^{j} (A_{j−l}^T B_{k−1,l}) s^j.  (3.25)
Proof. The characteristic coefficients a_p(H) in (3.21) are given recursively as (see [20]):

a_p(H) = −(1/p) sum_{i=1}^{p} a_{p−i}(H) a_1(H^i).  (3.26)
The coefficients a_1(H^j) are the Newton sums of the function a(z, s). The characteristic coefficient a_p(H) is a polynomial of degree 2pq and it is of the form (3.18).
Using the same method as in Theorem 3.1, it is not difficult to verify that

a_{p,l} = −(1/p) sum_{i=1}^{p} sum_{j=0}^{l} a_{p−i,l−j} [d^j a_1(H^i) / (j! ds^j)] |_{s=0}.
The initial values for these coefficients are

a_{p,0} = a_p(H_0),  a_{0,0} = 1.
If we now denote

A_{j,i} = [d^j H^i / (j! ds^j)] |_{s=0},  (3.27)
one can verify, in a similar way as in Theorem 3.1, that

A_{j,i} = sum_{b=1}^{i} sum_{a=1}^{j} [d^{j−a} H^{i−b} / ((j−a)! ds^{j−a})] |_{s=0} · H_a H_0^{b−1},

which is equivalent to (3.22). The initial condition (3.23) for the sequence A_{j,i} is now evident. With the derivatives obtained, we can take their traces and calculate all the coefficients a_{p,l} of the characteristic coefficient a_p(H) recursively, using the equation
a_{p,l}(H) = −(1/p) sum_{i=1}^{p} sum_{j=0}^{l} a_{p−i,l−j} · Tr( d^j H^i / (j! ds^j) ) |_{s=0}.  (3.28)
On the other hand, using the relation between the characteristic coefficients of a matrix and its powers, we have

H^n + a_1(H) H^{n−1} + ... + a_k(H) H^{n−k} = O,

and therefore

H^{n−k} [ H^k + a_1(H) H^{k−1} + ... + a_k(H) I_n ] = O.
Using the same principle as in [2], we get

H^k + a_1(H) H^{k−1} + ... + a_k(H) I_n = Y_1 − H† H Y_1,

where Y_1 is an arbitrary matrix of appropriate size. From the above equation, multiplying both sides by H† from the left, we have

A(s)† = −a_k(H)^{−1} A(s)^T sum_{i=0}^{k−1} a_{k−1−i}(H) H^i,  a_0(H) = 1,

provided that a_k(H) is not zero. If we use (3.19), one can verify that
B_{k−1}(H) = sum_{i=0}^{k−1} a_{k−1−i}(H) H^i = sum_{l=0}^{2q(k−1)} B_{k−1,l} s^l.
By differentiating both sides of the last equation l times, dividing both sides by l! and setting s = 0, we get

B_{k−1,l} = sum_{i=0}^{k−1} (1/l!) d^l/ds^l [a_{k−1−i}(H) H^i] |_{s=0}
          = sum_{i=0}^{k−1} sum_{j=0}^{l} (1/l!) C(l, j) [d^{l−j} a_{k−1−i}(H)/ds^{l−j}] [d^j H^i/ds^j] |_{s=0}
          = sum_{i=0}^{k−1} sum_{j=0}^{l} a_{k−1−i,l−j} [d^j H^i / (j! ds^j)] |_{s=0}.
By applying (3.27) together with the last equation, we obtain (3.24). Finally, by applying (3.1), (3.2) and (3.24) in (2.4), we get the following representation of the Moore–Penrose inverse:

A(s)† = −( sum_{j=0}^{2qk} a_{k,j} s^j )^{−1} ( sum_{j=0}^{q} A_j^T s^j ) ( sum_{j=0}^{2(k−1)q} B_{k−1,j} s^j ),

which can be simply transformed into (3.25). □
According to Theorem 3.2 we state the following algorithm for computing the Moore–Penrose inverse of a one-variable polynomial matrix (Algorithm 3.4). We represent the matrix H(s) = A(s)A(s)^T in the form (3.20) and assume the following conditions:

H_{2q+1} = ... = H_{2nq} = O.  (3.29)
Algorithm 3.4. Computing the Moore–Penrose inverse A(s)†.

Require: A(s) ∈ R[s]^{n×m} and H(s) = A(s)A(s)^T in the form (3.20), (3.29).
1: a_{0,0} := 1; A_{0,i} := H_0^i, i = 1, ..., n
2: for p := 0 to n−1 do
3:   for l := 0 to 2(p+1)q do
4:     A_{l,p+1} := sum_{b=1}^{p+1} sum_{a=1}^{l} A_{l−a,p+1−b} · H_a H_0^{b−1}
5:     a_{p+1,l} := −(1/(p+1)) sum_{i=1}^{p+1} sum_{j=0}^{l} a_{p+1−i,l−j} · Tr(A_{j,i})
6:   end for
7: end for
8: k := max{p | (∃ l) a_{p,l} ≠ 0}
9: for l := 0 to 2(k−1)q do
10:   B_{k−1,l} := sum_{i=0}^{k−1} sum_{j=0}^{l} a_{k−1−i,l−j} · A_{j,i}
11: end for
12: return A(s)† := −( sum_{j=0}^{2kq} a_{k,j} s^j )^{−1} sum_{j=0}^{(2k−1)q} sum_{l=0}^{j} (A_{j−l}^T B_{k−1,l}) s^j.
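Algorithm 3.4 is the coefficient-wise form of the symbolic iteration on H(s) = A(s)A(s)^T; the following sketch runs that iteration directly in SymPy and checks the result against the exact pseudoinverse (the 2×3 input matrix is a hypothetical example):

```python
import sympy as sp

# Symbolic Leverrier-Faddeev iteration on H(s) = A(s) A(s)^T for the
# hypothetical A(s) = [[s, 0, s], [0, 1, 0]], whose Moore-Penrose
# inverse is [[1/(2s), 0], [0, 1], [1/(2s), 0]].
s = sp.symbols('s')
A = sp.Matrix([[s, 0, s], [0, 1, 0]])
n = A.shape[0]
H = (A * A.T).expand()

a = [sp.Integer(1)]
B = [sp.eye(n)]
for i in range(1, n + 1):
    Ai = (H * B[-1]).expand()
    ai = sp.expand(-Ai.trace() / i)
    a.append(ai)
    B.append((Ai + ai * sp.eye(n)).expand())

k = max(i for i in range(n + 1) if a[i] != 0)
Adag = sp.simplify(-(A.T * B[k - 1]) / a[k])
expected = sp.Matrix([[1 / (2 * s), 0], [0, 1], [1 / (2 * s), 0]])
assert sp.simplify(Adag - expected).is_zero_matrix
```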
4. Complexity analysis
Denote by add(n) the complexity of matrix addition for n×n matrices and by mul(m, n, k) the complexity of multiplying an m×n matrix by an n×k matrix, and let mul(n) = mul(n, n, n). By st(n, k) we denote the complexity of raising an n×n matrix to the power k.
Algorithm 3.1 in steps 4 and 5 requires two nested loops of complexity n^2 q. Also, the complexity of step 6 inside the loops is equal to n + nq·mul(n). Therefore, the complexity of the algorithm is c31 = O(n^3 q^2 · mul(n)). Similarly, Algorithm 3.2 requires in steps 2 and 3 two nested loops of complexity n^2 q. The complexity of step 4 inside these loops is equal to n · nq · mul(n) · st(n, n). Therefore, the complexity of Algorithm 3.2 is c32 = O(n^4 q^2 · mul(n) · st(n, n)).

On the other hand, Algorithms 3.3 and 3.4 require two nested loops of complexity 2n^2 q. Step 6 in Algorithm 3.3 is of complexity n + 2nq(2nq · mul(n, m, n)) mul(n). Similarly, step 4 in Algorithm 3.4 possesses complexity n(2nq · mul(n)) st(n, n). Therefore, the complexity of Algorithm 3.3 is c33 = O(8n^4 q^3 · mul(n, m, n) · mul(n)) and Algorithm 3.4 possesses the complexity c34 = O(4n^4 q^2 · mul(n) · st(n, n)).
We assume that the mth power of an n×n matrix can be calculated in time st(n, m) = O(log2(m) · mul(n)) using the recursive formulae A^{2l} = (A^l)^2 and A^{2l+1} = (A^l)^2 A. Therefore, we have

c32 = O(n^4 q^2 · mul^2(n) · log2(n)),
c34 = O(4n^4 q^2 · mul^2(n) · log2(n)).
MATHEMATICA performs matrix multiplication with complexity mul(n) = O(n^3), mul(n, m, n) = O(nm^2), so that we have

c31 = O(n^6 q^2),
c32 = O(n^10 q^2 · log2(n)),
c33 = O(8n^8 m^2 q^3),
c34 = O(4n^10 q^2 · log2(n)).  (4.1)
It is clear that c31 corresponds to the minimal computational time. Also, we observe that c33/c34 ≈ (m/n)^2 · 2q/log2(n). Finally, under the assumption m = n, one can verify that c33 < c34 in the case 4q < n and c33 > c34 in the case 4q > n.
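The repeated-squaring assumption behind st(n, m) = O(log2(m) · mul(n)) is the standard binary exponentiation of a matrix; a short sketch:

```python
import numpy as np

def mat_pow(A, m):
    """Matrix power by repeated squaring, using A^(2l) = (A^l)^2 and
    A^(2l+1) = (A^l)^2 A, so only O(log2(m)) multiplications are needed."""
    result = np.eye(A.shape[0])
    base = A.copy()
    while m > 0:
        if m & 1:               # odd part: multiply the result in
            result = result @ base
        base = base @ base      # square for the next binary digit
        m >>= 1
    return result

A = np.array([[1.0, 1.0], [0.0, 1.0]])
assert np.allclose(mat_pow(A, 5), np.linalg.matrix_power(A, 5))
```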
5. Examples
Example 5.1. The polynomial matrix

A(s) =
| 1+s    s     1+s |
| s^2   −1+s    s  |
| 1+s    s     1+s |

=
| 1  0  1 |   | 1  1  1 |     | 0  0  0 |
| 0 −1  0 | + | 0  1  1 | s + | 1  0  0 | s^2
| 1  0  1 |   | 1  1  1 |     | 0  0  0 |
P.S. Stanimirovic et al. / Applied Mathematics and Computation 214 (2009) 246–258 255
is represented by the following three-dimensional list:

{{{1,0,1},{0,-1,0},{1,0,1}}, {{1,1,1},{0,1,1},{1,1,1}}, {{0,0,0},{1,0,0},{0,0,0}}}.
Applying Algorithm 3.2, we get t = 2, r = 3 and the following Drazin inverse of A(s):

A(s)^D =
| (1−s+2s^3−2s^4)/(2−s^2+s^3)^2        s/(2−s^2+s^3)    (1−s−s^2+s^4)/(2−s^2+s^3)^2      |
| s(−1+s^2+s^3)/((1+s)(2−2s+s^2)^2)   −2/(2−2s+s^2)     (3s−2s^3)/((1+s)(2−2s+s^2)^2)    |
| (1−s+2s^3−2s^4)/(2−s^2+s^3)^2        s/(2−s^2+s^3)    (1−s−s^2+s^4)/(2−s^2+s^3)^2      |
Note that we obtain the following values for the coefficients a_{i,j}:

a[1,0]=-1; a[1,1]=-3; a[1,2]=0;
a[2,0]=-2; a[2,1]=0; a[2,2]=1; a[2,3]=-1; a[2,4]=0;
a[3,0]=a[3,1]=...=a[3,6]=0,

and the following values for the matrices B_{i,j}:

B[1,0]={{0,0,1},{0,-2,0},{1,0,0}};  B[1,1]={{-2,1,1},{0,-2,1},{1,1,-2}};
B[1,2]={{0,0,0},{1,0,0},{0,0,0}};
B[2,0]={{-1,0,1},{0,0,0},{1,0,-1}};  B[2,1]={{0,0,0},{1,0,-1},{0,0,0}};
B[2,2]={{0,0,0},{0,0,0},{-1,0,1}};  B[2,3]={{0,0,0},{-1,0,1},{1,0,-1}};
B[2,4]={{0,0,0},{-1,0,1},{1,0,-1}};
B[3,0]=B[3,1]=...=B[3,6]={{0,0,0},{0,0,0},{0,0,0}}.
Example 5.2. The 12×12 test matrix

| x     1     0     0     0     0     0     0     0     0     0     0 |
| x^2   x     1     0     0     0     0     0     0     0     0     0 |
| x^3   x^2   x     1     0     0     0     0     0     0     0     0 |
| x^4   x^3   x^2   x     1     0     0     0     0     0     0     0 |
| x^5   x^4   x^3   x^2   x     1     0     0     0     0     0     0 |
| x^6   x^5   x^4   x^3   x^2   x     1     0     0     0     0     0 |
| x^7   x^6   x^5   x^4   x^3   x^2   x     1     0     0     0     0 |
| x^8   x^7   x^6   x^5   x^4   x^3   x^2   x     1     0     0     0 |
| x^9   x^8   x^7   x^6   x^5   x^4   x^3   x^2   x     1     0     0 |
| x^10  x^9   x^8   x^7   x^6   x^5   x^4   x^3   x^2   x     1     0 |
| x^11  x^10  x^9   x^8   x^7   x^6   x^5   x^4   x^3   x^2   x     1 |
| x^12  x^11  x^10  x^9   x^8   x^7   x^6   x^5   x^4   x^3   x^2   x |
proposed in [30] has the following Moore–Penrose inverse:

| x/(x^2+1)  0    0    0    0    0    0    0    0    0    0          0         |
| 1/(x^2+1)  0    0    0    0    0    0    0    0    0    0          0         |
| −x         1    0    0    0    0    0    0    0    0    0          0         |
| 0         −x    1    0    0    0    0    0    0    0    0          0         |
| 0          0   −x    1    0    0    0    0    0    0    0          0         |
| 0          0    0   −x    1    0    0    0    0    0    0          0         |
| 0          0    0    0   −x    1    0    0    0    0    0          0         |
| 0          0    0    0    0   −x    1    0    0    0    0          0         |
| 0          0    0    0    0    0   −x    1    0    0    0          0         |
| 0          0    0    0    0    0    0   −x    1    0    0          0         |
| 0          0    0    0    0    0    0    0   −x    1    0          0         |
| 0          0    0    0    0    0    0    0    0   −x    1/(x^2+1)  x/(x^2+1) |
Table 1
CPU time (in seconds) for computing the Moore–Penrose inverse.

q     Alg. 3.3   Alg. 3.4   Alg. 3 [14]
1     0.047      0.031      0.031
5     0.547      0.172      0.078
10    3.422      0.422      0.172
15    10.860     0.813      0.266
30    83.185     2.875      0.625
55    502.265    9.391      1.579
80    –          19.891     2.843

Table 2
CPU time (in seconds) for computing the Drazin inverse.

q     Alg. 3.1   Alg. 3.2   Alg. 7 [14]
1     0.031      0.031      0.031
5     0.063      0.109      0.125
10    0.094      0.188      0.265
15    0.125      0.344      0.469
30    0.281      1.171      1.344
55    0.703      3.625      3.938
80    1.438      7.594      7.641
Example 5.3. In the following example we compute the Moore–Penrose and the Drazin inverse of the matrix

A(s) =
| 1+s    s     1+s |
| s^q   −1+s    s  |
| 1+s    s     1+s |
for various values of the parameter q. We compare the effectiveness of Algorithms 3.1–3.4 as well as Algorithms 3 and 7 from [14], using CPU time as the comparative criterion. Testing was done on an Intel Pentium 4 processor at 1.6 GHz with MATHEMATICA 5.2.

In Table 1 we arrange the results of the comparison of Algorithm 3.4 with Algorithm 3.3 from [11] and Algorithm 3 from [14]. It is evident that Algorithm 3.4 requires significantly smaller CPU times than Algorithm 3.3 for sufficiently large values of q. This fact is in accordance with the results of the complexity analysis performed in the previous section (since m = n and n is significantly less than 2q). Moreover, it is clear that the DFT-based Algorithm 3 from [14] demonstrates the best performance.

In Table 2, Algorithm 3.1 from [7,12,21,23] is compared with Algorithm 3.2 and Algorithm 7 from [14]. These results show that Algorithm 3.1 gives better performance than Algorithm 3.2 in all cases, which is in accordance with the complexities derived in (4.1). It is a surprising observation that Algorithm 7 from [14] produces the worst results. Also, comparing Tables 1 and 2, we notice that Algorithm 3.1 possesses the best performance overall, and that the running times for computing the Drazin inverse are significantly smaller than those for the Moore–Penrose inverse.
6. Conclusion
Motivated by the goals described in the introductory section, we derived representations and corresponding algorithms to calculate the Drazin inverse and the Moore–Penrose inverse of a one-variable polynomial matrix. The algorithms for computing the Drazin inverse are a continuation of the papers [7,12,21,23], while the algorithms for computing the Moore–Penrose inverse are a continuation of the papers [8,15,23]. At the same time, these algorithms generalize the algorithms for computing the usual inverse of polynomial matrices introduced in [26]. In this way, we realize our first goal.
In order to implement our second goal, a comparison between the known and introduced methods for computing theDrazin inverse and the Moore–Penrose inverse is presented.
The following general conclusions are drawn from the computational complexity of the introduced algorithms and from the numerical results arranged in Tables 1 and 2:

– Algorithm 3.2 is computationally more expensive than Algorithm 3.1.
– Algorithm 3.1 shows the best computational complexity and the best computational performance.
– Algorithm 3.4 is significantly simpler than Algorithm 3.3 in the case 4q > n, where q is the degree of the input polynomial matrix.
– Algorithm 3 from [14] produces the best results in computing the Moore–Penrose inverse, while Algorithm 7 from [14] produces the worst results in computing the Drazin inverse.
The algorithms are implemented in the programming package MATHEMATICA and several test examples from [30] are verified.
Acknowledgement
The authors are grateful to Professors N.P. Karampetakis and S. Vologianidis for the original source code of the DFT algorithm.
Appendix
We present the MATHEMATICA code for the implementation of Algorithms 3.2 and 3.4. About the package see, for example, [28,29].
FrmPoly[M_List] := (* Form the polynomial matrix as a three-dimensional list *)
  Module[{L = {}, i, M1 = M, v, s},
    v = Variables[M];
    If[v =!= {}, s = v[[1]];
      For[i = 1, i <= Max[Exponent[M, s]], i++,
        AppendTo[L, Coefficient[M, s^i]];
        M1 = M1 - Coefficient[M, s^i]*s^i;];
      M1 = {M1};
      For[i = 1, i <= Length[L], i++, AppendTo[M1, L[[i]]]]];
    Return[Simplify[M1 /. 0. -> 0]];]
DopZero[L_List, i_] := (* Complete the matrix L by zero rows *)
  Module[{L1 = L, j, nul},
    nul = L1[[1]] 0;
    For[j = 1, j <= i - Length[L], j++, AppendTo[L1, nul]];
    Return[L1]];
AA[j_, i_] := AA[j, i] =
  If[j == 0, Return[MatrixPower[A1, i]],
    If[i == 0, Return[0 A1],
      Return[Sum[Sum[AA[j - l, i - k].B[[l]].MatrixPower[A1, k - 1], {l, 1, j}], {k, 1, i}]]]];
Fa[m_, k_] := Fa[m, k] =
  If[(m == 0) && (k == 0), Return[1],
    Return[-Sum[Sum[Fa[m - i, k - j] Tr[AA[j, i]]/m, {j, 0, k}], {i, 1, m}]]];
Fadj[m_, k_] := Fadj[m, k] = Sum[Sum[Fa[m - i, k - j] AA[j, i], {j, 0, k}], {i, 0, m}];
Drz[L_List] := (* Implementation of Algorithm 3.2 *)
  Block[{i, j, nul, k, r = t = 0, A = {}, L1 = {}, log1 = log2 = True, var},
    {n, m} = Dimensions[L]; var = Variables[L];
    nul = Table[0, {n}, {n}]; L1 = FrmPoly[L];
    q = Max[Exponent[L, var[[1]]]];
    If[var == {}, B = {L}, B = FrmPoly[L]];
    A1 = B[[1]]; nul = 0 A1;
    B = DopZero[Drop[B, 1], n*q + 1];
    For[i = 0, i <= n - 1, i++,
      For[j = 0, j <= (i + 1)*q, j++,
        If[Fa[i + 1, j] == 0, log1 = log1 && True, log1 = log1 && False];
        If[Fadj[i + 1, j] === nul, log2 = log2 && True, log2 = log2 && False];
        If[!log1, t = i + 1; log1 = True];
        If[!log2, r = i + 1]]];
    k = r - t; B = Join[{A1}, B];
    rez1 = (-1)^(k + 1)*Sum[Fa[t, j]*First[var]^j, {j, 0, t q}]^(-k - 1);
    rez2 = MatrixPower[Sum[B[[i + 1]] First[var]^i, {i, 0, q}], k];
    rez3 = MatrixPower[Sum[Fadj[t - 1, l] First[var]^l, {l, 0, (t - 1) q}], k + 1];
    rez = rez1*rez2.rez3;
    Return[Simplify[rez]]];
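For a constant matrix (the special case q = 0), Algorithm 3.2 reduces to Greville's finite formula [6] built on the Faddeev sequence A_j = AB_{j-1}, a_j = -Tr(A_j)/j, B_j = A_j + a_j I. The following NumPy sketch of this constant-matrix case is our own illustration, not part of the MATHEMATICA implementation above; the function name `drazin_faddeev` and the tolerance `tol` are our choices. It may help trace the roles of t, r and k = r - t in the code:

```python
import numpy as np

def drazin_faddeev(A, tol=1e-10):
    """Drazin inverse of a constant square matrix via the Faddeev
    (Souriau-Frame) sequence -- the q = 0 case of Algorithm 3.2."""
    n = A.shape[0]
    a = [1.0]                       # a_0 = 1 (leading coefficient)
    Bs = [np.eye(n)]                # B_0 = I
    for j in range(1, n + 1):
        Aj = A @ Bs[-1]             # A_j = A B_{j-1}
        aj = -np.trace(Aj) / j
        a.append(aj)
        Bs.append(Aj + aj * np.eye(n))
    # t: largest j with a_j != 0; r: largest j with B_{j-1} != 0
    t = max(j for j in range(n + 1) if abs(a[j]) > tol)
    r = max((j for j in range(1, n + 1)
             if np.abs(Bs[j - 1]).max() > tol), default=0)
    if t == 0:                      # A nilpotent, hence A^D = 0
        return np.zeros_like(A)
    k = r - t                       # index of A
    # A^D = (-1)^(k+1) a_t^(-(k+1)) A^k B_{t-1}^(k+1)
    return ((-1) ** (k + 1) / a[t] ** (k + 1)) * \
        np.linalg.matrix_power(A, k) @ \
        np.linalg.matrix_power(Bs[t - 1], k + 1)
```

For an invertible matrix this recovers the ordinary inverse (k = 0), and for an idempotent matrix it returns the matrix itself.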
Moore[L_List] := (* Implementation of Algorithm 3.4 *)
  Block[{i, j, nul, q1, k = r = t = 0, A = {}, L1 = {}, log1 = log2 = True, var},
    {n, m} = Dimensions[L]; var = Variables[L];
    nul = Table[0, {n}, {n}]; L1 = FrmPoly[L];
    q = Max[Exponent[L, var[[1]]]];
    If[var == {}, B = {L.Transpose[L]}, B = FrmPoly[L.Transpose[L]]];
    A1 = B[[1]]; nul = 0 A1;
    B = DopZero[Drop[B, 1], 2*q*n + 1];
    For[i = 0, i <= n - 1, i++,
      For[j = 0, j <= 2*(i + 1)*q, j++,
        If[Fa[i + 1, j] == 0, log1 = log1 && True, log1 = log1 && False];
        If[!log1, k = i + 1; log1 = True]]];
    rez1 = -Sum[Fa[k, j] First[var]^j, {j, 0, 2*k*q}]^(-1);
    rez2 = Transpose[Sum[L1[[i + 1]] First[var]^i, {i, 0, q}]];
    rez3 = Sum[Fadj[k - 1, j] First[var]^j, {j, 0, 2*(k - 1)*q}];
    rez = rez1*rez2.rez3;
    Return[Simplify[rez]]];
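Similarly, for a constant real matrix (q = 0), Algorithm 3.4 reduces to Decell's finite formula [2,10]: applying the Faddeev sequence to M = AA^T, the Moore–Penrose inverse is A^† = -a_k^{-1} A^T B_{k-1}, where k is the largest index with a_k ≠ 0. A NumPy sketch of this special case (our own illustration; `pinv_faddeev` and `tol` are our names):

```python
import numpy as np

def pinv_faddeev(A, tol=1e-10):
    """Moore-Penrose inverse of a constant real matrix via the
    Faddeev sequence applied to M = A A^T -- the q = 0 case of
    Algorithm 3.4 (Decell's formula)."""
    m = A.shape[0]
    M = A @ A.T
    a = [1.0]                       # a_0 = 1
    Bs = [np.eye(m)]                # B_0 = I
    for j in range(1, m + 1):
        Aj = M @ Bs[-1]             # A_j = M B_{j-1}
        aj = -np.trace(Aj) / j
        a.append(aj)
        Bs.append(Aj + aj * np.eye(m))
    # k: largest index with a_k != 0 (equals rank of M)
    k = max(j for j in range(m + 1) if abs(a[j]) > tol)
    if k == 0:                      # A = O, so A^+ = O^T
        return np.zeros_like(A.T)
    return -(1.0 / a[k]) * A.T @ Bs[k - 1]
```

The loop terminates after at most m steps, so the whole computation is finite and rational in the entries of A, which is exactly the property that makes the Leverrier–Faddeev approach attractive for symbolic computation.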
References
[1] F. Bu, Y. Wei, The algorithm for computing the Drazin inverses of two-variable polynomial matrices, Appl. Math. Comput. 147 (2004) 805–836.
[2] H.P. Decell, An application of the Cayley–Hamilton theorem to generalized matrix inversion, SIAM Rev. 7 (4) (1965) 526–528.
[3] V.N. Faddeeva, Computational Methods of Linear Algebra, Dover Publications Inc., New York, 1959.
[4] D.K. Faddeev, I.S. Sominskii, Collection of Problems on Higher Algebra, Gostekhizdat, Moscow, second ed., 1949, fifth ed., 1954.
[5] G. Fragulis, B.G. Mertzios, A.I.G. Vardulakis, Computation of the inverse of a polynomial matrix and evaluation of its Laurent expansion, Int. J. Control 53 (1991) 431–443.
[6] T.N.E. Greville, The Souriau–Frame algorithm and the Drazin pseudoinverse, Linear Algebra Appl. 6 (1973) 205–208.
[7] J. Ji, A finite algorithm for the Drazin inverse of a polynomial matrix, Appl. Math. Comput. 130 (2002) 243–251.
[8] J. Jones, N.P. Karampetakis, A.C. Pugh, The computation and application of the generalized inverse via Maple, J. Symb. Comput. 25 (1998) 99–124.
[9] T. Kailath, Linear Systems, Prentice-Hall Inc., New Jersey, 1980.
[10] R.E. Kalaba et al., A new proof for Decell's finite algorithm for the generalized inverse, Appl. Math. Comput. 12 (1983) 199–211.
[11] N.P. Karampetakis, Computation of the generalized inverse of a polynomial matrix and applications, Linear Algebra Appl. 252 (1997) 35–60.
[12] N.P. Karampetakis, P.S. Stanimirovic, M.B. Tasic, On the computation of the Drazin inverse of a polynomial matrix, Far East J. Math. Sci. (FJMS) 26 (1) (2007) 1–24.
[13] N.P. Karampetakis, P. Tzekis, On the computation of the generalized inverse of a polynomial matrix, IMA J. Math. Contr. Inform. 18 (2001) 83–97.
[14] N.P. Karampetakis, S. Vologianidis, DFT calculation of generalized and Drazin inverse of polynomial matrix, Appl. Math. Comput. 143 (2003) 501–521.
[15] N.P. Karampetakis, P. Tzekis, On the computation of the generalized inverse of a polynomial matrix, in: 6th Medit. Symposium on New Directions in Control and Automation, 1998, pp. 1–6.
[16] N.P. Karampetakis, Generalized inverses of two-variable polynomial matrices and applications, Circ. Syst. Signal Process. 16 (1997) 439–453.
[17] M.D. Petkovic, P.S. Stanimirovic, Computing generalized inverse of polynomial matrices by interpolation, Appl. Math. Comput. 172 (2006) 508–523.
[18] M.D. Petkovic, P.S. Stanimirovic, Interpolation algorithm for computing Drazin inverse of polynomial matrices, Linear Algebra Appl. 422 (2007) 526–539.
[19] M.D. Petkovic, P.S. Stanimirovic, Interpolation algorithm of Leverrier–Faddev type for polynomial matrices, Numer. Algorithms 42 (2006) 345–361.
[20] D. Serre, Matrices, Theory and Applications, Springer-Verlag, New York/Berlin/Heidelberg, 2002.
[21] P.S. Stanimirovic, M.B. Tasic, Drazin inverse of one-variable polynomial matrices, Filomat, Niš 15 (2001) 71–78.
[22] P.S. Stanimirovic, A finite algorithm for generalized inverses of polynomial and rational matrices, Appl. Math. Comput. 144 (2003) 199–214.
[23] P.S. Stanimirovic, N.P. Karampetakis, Symbolic implementation of Leverrier–Faddeev algorithm and applications, in: 8th IEEE Medit. Conference on Control and Automation, Patra, Greece, 2000.
[24] S. Vologiannidis, N.P. Karampetakis, Inverses of multivariable polynomial matrices by discrete Fourier transforms, Multidim. Syst. Sign Process. 15 (2004) 341–361.
[25] W.A. Wolovich, Linear Multivariable Systems, Springer-Verlag Inc., New York, 1974.
[26] Ky M. Vu, An extension of the Faddeev's algorithms, in: Proceedings of the IEEE Multi-conference on Systems and Control, September 3–5, 2008, San Antonio, TX.
[27] Ky M. Vu, Pencil characteristic coefficients and their applications in control, IEE P-Contr. Theor. Appl. 146 (1999) 450–456.
[28] S. Wolfram, The Mathematica Book, fourth ed., Wolfram Media/Cambridge University Press, 1999.
[29] S. Wolfram, The Mathematica Book, fifth ed., Wolfram Media, Inc., Champaign, 2004.
[30] G. Zielke, Report on test matrices for generalized inverses, Computing 36 (1986) 105–162.