SICE Annual Conference 2007, Sept. 17-20, 2007, Kagawa University, Japan
A New Efficient Matrix Spectral Factorization Algorithm
Lasha Ephremidze1, Gigla Janashia and Edem Lagvilava2
1Department of Mathematics, Tokai University, Shizuoka, Japan (Tel: +81-543-37-0140; E-mail: [email protected])
2Razmadze Mathematical Institute, Tbilisi, Georgia (Tel: +995-32-334596; E-mail: [email protected])
Abstract: A fundamentally new method of matrix spectral factorization is proposed which leads to a very simple computational algorithm. A demo version of the software implementation is located at www.ncst.org.ge/MSF-algorithm.
Keywords: Matrix spectral factorization algorithm.
1. INTRODUCTION
Spectral factorization plays a prominent role in a wide range of fields in system theory and control engineering. In the scalar case, which arises in systems with single input and single output, the factorization problem is relatively easy and several classical methods exist to perform this task (see the survey paper [5]). Matrix spectral factorization, which arises in multi-dimensional systems, is significantly more difficult. Since Wiener's original efforts [6] to create a sound computational method of such factorization, dozens of papers have addressed the development of appropriate algorithms. Nevertheless, the problem remained far from a satisfactory solution.
It should be mentioned that although the branch of mathematics where the spectral factorization problem is posed is the theory of complex functions (namely, familiarity with some introductory facts from the theory of Hardy spaces is required for a strict formulation of the problem in its general non-rational setting), none of the existing methods used this theory for the solution. Mathematicians, including Wiener and his followers, used methods of functional analysis, while engineers introduced a state-space model and reduced the problem to the solution of an algebraic Riccati equation.
A completely new approach to the matrix spectral factorization problem was developed by the last two authors in [2], where the two-dimensional case is considered. In the present paper we extend the same method to matrices of arbitrary dimension. This is the first time that the theories of complex analysis and Hardy spaces are used for the solution of the problem, and the approach turned out to be very effective. Namely, the decisive role of unitary matrix-functions in the factorization process is revealed; by flexible manipulations, these completely absorb all the technical difficulties of the problem, leaving very few and simple procedures for computation.
The algorithm can be applied to any matrix spectral density which satisfies the necessary and sufficient Paley-Wiener condition (see (4)) for the existence of the factorization, though further simplifications in the calculation procedure are available in the rational case (see [1]).
2. SOME BASIC FACTS FROM THE THEORY OF HARDY SPACES $H_p$
Let $D = \{z \in \mathbb{C} : |z| < 1\}$ and $T = \partial D$. The Hardy space $H_p = H_p(D)$, $p > 0$, is the set of analytic functions $f(z)$, $z \in D$, such that

$$\|f\|_{H_p}^p = \sup_{r<1} \int_0^{2\pi} |f(re^{i\theta})|^p \, d\theta < \infty.$$
$H_\infty$ is the set of bounded analytic functions on $D$. If $f \in H_p$, $p > 0$, then the boundary values

$$f(z) = f(e^{i\theta}) := \lim_{r\to 1-} f(re^{i\theta})$$

exist for almost all $z = e^{i\theta} \in T$. Furthermore $f(e^{i\theta}) \in L_p(T)$. The boundary values of $f \in H_p$, $p > 0$, cannot be 0 on a set of positive measure. Furthermore, $\log|f(e^{i\theta})| \in L_1(T)$, which is equivalent in this case to

$$\int_0^{2\pi} \log|f(e^{i\theta})|\, d\theta > -\infty.$$

In particular, if the boundary values of two functions from $H_p$, $p > 0$, coincide (almost everywhere), then these functions are the same.
For $p \ge 1$, if $f \in H_p$, then the negative Fourier coefficients of $f(e^{i\theta})$ are equal to 0, i.e. $f(e^{i\theta}) \in L_p^+(T) := \{f \in L_p(T) : c_n(f) = 0,\ n < 0\}$. Conversely, each $f(e^{i\theta}) \in L_p^+(T)$ can be (uniquely) extended in $D$ to some function from $H_p$. Thus $H_p(D)$ and $L_p^+(T)$ can be naturally identified when $p \ge 1$.

If $f(z) \in L_p^+(T)$, then $\overline{f(z)} \in L_p^-(T) := \{f \in L_p(T) : c_n(f) = 0,\ n > 0\}$. Obviously $L_p^+ \cap L_p^-$ consists only of constant (a.e.) functions. Consequently, if $f(z) \in L_1^+(T)$ is real (a.e.), then $f$ is constant (a.e.).

An analytic function

$$Q(z) = c \cdot \exp\left(\frac{1}{2\pi} \int_0^{2\pi} \frac{e^{i\theta}+z}{e^{i\theta}-z}\, \log q(e^{i\theta})\, d\theta\right), \qquad (1)$$
where $|c| = 1$ and a positive (a.e.) function $q$ is such that $\log q \in L_1(T)$, is called outer. The class of outer functions from $H_p$ will be denoted by $H_p^O$. For the boundary values, we have

$$|Q(e^{i\theta})| = q(e^{i\theta}) \quad \text{a.e. on } T, \qquad (2)$$

and if the boundary values of two outer functions coincide in absolute value a.e., then these functions differ from each other by a constant multiplier with absolute value 1.
If a function $I \in H_\infty$ is such that $|I(z)| \le 1$ on $D$ and $|I(z)| = 1$ a.e. on $T$, then it is called inner. Every function $f \in H_p$, $p > 0$, can be represented as a product $f(z) = Q(z)I(z)$, where $Q \in H_p^O$ and $I$ is an inner function (see the Riesz factorization theorem, e.g., in [4], p. 105; $I$ itself can be factorized as the product of a Blaschke product and a singular inner function, but we do not need this for current purposes). Obviously $|f(z)| \le |Q(z)|$ for $z \in D$, and $|f(z)| = |Q(z)|$ for a.a. $z \in T$. Thus, the outer functions are the ones that take the maximal possible absolute values in $D$ whenever the absolute values on the boundary $T$ are fixed.
We will use the following generalization of Smirnov’stheorem (see [4], p. 109).
Theorem A. Let $f(z) = g(z)/h(z)$, where $g \in H_{p_1}$ and $h \in H_{p_2}^O$. If the boundary values $f(e^{i\theta}) \in L_p(T)$, then $f \in H_p$.
3. FORMULATION OF THE PROBLEM
Let

$$S(z) = \begin{pmatrix} f_{1,1}(z) & f_{1,2}(z) & \cdots & f_{1,m}(z) \\ f_{2,1}(z) & f_{2,2}(z) & \cdots & f_{2,m}(z) \\ \vdots & \vdots & \ddots & \vdots \\ f_{m,1}(z) & f_{m,2}(z) & \cdots & f_{m,m}(z) \end{pmatrix}, \qquad (3)$$
$z \in T$, be a positive definite (a.e.) matrix-function with integrable entries, $f_{kj} \in L_1(T)$. If $S$ satisfies the Paley-Wiener condition

$$\log \det S(z) \in L_1(T), \qquad (4)$$
then it admits a spectral factorization, i.e.

$$S(z) = \chi^+(z)(\chi^+(z))^*$$

for a.a. $z \in T$, where $(\chi^+)^* = \overline{(\chi^+)}^T$ is the adjoint of $\chi^+$. The matrix-function $\chi^+$ is analytic (i.e. it can be extended in $D$ by

$$\chi^+(z) = \sum_{k=0}^{\infty} \rho_k z^k, \quad |z| < 1, \qquad (5)$$

where the $\rho_k$ are matrix coefficients) with entries from $H_2$ (we say that $\chi^+ \in H_2$ in similar situations), and it has an outer determinant, $\det \chi^+(z) \in H_{2/m}^O$.
A spectral factor $\chi^+$ is unique up to a constant right unitary multiplier. With a suitable constraint on $\chi^+(0)$, we can ensure that the spectral factor $\chi^+$ is unique.
In the scalar case, $m = 1$, a spectral factor can be written explicitly by the formula (see (1), (2))

$$\chi^+(z) = \exp\left(\frac{1}{4\pi} \int_0^{2\pi} \frac{e^{i\theta}+z}{e^{i\theta}-z}\, \log S(e^{i\theta})\, d\theta\right),$$

which is the core of the Kolmogorov spectral factorization method. There is no analog of this formula in the matrix case, since in general $e^{A+B} \neq e^A e^B$ for non-commuting matrices $A$ and $B$. This is the main reason that the approximate computation of the spectral factor (5) for a given matrix-function (3) is significantly more difficult.
Our method does not contain any improvement in the scalar spectral factorization, but uses it to perform the matrix spectral factorization.
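For reference, the Kolmogorov formula admits a simple FFT discretization: take the Fourier coefficients of $\log S$, keep half of the zeroth coefficient plus the positive-index ones (the boundary trace of the Schwarz-kernel integral above), and exponentiate. The sketch below is our own illustration under a uniform-sampling assumption, not the authors' implementation:

```python
import numpy as np

def scalar_spectral_factor(s_vals):
    """Approximate scalar spectral factor of a positive function S on T.

    s_vals: samples S(e^{i theta_j}) on a uniform grid of n points.
    Returns samples of chi^+(e^{i theta_j}), the outer factor with
    chi^+(0) > 0, so that |chi^+|^2 = S on the grid.
    """
    n = len(s_vals)
    c = np.fft.fft(np.log(s_vals)) / n       # Fourier coefficients of log S
    h = np.zeros(n, dtype=complex)
    h[0] = 0.5 * c[0]                        # half of c_0 ...
    h[1 : n // 2] = c[1 : n // 2]            # ... plus the positive-index part
    return np.exp(np.fft.ifft(h) * n)        # chi^+ = exp(analytic part of log S)
```

For instance, for $S(e^{i\theta}) = |1 + 0.5e^{i\theta}|^2$ the routine recovers the outer factor $1 + 0.5e^{i\theta}$ to machine precision.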
4. DESCRIPTION OF THE METHOD
First we perform the lower-upper triangular factorization of $S(z)$:

$$S(z) = M(z)(M(z))^*, \qquad (6)$$
where
$$M = \begin{pmatrix} f_1^+ & 0 & \cdots & 0 & 0 \\ \varphi_{21} & f_2^+ & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ \varphi_{m-1,1} & \varphi_{m-1,2} & \cdots & f_{m-1}^+ & 0 \\ \varphi_{m1} & \varphi_{m2} & \cdots & \varphi_{m,m-1} & f_m^+ \end{pmatrix}. \qquad (7)$$
We take $f_k^+$, $k = 1, 2, \ldots, m$, equal to a scalar spectral factor of the positive function $\det S_k(z)/\det S_{k-1}(z)$, where $S_0(z) = 1$ and $S_k(z)$ is the upper left $k\times k$ submatrix of $S(z)$. Thus $\varphi_{kj} \in L_2(T)$, $2 \le k \le m$, $1 \le j < k$, and $f_k^+ \in H_2^O$ in (7). Note that

$$\det M(z) = f_1^+(z) f_2^+(z) \cdots f_m^+(z), \quad |z| = 1. \qquad (8)$$
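For sampled data this step can be sketched numerically (our own reading, not the authors' implementation): a pointwise Cholesky factorization of $S(e^{i\theta})$ already has the positive diagonal $d_k(\theta) = \sqrt{\det S_k/\det S_{k-1}}$, and multiplying its $k$-th column by the unimodular function $f_k^+/d_k$ turns that diagonal into the scalar spectral factors while preserving $MM^* = S$. The helper `_outer_factor` is a hypothetical FFT-based scalar spectral factorization:

```python
import numpy as np

def _outer_factor(q2):
    """Hypothetical helper: scalar spectral factor of positive samples q2 on T."""
    n = len(q2)
    c = np.fft.fft(np.log(q2)) / n          # Fourier coefficients of log q2
    h = np.zeros(n, dtype=complex)
    h[0] = 0.5 * c[0]                       # half of c_0 ...
    h[1 : n // 2] = c[1 : n // 2]           # ... plus the positive-index part
    return np.exp(np.fft.ifft(h) * n)

def triangular_factor(S):
    """Pointwise sketch of the factor M in (6)-(7).

    S: samples of the spectral density on T, shape (n, m, m), Hermitian
    positive definite at every sample.  Returns M, lower triangular at
    every sample, with diagonal entries f_k^+ and M M* = S.
    """
    n, m, _ = S.shape
    L = np.linalg.cholesky(S).astype(complex)   # pointwise lower-upper factorization
    for k in range(m):
        d = np.real(L[:, k, k])                 # d_k = sqrt(det S_k / det S_{k-1}) > 0
        f = _outer_factor(d ** 2)               # analytic (outer) replacement f_k^+
        # rescale column k by the unimodular f_k^+ / d_k: M M* stays equal to S,
        # while the k-th diagonal entry becomes f_k^+
        L[:, :, k] *= (f / d)[:, None]
    return L
```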
We pose the factorization problem in an equivalent form: given a matrix-function $M$, find a unitary matrix-function $U$ such that the product $MU$ is a spectral factor of $S$.
Lemma 1: If $U(z)$ is a unitary matrix-function with determinant 1, i.e. if $U(z)(U(z))^* = I_m$ and $\det U(z) = 1$ for a.a. $z \in T$, such that

$$M(z)U(z) \in L_2^+(T), \qquad (9)$$

then $\chi^+ = MU$, i.e. $MU$ is a spectral factor of $S$.

Proof: We have $S = MU\cdot(MU)^*$ since (6) holds and $U$ is unitary. By virtue of (9), $M(z)U(z)$ can be extended to an analytic matrix-function from $H_2$ whose determinant will be the outer function $f_1^+(z) f_2^+(z) \cdots f_m^+(z)$, $z \in D$, because of Eq. (8) and the uniqueness property of the boundary values of functions from $H_{2/m}$.
We recurrently represent a unitary matrix-function $U$ of Lemma 1 as a product $U = U_2 U_3 \cdots U_m$, where the $U_k$, $k = 2, 3, \ldots, m$, are unitary matrix-functions with determinant 1 of the block matrix form

$$U_k = \begin{pmatrix} U_k' & 0 \\ 0 & I_{m-k} \end{pmatrix} \qquad (10)$$

such that the $k\times k$ upper left submatrix of the product $M_k := M U_2 U_3 \cdots U_k$ belongs to $L_2^+(T)$ for each $k = 2, 3, \ldots, m$. Taking the last product, $k = m$, we arrive at relation (9).
If we factorize the upper left $k\times k$ submatrix of $M_{k-1}$ as

$$\begin{pmatrix} \mu_{11}^{(k)} & \mu_{12}^{(k)} & \cdots & \mu_{1,k-1}^{(k)} & 0 \\ \mu_{21}^{(k)} & \mu_{22}^{(k)} & \cdots & \mu_{2,k-1}^{(k)} & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ \mu_{k-1,1}^{(k)} & \mu_{k-1,2}^{(k)} & \cdots & \mu_{k-1,k-1}^{(k)} & 0 \\ \varphi_1 & \varphi_2 & \cdots & \varphi_{k-1} & f^+ \end{pmatrix} = \begin{pmatrix} \mu_{11}^{(k)} & \mu_{12}^{(k)} & \cdots & \mu_{1,k-1}^{(k)} & 0 \\ \mu_{21}^{(k)} & \mu_{22}^{(k)} & \cdots & \mu_{2,k-1}^{(k)} & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ \mu_{k-1,1}^{(k)} & \mu_{k-1,2}^{(k)} & \cdots & \mu_{k-1,k-1}^{(k)} & 0 \\ 0 & 0 & \cdots & 0 & 1 \end{pmatrix} \cdot F,$$

where $F$ is the matrix-function (11) below, $\mu_{ij}^{(k)} \in L_2^+(T)$, $i, j = 1, 2, \ldots, k-1$, by assumption, $\varphi_j := \mu_{kj}^{(k)} \in L_2(T)$, $j = 1, 2, \ldots, k-1$, and $f^+ := f_k^+ \in H_2^O$ (see (7)), then it becomes clear that the following lemma is important for accomplishing our purpose.
Lemma 2: For each $k\times k$ matrix-function $F$ of the form

$$F = \begin{pmatrix} 1 & 0 & 0 & \cdots & 0 & 0 \\ 0 & 1 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 1 & 0 \\ \varphi_1 & \varphi_2 & \varphi_3 & \cdots & \varphi_{k-1} & f^+ \end{pmatrix}, \qquad (11)$$

where

$$\varphi_j \in L_2(T),\ j = 1, 2, \ldots, k-1, \quad \text{and} \quad f^+ \in H_2^O, \qquad (12)$$

there exists a unitary matrix-function $U$ of the form

$$U = \begin{pmatrix} u_{11} & u_{12} & \cdots & u_{1k} \\ u_{21} & u_{22} & \cdots & u_{2k} \\ \vdots & \vdots & \ddots & \vdots \\ u_{k-1,1} & u_{k-1,2} & \cdots & u_{k-1,k} \\ \overline{u_{k1}} & \overline{u_{k2}} & \cdots & \overline{u_{kk}} \end{pmatrix}, \qquad (13)$$

$$u_{ij} \in L_\infty^+(T), \quad i, j = 1, 2, \ldots, k, \qquad (14)$$

with determinant 1 such that $FU \in L_2^+(T)$.
Proof: The existence of a unitary matrix-function $U$ for which $\chi_F^+ = FU$ is a spectral factor of $FF^*$ follows from the existence of the spectral factorization of $FF^*$. Since $\det F(z) = f^+(z)$, $z \in T$, and $f^+$ is an outer function, we have $\det \chi_F^+(z) = c f^+(z)$, $z \in D$, where $|c| = 1$. Hence $\det U(z) = c$ for a.a. $z \in T$, and we can assume without loss of generality that $c = 1$. Since multiplication by $F$ does not alter the first $k-1$ rows of $U$, and the entries of the last row of $U$ are equal to the conjugates of their cofactors, (14) holds as well.
It is clear that for each $k = 2, 3, \ldots, m$ the unitary matrix-function $U_k'$ in (10) is the one whose existence is claimed in Lemma 2 for a matrix-function $F$, where $\varphi_1, \varphi_2, \ldots, \varphi_{k-1}, f^+$ in (11) are the first $k$ nonzero terms in the $k$-th row of the product $M U_2 U_3 \cdots U_{k-1}$.
In order to compute $\chi^+ = M U_2 U_3 \cdots U_m$ approximately, we should be able to approximate a unitary matrix-function (13) for each $F$ of form (11), (12). For this reason, we approximate $F$ in $L_2$ by cutting the tails of the negative Fourier coefficients of the functions $\varphi_j$, $j = 1, 2, \ldots, k-1$, and compute a corresponding unitary matrix-function in explicit form. Namely, let $\varphi_j^{(N)}(z) = \sum_{n=-N}^{\infty} c_n(\varphi_j) z^n$, $j = 1, 2, \ldots, k-1$, where $\varphi \sim \sum_{n=-\infty}^{\infty} c_n(\varphi) z^n$, $z \in T$, is the Fourier series expansion of $\varphi \in L_2(T)$, and let, for a matrix-function $F$ of form (11), (12), $F_N$ be the matrix-function in which the last row of $F$ is replaced by $(\varphi_1^{(N)}, \varphi_2^{(N)}, \ldots, \varphi_{k-1}^{(N)}, f^+)$. Denote by $U_N$ a corresponding unitary matrix-function of form (13) whose existence is claimed in Lemma 2, i.e. $\det U_N = 1$ and $F_N U_N \in L_2^+(T)$.

Lemma 3: The functions $u_{ij}$, $1 \le i, j \le k$, in the representation of the unitary matrix-function $U_N$ in the form (13) are analytic polynomials of order $N$, i.e.

$$u_{ij}(z) = \sum_{n=0}^{N} a_n^{(ij)} z^n. \qquad (15)$$
Proof: By virtue of Lemma 2 we know that $u_{ij} \in L_\infty^+(T)$. Since $\sum_{i=1}^{k-1} \varphi_i^{(N)} u_{ij} + f^+ \overline{u_{kj}} \in L_2^+(T)$ and $z^N \sum_{i=1}^{k-1} \varphi_i^{(N)} u_{ij} \in L_2^+(T)$, we have $z^N \overline{u_{kj}} = \Phi_j/f^+$ for some $\Phi_j \in L_2^+(T) = H_2$. Since $z^N \overline{u_{kj}} \in L_\infty(T)$, by virtue of Theorem A we can conclude that $z^N \overline{u_{kj}} \in L_\infty^+(T)$, $j = 1, 2, \ldots, k$. Thus the representation (15) is valid for the last row, $i = k$, in (13). We can then claim that the cofactor of every entry of $U_N$ has at most $N$ negative Fourier coefficients, and since $\overline{\mathrm{co}(u_{ij})} = u_{ij}$, the representation (15) is valid for the entries of the upper rows in (13) as well.
Obviously, $\|F_N - F\|_{L_2} \to 0$, and we can prove that $U_N \to U$ in measure, which guarantees that $\|F_N U_N - FU\|_{H_2} \to 0$ as $N \to \infty$. Thus $\chi_F^+ = FU$ can be computed approximately.
The coefficients $a_n^{(ij)}$ in (15) are found from a certain system of linear algebraic equations of order $N$. This system is never ill-conditioned and always enjoys a nice structure which makes it possible to accelerate its solution. The explicit form of the system is the core of the calculation procedure of the proposed matrix spectral factorization algorithm. We do not disclose these equations in the present paper since the algorithm is currently under intellectual property management with a view to commercial use in industry. Instead, we fully demonstrate the main idea in the two-dimensional case and offer a software implementation of the algorithm to the interested reader for evaluation.
5. ILLUSTRATION OF THE MAIN IDEA ON TWO-DIMENSIONAL MATRICES

The main idea of the method is contained in the following

Theorem 1: Suppose
$$F(z) = \begin{pmatrix} 1 & 0 \\ \varphi(z) & f^+(z) \end{pmatrix}, \qquad (16)$$

$z \in T$, where $\varphi \in L_2(T)$ and $f^+ \in H_2^O$. If a matrix-function

$$U(z) = \begin{pmatrix} \alpha(z) & \beta(z) \\ -\overline{\beta(z)} & \overline{\alpha(z)} \end{pmatrix}, \qquad (17)$$

$z \in T$, where $\alpha, \beta \in L_\infty^+(T)$, is such that

$$F(z)U(z) \in L_2^+(T), \qquad (18)$$

then $U$ is, up to a positive constant factor, unitary with determinant 1.
Proof: We have to show that

$$|\alpha(z)|^2 + |\beta(z)|^2 = C \quad \text{a.e. on } T. \qquad (19)$$

It follows from (16)-(18) that

$$\begin{cases} \varphi(z)\alpha(z) - f^+(z)\overline{\beta(z)} =: \Psi_1(z) \in L_2^+(T), \\ \varphi(z)\beta(z) + f^+(z)\overline{\alpha(z)} =: \Psi_2(z) \in L_2^+(T). \end{cases} \qquad (20)$$

Hence $f^+(|\alpha|^2 + |\beta|^2) = \Psi_2\alpha - \Psi_1\beta \in L_2^+(T)$. Therefore $|\alpha(z)|^2 + |\beta(z)|^2 = \Phi(z)/f^+(z)$ for some $\Phi \in L_2^+(T) = H_2$. Since $|\alpha(z)|^2 + |\beta(z)|^2 \in L_\infty(T)$, we can use Theorem A to conclude that $|\alpha(z)|^2 + |\beta(z)|^2 \in L_\infty^+(T)$, and consequently (19) follows since the function is positive.

Theorem 1 suggests that we only need to take care that condition (20) is fulfilled; the matrix (17) will then automatically be unitary (after normalization). This simplifies the process of explicit construction of $U(z)$ whenever $\varphi(z) = \varphi^{(N)}(z) = \sum_{n=-N}^{\infty} c_n(\varphi) z^n$ in (16). Namely,

$$\alpha(z) = \sum_{n=0}^{N} a_n z^n \quad \text{and} \quad \beta(z) = \sum_{n=0}^{N} b_n z^n \qquad (21)$$

are polynomials of order $N$ in this case, according to Lemma 3, and equating the $N$ negative coefficients of the functions $\Psi_1$ and $\Psi_2$ in (20) to 0 (to avoid the trivial solution we take $c_0(\Psi_2) = 1$), we arrive at a system of linear algebraic equations with respect to the coefficients $a_n$ and $b_n$. For simplicity, we use the matrix notation

$$\begin{cases} \Gamma \cdot A - G \cdot \overline{B} = O, \\ \Gamma \cdot B + G \cdot \overline{A} = E, \end{cases} \qquad (22)$$
where ($\gamma_n = c_{-n}(\varphi_N)$ and $l_n = c_n(f^+)$ below)

$$\Gamma = \begin{pmatrix} \gamma_0 & \gamma_1 & \gamma_2 & \cdots & \gamma_{N-1} & \gamma_N \\ \gamma_1 & \gamma_2 & \gamma_3 & \cdots & \gamma_N & 0 \\ \gamma_2 & \gamma_3 & \gamma_4 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ \gamma_N & 0 & 0 & \cdots & 0 & 0 \end{pmatrix},$$

$$G = \begin{pmatrix} l_0 & l_1 & l_2 & \cdots & l_{N-1} & l_N \\ 0 & l_0 & l_1 & \cdots & l_{N-2} & l_{N-1} \\ 0 & 0 & l_0 & \cdots & l_{N-3} & l_{N-2} \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 0 & l_0 \end{pmatrix},$$

$A = (a_0, a_1, a_2, \ldots, a_N)^T$, $B = (b_0, b_1, b_2, \ldots, b_N)^T$, $O = (0, 0, 0, \ldots, 0)^T$ and $E = (1, 0, 0, \ldots, 0)^T$.
Determining $\overline{B}$ from the first equation of (22), $\overline{B} = G^{-1}\Gamma A$, and then substituting it into the second equation, we get the following system of linear algebraic equations

$$R \cdot \overline{A} = l_0^{-1} \cdot E, \qquad (23)$$

where $R = G^{-1}\Gamma \cdot \overline{G^{-1}\Gamma} + I_{N+1}$. It is easy to check that $G^{-1}\Gamma$ is symmetric. Thus $R$ is positive definite with all eigenvalues greater than or equal to 1, so that the system (23) is uniquely solvable. Furthermore, $R$ always has a displacement structure (see [3], p. 808). Namely, $R - ZRZ^*$ has rank 2, where $Z$ is the lower shift $(N+1)\times(N+1)$ matrix, with ones on the first subdiagonal and zeros elsewhere. This further accelerates the solution process of (23). Once we find the coefficients $a_0, a_1, \ldots, a_N$ from (23), and then $b_0, b_1, \ldots, b_N$, we can normalize them so as to make the matrix-function (17), where $\alpha(z)$ and $\beta(z)$ are defined by (21), unitary with determinant 1.
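This $2\times 2$ step can be sketched in code. The following is our own reading of (22)-(23), with the conjugations that the form (17) of $U$ forces; the function name and conventions are ours, not the authors' software:

```python
import numpy as np

def solve_unitary_completion(gamma, ell):
    """Sketch of the 2x2 step (21)-(23); an illustration, not the official code.

    gamma: gamma_n = c_{-n}(phi_N), n = 0..N   (anti-analytic part of phi)
    ell:   l_n = c_n(f^+), n = 0..N            (Taylor coefficients of f^+)
    Returns coefficient vectors (a_n), (b_n) of alpha and beta in (21),
    normalized so that |alpha|^2 + |beta|^2 = 1 a.e. on T.
    """
    N = len(gamma) - 1
    # Hankel matrix Gamma: Gamma[i, j] = gamma_{i+j} (zero when i + j > N)
    Gam = np.zeros((N + 1, N + 1), dtype=complex)
    for i in range(N + 1):
        Gam[i, : N + 1 - i] = gamma[i:]
    # upper triangular Toeplitz matrix G: G[i, j] = l_{j-i} for j >= i
    G = np.zeros((N + 1, N + 1), dtype=complex)
    for i in range(N + 1):
        G[i, i:] = ell[: N + 1 - i]
    T = np.linalg.solve(G, Gam)            # T = G^{-1} Gamma (symmetric)
    R = T @ np.conj(T) + np.eye(N + 1)     # R of (23), positive definite
    E = np.zeros(N + 1, dtype=complex)
    E[0] = 1.0 / ell[0]
    x = np.linalg.solve(R, E)              # x = conj(A), cf. (23)
    a = np.conj(x)
    b = np.conj(T @ a)                     # conj(B) = G^{-1} Gamma A, cf. (22)
    # |alpha|^2 + |beta|^2 is constant by Theorem 1; its mean over T equals
    # sum |a_n|^2 + sum |b_n|^2, so this normalization makes the constant 1
    scale = np.sqrt(np.sum(np.abs(a) ** 2 + np.abs(b) ** 2))
    return a / scale, b / scale
```

On a sampled example one can then check that the negative Fourier coefficients of $\Psi_1$ and $\Psi_2$ vanish and that $|\alpha(z)|^2 + |\beta(z)|^2$ is constant on $T$, as Theorem 1 predicts.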
For the proof of convergence of the algorithm as $N \to \infty$, see [2].
6. CONCLUSIONS
We present a fundamentally new algorithm of matrix spectral factorization which reduces the problem of $m\times m$ matrix factorization to a lower-upper triangular factorization of the matrix, to $m$ scalar spectral factorizations, and to the solution of $m - 1$ systems of linear equations. These systems may be of high order, in order to achieve good accuracy, but they always have positive definite coefficient matrices with displacement structure. All three of these constituent components are extremely well developed in comparison with the original problem.
7. ACKNOWLEDGEMENT
We are grateful to IT specialist G. Modebadze for creating a software implementation of our algorithm.
REFERENCES
[1] L. Ephremidze, G. Janashia and E. Lagvilava, "A new computational algorithm of spectral factorization for polynomial matrix-functions", Proc. A. Razmadze Math. Inst., Vol. 136, pp. 41-46, 2004.
[2] G. Janashia and E. Lagvilava, "A method of approximate factorization of positive definite matrix functions", Studia Mathematica, Vol. 137, No. 1, pp. 93-100, 1999.
[3] T. Kailath, A. H. Sayed and B. Hassibi, Linear Estimation, Prentice Hall, New Jersey, 2000.
[4] P. Koosis, Introduction to Hp Spaces, Cambridge University Press, 1980.
[5] A. H. Sayed and T. Kailath, "A survey of spectral factorization methods", Numer. Linear Algebra Appl., Vol. 8, pp. 467-496, 2001.
[6] N. Wiener and P. Masani, "The prediction theory of multivariate stochastic processes", I, Acta Math., Vol. 98, pp. 111-150, 1957; II, Acta Math., Vol. 99, pp. 93-137, 1958.