

Extending Results from Orthogonal Matrices to the

Class of P -orthogonal Matrices

Joao R. Cardoso∗

Instituto Superior de Engenharia de Coimbra

Quinta da Nora

3040-228 Coimbra – Portugal

[email protected]

F. Silva Leite†

Departamento de Matematica

Universidade de Coimbra

3000 Coimbra–Portugal

[email protected]

fax: (351) 239-832568

April, 2002

Abstract

We extend results concerning orthogonal matrices to a more general class of matrices that will be called P -orthogonal. This is a large class of matrices that includes, for instance, orthogonal and symplectic matrices as particular cases. We study the elementary properties of P -orthogonal matrices and give some exponential representations. The role of these matrices in matrix decompositions, with particular emphasis on generalized polar decompositions, is analysed. An application to matrix equations is presented.

Key-words: P -orthogonal, P -symmetric, P -skew-symmetric, generalized polar decompositions, primary matrix functions

∗Work supported in part by ISR and a PRODEP grant, under Concurso n. 4/5.3/PRODEP/2000.
†Work supported in part by ISR and research network contract ERB FMRXCT-970137.


1 Introduction

If IK is the set of real numbers IR or the set of complex numbers C, we denote by GL(n, IK)

the Lie group of all n × n nonsingular matrices with entries in IK and by gl(n, IK) the Lie

algebra of all n × n matrices with entries in IK. Throughout the paper P ∈ GL(n, IR) is a

fixed matrix. Let A ∈ gl(n,C). The P -transpose of A is defined by AP := P−1AT P and the

P -adjoint of A by AP∗ := P−1A∗P , where A∗ denotes the conjugate transpose of A.

Definition 1.1 Let A ∈ gl(n,C).

1. The matrix A is called P -orthogonal if AP A = I, i.e., AT PA = P ;

2. The matrix A is called P -symmetric if AP = A, i.e., AT P = PA;

3. The matrix A is P -skew-symmetric if AP = −A, i.e., AT P = −PA;

4. The matrix A is P -unitary if AP∗A = I, i.e., A∗PA = P ;

5. The matrix A is P -Hermitian if AP∗ = A, i.e., A∗P = PA;

6. The matrix A is P -skew-Hermitian if AP∗ = −A, i.e., A∗P = −PA.

The set of complex P -orthogonal matrices

GP = {A ∈ GL(n,C) : AT PA = P}

and the set of P -unitary matrices

G∗P = {A ∈ GL(n,C) : A∗PA = P}

are Lie groups that include important particular cases, such as:

• the orthogonal group GI = O(n,C),

• the unitary group G∗I = U(n),

• the symplectic group GJ = SP (2m,C), with J = [ 0 Im ; −Im 0 ], 2m = n, where Im denotes the m×m identity matrix, and

• the group GD = O(p, q), with D = diag(Ip,−Iq), p + q = n, which is the well known

Lorentz group for p = 3 and q = 1.
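As a purely illustrative numerical aside (not part of the original paper; the matrix size, random seed and tolerances are arbitrary choices), membership in the symplectic group GJ can be checked with NumPy/SciPy by exponentiating a Hamiltonian (J-skew-symmetric) matrix:

```python
import numpy as np
from scipy.linalg import expm

# Build J for m = 2 and a random Hamiltonian matrix K = J S with S symmetric;
# the exponential A = e^K then lies in the symplectic group G_J.
m = 2
J = np.block([[np.zeros((m, m)), np.eye(m)],
              [-np.eye(m), np.zeros((m, m))]])

rng = np.random.default_rng(0)
W = rng.standard_normal((2 * m, 2 * m))
S = (W + W.T) / 2       # symmetric
K = J @ S               # Hamiltonian: K^T J = -J K
A = expm(K)             # symplectic: A^T J A = J

assert np.allclose(K.T @ J, -J @ K)
assert np.allclose(A.T @ J @ A, J)
```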

The set of P -skew-symmetric matrices

LP = {A ∈ gl(n,C) : AT P = −PA}


is the Lie algebra of GP , with respect to the commutator [A,B] = AB − BA, and the set of

P -symmetric matrices

JP = {A ∈ gl(n,C) : AT P = PA},

equipped with the Jordan product {A,B} = AB + BA, forms a Jordan algebra.

The set of P -skew-Hermitian matrices, denoted by L∗P , is a Lie Algebra over IR (not over

C) and the set of P -Hermitian matrices J ∗P is a Jordan algebra over IR. Moreover,

L∗P = iJ ∗P .

The sets of real P -orthogonal, P -skew-symmetric and P -symmetric matrices will be denoted, respectively, by GP (IR), LP (IR) and JP (IR). Thus, for example, with the terminology of Lie theory,

we have: GI(IR) = O(n) and LI(IR) = o(n).

Some results about the orthogonal group O(p, q) or the symplectic group SP (2m) have

been stated and proved separately. However, if we think in terms of groups GP , we observe

that there are important features shared by both groups. One example is the generalized

polar decomposition of a nonsingular matrix A: A = QG, where Q ∈ GP and G ∈ JP . This

decomposition also holds for many other choices of P .

The idea of working in a general setting has been used in some recent papers such as [2],

[5], [12] and [21]. Cardoso and Leite [5] have shown through unifying proofs that the diagonal Padé approximants method for computing matrix logarithms and exponentials is structure

preserving for P -orthogonal matrices. Leite and Crouch [21] have shown that important

features related to the spectrum of a matrix in LP are independent of P . In the same spirit,

Iserles [12] has studied the discretization of equations in the Lie groups GP (that were called

quadratic Lie groups).

Our main goal is to extend some well known results involving orthogonal, symmetric and

skew-symmetric matrices to P -orthogonal, P -symmetric and P -skew-symmetric matrices. We

also aim to show how the theory associated to these latter matrices may be used to identify

solutions of two particular algebraic matrix equations.

The organization of this paper is as follows. In Section 2 we state some elementary

properties of P -orthogonal and P -unitary matrices, and of matrices in the corresponding algebras. In most cases the condition “P is orthogonal and P 2 = ±I” is needed. When this condition

does not hold but P is symmetric or skew-symmetric, we will show that there exists an

isomorphism between GP and a certain group GP1 , where P1 is orthogonal and P1² = ±I. In

Section 3 we generalize the exponential representations given in [7], ch. XI, to P -orthogonal

and P -unitary matrices. In Section 4, well known matrix decompositions are extended to

P -orthogonal and P -unitary matrices. Special attention will be paid to the generalized polar

decomposition. Finally, in Section 5, we solve two algebraic matrix equations using the theory

stated previously.


2 Properties of P -orthogonal matrices

In this section we shall give elementary properties of the group GP and the corresponding

algebras LP and JP . Since most of the results have immediate proofs, they will be omitted.

Similar results may be adapted for P -unitary, P -Hermitian and P -skew-Hermitian matrices.

Theorem 2.1 Let P ∈ GL(n, IR) and A ∈ gl(n,C). Then:

(i) If A is P -symmetric (resp. P -skew-symmetric) and nonsingular then A−1 is P -symmetric

(resp. P -skew-symmetric);

(ii) GP , LP and JP are closed under P -orthogonal similarities, that is, if S ∈ GP and

A ∈ GP (resp. LP , JP ) then S−1AS ∈ GP (resp. LP , JP );

(iii) (AB)P = BP AP .

Moreover, if P is orthogonal and P 2 = ±I, then:

(iv) If A is P -orthogonal (resp. P -skew-symmetric, P -symmetric) then AT is P -orthogonal

(resp. P -skew-symmetric, P -symmetric);

(v) (AP )P = A;

(vi) AP A and A + AP are P -symmetric;

(vii) A− AP is P -skew-symmetric.

The properties (vi) and (vii) generalize well known results about symmetric and skew-symmetric matrices. In particular, it follows that, if P is orthogonal and P 2 = ±I, every

matrix can be written as the sum of a P -symmetric and a P -skew-symmetric matrix. This

result, which has already appeared in [21], is an immediate consequence of (vi) and (vii) since

A = ½(A + AP ) + ½(A − AP ).
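This splitting is easy to verify numerically. The following sketch (added for illustration; P = diag(Ip,−Iq) and the random A are arbitrary choices, not taken from the paper) computes the P-transpose and the two parts:

```python
import numpy as np

# P-transpose A^P = P^{-1} A^T P, and the splitting of A into its
# P-symmetric and P-skew-symmetric parts, for an orthogonal P with P^2 = I.
rng = np.random.default_rng(1)
P = np.diag([1.0, 1.0, -1.0, -1.0])
A = rng.standard_normal((4, 4))

AP = np.linalg.inv(P) @ A.T @ P
sym = (A + AP) / 2        # P-symmetric part
skew = (A - AP) / 2       # P-skew-symmetric part

assert np.allclose(sym.T @ P, P @ sym)       # A^T P = P A
assert np.allclose(skew.T @ P, -P @ skew)    # A^T P = -P A
assert np.allclose(sym + skew, A)
```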

The next theorem relates P -symmetric and P -skew-symmetric matrices with symmetric

and skew-symmetric matrices.

Theorem 2.2 Let P ∈ GL(n, IR) be orthogonal.

(i) If P 2 = I, then every P -symmetric (resp. P -skew-symmetric) matrix A can be written

in the form A = PS, where S is symmetric (resp. skew-symmetric);

(ii) If P 2 = −I, then every P -symmetric (resp. P -skew-symmetric) matrix A can be written

in the form A = PS, where S is skew-symmetric (resp. symmetric).


Some properties of the spectrum of orthogonal and skew-symmetric matrices are shared by

P -orthogonal and P -skew-symmetric matrices. For instance, the spectrum of a P -orthogonal

matrix A, which will be denoted by σ(A), contains the inverse of each eigenvalue. Indeed,

since AT PA = P and every matrix is similar to its transpose, it follows that A is similar to

its inverse. Thus, the equivalence

λ ∈ σ(A) ⇔ λ−1 ∈ σ(A)

holds and also det(A) = ±1. However, we note that σ(A) does not necessarily lie on the unit circle.

Given a P -skew-symmetric matrix A, its spectrum σ(A) is symmetric with respect to the

origin, i.e.,

λ ∈ σ(A) ⇔ −λ ∈ σ(A).

In fact, A is similar to −A and, as a consequence, trace(A) = 0, which means that LP ⊂ sl(n,C),

where sl(n,C) denotes the special linear Lie algebra, i.e., the Lie Algebra consisting of all

matrices with trace equal to zero.
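The spectral symmetries above can be illustrated numerically; the sketch below (an added illustration with arbitrary P, seed and tolerances) builds a P-skew-symmetric K and the P-orthogonal matrix e^K and checks both claims:

```python
import numpy as np
from scipy.linalg import expm

# K = P^{-1} S with S skew-symmetric is P-skew-symmetric (P K is then skew);
# A = e^K is P-orthogonal. Check trace(K) = 0 and the spectral symmetries.
rng = np.random.default_rng(2)
n = 4
P = np.diag([1.0, 1.0, -1.0, -1.0])

W = rng.standard_normal((n, n))
K = np.linalg.inv(P) @ (W - W.T) / 2
assert np.allclose(K.T @ P, -P @ K)
assert abs(np.trace(K)) < 1e-12

eigK = np.linalg.eigvals(K)
for lam in eigK:                          # spectrum symmetric about the origin
    assert np.min(np.abs(eigK + lam)) < 1e-8

A = expm(K)                               # P-orthogonal: A^T P A = P
assert np.allclose(A.T @ P @ A, P)
eigA = np.linalg.eigvals(A)
for lam in eigA:                          # spectrum closed under inversion
    assert np.min(np.abs(eigA - 1.0 / lam)) < 1e-8
```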

We have seen in the previous theorem that some results require the restriction

P T = P−1 and P 2 = ±I. (1)

Examples of groups GP , with P satisfying (1), are the orthogonal group O(p, q) and the

symplectic group SP (2m). In the next theorem, we generalize some ideas of [16] and show

that, if either P is symmetric or skew-symmetric, then GP is isomorphic to O(p, q) or SP (2m),

respectively.

Theorem 2.3 Let P ∈ GL(n, IR).

(a) If P = P T , p and q are, respectively, the number of positive and negative eigenvalues of

P (p + q = n) and D = diag(Ip,−Iq), then

(i) For complex P -orthogonal, P -skew-symmetric and P -symmetric matrices, the following isomorphisms hold:

GP ∼= GI , LP ∼= LI , JP ∼= JI ;

(ii) For P -unitary, P -skew-Hermitian and P -Hermitian matrices, we have:

G∗P ∼= G∗D, L∗P ∼= L∗D, J ∗P ∼= J ∗D;

(iii) For real P -orthogonal, P -skew-symmetric and P -symmetric matrices, it holds:

GP (IR) ∼= GD(IR), LP (IR) ∼= LD(IR), JP (IR) ∼= JD(IR).


(b) Let P be a 2m× 2m matrix such that P T = −P and suppose that J is the matrix that

defines the symplectic group. Then

(i) GP ∼= GJ , LP ∼= LJ , JP ∼= JJ ;

(ii) G∗P ∼= G∗J , L∗P ∼= L∗J , J ∗P ∼= J ∗J ;

(iii) GP (IR) ∼= GJ(IR), LP (IR) ∼= LJ(IR), JP (IR) ∼= JJ(IR).

Proof.

(a)(i) Since P is real, symmetric and invertible, we may write P = C1ᵀC1, for some nonsingular complex C1. A possible choice for C1 is a symmetric square root of P , whose existence is always guaranteed. Define the mapping

Φ1 : gl(n,C) −→ gl(n,C), A −→ Φ1(A) = C1AC1⁻¹.

It is easy to check that Φ1 establishes an algebraic isomorphism between the groups GP and

GI , a Lie algebra isomorphism between LP and LI and a Jordan algebra isomorphism between

JP and JI .

(ii), (iii) Since P ∈ GL(n, IR) is symmetric, it is congruent with D (see, for instance, [13]), that is, P = C2ᵀDC2, for some real nonsingular C2. Then the mapping

Φ2 : gl(n, IR) −→ gl(n, IR), A −→ Φ2(A) = C2AC2⁻¹

establishes the required isomorphisms.

(b) Since P is real and nonsingular, with even size n = 2m, P admits a Cholesky-type factorization P = C4ᵀJC4 (see [4]). Therefore, the required isomorphisms may be defined by Φ4(A) = C4AC4⁻¹.

Remark 2.4 When P is symmetric positive definite, we may rewrite the statements (a)(ii)

and (iii) of the previous theorem in the following way:

(ii)’ G∗P ∼= G∗I , L∗P ∼= L∗I , J ∗P ∼= J ∗I ;

(iii)’ GP (IR) ∼= GI(IR), LP (IR) ∼= LI(IR), JP (IR) ∼= JI(IR).

Indeed, P admits the Cholesky factorization P = C3ᵀC3, with C3 real (see [11]), and then the mapping

Φ3 : gl(n, IR) −→ gl(n, IR), A −→ Φ3(A) = C3AC3⁻¹

establishes the desired isomorphisms.
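As an added numerical illustration of the Cholesky-based isomorphism in Remark 2.4 (the positive definite P here is randomly generated; this is a sketch, not part of the paper):

```python
import numpy as np

# For symmetric positive definite P, write P = C^T C with C the upper
# Cholesky factor; Phi(A) = C A C^{-1} maps P-orthogonal matrices onto
# ordinary orthogonal ones.
rng = np.random.default_rng(3)
n = 3
M = rng.standard_normal((n, n))
P = M @ M.T + n * np.eye(n)            # symmetric positive definite
C = np.linalg.cholesky(P).T            # P = C.T @ C

Q0, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = np.linalg.solve(C, Q0 @ C)         # A = C^{-1} Q0 C is P-orthogonal
assert np.allclose(A.T @ P @ A, P)

Phi_A = C @ A @ np.linalg.inv(C)       # back in the orthogonal group
assert np.allclose(Phi_A.T @ Phi_A, np.eye(n))
```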


Since the condition (1) implies that P T = ±P , it follows from the previous theorem

that any group of the form GP1 , with P1 satisfying (1), is isomorphic to one of the groups

GI = O(n), GD = O(p, q) or GJ = SP (2m). Thus, these three groups are the basis for studying

the groups of the form GP , with P symmetric or skew-symmetric.

We finish this section by giving some references where one can find information about the Lie

groups corresponding to P = I, P = J and P = D. For real orthogonal and unitary matrices

we refer to some matrix theory books, such as [11] and [13]; for complex orthogonal matrices see,

for instance, [7] and [9] and references therein; for symplectic see, for instance, [1], [6], [14]

and references therein and for O(p, q) see, for instance, [1], [8] and [19]. Sometimes, this last

group appears associated with indefinite inner products.

3 Exponential representations

We refer to [10], ch. 6, for details concerning primary matrix functions.

From now on, X1/2 will stand for the principal square root of X and √X for a generic square root; Log X denotes the principal matrix logarithm of X.

Given a P -unitary matrix U , there exists a complex P -Hermitian matrix V such that

U = eiV . (2)

Indeed, since U is invertible, some properties of logarithms of matrices (see (6.4.20) in [10])

allow us to choose a P -skew-Hermitian matrix W = log U such that U = eW . Since W = iV ,

for some P -Hermitian V , it yields the representation (2).
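The representation U = eiV can be illustrated numerically. In the sketch below (added for illustration; the 0.1 scaling is an assumption that keeps the spectrum inside the principal branch of the logarithm), a P-unitary U is built from a P-Hermitian V0 and a P-Hermitian logarithm is recovered:

```python
import numpy as np
from scipy.linalg import expm, logm

# Build a P-unitary U = exp(i V0) from a P-Hermitian V0, then recover a
# P-Hermitian V with U = exp(i V) via the principal matrix logarithm.
rng = np.random.default_rng(4)
n = 4
P = np.diag([1.0, 1.0, -1.0, -1.0])

Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
V0 = 0.1 * np.linalg.inv(P) @ (Z + Z.conj().T) / 2   # P-Hermitian: V0* P = P V0
assert np.allclose(V0.conj().T @ P, P @ V0)

U = expm(1j * V0)
assert np.allclose(U.conj().T @ P @ U, P)            # U is P-unitary

V = -1j * logm(U)                                    # principal logarithm
assert np.allclose(V.conj().T @ P, P @ V)            # V is P-Hermitian
assert np.allclose(expm(1j * V), U)
```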

In this section, we attempt to generalize the exponential representations given in [7], ch.

XI, for orthogonal and unitary matrices to P -orthogonal and P -unitary matrices.

Lemma 3.1 Let P ∈ GL(n, IR). If A is complex P -orthogonal and P -Hermitian, then:

(i) There exists a real matrix K such that A = eiK;

(ii) If σ(A)∩IR− = φ then there exists a real P -skew-symmetric matrix K such that A = eiK ;

(iii) If there exists a real P -skew-symmetric matrix K such that A = eiK, then the Jordan

blocks of A with negative eigenvalues occur in pairs.

Proof.

(i) Since A is P -orthogonal and P -Hermitian, it is easy to conclude that ĀA = I. Therefore

(6.4.22) in [10] ensures that A = eiK , for some K real.

(ii) The restriction σ(A) ∩ IR− = φ means that the principal logarithm of A is defined.

Let K = −i Log A. We have to show that K is real and P -skew-symmetric. Since A

is P -Hermitian, it follows that A∗ = PAP−1 and then Log A∗ = P (Log A)P−1. Since


Log(A∗) = (Log A)∗, it follows that Log A is P -Hermitian. On the other hand, since

A is P -orthogonal, Log A is P -skew-symmetric. Hence Log A is both P -Hermitian and

P -skew-symmetric. Therefore the conjugate of Log A equals −Log A, so Log A is purely imaginary and K is real. To show that K is

P -skew-symmetric is just a simple calculation.

(iii) Let K be real and P -skew-symmetric and let λ ∈ σ(K). Then λ, −λ, λ̄, −λ̄ ∈ σ(K). The eigenvalues of A = eiK are of the form eiλ, where λ is an eigenvalue of K. Hence e−iλ, eiλ̄ and e−iλ̄ are also eigenvalues of A. If µ is a negative eigenvalue of A then µ = eiλ, for some λ = (2k + 1)π + bi, k ∈ ZZ, b ∈ IR, and then eiλ̄ = e−iλ and e−iλ̄ = eiλ. Since the matrix exponential preserves the size of the Jordan blocks, one concludes that the Jordan blocks associated to negative eigenvalues of A occur in pairs.

Remark 3.2 The converse of (iii) may not be true. In fact, suppose that P = I2m and consider the orthogonal matrix A = −I2m. If K is a P -skew-symmetric matrix then eiK always has positive eigenvalues. Therefore, we cannot have eiK = −I2m, for any given skew-symmetric matrix K. However, if P = J , then K = diag(πIm,−πIm) is Hamiltonian and eiK = −I2m.

Theorem 3.3 Let P T = P−1, P 2 = ±I, let A be a complex P -orthogonal matrix and X :=

AP∗A. If σ(X) ∩ IR− = φ then there exist a real P -orthogonal matrix R and a real P -skew-

symmetric matrix K such that

A = ReiK .

Proof. First we observe that X = AP∗A is P -Hermitian and P -orthogonal. Since σ(X)∩IR− =

φ, by the previous lemma there exists K real and P -skew-symmetric such that X = e2iK .

Hence eiK is a square root of X which is P -orthogonal. Therefore a simple calculation shows

that R := Ae−iK is P -orthogonal and P -unitary which, in turn, implies that R is real.

Lemma 3.4 Let P ∈ GL(n, IR). If A is a complex P -symmetric and P -unitary matrix then

there exists a real matrix K such that A = eiK. If σ(A) ∩ IR− = φ then K may be chosen to

be P -symmetric.

Proof. Analogous to the proof of Lemma 3.1.

Theorem 3.5 Let P T = P−1, P 2 = ±I, let A be a complex P -unitary matrix and X := AP A. If

σ(X) ∩ IR− = φ then there exist a real P -orthogonal matrix R and a real P -symmetric K

such that

A = ReiK .

Proof. Similar to the proof of Theorem 3.3.


Remark 3.6 In this section we have worked with a general P ∈ GL(n, IR) and, in some cases,

with P satisfying (1). The sufficient conditions stated are very restrictive on the spectrum of

some matrices involved. However, if we study these exponential representations for particular

choices of P , say P = J or P = D, we believe that the range of applications of our results

may be enlarged. This study remains to be done.

4 Generalized matrix decompositions

4.1 Generalized polar decomposition

The standard polar decomposition of a nonsingular matrix states that every A ∈ GL(n,C)

may be written in the form A = UH, where U is unitary and H is Hermitian positive definite.

The matrix H is always uniquely determined as H := (A∗A)1/2 and if A is real then U and

H may be taken to be real.

Several extensions of this decomposition have been made. An example is the complex

orthogonal/symmetric version (see [7], ch. XI) that allows us to write A = QG, where Q

is complex orthogonal and G is complex symmetric. When A is real, Q and G may also be

taken to be real.

Another important example is the more general case concerning Lie groups. Lawson

[15] proved the existence of the polar decomposition in arbitrary Lie groups equipped with

an involutive automorphism. However, such a decomposition is only possible for elements

near the identity.

For other particular Lie groups, which include some important matrix groups, polar decompositions may be obtained for a wide range of elements of the group. Bar-On and Gray [2] showed that the elements of certain Lie groups, which they called polar groups (the pair (G,P ) is called a polar group if for all A ∈ G there exists a square root √(AP A) ∈ JP ), may be written as a product of an element in the group GP by an element in JP . However, it is not easy to identify polar Lie groups. In addition, working with these groups is very restrictive, since there are matrices which admit polar decompositions independently of belonging to a polar group, as will become clear later.

Here, we propose a different approach to generalize the polar decomposition which is based

on the theory of matrix square roots and recent results on the groups Sp(2m) and O(p, q).

In the generalized polar decomposition that will be studied here we suppose that P satisfies

(1). Due to the isomorphisms stated in Theorem 2.3, the most significant groups associated

to these P ’s are GJ and GD. All the other groups GP1 , with P1 satisfying (1), are isomorphic

to GJ or GD. Natural extensions of our results may be easily obtained if P is either symmetric or skew-symmetric.

We establish sufficient conditions for a generic P satisfying (1) and, since the polar decompositions when P = D have recently received some attention in the context of indefinite scalar


product spaces (see, for instance, [3] and [18]), we refine these conditions only for P = J , i.e.,

the symplectic/Hamiltonian case.

We start with the complex P -orthogonal/P -symmetric version of the polar decomposition.

Theorem 4.1 Let P T = P−1, P 2 = ±I and A ∈ GL(n,C). Then:

(i) There exist a P -orthogonal matrix Q and a P -symmetric matrix G such that A = QG.

(ii) If σ(AP∗A) ∩ IR− = φ, then there exist a P -unitary matrix U and a P -Hermitian H

with eigenvalues on the open right half plane such that A = UH. Besides, H is uniquely

determined as H := (AP∗A)1/2 and if A is real then U and H may be taken to be real.

Proof.

(i) The matrix X = AP A is P -symmetric and, since it is nonsingular, it has at least one square root which is a polynomial in AP A (see (6.4.12) in [10]). Let G := √(AP A) be one of those square roots. Since every polynomial preserves P -symmetry, G is also P -symmetric and hence Q := AG−1 is P -orthogonal.

(ii) If σ(AP∗A)∩ IR− = φ then H = (AP∗A)1/2 is P -Hermitian and is the only square root of

AP∗A with eigenvalues on the open right half plane. Moreover, if A is real this square

root is also real.

Remark 4.2 The order in which the matrices Q and G appear in the decomposition of (i)

of the previous theorem may be changed, i.e., there exist a P -symmetric matrix G1 and a

P -orthogonal matrix Q1 such that A = G1Q1. In fact, AT can be represented in the form

AT = QG, for some P -orthogonal Q and some P -symmetric G. This implies that A = GT QT ,

where G1 = GT is P -symmetric and Q1 = QT is P -orthogonal. A similar argument holds for

the P -unitary/P -Hermitian case.

It is well known that the similarity relationship between two complex orthogonal (resp.

symmetric, skew-symmetric) matrices may be established by means of an orthogonal matrix

([7], ch. XI), that is, if B = SAS−1, for some nonsingular S, then B = QAQT , for some

orthogonal Q. Using the generalized polar decomposition given in the previous theorem, we

show, in the next corollary, that an analogous result holds for two complex P -orthogonal (resp. P -symmetric, P -skew-symmetric) matrices.

Corollary 4.3 Let A,B ∈ gl(n,C) and suppose that P satisfies (1).

(i) If both matrices A and B are P -orthogonal (resp. P -symmetric, P -skew-symmetric)

and are similar then they are P -orthogonally similar, i.e., there exists a P -orthogonal

matrix Q such that B = QAQP .


(ii) Suppose now that both A and B are P -unitary (resp. P -Hermitian, P -skew-Hermitian)

and are similar, i.e., there exists a nonsingular complex matrix S such that B = SAS−1.

If σ(SP∗S) ∩ IR− = φ, then A and B are P -unitarily similar, i.e., there exists a P -unitary

matrix U such that B = UAUP∗. If A and B are real then U may be taken to be real.

Proof.

(i) Without loss of generality, we suppose that A and B are P -orthogonal. If S is a

nonsingular matrix such that

B = SAS−1, (3)

then

B−1 = BP = (S−1)P AP SP = (SP )−1A−1SP ,

which implies that B = (SP )−1ASP . Therefore, using (3), we have A(SP S) = (SP S)A,

that is, A and SP S commute. Since S is nonsingular, by the previous theorem we may

write S = QG, where Q is P -orthogonal and G := √(SP S) is P -symmetric. As we have

already seen in the proof of the theorem, G is a polynomial in SP S and therefore it

commutes with A. Hence,

B = SAS−1 = (QG)A(QG)−1 = QAQ−1 = QAQP .

(ii) Similar to (i).

We will show in the next theorem that for a given nonsingular P -normal matrix A (i.e.,

AP∗A = AAP∗) the restriction σ(AP∗A) ∩ IR− = φ is not needed to guarantee that A admits

a P -unitary/P -Hermitian polar decomposition.

Theorem 4.4 If A ∈ GL(n,C) is P -normal then there exist a P -unitary matrix U and a

P -Hermitian matrix H such that A = UH.

Proof. Analogous to the proof of Theorem 5.1 in [3].

Remark 4.5 The previous theorem may not hold in the real case. For a counterexample, let

J = [ 0 1 ; −1 0 ] and A = [ 1 −4 ; −4 −1 ].

The matrix A is Hamiltonian and, as a consequence, it is J-normal. If there existed a real symplectic U and a real skew-Hamiltonian (i.e., J-symmetric) H such that A = UH, then AJ A = (UH)J (UH) = HJ H = H2. However, since AJ A = diag(−17,−17), the equation H2 = AJ A cannot have real skew-Hamiltonian solutions.

Let us analyse the generalized polar decomposition for the particular case P = J = [ 0 Im ; −Im 0 ]. We recall that the case P = D has been extensively analysed ([3] and [18]).


Let A be complex nonsingular. Since the complex J-orthogonal/J-symmetric polar decomposition is a particular case of the decomposition in Theorem 4.1, we only discuss the J-unitary/J-Hermitian case and its real version whenever A is real. Before proceeding further, let us recall some basic facts about J-Hermitian (also called skew-Hamiltonian) square roots of J-Hermitian matrices. We refer to [6] for more details.

While any nonsingular matrix H ∈ JJ has at least one square root √H ∈ JJ , an analogous result may not hold in J ∗J . The matrix

H = [ 1 2i ; −2i 1 ] ∈ J ∗J ,

whose eigenvalues are −1 and 3, is an example of a matrix in J ∗J which does not have square roots in J ∗J . Indeed, if there were K ∈ J ∗J such that K2 = H, then the negative eigenvalues of H would have to occur in pairs, which is a contradiction.

Nevertheless, if a nonsingular matrix H ∈ J ∗J admits the following skew-Hamiltonian Jordan decomposition

H = S [ H1 0 ; 0 H1∗ ] S−1, (4)

for some S ∈ G∗J (i.e., S is symplectic) and some H1 ∈ GL(m,C), 2m = n, then

K = S [ (H1)1/2 0 ; 0 ((H1)1/2)∗ ] S−1 ∈ J ∗J

satisfies K2 = H.

Let us suppose now that H ∈ JJ(IR) is nonsingular. By Theorem 1 in [6] there exist S ∈ GJ(IR) and H1 ∈ GL(m, IR) such that

H = S [ H1 0 ; 0 H1ᵀ ] S−1.

Using the necessary and sufficient condition for a nonsingular real matrix to have a real square root (see (6.4.14) in [10]), it follows that H has a skew-Hamiltonian real square root if and only if the Jordan blocks of H1 associated to real negative eigenvalues occur in pairs.

To summarize the discussion above, in the following theorem we list some sufficient conditions under which a given nonsingular matrix A admits a symplectic/skew-Hamiltonian polar decomposition.

Theorem 4.6 (a) Let A ∈ GL(2m,C). If one of the following three conditions holds:

(i) σ(AJ∗A) ∩ IR− = φ,

(ii) AJ∗A admits the skew-Hamiltonian Jordan decomposition (4), or

(iii) A is real,

then there exist U ∈ G∗J and H ∈ J ∗J such that A = UH.

(b) Let A ∈ GL(2m, IR). There exist U ∈ GJ(IR) and H ∈ JJ(IR) such that A = UH if

and only if for each real negative eigenvalue of AJA there are four equal Jordan blocks.


4.2 Other matrix decompositions

In this subsection, we shall see some decompositions involving symmetric and skew-symmetric

matrices that may be easily generalized to P -symmetric and P -skew-symmetric matrices. An

example is the following well known result ([20] and [22]): “Given a matrix A ∈ gl(n,C),

there exist symmetric matrices F and G such that A = FG”. The next theorem shows

that a similar result holds for P -symmetric and P -skew-symmetric matrices, provided that P

satisfies (1).

Theorem 4.7 Let A ∈ gl(n,C). If P 2 = I (resp. P 2 = −I) and P is orthogonal, then there

exist complex P -symmetric (resp. P -skew-symmetric) matrices F and G such that A = FG,

with rank(F ) = k, rank(G) = n− k + rank(A), rank(A) ≤ k ≤ n. If A is real then F and G

may be taken to be real.

Proof. We assume that P 2 = I. Since A = F1G1, for some F1 and G1 symmetric, it follows

that A = (F1P )(PG1) = FG, where F := F1P and G := PG1 are P -symmetric. The remainder of the proof is an immediate consequence of Prop. 1.1 in [20].

A necessary and sufficient condition to guarantee that a matrix A is a product of a symmetric by a skew-symmetric matrix is that A is similar to −A (see Theorem 2.1 in [20]). The next theorem states an analogous result.

Theorem 4.8 Let P 2 = ±I and P be orthogonal. A matrix A ∈ gl(n,C) can be represented

as A = FG, where F is complex P -symmetric and G is complex P -skew-symmetric, if and

only if A is similar to −A. If A is real then F and G may be taken to be real.

Proof. Immediate consequence of Theorem 2.1 in [20] and Theorem 2.2 in this paper.

5 Application to matrix equations

In this section we illustrate how the theory developed previously may be used to solve two

particular matrix equations, which arise in the characterization of h-selfdual and σ-selfdual

Euclidean norms in Cn (see [17]). Our goal is to solve the following problems:

Problem 1. Given H complex Hermitian nonsingular, find Hermitian positive definite solutions X of the matrix equation:

H−1X = X−1H; (5)

Problem 2. Given S complex symmetric nonsingular, find Hermitian positive definite solutions X such that

S−1X = X−1S. (6)


Although these problems were completely solved in [17], here we propose an alternative

method based on the previous developments.

We start with Hermitian solutions of (5). Since H is Hermitian and invertible, it is

congruent with D = diag(Ip,−Iq), where p and q are, respectively, the number of positive

and negative eigenvalues of H (see p. 184 in [13]), that is, there exists T such that

H = T ∗DT. (7)

Let Y be a matrix such that

X = T ∗Y T.

Then Y is Hermitian if and only if X is also Hermitian, and therefore solving (5) is equivalent

to finding Y Hermitian such that

DY = Y −1D. (8)

Setting W := DY , it turns out that solving (8) is equivalent to finding W such that

W 2 = I, W ∗D = DW.

Thus, to solve Problem 1, we may solve alternatively the following problem:

Problem 1’. “Given H complex Hermitian nonsingular, having p positive eigenvalues and q

negative eigenvalues, find D-Hermitian square roots of the identity, where D = diag(Ip,−Iq)”.

Theorem 5.1 Let D be as before. A matrix W is a D-Hermitian square root of I if and only

if it can be written in the form

W = R∆R−1, (9)

where ∆ = diag(±1, · · · ,±1) and R is a nonsingular matrix such that R∗DR commutes with

∆.

Proof. (⇒) Let W denote a D-Hermitian square root of the identity matrix I. Since W is a square root of I, the only possible eigenvalues of W are ±1. Also, W is diagonalizable, since it is annihilated by the polynomial x² − 1, which has distinct roots. Hence there exist a nonsingular matrix R and a matrix ∆ = diag(±1, · · · ,±1) such that W = R∆R−1. On the other hand, since W is D-Hermitian, it follows that R∗DR and ∆ commute.

(⇐) Immediate.

Remark 5.2 If R in (9) is D-unitary then R∗DR = D commutes with ∆. Thus all matrices of the form W = R∆RD∗, where R is D-unitary, are D-Hermitian square roots of I.
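A numerical sketch of this construction (an added illustration; D, the sign matrix ∆ and the random D-unitary R are arbitrary choices), using the fact that the D-adjoint of a D-unitary R is its inverse:

```python
import numpy as np
from scipy.linalg import expm

# Construct a D-unitary R as exp(iV) with V D-Hermitian, then form
# W = R Delta R^{-1}; W is a D-Hermitian square root of the identity.
rng = np.random.default_rng(6)
p, q = 2, 2
n = p + q
D = np.diag([1.0] * p + [-1.0] * q)

Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
V = 0.3 * np.linalg.inv(D) @ (Z + Z.conj().T) / 2   # D-Hermitian: V* D = D V
R = expm(1j * V)                                    # D-unitary: R* D R = D
assert np.allclose(R.conj().T @ D @ R, D)

Delta = np.diag([1.0, -1.0, 1.0, -1.0])
W = R @ Delta @ np.linalg.inv(R)        # R^{D*} = R^{-1} since R is D-unitary

assert np.allclose(W @ W, np.eye(n))    # square root of I
assert np.allclose(W.conj().T @ D, D @ W)   # D-Hermitian
```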


Corollary 5.3 A matrix X is a Hermitian solution of (5) if and only if

X = T ∗DR∆R−1T,

where T and D are as in (7), ∆ = diag(±1, · · · ,±1) and R is a nonsingular matrix such that

R∗DR commutes with ∆.

Proof. Immediate consequence of Theorem 5.1.

After studying the Hermitian solutions of (5), we are now able to characterize the Hermi-

tian positive definite solutions.

Corollary 5.4 Given a nonsingular Hermitian matrix H, the matrix X is a Hermitian positive definite solution of the equation H−1X = X−1H if and only if

X = T ∗ exp [ 0 K ; K∗ 0 ] T,

where exp(·) stands for the matrix exponential, T is the matrix in (7) and K is an arbitrary p × q complex matrix, with p and q being, respectively, the number of positive and negative eigenvalues

of the given Hermitian matrix H.

Proof.

(⇒) Let X be a Hermitian positive definite solution of (5). By Corollary 5.3 there exist a nonsingular matrix R and a matrix ∆ = diag(±1, · · · ,±1) such that X = T ∗Y T , with Y := DR∆R−1.

Since X and Y are congruent, one may conclude that X is Hermitian positive definite if and

only if Y is. Therefore we may proceed with the analysis of Y instead of X. Since Y has to

be Hermitian positive definite, its principal logarithm L := Log Y is Hermitian and so Y

can be represented by Y = eL. On the other hand, Y is D-unitary because both D and W

are D-unitary. Hence Log Y is D-skew-Hermitian. Since L is simultaneously Hermitian and

D-skew-Hermitian, there exists a complex p × q matrix K such that L = [ 0 K ; K∗ 0 ].

(⇐) Immediate.
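The characterization of Corollary 5.4 is easy to test numerically; the sketch below (an added illustration with arbitrary sizes, seed and congruence matrix T) builds a solution X and verifies the three claimed properties:

```python
import numpy as np
from scipy.linalg import expm

# X = T* exp([[0, K], [K*, 0]]) T solves H^{-1} X = X^{-1} H for H = T* D T,
# and is Hermitian positive definite.
rng = np.random.default_rng(7)
p, q = 2, 1
n = p + q
D = np.diag([1.0] * p + [-1.0] * q)
T = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = T.conj().T @ D @ T                   # Hermitian with inertia (p, q)

K = rng.standard_normal((p, q)) + 1j * rng.standard_normal((p, q))
L = np.block([[np.zeros((p, p)), K],
              [K.conj().T, np.zeros((q, q))]])
X = T.conj().T @ expm(L) @ T

assert np.allclose(X, X.conj().T)                       # Hermitian
assert np.all(np.linalg.eigvalsh(X) > 0)                # positive definite
assert np.allclose(np.linalg.inv(H) @ X, np.linalg.inv(X) @ H)
```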

Let us now analyse the equation (6). We start with its Hermitian solutions.

Since S in (6) is complex symmetric nonsingular, using Takagi's factorization (see (4.4.4) in [11]),

we may write

S = UΓUT , (10)

where U is unitary and Γ is diagonal with positive entries. If Y is a matrix such that

X = UY U∗, then X is Hermitian if and only if Y is. Therefore, equation (6) reduces to

Γ−1Y = Y −1Γ. (11)


If we let G := Γ−1Y , then finding Hermitian solutions of (11) is equivalent to finding a matrix G which is

simultaneously Γ-Hermitian and Γ-orthogonal.

Since Γ is Hermitian positive definite, by Theorem 2.3 and Lemma 1 in [7], ch.XI, we may

write

G = C−1EeiKC,

where C is such that Γ = CT C, E is a real involution, and K is a real skew-symmetric

matrix which commutes with E. If we choose C = Γ1/2, then G = Γ−1/2EeiKΓ1/2, and the

Hermitian solution Y is given by

Y = Γ1/2EeiKΓ1/2. (12)

If one wants Y to be Hermitian positive definite then, according to the proof of Lemma

1 in [7], it is enough to take E = I in (12).

We now summarize the previous discussion in the following theorem.

Theorem 5.5 A matrix X is a Hermitian solution of (6) if and only if it can be represented

by

X = UΓ1/2EeiKΓ1/2U∗, (13)

where U and Γ are as in (10), E is a real involution and K is a real skew-symmetric matrix

which commutes with E. Moreover, X is Hermitian positive definite if and only if X is given

as in (13) with E = I.
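The positive-definiteness claim of Theorem 5.5 (the case E = I) is easy to confirm numerically: for K real skew-symmetric, iK is Hermitian, so eiK is Hermitian positive definite, and X = UΓ1/2eiKΓ1/2U∗ is congruent to it. The sketch below generates U unitary and Γ diagonal positive directly, the shape delivered by Takagi's factorization (10), rather than computing an actual Takagi factorization:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3

# U unitary and Gamma diagonal positive -- the shape delivered by (10);
# generated directly here for illustration.
U, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
Gamma_half = np.diag(np.sqrt(rng.uniform(0.5, 2.0, size=n)))   # Gamma^(1/2)

# Real skew-symmetric K; with E = I the commutation condition is vacuous.
M = rng.standard_normal((n, n))
K = M - M.T

# i*K is Hermitian, hence exp(i*K) is Hermitian positive definite.
w, V = np.linalg.eigh(1j * K)
exp_iK = V @ np.diag(np.exp(w)) @ V.conj().T

X = U @ Gamma_half @ exp_iK @ Gamma_half @ U.conj().T
assert np.allclose(X, X.conj().T)             # X is Hermitian
assert np.all(np.linalg.eigvalsh(X) > 0)      # X is positive definite
```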

References

[1] G. Ammar, C. Mehl, V. Mehrmann, Schur-like forms for matrix Lie groups, Lie algebras and Jordan algebras, Linear Algebra and its Applications, 287, (1999), pp. 11–39.

[2] J. R. Bar-on and C. W. Gray, A generalized polar decomposition, Linear Algebra and its

Applications, 170, (1992), pp. 75–80.

[3] Y. Bolshakov, C. V. M. van der Mee, A. C. M. Ran, B. Reichstein, L. Rodman, Polar decompositions in finite dimensional indefinite scalar product spaces: General theory, Linear Algebra and its Applications, 261, (1997), pp. 91–141.

[4] P. Benner, R. Byers, H. Fassbender, V. Mehrmann, D. Watkins, Cholesky-like factorizations of skew-symmetric matrices, Electronic Transactions on Numerical Analysis, 11, (2000), pp. 85–93.

[5] J. R. Cardoso and F. Silva Leite, Theoretical and numerical considerations about Padé approximants for the matrix logarithm, Linear Algebra and its Applications, 330, (2001), pp. 31–42.


[6] H. Faßbender, D. Mackey, N. Mackey, H. Xu, Real and complex Hamiltonian square roots of skew-Hamiltonian matrices, Technical Report #92, Department of Mathematics and Statistics, Western Michigan University, 1999.

[7] F. R. Gantmacher, Theory of Matrices, Vol. II, Chelsea, New York, 1989.

[8] I. Gohberg, P. Lancaster, L. Rodman, Matrices and Indefinite Scalar Products. Operator Theory: Advances and Applications, Vol. 8, Birkhäuser Verlag, 1983.

[9] R. A. Horn and D. I. Merino, The Jordan canonical forms of complex orthogonal and

skew-symmetric matrices, Linear Algebra and its Applications, 302/303, (1999), pp.

411–421.

[10] R. A. Horn and C. R. Johnson, Topics in Matrix Analysis. Cambridge University Press, 1994.

[11] R. A. Horn and C. R. Johnson, Matrix Analysis. Cambridge University Press, 1985.

[12] A. Iserles, On Cayley-transform methods for the discretization of Lie-group equations, Technical Report 1999/NA4, DAMTP, University of Cambridge.

[13] P. Lancaster, M. Tismenetsky, The theory of matrices. Academic Press, 1985.

[14] A. J. Laub and K. Meyer, Canonical forms for symplectic and Hamiltonian matrices,

Celestial Mechanics, 9, (1974), pp. 213–238.

[15] J. D. Lawson, Polar and Ol'shanskii decompositions, J. Reine Angew. Math., 448, (1994), pp. 191–219.

[16] Anna Lee, Secondary symmetric, skew-symmetric and orthogonal matrices, Periodica

Mathematica Hungarica, 7, (1976), pp. 63–70.

[17] E. Marques de Sá, M. C. Santos, Notes on selfdual norms and products of two involutions, preprint, (1992).

[18] C. V. M. van der Mee, A. C. M. Ran, L. Rodman, Stability of selfadjoint square roots and polar decompositions in indefinite scalar product spaces, Linear Algebra and its Applications, 302–303, (1999), pp. 77–104.

[19] V. Mehrmann, H. Xu, Structured Jordan canonical forms for structured matrices that are Hermitian, skew-Hermitian or unitary with respect to indefinite inner products, Electronic Journal of Linear Algebra, 5, (1999), pp. 67–103.

[20] L. Rodman, Products of Symmetric and Skew-symmetric Matrices, Linear and Multilin-

ear Algebra, 43, (1997), pp. 19–34.


[21] F. Silva Leite, P. Crouch, Closed forms for the exponential mapping on matrix Lie groups

based on Putzer’s method, Journal of Mathematical Physics, 40, (1999), pp. 3561–3568.

[22] O. Taussky, The Role of Symmetric Matrices in the Study of General Matrices, Linear

Algebra and its Applications, 5, (1972), pp. 147–154.
