How I Became a Torchbearer for Matrix Theory
Author(s): Olga Taussky
Source: The American Mathematical Monthly, Vol. 95, No. 9 (Nov., 1988), pp. 801-812
Published by: Mathematical Association of America
Stable URL: http://www.jstor.org/stable/2322895
Accessed: 18/03/2014 12:22



How I Became a Torchbearer for Matrix Theory*

OLGA TAUSSKY, California Institute of Technology

OLGA TAUSSKY is sufficiently well known to Monthly readers that further words here would be superfluous. Besides, this article is in itself biographical, as you will see [Ed.].

Some time ago in our public library I picked up a book, The Last of All Possible Worlds, by Peter F. Drucker, a professor of social science at Claremont Graduate School. In that book, Drucker writes about a pupil of Riemann who was to write his thesis on quaternions. Riemann had seen their importance to his own areas of study, and his student saw that they would lead to the subject that we now call matrix algebra, which has become all the rage.

But matrices were not always the rage. They have played a large role in group theory since the work of Elie Cartan, and they play a role in physics and in statistics. Still, matrix theory reached me only slowly. Since my main subject was number theory, I did not look for matrix theory. It somehow looked for me. In what follows a number of instances of such events are sketched.

1. Algebraic number theory. In a proof given by Minkowski in 1900** there appears the theory of matrices with dominant diagonal. He was reproving some of Dirichlet's results concerning units in algebraic number fields, and he observed that if a real matrix A has negative off-diagonal entries but positive row sums, then its determinant is nonzero, in fact even positive. A similar theorem can be proved by a continuity argument. Such results are connected with the famous Geršgorin theorem, to which I will return in section 6 below.

2. The theorem of Shoda and R. C. Thompson. The next eye-opener for me was much stronger and it appeared in stranger circumstances: one of my favorite theorems in matrix theory came to me via class field theory! It is K. Shoda's theorem concerning matrices of determinant 1, and it states that for certain fields F, such a matrix is a commutator B^{-1}C^{-1}BC where B, C are matrices over F.

How does class-field theory come into this? When I was working on my thesis in class-field theory, a subject created by Hilbert, some of the most important results in the subject were being found by Takagi, in Japan. I summoned up the courage to ask him for reprints, and not only did he send them, but a number of other mathematicians in Japan, even in quite different areas, sent reprints too, and one of them was Shoda. I adored this theorem and felt right away that some day it would play a role in my life (but it took a long time to do so).

*This is a slightly enlarged version of an invited lecture delivered in Raleigh, North Carolina, at the 1985 SIAM Conference on Applied Linear Algebra, organized by R. Brualdi and H. Schneider.

**Bibliographic references are collected at the end of the paper, and are arranged by sections.

The problem was this. If we take a pair of nonsingular matrices X, Y over a field, then it is easy to see that they can be written as AB, BA for suitable matrices A, B if and only if they are similar. I tried to find out under what circumstances there exist matrices A, B, C such that

X = ABC,   Y = CBA.

The answer is: iff det X = det Y, or equivalently, iff det XY^{-1} = 1. My proof depended on Shoda's theorem, so it was valid in fields for which that theorem holds. Shoda himself had not found all of those fields, but R. C. Thompson, in his 1960 thesis, was able to characterize them completely. The result is now called the Shoda-R. C. Thompson theorem (see also Sourour 1986 for recent work in this area).
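The necessity half of this answer is immediate from the multiplicativity of the determinant, and easy to watch numerically. The sketch below (illustrative only, with randomly chosen factors) confirms that X = ABC and Y = CBA always share a determinant.

```python
import numpy as np

# Sanity check of the necessary condition in the ABC/CBA problem:
# X = ABC and Y = CBA always have equal determinant, since
# det(ABC) = det(A) det(B) det(C) = det(CBA).
rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((4, 4)) for _ in range(3))
X = A @ B @ C
Y = C @ B @ A
det_gap = abs(np.linalg.det(X) - np.linalg.det(Y))   # rounding error only
```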

3. Topological algebra. Pontrjagin's paper Über stetige algebraische Körper fell into my hands quite by accident. Let F be a topological field, i.e., a field that is also a topological space, so that

lim(a_n + b_n) = lim a_n + lim b_n;   lim(a_n b_n) = (lim a_n)(lim b_n);

lim b_n^{-1} = (lim b_n)^{-1} unless lim b_n = 0.

Then Pontrjagin proves that under certain topological assumptions, F must be isomorphic with either the reals, the complex numbers or the real quaternions. Under the same conditions, F must contain a subfield D isomorphic with the reals, such that D commutes with every element of F. Furthermore, F contains a finite set of elements such that every element of F is a linear combination of these elements with coefficients in D. Hence F is actually a division algebra over the reals, and so by a theorem of Frobenius it is isomorphic with the reals, the complex numbers or the real quaternions. Frobenius used matrix theory for the proof of his theorem. Later I myself found a topological proof for the theorem. Pontrjagin's proof partitioned the field into sets {λ}, {μ}, {ν} with the properties that λ^n → 0, μ^n is divergent, and ν^n has no divergent subsequence nor is 0 a point of accumulation (analogous to the interior, exterior and circumference, respectively, of the unit disk).

I became interested in studying such λ sets for real algebras. The matrices over the reals form such a set in their natural topology. John Todd and I wrote the paper 'Infinite powers of matrices' on this subject.

It is well known that matrices C whose powers approach zero play a big role in iteration processes. They have been characterized by P. Stein as being precisely those matrices C for which there exists a matrix X such that X - CXC* is positive definite. Later I showed that Stein's theorem is equivalent, by a Cayley transformation, to the Lyapunov theorem for stable matrices. See also section 6 below for more on Lyapunov's theorem.
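Stein's criterion can be watched numerically. The sketch below (an illustration, not Stein's proof) takes a real C with spectral radius 1/2, builds X as the series sum of C^k (C^T)^k, which solves the Stein equation X - CXC^T = I, and checks that X is positive definite while the powers of C die out.

```python
import numpy as np

# Numerical sketch of Stein's criterion: C^k -> 0 iff some X with
# X - C X C* positive definite exists. For real C with spectral radius < 1
# the series X = sum_k C^k (C^T)^k converges, satisfies X - C X C^T = I,
# and is positive definite, so it furnishes such an X.
C = np.array([[0.5, 0.3],
              [0.0, 0.4]])
rho = max(abs(np.linalg.eigvals(C)))          # spectral radius, 0.5 here

X = sum(np.linalg.matrix_power(C, k) @ np.linalg.matrix_power(C.T, k)
        for k in range(200))                  # truncated series, tail tiny
stein_residual = np.linalg.norm(X - C @ X @ C.T - np.eye(2))
min_eig_X = min(np.linalg.eigvalsh(X))        # > 0: positive definite
power_norm = np.linalg.norm(np.linalg.matrix_power(C, 50))  # powers -> 0
```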

4. Integral matrices. An integral matrix is one whose entries are rational integers. My first work on this subject was with John Todd, during the last weeks before the war broke out, in Great Britain. Integral matrices had made me return to my major subject of number theory in a big and unexpected way. It happened at Bryn Mawr College, where I held a scholarship during a year when Emmy Noether was also there; she was known as a champion of completely abstract approaches, even to number theory. But a strange thing happened. Another Fellow at the College, Grace Shover (now Quinn), introduced me to her thesis adviser MacDuffee, who was an expert in matrices. I learned more about MacDuffee's work, some of which can be traced back to Poincaré's studies on matrices that are attached to ideals in algebraic number fields. This led to my work on so-called ideal matrices. Another paper that was influential for me was that of Latimer and MacDuffee, because it provides an important link between algebraic number theory and integral matrices.

5. The theorem of McCoy. John Todd and I spent the first year of World War II, 1939-40, in Ireland on leave from our positions in the University of London. I had no assigned duties during the first term, and found the library at Queen's University very appealing. Among items that I had not seen before was the work of McCoy, in particular his well known characterization of pairs of matrices which can be transformed simultaneously to upper triangular form.

Precisely, here is McCoy's result. Let A, B be two n x n matrices, and suppose they have the following property: for every polynomial f in two variables, the matrix f(A, B) has for its eigenvalues the numbers f(λ, μ), where λ, μ are suitably ordered eigenvalues of A and B, respectively. McCoy showed that for A, B to have this property it is necessary and sufficient that a matrix T exists such that both T^{-1}AT and T^{-1}BT are upper triangular.
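A concrete special case makes the property vivid: a pair built on a common eigenbasis is simultaneously triangularizable (indeed diagonalizable), and the eigenvalue property can be checked directly. The matrices P, d1, d2 below are ad hoc illustrative choices, not from McCoy's paper.

```python
import numpy as np

# A = P D1 P^{-1} and B = P D2 P^{-1} share the eigenbasis P, so McCoy's
# property holds: the eigenvalues of f(A, B) are f(lambda_i, mu_i), with
# the eigenvalues matched by position in D1, D2.
P = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])          # invertible (det = 3)
d1 = np.array([1.0, 2.0, 3.0])
d2 = np.array([5.0, -1.0, 0.5])
Pinv = np.linalg.inv(P)
A = P @ np.diag(d1) @ Pinv
B = P @ np.diag(d2) @ Pinv

# Sample polynomial in two variables: f(x, y) = x*y + x^2
f_AB = A @ B + A @ A
expected = np.sort(d1 * d2 + d1 ** 2)
observed = np.sort(np.linalg.eigvals(f_AB).real)
gap = np.max(np.abs(observed - expected))
```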

Since this work was kind enough to jump off the library shelves into my hands, I started a correspondence with McCoy, and my interest in matrices was further strengthened. Drazin, Dungey, and Gruenberg [1951] gave a more elementary proof of McCoy's theorem, but my 1957 proof, via the radical in abstract algebras, takes only a few lines. See also Flanders' treatment.

6. The new numerical mathematics. In the early forties, numerical mathematics was in a rather primitive state. The standard textbook in Britain was Whittaker and Robinson, which concentrated on interpolation, the solution of equations by Newton's method, numerical quadrature and Fourier analysis. The solution of linear equations was studied, not for its own sake, but as an appendage of the theory of least squares approximation. Characteristic value problems and differential equations received little attention. A book with a more comprehensive point of view was that of Frazer, Duncan and Collar, but it was not as widely used.

In the decades that followed there was intensive study of the approximate solution of continuous problems such as differential equations, and many of these involved matrix methods in a natural way, either iteratively or directly. In particular, the whole subject of sparse matrices grew largely out of the study of characteristic value problems and the discretization of partial differential equations. I had several contacts with these developments, which I will now discuss. They brought me into matrix theory again, whereas previously, numerical mathematics had interested me only for special problems in number theory.

(A) The Geršgorin theorem

The war was on, WWII, and I was working in London at the National Physical Laboratory under R. A. Frazer in the flutter group. I was assigned to the study of flutter in supersonic aircraft, which leads to boundary value problems in hyperbolic partial differential equations. Hence this work did not immediately contribute to my matrix enthusiasm. However, I had read Frazer's article on how the flutter calculations were to be carried out. A large group of young girls, drafted into war work, did the calculation on hand-operated machines, following the instructions of Frazer and his assistants.

The relevance of these calculations to aircraft design is that in flight the interaction between the elastic forces in the airframe and the aerodynamic forces induces self-excited vibration which, above a certain speed, is unstable. This phenomenon is called flutter. It is, therefore, important to know what the flutter speed is before the aircraft is built and flown.

By a mere accident I had heard about the Geršgorin theorem, whose statement is given in a Zentralblatt review. It showed me how to reduce the amount of calculation, in a way that I will now try to explain.

The theorem itself states that the eigenvalues of an n x n matrix A with complex entries lie in the union of the closed disks ('Geršgorin disks')

|z - a_ii| ≤ Σ_{k≠i} |a_ik|   (i = 1, 2, ..., n)

in the complex z plane. We will call the union of these disks the Geršgorin set of the matrix A, and will denote it by Γ.
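The statement is easy to test numerically; in the sketch below (with an arbitrary sample matrix) every computed eigenvalue indeed lands in at least one of the disks.

```python
import numpy as np

# Direct check of the Gershgorin theorem: every eigenvalue lies in at
# least one disk centered at a diagonal entry a_ii with radius equal to
# the sum of the absolute off-diagonal entries of row i.
A = np.array([[10.0, 1.0, 0.5],
              [0.2, -6.0, 2.0],
              [1.0, 1.0, 3.0]])
centers = np.diag(A)
radii = np.sum(np.abs(A), axis=1) - np.abs(centers)
eigs = np.linalg.eigvals(A)
in_some_disk = all(any(abs(lam - c) <= r + 1e-9 for c, r in zip(centers, radii))
                   for lam in eigs)
```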

In the case that I was working on, the question came down to showing that a certain 6 x 6 matrix of the form -ω²A + iωB + C, where ω, the flutter parameter, is taken as 1 in the example (see FIG. 1), had no real eigenvalue to the left of the small circles. The matrix entries were in the neighborhood of 20 or so, but in itself that told us nothing about the whereabouts of the eigenvalues.

However, it turned out that the Geršgorin disks looked like the ones shown in FIG. 1.

FIG. 1. The Geršgorin disks (circles for the sixth-order case; real axis from -23 to 0).


This meant that we were lucky indeed, because the Geršgorin theorem can be applied as follows:

(i) If Γ, the Geršgorin set, falls into two connected components, one generated by r of the disks and the other by s of them, then there are r eigenvalues in the first component and s in the other.

(ii) A similarity transformation S^{-1}AS does not change the eigenvalues of A, but it may well change the estimates that are given by Geršgorin's theorem. Hence, by a careful choice of S, we may get sharper estimates. The easiest kind of S to use, it turns out, is one that agrees with the identity matrix except in one diagonal position, say s_ii = r, where r ≠ 0. This similarity will multiply the radius of the ith Geršgorin disk by 1/r while leaving all the centers of the disks unchanged.

(iii) The intersection of two Geršgorin sets, the original one and the one obtained after the similarity transformation, is again a region in which all of the eigenvalues must lie.

In FIG. 2, for example, it turned out that the larger circle could be replaced by the small circle far above the x-axis, by a similarity transformation of the diagonal type discussed above.

FIG. 2. Similarity transformations shrink the regions (circles for the sixth-order case; real axis from -23 to 0).

The small circle cannot be shrunk by the same method into a point unless the diagonal element is itself an eigenvalue. If we vary the diagonal similarities then at a certain point the other circles will overlap the isolated one and it will no longer be isolated. I raised the question of describing when this happens. Henrici, with a followup by F. Gaines, had the first contribution, in a special case. John Todd and Richard Varga gave different solutions of this problem. The school of A. Brauer at Chapel Hill worked on related questions.
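The diagonal-similarity refinement (ii) is easy to demonstrate; in the sketch below (sample matrix and scale factor r chosen for illustration, not taken from the flutter example) conjugation leaves the eigenvalues and disk centers fixed, shrinks the first disk by the factor 1/r, and enlarges the others.

```python
import numpy as np

def gershgorin(M):
    """Return the Gershgorin disk centers and radii of M."""
    c = np.diag(M)
    return c, np.sum(np.abs(M), axis=1) - np.abs(c)

A = np.array([[20.0, 0.5, 0.5],
              [0.3, 2.0, 1.0],
              [0.2, 1.0, -3.0]])
r = 10.0
S = np.diag([r, 1.0, 1.0])          # identity except s_00 = r
B = np.linalg.inv(S) @ A @ S        # same eigenvalues as A

c0, rad0 = gershgorin(A)            # disk 0: center 20, radius 1.0
c1, rad1 = gershgorin(B)            # disk 0: center 20, radius 0.1
same_centers = np.allclose(c0, c1)
shrunk = rad1[0] < rad0[0]          # disk 0 shrank (others grew)
eig_same = np.allclose(np.sort_complex(np.linalg.eigvals(A)),
                       np.sort_complex(np.linalg.eigvals(B)))
```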

Once again, I didn't ask to be assigned to matrix problems. They found me.


(B) The Stein-Rosenberg theorem

Mordell asked me to look at a manuscript of Stein and Rosenberg before it went out to a referee. I liked it, and it has since become a classic. It has been particularly studied by François Robert in France, and is discussed in detail in Varga's 1962 book. The Perron-Frobenius theorem, which concerns the eigenvalues of matrices with nonnegative entries, plays a big part in this paper.

The Stein-Rosenberg theorem itself concerns iteration. The two classical iterative methods for the solution of linear equations Ax = b can be described as follows. Assume A = I - L - U, where L, U are strictly lower and upper triangular and I is the unit matrix. Make a guess x^(0). Then, in the Jacobi method, we improve the guess by means of

x^(1) = (L + U)x^(0) + b.

Anyone who tries this soon discovers that it works better if one replaces the components of x^(0) as fast as the improved ones are calculated, instead of continuing to use the old values until all new ones have been found. Formally, if we do that, the iteration process is defined by

x^(1) = (I - L)^{-1}(Ux^(0) + b),

and the procedure is called the Gauss-Seidel method. It seems plausible that the Gauss-Seidel method, since it uses improved estimates of the components of the unknown vector, would always converge faster than the Jacobi method. That is not always the case. However, the Stein-Rosenberg theorem asserts that if L and U are nonnegative, the two processes converge or diverge together, and whichever happens, the Gauss-Seidel process is at least as fast.
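The asymptotic rates of the two methods are the spectral radii of their iteration matrices, L + U and (I - L)^{-1}U, and the Stein-Rosenberg ordering can be seen on a small example (the test matrix below is an ad hoc choice with nonnegative L, U):

```python
import numpy as np

# Compare the Jacobi and Gauss-Seidel iteration matrices for a sample
# A = I - L - U with L, U nonnegative. Stein-Rosenberg predicts that the
# spectral radii satisfy 0 < rho_GS < rho_Jacobi < 1 here, so both
# converge and Gauss-Seidel converges faster.
A = np.array([[1.0, -0.2, -0.1],
              [-0.3, 1.0, -0.2],
              [-0.1, -0.2, 1.0]])
I = np.eye(3)
L = -np.tril(A, -1)            # nonnegative strictly lower part
U = -np.triu(A, 1)             # nonnegative strictly upper part

rho_jacobi = max(abs(np.linalg.eigvals(L + U)))
rho_gs = max(abs(np.linalg.eigvals(np.linalg.inv(I - L) @ U)))
```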

(C) The Lyapunov theorem

The Lyapunov theorem, too, is of great interest in flutter work. It is a criterion for the stability of an n x n complex matrix, where a stable matrix is one all of whose eigenvalues have negative real parts. The economist Arrow had used this theorem for a measure of the stability of an economic system.

The Lyapunov theorem gives the following criterion for a complex matrix A to be stable: there should exist a positive definite matrix H such that AH + HA* = -I. This criterion is useful for theoretical purposes though it is not well suited to computation in particular instances.
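For a small real matrix the Lyapunov equation can nevertheless be solved directly by vectorizing it with Kronecker products; the sketch below (illustrative, not a production method) confirms that a stable A yields a positive definite H with AH + HA^T = -I.

```python
import numpy as np

# Lyapunov's criterion, real case: A stable implies A H + H A^T = -I has
# a unique, positive definite solution H. With row-major vectorization,
# vec(A H) = kron(A, I) vec(H) and vec(H A^T) = kron(I, A) vec(H), so the
# equation becomes a plain linear system.
A = np.array([[-2.0, 1.0, 0.0],
              [0.0, -1.0, 0.5],
              [0.3, 0.0, -3.0]])
stable = all(np.linalg.eigvals(A).real < 0)

n = A.shape[0]
I = np.eye(n)
M = np.kron(A, I) + np.kron(I, A)
H = np.linalg.solve(M, (-I).flatten()).reshape(n, n)
H = (H + H.T) / 2                               # symmetrize roundoff
min_eig_H = min(np.linalg.eigvalsh(H))          # > 0: positive definite
residual = np.linalg.norm(A @ H + H @ A.T + I)  # equation satisfied
```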

(D) The Hilbert matrix

The Hilbert matrix A is defined by

a_ij = 1/(i + j)   (i, j = 1, 2, ..., n).

Its inverse is integral. This matrix, too, made a surprise entry into my research interests. In late 1947, after we had settled down at the U.S. National Bureau of Standards, I received a letter from Professor G. Temple in London. He wrote that the Oscillation Subcommittee of the British Aeronautical Research Council was interested in the Hilbert matrix, and would appreciate comments from me.

In due course I wrote a paper explaining the slow convergence of the largest eigenvalue λ_n of A to its limiting value π. John Todd then studied the condition of A, and he and others used it as an example of an ill-conditioned matrix. Its determinant, for example, is nonzero, but extremely small. Later H. S. Wilf and N. G. de Bruijn described the behavior of λ_n more precisely. While I had proved that

λ_n = π{1 + O(1/log n)},

they showed

λ_n = π - (π^5/16)(log n)^{-2} + O((log log n)(log n)^{-3}).

The literature on A is extensive and interest in A continues (see Wilf, Ergebnisse, volume 52, 1970).
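Both phenomena, the slow creep of the largest eigenvalue toward π and the ill-conditioning, show up immediately in a numerical experiment (using the matrix exactly as defined above):

```python
import numpy as np

# Hilbert segments a_ij = 1/(i + j), i, j = 1..n, as in the text:
# the largest eigenvalue increases with n (Cauchy interlacing) but
# approaches pi only logarithmically slowly, and even the 10 x 10
# segment is badly ill-conditioned.
def hilbert_segment(n):
    i = np.arange(1, n + 1)
    return 1.0 / (i[:, None] + i[None, :])

lam = {n: np.linalg.eigvalsh(hilbert_segment(n))[-1] for n in (50, 200)}
gap50, gap200 = np.pi - lam[50], np.pi - lam[200]   # still far from 0
cond10 = np.linalg.cond(hilbert_segment(10))        # astronomically large
```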

7. The Perron-Frobenius theorem and combinatorial matrix theory. The Perron- Frobenius theorem concerns the eigenvalues of matrices that are irreducible and have nonnegative entries. Among its conclusions, for instance, is the fact that the matrix must have a positive real eigenvalue that is not exceeded, in absolute value, by any other eigenvalue.

It received a special lift through the proof by Wielandt (1950), cf. Gantmacher, Matrix Theory II. Graph theory plays a role there; in fact one can define irreducibility of a matrix by connectedness of a certain graph that is determined by the positions of the nonzero matrix entries.

I studied N x N incidence matrices A of projective planes, and observed that A^{N-1} always has strictly positive entries. I further raised the question of determining the exponent of A, i.e., the least power of A that has positive entries only. A. L. Dulmage and N. S. Mendelsohn showed that A^4 > 0 and that permutation matrices P, Q exist such that (PAQ)^3 has some zero entries. These authors used purely combinatorial arguments rather than the Perron-Frobenius theorem. For other combinatorial matrix work I refer to, e.g., Brualdi, Schneider, Engel, M. Hall, Ryser, etc.
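The exponent of a nonnegative matrix is easy to compute by brute force for small examples; the sketch below uses a toy 0/1 primitive matrix (a directed 3-cycle with one chord, not a projective-plane incidence matrix) to illustrate the notion.

```python
import numpy as np

# Exponent of a nonnegative matrix: the least k with A^k entrywise
# positive. We track only the 0/1 positivity pattern of the powers,
# which avoids integer overflow for larger examples.
def exponent(A, max_k=50):
    P = np.eye(A.shape[0], dtype=int)
    for k in range(1, max_k + 1):
        P = (P @ A > 0).astype(int)   # pattern of A^k
        if P.min() > 0:
            return k
    return None                        # not primitive (or exponent > max_k)

A = np.array([[0, 1, 0],               # 1 -> 2
              [0, 0, 1],               # 2 -> 3
              [1, 1, 0]])              # 3 -> 1, 3 -> 2 (chord)
k = exponent(A)                        # A^5 > 0, A^4 still has zeros
```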

8. Connections with topological algebra. In the late thirties I suddenly realized that the Cauchy-Riemann equations, and the fact that they imply the Laplace equation, can be expressed via matrix theory and can be connected with the fact that the field of complex numbers has no zero divisors. I then studied the values of n for which generalized 'Cauchy-Riemann' equations for n functions u_i in n real variables x_j lead to the n-dimensional Laplace equation. I used algebras over the reals which have no zero divisors.

At that time the fact that such algebras must have dimensions 1, 2, 4, 8 was not yet completely established, although great progress had been made by topological methods, in particular by E. Stiefel. However, Stiefel reproved my result about dimensions 1, 2, 4, 8, and for complex variables as well, without topology, by using representation theory of algebras and matrix theory.

9. M. Marden's book on geometry of polynomials. Marden's book, which appeared as Mathematical Surveys, No. 3, AMS, 1966, especially section 31, contains applications of matrix theory to the study of the geometry of the zeros of polynomials; e.g., Geršgorin's theorem applied to the companion matrix of a polynomial gives information on the location of its zeros, and the theorem of Perron-Frobenius does likewise (cf. H. S. Wilf (1961)).
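The companion-matrix trick is a two-line computation; the sketch below applies the Geršgorin row disks to the companion matrix of a sample cubic whose zeros are known.

```python
import numpy as np

# Zeros of p(z) = z^3 - 2z^2 - 5z + 6 (which are 1, 3, -2) are the
# eigenvalues of the companion matrix, so the Gershgorin disks of that
# matrix bound the zeros without any root-finding.
a = [6.0, -5.0, -2.0]                     # p(z) = z^3 + a2 z^2 + a1 z + a0
C = np.zeros((3, 3))
C[1:, :-1] = np.eye(2)                    # ones on the subdiagonal
C[:, -1] = [-a[0], -a[1], -a[2]]          # last column: -a0, -a1, -a2
zeros = np.linalg.eigvals(C)

centers = np.diag(C)
radii = np.sum(np.abs(C), axis=1) - np.abs(centers)
bound = max(np.abs(centers) + radii)      # every zero satisfies |z| <= bound
in_bound = all(abs(z) <= bound + 1e-9 for z in zeros)
```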


10. The Schur matrix. The Schur matrix is the matrix (e^{2πimn/q}) for 1 ≤ m, n ≤ q, and it has generated many problems of great interest. Its trace, for example, is obviously a Gaussian sum. The eigenvalues of this matrix were obtained by Carlitz, and the eigenvectors by P. Morton, answering a question of Hugh Montgomery. Landau's famous book on number theory had introduced me to this matrix, and it is a valuable bridge between matrices and the theory of numbers.

If we take the Schur matrix in the form

S = S_n = (ω^{(j-1)(k-1)}),   ω = exp(2πi/n),

then we find it also in the theory of the Fast Fourier Transform. If v is a vector, then Sv is essentially its discrete Fourier transform. Theilheimer showed that the Fast Fourier Transform, which speeds up the calculation quite a bit, amounts essentially to a factorization of S. In block form, with ω = exp(2πi/(2n)),

S_{2n} = [ T  DT ; T  -DT ] P,

where P is a permutation matrix, T = (ω^{2(j-1)(k-1)}) is the Schur matrix of order n formed with ω^2, and D = diag(1, ω, ..., ω^{n-1}).
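The DFT reading of S is easy to verify against a library FFT, minding sign conventions (NumPy's fft uses the exponent with a minus sign, its ifft the plus sign together with a 1/n factor):

```python
import numpy as np

# With 0-based indices, S = (omega^(jk)), omega = exp(2*pi*i/n), applied
# to a vector v gives the discrete Fourier transform of v; under numpy's
# conventions this equals n * ifft(v).
n = 8
idx = np.arange(n)
S = np.exp(2j * np.pi * np.outer(idx, idx) / n)
v = np.arange(n, dtype=float)
fft_gap = np.max(np.abs(S @ v - n * np.fft.ifft(v)))

# The trace of S is a quadratic Gauss sum; for n divisible by 4 its
# classical evaluation is (1 + i) * sqrt(n).
gauss_gap = abs(np.trace(S) - (1 + 1j) * np.sqrt(n))
```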

11. Cramped matrices. A matrix theorem that fascinated me at a very early time came from the book by Speiser on finite group theory. It is connected with the fact that every finite group is isomorphic to a group of unitary matrices. Let A, B be such matrices, and put C = ABA^{-1}B^{-1}. Let the eigenvalues of A lie on an arc less than a semicircle of the unit circle. If A and C commute, then C = I.

Zassenhaus wrote a paper on this subject, as did M. Marcus and R. C. Thompson, and so had I. Papers in functional analysis were stimulated too, and Berberian introduced the term 'cramped' for such matrices A. The theorem itself goes back to C. Jordan.

12. Other areas. While I was still in Britain, the idea had come to me that while there are a number of inequalities tying the eigenvalues of A, B and A + B together, not much had been done on explicit relationships between them. With these questions in mind I became very enthusiastic when Mark Kac introduced me to what I later called the L-property, L for linear, because the eigenvalues of λA + μB are then λα_i + μβ_i with α_i, β_i suitably ordered eigenvalues of A, B. Motzkin joined me in this work, and we proved among other things the following theorem: if for all λ and μ the matrix λA + μB is diagonalizable, then A and B commute. This result was reproved by Tosio Kato, using perturbation theory, and was later generalized by S. Friedland. Kaplansky generalized the L-property to operators.
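Property L can be watched in the commuting case, where it always holds; the matrices below are an ad hoc pair sharing an eigenbasis, chosen only for illustration.

```python
import numpy as np

# Property L in the commuting case: A = P Da P^{-1}, B = P Db P^{-1}
# share the eigenbasis P, so the eigenvalues of lam*A + mu*B are
# lam*a_i + mu*b_i, matched by position.
P = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])          # invertible (det = 2)
a = np.array([1.0, -2.0, 4.0])
b = np.array([3.0, 0.5, -1.0])
Pinv = np.linalg.inv(P)
A = P @ np.diag(a) @ Pinv
B = P @ np.diag(b) @ Pinv

lam, mu = 2.0, -3.0
observed = np.sort(np.linalg.eigvals(lam * A + mu * B).real)
expected = np.sort(lam * a + mu * b)
gap = np.max(np.abs(observed - expected))
```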

After the war, in 1947, we came to the USA when John Todd was invited to work at the National Bureau of Standards on the uses of high speed computing machines, and soon after my arrival I was given employment as well. At that time I picked up the torch. Matrix theory had become a subject for me. Matrices were no longer just used; they were algebraic structures like rings, groups, lattices....

At first I collected relevant references and they went into my chapter in John Todd's Survey of Numerical Analysis. But I was also working on other projects linked to matrices, both algebraically and arithmetically. Stimulated by McCoy's work, I became interested in pencils of matrices and in matrix algebras. The Caltech theses by F. Gaines and by H. Shapiro generalized McCoy's idea of simultaneous triangularization to simultaneous block triangularization, and further contributions were made by R. C. Thompson. All of this led to the study of generalized commutativity, commutators, higher order commutators, and to my idea of polynomials in the commutator operator.

While still at NBS, in 1951, I was asked to organize a conference in numerical analysis and, of course, I chose matrix theory as the theme. It was perhaps the first matrix conference ever. The Proceedings of this conference contain Givens' rotation method for determining the eigenvalues and eigenvectors of real, symmetric matrices.

In my position at NBS I encouraged people to turn to matrix theory, and I thought of bounds for eigenvalues as my major project there. The school of W. V. Parker at Auburn also worked on this subject at the time. I had a master's student, Marion Walter, at NYU, who wrote a thesis on limits for the characteristic roots of a matrix, and I gave a course on matrix theory at NYU in 1955. I also looked after a group of highly talented postgraduate students and high ranking visitors at NBS. Strangely enough, they all became interested in matrix theory.

Matrix theory changed my life quite a lot. During the war and my civil service work I had lost my favorite subject to a large extent, though not entirely. However, integral matrices brought it back to me quite unexpectedly.

During our 1947 visit to Princeton I met Chowla. At that time he and two other mathematicians talked to me about similarity classes of integral matrices with the same characteristic polynomial. This reminded me of the theorem of Latimer and MacDuffee; Chowla urged me to find a new proof for it, and I managed to do so.

Since then integral matrices have played a major role for me, and I helped to get a number of other people, like Zassenhaus, Dade, M. Newman, Estes, and Guralnick, interested in them. We found simpler ways to prove many classical number-theoretic theorems by methods that used integral matrices. I have given an exposition of the connection between algebraic number theory and integral matrices in an appendix to a book by H. Cohn on algebraic number theory.

Some advice. When you observe an interesting property of numbers, ask if perhaps you are not seeing, in the 1 x 1 case, an interesting property of matrices. Think of GL(n, F) or SL(n, F), GL(n, Z) or SL(n, Z).

When you have a pair of interesting matrices study the pencil that they generate, or even the algebra.

When the determinant of a certain matrix turns out to be important, ask about the matrix as a whole, for instance as in the case of the discriminant matrix, as suggested by the discriminant of an algebraic number field.

When a polynomial in one variable interests you, ask about the matrices of which it is the characteristic polynomial.

When people look down on matrices, remind them of great mathematicians such as Frobenius, Schur, C. L. Siegel, Ostrowski, Motzkin, Kac, etc., who made important contributions to the subject.

I am proud to have been a torchbearer for matrix theory, and I am happy to see that there are many others to whom the torch can be passed.


BIBLIOGRAPHY

Section 1

1. J. L. Brenner, A bound for a determinant with dominant main diagonal, Proc. Amer. Math. Soc., 5 (1954) 631-634.
2. ——, Neuer Beweis eines Satzes von Taussky und Geiringer, Arch. Math., 7 (1956) 274-275.
3. S. Geršgorin, Über die Abgrenzung der Eigenwerte einer Matrix, Izv. Akad. Nauk SSSR (1931) 749-754.
4. H. Minkowski, Zur Theorie der Einheiten in den algebraischen Zahlkörpern, Gött. Nachr. (1900) 90-93.
5. A. Ostrowski, Über die Determinanten mit überwiegender Hauptdiagonale, Comment. Math. Helv., 10 (1937) 69-96.
6. G. B. Price, Bounds for determinants with dominant principal diagonal, Proc. Amer. Math. Soc., 2 (1951) 497-502.
7. H. Schneider, An inequality for latent roots of a matrix applied to determinants with dominant main diagonal, J. London Math. Soc., 28 (1953) 8-20.
8. O. Taussky, A recurring theorem on determinants, Amer. Math. Monthly, 56 (1949) 673-676.

Section 2

1. J. L. Brenner and J. S. Lim, The matrix equations A = XYZ and B = ZYX and related ones, Bull. Amer. Math. Soc., 17 (1974) 179-183.
2. D. R. Estes, Scalar matrices as multiplicative commutators having prescribed determinants for the variables, Linear and Multilinear Algebra, 8 (1980) 213-217.
3. K. Fan, Some remarks on commutators of matrices, Arch. Math., 5 (1954) 102-107.
4. H. Flanders, Elementary divisors of AB and BA, Proc. Amer. Math. Soc., 2 (1951) 871-874.
5. K. Shoda, Einige Sätze über Matrizen, Jap. J. Math., 13 (1937) 361-365.
6. O. Taussky, Generalized commutators of matrices and permutations of factors in a product of three matrices, Studies in Mathematics and Mechanics presented to Richard von Mises, Academic Press, NY, 1954.
7. R. C. Thompson, Commutators in the special and general linear groups, Trans. Amer. Math. Soc., 101 (1961) 16-33.
8. ——, On matrix commutators, Portugal. Math., 21 (1962) 143-153.
9. ——, Commutators of matrices with prescribed determinant, Canad. J. Math., 20 (1968) 203-221.
10. ——, Commutators of matrices with coefficients from the field of two elements, Duke Math. J., 29 (1962) 367-373.

Section 3

1. L. Pontrjagin, Über stetige algebraische Körper, Ann. Math., 33 (1932) 163-174.
2. O. Taussky and John Todd, Infinite powers of matrices, J. London Math. Soc., 17 (1942) 147-151.
3. O. Taussky, Matrices C with C^n → 0, J. Algebra, 1 (1964) 5-10.

Section 4

1. O. Taussky and John Todd, Matrices with finite period, Proc. Edinburgh Math. Soc., 6 (1939) 128-134.
2. O. Taussky, On a theorem of Latimer and MacDuffee, Canad. J. Math., 1 (1949) 300-302.
3. ——, Ideal matrices, I., Archiv d. Math., 13 (1962) 275-282.

Section 5

1. M. P. Drazin, J. W. Dungey, and K. W. Gruenberg, Some theorems on commutative matrices, J. London Math. Soc., 26 (1951) 221-228.
2. H. Flanders, Methods of proof in linear algebra, Amer. Math. Monthly, 63 (1956) 1-15.
3. R. M. Guralnick, Triangularization of sets of matrices, Linear and Multilinear Alg., 9 (1980) 133-140.
4. T. Laffey, Simultaneous triangularization of matrices, J. Algebra, 44 (1977) 351-357.
5. N. H. McCoy, On the characteristic roots of matrix polynomials, Bull. Amer. Math. Soc., 42 (1936) 592-600.

This content downloaded from 132.178.2.65 on Tue, 18 Mar 2014 12:22:42 PMAll use subject to JSTOR Terms and Conditions

1988] HOW I BECAME A TORCHBEARER FOR MATRIX THEORY 811

6. O. Taussky, Commutativity in finite matrices, Amer. Math. Monthly, 64 (1957) 229-235.
7. ____, Sets of complex matrices which can be transformed to triangular forms, Coll. Math. Soc. János Bolyai, 22 (1977) 579-590.

Section 6

1. J. L. Brenner, Geršgorin theorems by Householder's proof, Bull. Amer. Math. Soc., 74 (1968) 625-627.
2. N. G. de Bruijn and H. S. Wilf, On Hilbert's inequality in n dimensions, Bull. Amer. Math. Soc., 68 (1962) 70-73.
3. D. Carlson and H. Schneider, Inertia theorems for matrices, the semidefinite case, Bull. Amer. Math. Soc., 68 (1962) 479-483.
4. M. Fiedler, Matrix inequalities, Numer. Math., 9 (1966) 109-119.
5. R. A. Frazer, W. J. Duncan, and A. R. Collar, Elementary Matrices and Some Applications to Dynamics and Differential Equations, Cambridge University Press, 1938.
6. W. Givens, Elementary divisors and some properties of the Lyapunov mapping X → AX + XA*, Argonne National Laboratory, ANL-6456.
7. ____, Numerical computation of the characteristic values of a real symmetric matrix, Oak Ridge National Laboratory, ORNL-1574, 1954.
8. A. J. Hoffman and H. Wielandt, The variation of the spectrum of a normal matrix, Duke Math. J., 20 (1953) 37-39.
9. M. A. Lyapunov, Problème général de la stabilité du mouvement, Ann. Math. Studies, 17, Princeton, 1949.
10. V. B. Lidskii, On the characteristic roots of a sum and a product of symmetric matrices, Dokl. Akad. Nauk SSSR, 75 (1950) 769-772.
11. F. Robert, Autour du théorème de Stein-Rosenberg, Numer. Math., 27 (1976) 133-141.
12. H. Schneider and A. Ostrowski, Some theorems on the inertia of general matrices, J. Math. Anal. and Appl., 4 (1962) 72-84.
13. P. Stein and R. L. Rosenberg, On the solution of linear simultaneous equations by iteration, J. London Math. Soc., 23 (1948) 111-118.
14. O. Taussky, A remark concerning the characteristic roots of finite segments of the Hilbert matrix, Quart. J. Math. Oxford, 20 (1949) 82-83.
15. O. Taussky, A remark on a theorem of Lyapunov, J. Math. Anal. and Appl., 2 (1961) 105-107.
16. O. Taussky, A method for obtaining bounds for characteristic roots of matrices with applications to flutter calculations, Aeron. Res. Council of Great Britain, Report 10.508 (1947).
17. O. Taussky, Stable matrices, Programmation en mathématiques numériques, CNRS, No. 165, Besançon (1968) 75-88.
18. John Todd, On smallest isolated Gerschgorin disks for eigenvalues, Numer. Math., 7 (1965) 171-175.
19. John Todd, Survey of Numerical Analysis, McGraw-Hill, New York, 1962.
20. John Todd, Computational problems concerning the Hilbert matrix, J. Res. Nat. Bur. Standards, 65 (1961) 19-22.
21. R. S. Varga, Matrix Iterative Analysis, Prentice Hall, 1962.
22. R. S. Varga, On smallest isolated Gerschgorin disks for eigenvalues, Numer. Math., 6 (1964) 366-376.
23. H. Wielandt, An extremum property of sums of eigenvalues, Proc. Amer. Math. Soc., 6 (1955) 106-110.
24. H. S. Wilf, Finite sections of some classical inequalities, Ergeb. Mathematik, 52 (1970).
25. H. S. Wilf, On finite sections of the classical inequalities, Nederl. Akad. Wetensch. = Indag. Math., 24 (1962) 340-342.

Section 7

1. R. A. Brualdi, S. V. Parter, and H. Schneider, The diagonal equivalence of a nonnegative matrix to a stochastic matrix, J. Math. Anal. Appl., 16 (1966) 31-50.
2. A. L. Dulmage and N. S. Mendelsohn, The exponents of incidence matrices, Duke Math. J., 31 (1964) 575-584.
3. G. M. Engel and H. Schneider, The Hadamard-Fischer inequality for a class of matrices defined by eigenvalue monotonicity, Linear and Multilinear Algebra, 4 (1976) 155-176.



4. M. Fiedler and V. Pták, On matrices with non-positive off-diagonal elements and positive principal minors, Czechoslovak Math. J., 12 (1962) 382-400.
5. F. R. Gantmacher, The Theory of Matrices, vol. II, 1959, translated by K. A. Hirsch, Chelsea, New York, 1960.
6. M. Hall, Finite projective planes, Amer. Math. Monthly, 62, no. 7, Part II (1955) 18-24.
7. J. C. Holladay and R. S. Varga, On powers of non-negative matrices, Proc. Amer. Math. Soc., 9 (1958) 631-634.
8. V. Pták, On a combinatorial theorem and its application to non-negative matrices, Czechoslovak Math. J., 83 (1958) 487-495.
9. H. Ryser, Geometries and incidence matrices, Amer. Math. Monthly, 62, no. 7, Part II (1955) 25-31.
10. H. Wielandt, Unzerlegbare, nicht negative Matrizen, Math. Z., 52 (1950) 642-648.

Section 8

1. E. Stiefel, Über Richtungsfelder in den projektiven Räumen und einen Satz aus der reellen Algebra, Comment. Math. Helv., 13 (1941) 209-239 (Satz IIc).
2. ____, On Cauchy-Riemann equations in higher dimensions, J. Res. Nat. Bur. Standards, 48 (1952) 395-398.

Section 9

1. M. Marden, Geometry of the Zeros of Polynomials, Math. Surveys 3, Amer. Math. Soc., 1966.
2. H. S. Wilf, Perron-Frobenius theory and the zeros of polynomials, Proc. Amer. Math. Soc., 12 (1961) 247-250.

Section 10

1. L. Carlitz, Some cyclotomic matrices, Acta Arithm., 5 (1959) 293-308.
2. P. Morton, On the eigenvectors of Schur's matrix, J. Number Theory, 12 (1980) 122-127.
3. I. Schur, Über die Gaußschen Summen, Nachr. Kgl. Ges. Göttingen Math. (1921) 147-153.
4. F. Theilheimer, A matrix version of the Fast Fourier Transform, IEEE Trans., AU-17 (1969) 158-161.

Section 11

1. S. K. Berberian, A note on operators unitarily equivalent to their adjoints, J. London Math. Soc., 37 (1962) 403-404.
2. C. Jordan, Mémoire sur les équations différentielles linéaires à intégrale algébrique, J. für die Reine u. Angew. Math., 84 (1878) 89-215.
3. M. Marcus and R. C. Thompson, On a classical commutator result, J. Math. Mech., 16 (1966) 583-588.
4. C. R. Putnam, Commutation properties of Hilbert space operators and related topics, Ergeb. Math., 36 (1967).
5. O. Taussky, Commutators of unitary matrices which commute with one factor, J. Math. Mech., 10 (1961) 175-178.
6. H. Zassenhaus, On a paper by O. Taussky, J. Math. Mech., 10 (1961) 179-180.
