
MATHEMATICAL FOUNDATIONS

by T. A. Heppenheimer


"We hear within us the perpetual call: There is the problem; seek its solution; you can find it by pure reason, for in mathematics there is no ignorabimus [we shall not know]." With these words David Hilbert, the leading mathematician of his era, greeted the Second International Congress of Mathematicians in Paris. It was the eighth of August, 1900.

Hilbert's statement summarized the widely held hope and faith that mathematics might deal conclusively with all problems that came within its ken. Any such problem would be solved, or the impossibility or nonexistence of a solution would be clearly demonstrated. Any mathematical statement set forth to be proved would be either affirmed or refuted beyond all doubt, entirely through the use of logic.

This faith had served mathematicians well during the nineteenth century. They had proved theorems of exceptional depth and been rewarded with profound insights. They had established methods with extraordinary power for the solution of mathematical problems. They naturally expected this trend to continue into the new century, with new insights and discoveries confirming the expectation that mathematical reasoning would sweep all before it.

In the twentieth century's second decade this hope received a powerful boost from two British logicians. A leading problem of the day was to establish foundations for mathematics. These were to take the form of definitions and axioms (fundamental statements) that would suffice to prove all known or existing mathematical theorems (statements known to be valid).

These logicians, Bertrand Russell and Alfred North Whitehead, knew that the building of foundations required overcoming logical paradoxes similar to the one in this well-known conundrum: In a certain village lives a barber who shaves those men, and only those men, who do not shave themselves. The barber is a man. Does he shave himself?

One can reason that if the barber shaves himself, then he does not shave himself; and if he does not shave himself, then he shaves himself. Similarly, "Russell's paradox" asserts that if a certain mathematical statement is assumed to be true, then it must be false, and vice versa.
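
In modern set-theoretic notation (a standard rendering, supplied here for reference rather than taken from the article), Russell's paradox concerns the set of all sets that are not members of themselves; asking whether that set belongs to itself reproduces the barber's dilemma exactly:

```latex
R = \{\, x \mid x \notin x \,\}
\qquad\Longrightarrow\qquad
R \in R \iff R \notin R
```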

To overcome this difficulty, Russell and Whitehead introduced new technical concepts (known as the theory of types and the axiom of reducibility) and, on that basis, they wrote their mammoth Principia Mathematica, which set forth foundations for mathematics. Some logicians rejected the axiom of reducibility, and others raised questions about the theory of types. Nevertheless, there was wide agreement that the Principia represented an extraordinary advance in the search for foundations, and might even be the solution to the problem.

No foundations

Then, in 1931, a young logician at the University of Vienna named Kurt Goedel showed that the hopes of Russell, Whitehead, Hilbert, and their colleagues were not to be realized. In the first of two theorems, Goedel demonstrated that if any consistent system of mathematics or logic is rich enough to encompass arithmetic, then statements can be made within that system that its axioms can neither prove nor disprove. Within the system of the Principia, for instance, it would be possible to set forth statements couched in that work's terminology and definitions, and dealing with its subject matter, that could neither be proved nor disproved using that work's axioms.

This meant the Principia could not stand as the desired foundations for mathematics. Nor could one repair its deficiencies by adding more definitions and axioms, for, by Goedel's theorem, the resulting system, even though richer, would also admit of statements that could not formally be proved or disproved within that system. They would be undecidable, in the language of mathematics. The very hope of establishing foundations, ironically, was without foundation.

There was more. Goedel also examined the assertion that the system of the Principia was consistent. It would be consistent if it avoided giving rise to mutually contradictory conclusions. Given a particular statement, then, the propositions "This statement is true" and "This statement is false" could not both be proved. But Goedel showed that while the contention "The Principia is consistent" could be set forth in the language of the Principia, it could neither be proved nor disproved within the confines of that system.

More generally, in his second theorem, Goedel showed that no such system can prove its own consistency, even if it is indeed consistent. In the words of Donald Martin of the University of California at Los Angeles, "If you want to establish consistency, you need the full strength of the theory—and even more." It would take a theory stronger than that of the Principia to prove the Principia's consistency. Then this stronger theory would need yet a stronger one to prove that it itself was consistent, and so forth.

New ground

Anil Nerode of Cornell University describes Goedel's conclusions as "the paper that everyone read, because it was the most signal paper in logic in 2,000 years." Goedel's work broke with an understanding of logic and mathematics that dated to the work of Aristotle and Euclid. "Euclid alone has looked on Beauty bare," wrote the poet Edna St. Vincent Millay, expressing a deeply rooted concept in Western thought that mathematics, as it developed according to Euclid's methods, encompasses universal beauty, universal truth.

Euclid set forth a set of basic terms: point, line, angle, and the like. He also gave a small list of axioms, such as "A straight line can be drawn between any two points" and "Things that are equal to the same thing are equal to each other." Then, using rules of inference that Aristotle discussed in his own writings, Euclid filled 13 books with theorems proved as consequences of his definitions and axioms. The work of subsequent centuries showed that virtually any statement about geometrical figures could be proved or disproved using Euclid's methods. So towering was Euclid's reputation that even the slightest challenge to his body of work could bring on a major crisis in the intellectual world.

Such a crisis had arisen during the nineteenth century, when the mathematicians Karl Friedrich Gauss, Janos Bolyai, Nikolai Lobachevski, and Georg Bernhard Riemann invented non-Euclidean geometry. Euclid had postulated that through a given point, one and only one line could be drawn that would be parallel to a given line. Non-Euclidean geometry asserted that any number of parallel lines could be drawn through that point, or, alternatively, that parallel lines do not exist. That such assertions might be entertained, and that they could lead to conclusions as acceptable and logically consistent as Euclid's theorems, was in its day both shocking and tradition-breaking.

Goedel's work went much further. The non-Euclideans had shown that new mathematical systems could be produced by the introduction of changes to axioms of an existing system, such as Euclid's. Goedel showed that any such system would give rise to problems that could only be addressed by inventing richer and richer new systems, systems that might feature axioms of an entirely novel type and whose nature could not always be predicted or anticipated. Only within these more far-reaching systems would it be possible to prove the formally undecidable statements that would arise within preexisting systems, or to establish such systems' consistency.

This was a most unexpected result within mathematical logic. A host of savants had labored at the problem of building foundations: not only Russell and Whitehead but also Germany's Gottlob Frege and Georg Cantor, Ernst Zermelo and Abraham Fraenkel, and the Italian Giuseppe Peano. The problem of consistency, in turn, had been a principal concern of the German-born David Hilbert. Now, however, their goals were forever out of reach. It was as if mathematicians had worked to climb a mountain only to see, receding endlessly into the distance, a range of peaks that were higher still.

Mercifully, formal undecidability infected only the branches of mathematics concerned with proving new statements. Even then the infection erupted only rarely, with most such statements being provable in familiar ways. The vast body of established proofs and methods, of mathematical results known to be valid, were entirely untouched by Goedel's results. Still, the implications of Goedel's work reached well beyond the cloistered confines of mathematical logic.

Implications

A similar general collapse of intellectual certainties occurred simultaneously within physics, through the work of Einstein and through quantum mechanics. Since its inception, physics had advanced amid two articles of faith. The first was that the world of common experience could give a sure guide to the world at large; there was no distinction between the realms of people and of angels. Einstein's relativity theory showed that this was not so, that high speeds and extreme distances would carry with them their own laws. The second belief, even more ingrained, was that nature is knowable in detail and that all of physics can be comprehended through laws of cause and effect. Einstein himself embraced this concept in all his work. Quantum mechanics shattered this certitude, introducing fundamental limits to knowability and to the applicability of cause and effect. Similarly, Goedel's work demonstrated limits to the ability of mathematics to answer questions in its own fields.

Today, a half-century later, these intellectual revolutions have run their course. Relativity and quantum mechanics are part of the working tools of practicing physicists. Goedel's work, too, has been assimilated into mathematics, producing a flourishing array of researches, throwing new light on old problems, and at the same time stimulating an increasing boldness in mathematical thought.

As yet, these approaches have not touched the broad realms of analysis within which most mathematicians work. But people remember that the equally abstruse non-Euclidean geometries of the nineteenth century led in time to the work of Einstein. With Goedel's shadow falling across the very foundations of mathematics, the researches that have grown out of his work may in time bring forth results that are equally portentous.

Number theory is among the more well-established fields crossed by Goedel's shadow. It deals with nothing more complex than the ordinary integers that children learn in kindergarten, yet it abounds in theorems that demand the deepest and most powerful of mathematical insights—when they can be proved at all.

Karl Friedrich Gauss, who is often ranked with Archimedes and Newton as one of the greatest mathematicians, called number theory the "queen of mathematics." Indeed, what the Roman biographer Plutarch wrote of Archimedes' work could readily be said of number theory: "It is not possible to find . . . more difficult and intricate questions, or more simple and lucid explanations. . . . No amount of investigation of yours would succeed in attaining the proof, and yet, once seen, you immediately believe you would have discovered it; by so smooth and so rapid a path he leads you to the conclusion required."

Diophantine equations

An important topic in number theory is the study of Diophantine equations, named for the third-century mathematician Diophantus of Alexandria. These take the same form as ordinary algebraic equations, which can have straightforward rules for telling whether solutions exist, along with well-known methods for solution, and whose solutions typically involve decimal numbers, such as the square root of 2, which is 1.414213562. . . . But Diophantine equations impose another condition: The only permitted solutions are integers.
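
The integer-only restriction is easy to make concrete in code. The sketch below is purely illustrative (the equation and the search bound are arbitrary choices, not anything from the article): it hunts for integer solutions of the Pell equation x^2 - 2y^2 = 1 by brute force.

```python
# Illustrative sketch: bounded brute-force search for integer solutions
# of the Pell equation x^2 - 2y^2 = 1 (equation and bound chosen
# arbitrarily for this example).

def pell_solutions(bound):
    """Return all integer pairs (x, y) with |x|, |y| <= bound
    that satisfy x^2 - 2y^2 = 1."""
    return [(x, y)
            for x in range(-bound, bound + 1)
            for y in range(-bound, bound + 1)
            if x * x - 2 * y * y == 1]

print(pell_solutions(20))
# Includes (3, 2), since 9 - 8 = 1, and (17, 12), since 289 - 288 = 1.
# A bounded search can exhibit solutions, but by itself it can never
# prove that an arbitrary Diophantine equation has none.
```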

Such equations generally are far more difficult to solve than their algebraic counterparts. Indeed, long after mathematicians had achieved their current understanding of algebraic equations, they still could solve Diophantine equations in only a few special cases. David Hilbert, in his 1900 address, regarded Diophantine equations as sufficiently important to merit a place on a list of 23 problems that he presented to his colleagues as projects to be studied during the new century. His tenth problem set the goal: Give a general method for determining whether or not a given Diophantine equation possesses solutions. He did not propose the much more difficult problem of actually finding these solutions, but merely sought to settle the question of whether they exist.

This problem was solved only as recently as 1970, by Yuri Matyasevich, who is now at the Steklov Institute in Leningrad. He proved that no such general method exists. Any particular Diophantine equation might yield to some method, adapted perhaps to the peculiarities of that example; but given any algorithm or procedure capable of settling questions of the existence of solutions, there will always be Diophantine equations that escape its net. As UCLA's Donald Martin states, "There's nothing that will work on everything and always give the right answer."

Any theory of Diophantine equations, then, will entail formally undecidable examples. Using that theory, one could neither prove nor disprove that the equations possess solutions. This will be true no matter how powerful or all-inclusive a theory might be. A Goedel-like undecidability thus exists even in the realm of solving equations, which is among the most basic tasks in mathematics.

Turing's conundrum

More recently, in 1987, Gregory Chaitin of the IBM Thomas J. Watson Research Center in Yorktown Heights, New York, used Diophantine equations to address a fundamental issue in computer science. The point of departure for Chaitin's work has been a result known as Turing's halting theorem, given by the British logician Alan Turing in 1937. Indeed, Turing's work stands as part of the basis for the mathematical theory of computers.

Turing envisioned an idealized computer, able to carry out a strictly limited set of operations. He argued that such a machine, despite these restrictions, would be able to execute any conceivable calculation that could be specified in a finite number of steps. He then raised a question: Given an arbitrary program, could one decide whether the computer would run on indefinitely or finish its calculations and halt?

Turing showed that the answer to his problem was the same one that Matyasevich would later give for Diophantine equations: No general method can exist for determining whether any program will halt. Specific programs might yield to specific methods, but any general technique will be able to answer this question for some but not all computer programs.
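
The core of Turing's argument can be sketched in a few lines of present-day code. This is a hedged illustration of the standard diagonal construction, not Turing's own formalism; the function names here are invented for the sketch.

```python
# Sketch of Turing's diagonal argument (hypothetical code, for
# illustration only). Suppose a function halts(program, argument)
# could always answer correctly whether program(argument) stops.

def halts(program, argument):
    """Hypothetical universal halting test -- cannot actually exist."""
    ...

def paradox(program):
    # Do the opposite of whatever halts() predicts about the
    # program applied to its own text.
    if halts(program, program):
        while True:        # halts() said "halts" -- so loop forever
            pass
    return "done"          # halts() said "loops" -- so stop at once

# Feeding paradox to itself: paradox(paradox) halts if and only if it
# does not halt. The contradiction shows no correct halts() can exist.
```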

Chaitin's work goes one step further. He addresses the question: Given an arbitrary program, what is the probability that this program will halt? This question differs from Turing's in that Turing, in effect, says merely that this probability has some value less than unity—not all programs will halt. Chaitin, by contrast, seeks to compute a specific value for this probability.

A value of 1/3, for example, would mean that in a large ensemble of arbitrary programs, one-third will halt. In ordinary decimal notation, which expresses numbers using the digits 0 to 9, 1/3 is written 0.33333 . . . . Chaitin expresses his probability as a fraction in binary notation, which uses only the digits 0 and 1; in this system, 1/3 is written 0.01010101 . . . . Chaitin has shown that the actual value for his probability would be written as a random sequence of 0s and 1s; no rule can be given for its computation.
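
The arithmetic behind that binary expansion is a short geometric series:

```latex
0.01010101\ldots_2 \;=\; \sum_{k=1}^{\infty} 2^{-2k}
\;=\; \frac{1/4}{1 - 1/4} \;=\; \frac{1}{3}
```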

Each binary digit in the sequence has the value 0 or 1, depending on whether a particular Diophantine equation has finitely many or infinitely many integer solutions. Each such equation fills about 200 pages of text, having 17,000 variables and being some 900,000 symbols long. A specific equation of this type exists for each digit. And, as Chaitin notes, "There are no logical connections between the Diophantine equations generated in this way."

World enough, and time

Thus, even to compute a single digit for Chaitin's number is an overwhelming task, since Matyasevich's unsolvability—the inability of any general algorithm to settle the question of existence of solutions for all Diophantine equations—extends as well to the question of whether there is a finite or an infinite number of solutions. The mathematician is left with the daunting prospect of studying these equations one by one, hoping in any particular case to gain insight but with no proof that this insight will help crack the next one. "A mathematician could do no better than a gambler tossing a coin in deciding whether a particular equation had a finite or an infinite number of solutions," says Chaitin.

Chaitin's problem demonstrates a key point: Just as quantum mechanics dictates that there are physical measurements that can never be made even in principle, so there are mathematical quantities that can be described but cannot be computed even in principle. Chaitin's arbitrary computer programs are the sort the proverbial monkey would write if set to work at a keyboard. They would mostly be gibberish, but once in a long while the monkey would turn out a meaningful computer program that fails to halt.

The problems of Chaitin, Turing, and Matyasevich all have this in common: They involve well-posed questions or well-defined numbers that cannot in general be determined by any procedure that uses a finite number of steps. They are therefore radically different from even the most difficult problems studied with supercomputers, which may demand billions of steps to get an approximate answer. Matters would be different, however, if there were a computer that could carry out not mere billions of operations, but infinitely many. To borrow from the English poet Andrew Marvell—"Had we but world enough, and time," then with such infinitely powerful computers it would be possible to solve all three problems.

To address Turing's halting problem, one would simply run any program on this "Marvell computer." Those that could halt would do so; those that would run on indefinitely would do so as well. To handle Matyasevich's problem, one would test any Diophantine equation by plugging in, one by one, all the infinitely many sets of integers that are candidate solutions; the Marvell computer then would find all sets that satisfy the equation. By dealing specifically with Chaitin's Diophantine equations, the Marvell computer also would readily determine which ones have infinitely many solutions and so construct Chaitin's number, digit by digit.
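
A real, finite computer can run only the beginning of that infinite recipe. The sketch below (illustrative only; the shell-by-shell scheme and all names are this sketch's own) enumerates candidate integer tuples in expanding shells, so that any solution is eventually found; what no finite run can do is certify that no solution exists.

```python
# Illustrative sketch: enumerate candidate integer solutions in
# expanding "shells" |x_i| <= b, as a Marvell computer could do
# exhaustively. A finite machine can confirm that solutions exist,
# never that they do not.

from itertools import product

def search(equation, num_vars, max_bound):
    """Yield integer tuples xs with equation(*xs) == 0, searching
    shells b = 0, 1, ..., max_bound."""
    seen = set()
    for b in range(max_bound + 1):
        for xs in product(range(-b, b + 1), repeat=num_vars):
            if xs not in seen:
                seen.add(xs)
                if equation(*xs) == 0:
                    yield xs

# Example: x^2 + y^2 - 25 = 0 has integer solutions such as (3, 4).
for solution in search(lambda x, y: x * x + y * y - 25, 2, 10):
    print(solution)
```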

The concept of a Marvell computer helps to illustrate another major contribution by Goedel: the axiom of constructibility. This axiom states that all admissible mathematical objects must be constructible by explicit definition using standard mathematical operations, even though these operations may have to be repeated an infinite number of times.

One might begin by making calculations with a Marvell computer. One could then operate on the results using infinitely many Marvell computers. And these results, in turn, could serve as input data to infinitely many more such computers, and so on and on, infinitely many times. The results of this exercise would still be part of Goedel's constructible universe.

Unequal infinities

Using the axiom of constructibility, Kurt Goedel made major contributions that opened up still another topic for study, a topic so important that Hilbert had ranked it first on his list of 23 problems for twentieth-century mathematicians. This topic is known as the continuum hypothesis. It dates to the work of the German mathematician Georg Cantor, late in the nineteenth century, who constructed the modern theory of infinite sets. His key achievement was to demonstrate that not all infinities are equal.

Cantor addressed the questions of how many integers there are, how many even numbers (such as 2, 4, 6, . . .), how many divisible by 1,000 (such as 1,000, 2,000, 3,000, . . .), how many fractions (such as 3/8, or 1/23, or 879/364), and how many real numbers expressible as decimals of arbitrary length (such as 3.14159 . . .).

The answer to Cantor's questions is called the cardinality of the particular set under discussion—the set of even numbers, or integers, or fractions. In these three cases the cardinality is infinite, and all these infinities are the same. Cantor proved this by showing that for each of these sets one could list their members neatly, in order: first, second, third, . . . , one millionth, and so forth.
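
Such a listing is easy to make concrete for the fractions. The generator below is a sketch in modern code (not Cantor's own presentation): it walks the diagonals of the numerator-denominator grid, skipping duplicates, so every positive fraction receives a definite position: first, second, third, and so on.

```python
# Sketch of Cantor's enumeration of the positive fractions: walk the
# diagonals n + d = 2, 3, 4, ... of the (numerator, denominator) grid,
# skipping duplicates such as 2/4 = 1/2. Every fraction appears at
# some finite position, which is what "countable" means.

from fractions import Fraction

def fractions_in_order():
    seen = set()
    diagonal = 2
    while True:
        for n in range(1, diagonal):
            f = Fraction(n, diagonal - n)
            if f not in seen:
                seen.add(f)
                yield f
        diagonal += 1

gen = fractions_in_order()
print([next(gen) for _ in range(8)])
# 1, 1/2, 2, 1/3, 3, 1/4, 2/3, 3/2, ...
```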

What of the cardinality of the real numbers? Cantor showed that while there are infinitely many, the real numbers cannot all be listed in a simple ordered sequence like that of the integers. If one tries to do so, inevitably one can find another real number that is not on the list. Hence there are, in an important sense, more real numbers than there are integers or fractions. The cardinality of the reals—the answer to how many—must be infinite, but a higher infinity than that associated with the integers.
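
Cantor's construction of the missing number, his celebrated diagonal argument, fits in one displayed formula. Given any proposed list of reals r_1, r_2, r_3, . . . with decimal digits r_n = 0.d_{n1}d_{n2}d_{n3} . . . , define

```latex
r = 0.e_1 e_2 e_3 \ldots,
\qquad
e_n =
\begin{cases}
5 & \text{if } d_{nn} \neq 5,\\
6 & \text{if } d_{nn} = 5;
\end{cases}
```

then r differs from the nth listed number in its nth digit, so it appears nowhere on the list.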

Cantor gave the name aleph-zero to the infinity that gives the cardinality of integers, fractions, even numbers, and the like, aleph being the first letter in the Hebrew alphabet. He gave the designations aleph-one, aleph-two, and so on to infinities that are larger still. Moreover, he conjectured—but did not prove—that the cardinality of the reals is aleph-one.
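
In the notation that later became standard (not used in the article itself), the reals have cardinality 2 raised to aleph-zero, and Cantor's conjecture is the single equation

```latex
2^{\aleph_0} = \aleph_1
```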

The set of all reals is often called the continuum, which is why Cantor's conjecture is called the continuum hypothesis. "The problem of the continuum is one of the fundamental questions of set theory," says the California Institute of Technology's Alexander Kechris. "It's the first basic set-theoretical question you could ask about the reals: How many are there?" And because the real numbers are the basis for much of mathematics, the continuum hypothesis is sufficiently important to have ranked first on Hilbert's list.

The axiom of constructibility

The first step toward resolving the issue of the continuum hypothesis came in 1908, when the German mathematician Ernst Zermelo gave a collection of axioms that could form the foundation for a theory of sets: sets of integers, of fractions, of real numbers, or of other mathematical objects. His associate Abraham Fraenkel later extended this work, and the Zermelo-Fraenkel axioms became the basis for subsequent work in set theory.

Goedel, in 1938, showed that the continuum hypothesis is consistent with the Zermelo-Fraenkel axioms. That is, if one assumes that both these axioms and the continuum hypothesis are valid, then the theorems derived from these axioms would introduce no contradictions or paradoxes.

Goedel did more. He introduced the axiom of constructibility. "It can be added to the usual axioms," says Harvey Friedman of Ohio State University, "and from this you can derive the continuum hypothesis as a theorem." When this axiom is joined with those of Zermelo and Fraenkel, then, Cantor's conjecture indeed gives the correct answer: The question of how many real numbers there are has the answer aleph-one.

Still, the axiom of constructibility would restrict mathematics to Goedel's constructible universe, where all objects not only exist but possess explicit definition. Outside this universe, one attains yet greater generality, and hence freedom of abstraction, by considering objects that possess merely the character of existence alone without explicit definition. (See "The Banach-Tarski Paradox" accompanying this article.)

Outside the realm of Goedel's constructible universe, would the continuum hypothesis necessarily hold? Paul Cohen of Stanford University found the answer in 1963: No. He showed that if one avoids the axiom of constructibility but continues to use the Zermelo-Fraenkel axioms, then the negation of the continuum hypothesis is also consistent with these axioms. This negation asserts that the question of how many real numbers there are has an answer of something other than aleph-one. Under the Zermelo-Fraenkel axioms alone, then, the continuum hypothesis is formally undecidable. This hypothesis as well as its negation are both consistent with these axioms.
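
Goedel's and Cohen's results are often compressed into two relative-consistency statements, where Con(T) abbreviates "theory T is consistent," ZFC denotes the Zermelo-Fraenkel axioms with choice, and CH the continuum hypothesis:

```latex
\mathrm{Con}(\mathrm{ZFC}) \implies \mathrm{Con}(\mathrm{ZFC} + \mathrm{CH})
  \quad\text{(Goedel, 1938)}
\\
\mathrm{Con}(\mathrm{ZFC}) \implies \mathrm{Con}(\mathrm{ZFC} + \neg\mathrm{CH})
  \quad\text{(Cohen, 1963)}
```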

What does this mean, and why is it important? The Zermelo-Fraenkel axioms are the basis for set theory. It is widely accepted that set theory represents the bedrock of mathematics, since all mathematical objects can be understood as sets. The continuum hypothesis, again, is sufficiently fundamental to rank in first place on Hilbert's list of problems.

How many reals?

Under the Zermelo-Fraenkel axioms, this hypothesis can neither be proven nor disproven. The statements "The continuum hypothesis is true" and "The continuum hypothesis is false" are equally acceptable. One could say, then, that in examining this hypothesis, one peers deeply into the foundations of mathematics and finds only Goedel's formal undecidability.

By adding one or more axioms to those of Zermelo and Fraenkel, however, it is indeed possible to address further the issue of the continuum hypothesis. Goedel's axiom of constructibility, for one, serves this purpose so admirably that it permits this hypothesis to be proved as a theorem. Since 1963, a number of logicians have sought to give new axioms, which also could be appended to those of Zermelo and Fraenkel, that would settle the question of the continuum hypothesis outside the realm of Goedel's constructible universe. Such axioms would answer the question of how many reals there are by citing a specific type of infinity, such as aleph-two.

Goedel himself was among the first to take on this challenge. In 1964, following earlier work he had presented in 1947, he proposed that the solution would lie in "large-cardinal axioms" or "axioms of higher infinities." He had been impressed with the fact that as early as 1911 the mathematician Paul Mahlo had given a consistent account of concepts of infinity lying far beyond Cantor's alephs. Hence, Goedel declared, there must be new axioms that would be suitable for the study of these infinities and that could carry over to the problem of the continuum hypothesis.

Ohio State's Harvey Friedman describes such "large" or "inaccessible" cardinals as lying "beyond aleph-two, aleph-three, aleph-infinity. They are beyond aleph-aleph-infinity, aleph-aleph-aleph-infinity. Take the limit; that's still tiny. Take aleph of that limit. Do anything you want; do even wilder things than that. The first inaccessible cardinal—the lowest large cardinal—is bigger than any process like that."

"Inaccessible cardinals," Friedman continues, "cannot be built up by any specifiable process. Yet this first one is

merely the weakest of the things that set theorists consider now, the baby thing that is barely anything. There is an inaccessible cardinal that is the limit of a sequence of inaccessible cardinals. Then there is one that is a limit of lim­its, and a limit of limits of limits, and so forth. And these are still very small. They are the most understandable ones. Then they get more technical."
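
The notion Friedman is describing informally has a standard definition (supplied here for reference; the article itself stays informal): an uncountable cardinal kappa is strongly inaccessible when it can be reached from below neither by taking limits nor by taking power sets, that is,

```latex
\kappa > \aleph_0,
\qquad
\operatorname{cf}(\kappa) = \kappa \quad \text{(regular: no shorter sequence reaches it)},
\qquad
2^{\lambda} < \kappa \ \text{ for all } \lambda < \kappa \quad \text{(strong limit)}
```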

Around the edges

Large-cardinal axioms have not succeeded in doing more than nibble around the edges of the continuum hypothesis. Robert Solovay of the University of California at Berkeley has used such axioms to study the "generalized continuum hypothesis," an assertion that, if valid, would imply that the question of how many reals has the answer aleph-one even outside of Goedel's constructible universe. However, Solovay has established the validity of the generalized continuum hypothesis for certain cases only, not for all cases and in particular not for Cantor's. Donald Martin of UCLA has summed up the situation by noting that the continuum hypothesis cannot be proved on the basis of "any large-cardinal axiom that has been proposed as at all plausible."

Taking a different tack, other mathematicians have proposed new axioms that indeed purport to solve this problem. In 1968 Donald Martin gave "Martin's axiom"; it settled certain problems that had grown out of the continuum hypothesis, but did not resolve the hypothesis itself. More recently, Saharon Shelah of Hebrew University in Jerusalem has given such assertions as the "proper forcing axiom" and the "semiproper forcing axiom." Shelah, Menachem Magidor of Hebrew University, and Matthew Foreman of Ohio State University, along with Stevo Todorcevic of the University of Colorado, have shown that as a consequence of such axioms, the cardinality of the reals would be aleph-two.

Nevertheless, many mathematicians regard the axioms proposed by Shelah, Magidor, and Foreman as having little real significance for the continuum hypothesis. "There's no particular reason to think these axioms are true or false," says Martin. The new axioms "are not ad hoc, [but] there's just no particular evidence for them, no reason to believe in their truth."

The Banach-Tarski Paradox

What does it mean to say that mathematical objects may possess existence but not explicit definition? Consideration of admissible objects begins with the Zermelo-Fraenkel axioms of set theory. Restriction of admissibility to include objects that have existence but not definition requires invoking the axiom of choice.

The axiom of choice asserts that given infinitely many sets, it is possible to pick out one or more chosen elements from each of them. If a queen had infinitely many diamond necklaces, for instance, one could pick particular diamonds from each one. The resulting system—Zermelo-Fraenkel together with the axiom of choice—then introduces objects that have existence but not definition. Polish mathematicians Alfred Tarski and Stefan Banach carried out a striking study of such objects in 1924.
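
Stated formally (a standard textbook formulation, added here for reference), the axiom says that any family of nonempty sets S_i, indexed by a set I, admits a choice function f:

```latex
\bigl(\forall i \in I : S_i \neq \varnothing\bigr)
\;\Longrightarrow\;
\exists\, f : I \to \bigcup_{i \in I} S_i
\ \text{ with } \ f(i) \in S_i \ \text{ for every } i \in I
```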

In his World of Mathematics, James R. Newman describes the resulting Banach-Tarski paradox as asserting that "there is a way of dividing a sphere as large as the sun into separate parts, so that no parts will have any points in common, and yet without compressing or distorting any part, the whole sun may at one time be fitted snugly into one's vest pocket." This would be done by dividing up the sun into parts whose shapes exist but cannot be explicitly defined, in accordance with the axiom of choice.

The paradox lies not in the logic, but in the fact that such a result is a gross affront to intuition. "Surely no fairy tale, no fantasy of the Arabian nights, no fevered dream can match this theorem," writes Newman—characterizing it, however, as "hard, mathematical logic."

The Banach-Tarski paradox as such holds in three dimensions, but not in two. One cannot draw a circle the size of a dime, divide it up into constituent parts, and then reassemble the parts to form a disk the size of Manhattan's Columbus Circle. However, there is a related problem called "squaring the circle." It calls for dividing up a circle into a large but finite number of parts that can then be reassembled to form a square having the same area. Tarski set forth this problem in 1925.

The solution came in 1989. Miklos Laczkovich of Eotvos Lorand University in Budapest has shown that the circle can indeed be squared—by dividing it into some 10^50 pieces. Indeed, Laczkovich has shown that not only the circle can be squared but also any geometric figure whose boundary consists of curves and angles. As in the Banach-Tarski paradox, however, although Laczkovich's pieces possess existence, their shapes cannot be explicitly defined.

T.H.


Large-cardinal axioms are different. Caltech's Alexander Kechris notes that the theory based on large cardinals "is a very satisfactory, deep theory. It solves a lot of problems; it has a very nice structure; we have a lot of confidence in it. It gives us insight." Specifically, it provides a fairly complete understanding of "definable sets," a more restricted class than the totality of sets that lie outside Goedel's constructible universe. Kechris notes that the definable sets, in contrast to Goedel's constructible sets, include most objects of mathematical interest.

For the continuum hypothesis, Kechris adds, "we would hope to develop a similar theory that would have the same success." Rather than solve the continuum hypothesis as if it were an isolated result, then, this hoped-for theory would give insight into the nature and structure of the continuum, so that mathematicians could understand why it has the properties they infer.

Outside the realm of definable sets, then, the mathematical world is so poorly mapped that it does not yet possess even an appropriate fundamental axiom. Such an axiom would offer insight and understanding comparable to that of the axiom of constructibility for Goedel's constructible universe, or of large-cardinal axioms for the realm of definable sets. But in the broadest realm of all—that of sets possessing existence but not definition or constructibility—no suitable axiom is at hand that would permit solution of the very basic question of how many real numbers there are.

Trees

To be sure, all this might be dismissed as mathematical abstraction pursued for its own sake, as dreaming spires of thought that pierce the intellectual sky but have no significance in the real world. But mathematics has a way of turning beyond itself. Throughout recent centuries, time and again, highly abstract concepts have proved essential in solving problems of the most widespread interest not only in mathematics but also in physics. And Harvey Friedman has taken the lead in showing that approaches lying far outside conventional mathematics—including the use of large-cardinal axioms—can be essential in proving reasonably straightforward theorems of direct interest to mathematicians.

One point of departure for Friedman's work has been the study of trees. A tree is a set of points connected by branching lines that have no loops. Trees resemble family trees in which a number of different individuals descend from a common ancestor. In 1960 Joseph Kruskal of AT&T Bell Laboratories gave an important theorem: In any sequence of infinitely many trees, at least one of them must be embedded in a later one. That is, the two trees have trunks and branches that can be made to coincide.
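
The embedding relation can be made concrete for small rooted trees. The sketch below is a deliberately simplified, illustrative version (the trees in Kruskal's theorem carry somewhat more structure than this): a tree is written as a nested tuple of its subtrees, with () for a leaf, and embeds(t, s) asks whether t can be mapped into s with branching preserved.

```python
# Illustrative sketch of tree embedding for small rooted trees,
# written as nested tuples: () is a leaf, ((), ()) is a root with
# two leaf children. Simplified relative to Kruskal's theorem.

from itertools import permutations

def embeds(t, s):
    # Either t fits entirely inside one subtree of s...
    if any(embeds(t, child) for child in s):
        return True
    # ...or the roots correspond: match t's subtrees injectively to
    # distinct subtrees of s (trying orderings is fine for small trees).
    if len(t) > len(s):
        return False
    return any(all(embeds(tc, sc) for tc, sc in zip(t, chosen))
               for chosen in permutations(s, len(t)))

leaf = ()
small = (leaf, leaf)             # a root with two leaves
big = ((leaf, leaf), leaf)       # one child is itself branched
print(embeds(small, big))        # True: small sits inside big
print(embeds(big, small))        # False: big cannot fit in small
```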

A different infinity

It was natural for mathematicians to ask whether the same theorem might hold for a large but finite collection of trees. Friedman, in 1981, proved such a revised theorem: In any sufficiently numerous collection of finitely many trees, whose sizes are close to each other, at least one can be embedded in a later one. On its face, this theorem appeared quite in keeping with other work in combinatorics, the study of discrete collections of mathematical objects. This work typically involves theorems that can be proved straightforwardly, using standard techniques. But Friedman went much further.

He found that his theorem featured Goedel's formal undecidability. Though the theorem could be stated and understood in the language of conventional combinatorics, it could neither be proved nor disproved using the usual concepts of this field. To prove it, Friedman had to use a concept of infinity not employed in conventional combinatorics. This concept allows one to build infinite objects with reference to large totalities of infinite objects. This contrasts sharply with the simplest concept of infinity, which might be described in full as "the limit of the sequence 1, 2, 3, . . ."

Friedman adds that his use of unconventional infinities represents "methods explicitly frowned upon by Henri Poincare and Hermann Weyl," two of the mathematical giants of the early twentieth century. "These people implicitly assumed you couldn't get anything this interesting or useful" by relying on "such crazy methods." Craig Smorynski, a former colleague of Friedman's at Ohio State, says of Friedman's theorem that it "would have been meaningful to Poincare, but he would not have been able to prove it, disprove it, or accept any proof of it given to him."

This experience has whetted Friedman's appetite for using highly abstract concepts of infinity in proofs of reasonably standard theorems dealing with well-understood mathematical objects. Recently he has proved theorems concerning "Borel measurable functions on real numbers and on graphs." To do this, he has relied on large-cardinal axioms. This work represents an important link between these axioms and conventional mathematics.

As his work has proceeded, Friedman has dealt with theorems that he believes are less and less the sort of highly contrived assertion that a logician would invent for the purpose of illustrating some highly obscure point. Rather, he says, they are more and more similar to theorems that ordinary working mathematicians would seek to prove about well-studied mathematical objects such as trees.

Goedel's shadow

Kurt Goedel's shadow continues to lengthen over mathematics and logic. His influence shows itself in work such as Turing's, from which the modern theory of computation has evolved. This work encompasses problems such as the undecidability of certain Diophantine equations, leading to numbers such as Chaitin's that are readily described but cannot be computed in even the crudest and most approximate manner. It covers large-cardinal axioms, with the possibility, following Friedman, that these unusual concepts of infinity will increasingly be necessary in proving straightforward theorems of direct mathematical interest. And it includes, finally, the prospect that entirely new axioms will have to be sought in the realm outside of Goedel's constructible universe and of definable sets, with the most basic insights still to be discovered and the simplest of problems—the continuum hypothesis—still to be solved. Having settled several of the most outstanding problems that beckoned at the dawn of the twentieth century, Goedel's work now offers increasingly rich vistas for the further growth of mathematics on the eve of the twenty-first.

Heppenheimer is a frequent contributor to Mosaic. His most recent article was "Classical Mechanics, Quantum Mechanics, and the Arrow of Time" in Volume 20 Number 2 Summer 1989.

The National Science Foundation contributes to the support of the research discussed in this article principally through its Topology and Foundations Program in the Division of Mathematical Sciences.
