An abridged history of mathematical logic

In 1821, Augustin-Louis Cauchy gave a famously incorrect proof in his textbook Cours d'analyse. The error lay in Cauchy's failure to distinguish continuity:

\[(\forall x)(\forall \varepsilon > 0)(\exists \delta > 0)(\forall y)\; |x - y| < \delta \Rightarrow |f(x) - f(y)| < \varepsilon\]

from uniform continuity:

\[(\forall \varepsilon > 0)(\exists \delta > 0)(\forall x)(\forall y)\; |x - y| < \delta \Rightarrow |f(x) - f(y)| < \varepsilon\]

The difference is subtle (can you find it?); mathematicians of the early nineteenth century did not have the formalism to write the definitions out as above, and it took more than ten years before Johann Dirichlet finally pinned down the error.
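To see what is at stake, here is a standard textbook example (not one tied to Cauchy's argument): the function \(f(x) = 1/x\) is continuous on the open interval \((0, 1)\), but not uniformly continuous there. The point is that

\[\left|\frac{1}{x} - \frac{1}{y}\right| = \frac{|x - y|}{xy},\]

so the closer \(x\) sits to \(0\), the smaller the \(\delta\) needed to keep \(|f(x) - f(y)|\) below a given \(\varepsilon\). The first definition allows \(\delta\) to shrink with \(x\); the second demands one \(\delta\) that works for every \(x\) at once, and for \(1/x\) no such \(\delta\) exists.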

Cauchy was a well-respected mathematician, one of the best of his century in fact. His mistake led mathematicians to fear that their own work might quietly go astray, that they might, without ever knowing it, be proving false theorems. Such a realization called for a formalization of mathematics.

Years passed, with mathematicians trying to lay a foundation strong enough to rely on. In 1889, Giuseppe Peano published a set of principles, now known as the Peano Axioms, which rigorously characterize the natural numbers. Then in 1893, Gottlob Frege, the founder of modern logic, published a book titled Grundgesetze der Arithmetik (Basic Laws of Arithmetic), in which he tried to derive all of the laws of arithmetic (addition, multiplication, just about anything you could think of) from a few axioms he asserted to be purely logical.

Frege's "axioms" were flawed, however. Bertrand Russell wrote to Frege, right before the second volume of Grundgesetze went to press, pointing out that the axiomatic system Frege had created allowed for what is now known as the Russell's paradox:

Informal 1. There is a barber in a town who shaves all those, and only those, who do not shave themselves. Does he shave himself?

Informal 2. Some library catalogs list themselves among the books they describe; others do not. Suppose a librarian compiles a catalog of all the catalogs that do not list themselves. Should it list itself?

Formal. Let \(X\) be the set of all sets that do not contain themselves. Does \(X\) contain itself?
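Either answer is untenable: if \(X\) contains itself, then by its defining property it must not contain itself, and if it does not contain itself, then it satisfies the defining property and so must contain itself. In symbols,

\[X \in X \iff X \notin X,\]

a contradiction either way.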

Mathematicians were, of course, not pleased; such nonsense could not be allowed to undermine the foundation they had worked so hard to build. A new system of set theory, now known as Zermelo–Fraenkel set theory with the Axiom of Choice (ZFC), simply rules \(X\) out of the universe of sets: there is no such set \(X\), so it makes no sense to talk about it. The desire to rule out all errors and to construct an unassailable cathedral of logic was pushed further and further, to the point where it eventually turned into an obsession.

The epitome of this obsession is undoubtedly Principia Mathematica, a monumental three-volume work by Alfred North Whitehead and Bertrand Russell. In these volumes, Whitehead and Russell built an enormously complicated and elaborate logical system, in which they tried to derive all of mathematics from a handful of logical propositions. If you disregard its importance in the development of mathematics and logic, what it purports to have achieved is rather absurd, even ludicrous: famously, the proof that \(1 + 1 = 2\) does not arrive until hundreds of pages into the first volume.

Well, it is done. Principia has been created. Is mathematics all safe and sound? Can mathematicians finally hop around in joy?

It turns out that the answer is an emphatic no. Kurt Gödel, in his 1931 paper On Formally Undecidable Propositions of Principia Mathematica and Related Systems I, showed that Principia cannot be both consistent (it proves no false statements) and complete (it proves every true statement). Consider the sentence "This sentence is not provable in Principia." If it is provable, then Principia proves a false statement and is inconsistent; if it is not provable, then it is a true statement that Principia cannot prove, and Principia is incomplete.
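Glossing over the serious technical work involved (the self-reference has to be constructed arithmetically, and "consistent" should strictly be the stronger notion of \(\omega\)-consistency), the skeleton of the argument can be written down. Write \(\mathrm{Prov}(\ulcorner G \urcorner)\) for "\(G\) is provable in Principia," and let \(G\) be a sentence asserting its own unprovability:

\[G \iff \neg\,\mathrm{Prov}(\ulcorner G \urcorner).\]

If Principia proves \(G\), then \(\mathrm{Prov}(\ulcorner G \urcorner)\) holds, so \(G\) is false and Principia has proved a falsehood. If Principia does not prove \(G\), then \(\neg\,\mathrm{Prov}(\ulcorner G \urcorner)\) holds, so \(G\) is true but unprovable.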

Inconsistency is a serious issue, because an inconsistent system can prove just about anything, through what is known as the principle of explosion (ex falso quodlibet). For example, if we had a logical system in which both "one plus one equals two" and "one plus one does not equal two" were provable, then an argument such as "one plus one equals two, and one plus one does not equal two; therefore all pigs fly" would be perfectly sound. An implication whose premise can never hold is vacuously true, so it does not matter what the conclusion says; and in an inconsistent system, the impossible premise is provable anyway.
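Here is one standard way to spell the explosion out, with \(A\) standing for "one plus one equals two" and \(B\) for "all pigs fly":

\[\begin{align*} &A && \text{(provable, since the system is inconsistent)} \\ &\neg A && \text{(also provable, for the same reason)} \\ &A \lor B && \text{(from } A\text{: a disjunction follows from either of its disjuncts)} \\ &B && \text{(from } A \lor B \text{ and } \neg A\text{: the first disjunct has been ruled out)} \end{align*}\]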

Principia was shattered, and the obsession with formalism suffered a severe backlash. Here is an account of a dream Russell once described, recorded in G. H. Hardy's 1940 essay A Mathematician's Apology:

Yet how painful it is to feel that, with all these advantages, one may fail. I can remember Bertrand Russell telling me of a horrible dream. He was in the top floor of the University Library, about A.D. 2100. A library assistant was going round the shelves carrying an enormous bucket, taking down books, glancing at them, restoring them to the shelves or dumping them into the bucket. At last he came to three large volumes which Russell could recognize as the last surviving copy of Principia Mathematica. He took down one of the volumes, turned over a few pages, seemed puzzled for a moment by the curious symbolism, closed the volume, balanced it in his hand and hesitated…

A question still remains, however. If a logical system is doomed to be incomplete, shouldn't there be true sentences that are not provable, such as "this sentence is not provable in Principia"? Can a statement be neither provable nor refutable? This leads to the investigation of independence problems. A sentence is said to be independent of a logical system if it is neither provable nor refutable within the system. The concept isn't too complicated, but how do you prove such a thing?

A famous problem in this field is the Continuum Hypothesis, which states:

There is no set whose cardinality is strictly between that of the integers and that of the real numbers.

What does this mean? "Cardinality" refers to the size of a set. Two sets have the same cardinality if there is a one-to-one correspondence between their elements (imagine tying a knot in a rope for each head of cattle on a farm: the knots count the cattle without any numbers ever being involved).

With this concept, we can establish that there are different sizes of infinity. For example, the cardinality of the set of natural numbers is the same as the cardinality of the set of even numbers, via the correspondence \(n \leftrightarrow 2n\), even though the even numbers seem to make up only "half" of the naturals:

\[\begin{align*} 1 &\leftrightarrow 2 \\ 2 & \leftrightarrow 4 \\ 3 & \leftrightarrow 6 \\ 4 & \leftrightarrow 8 \\ \vdots & \leftrightarrow \vdots \end{align*}\]

Similarly, the set of rational numbers and the set of natural numbers have the same cardinality. Since a great many sets have the same cardinality as the set of natural numbers, this cardinality has a name (two names, actually): \(\aleph_0\) (aleph-null) and \(\omega\) (lowercase omega).
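The correspondence for the rationals is less obvious; one standard trick is to list the positive rationals in order of the sum of numerator and denominator, skipping any value that has already appeared:

\[\frac{1}{1},\ \frac{1}{2},\ \frac{2}{1},\ \frac{1}{3},\ \frac{3}{1},\ \frac{1}{4},\ \frac{2}{3},\ \frac{3}{2},\ \frac{4}{1},\ \ldots\]

Pairing the \(n\)-th entry of this list with the natural number \(n\) gives the one-to-one correspondence (zero and the negative rationals can be interleaved in as well).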

It is easier to think of \(\omega\) as the number that comes right after all the natural numbers:

\[1,2,3,4,5,\ldots,\omega,\omega+1,\omega+2,\ldots,\omega \cdot 2,\omega \cdot 2+1,\ldots\]

Of course, there are also \(\omega \cdot 3\), \(\omega^2\), \(\omega^\omega\), and so on. These numbers are called ordinals. Distinct ordinals do not always represent distinct sizes; for example, \(\omega\) and \(\omega + 1\) are different ordinals, yet the sets \(\{1, 2, 3, \ldots\}\) and \(\{1, 2, 3, \ldots, \omega\}\) they describe have the same cardinality. The ordinals that cannot be put in one-to-one correspondence with any smaller ordinal are called cardinals. The first infinite cardinal is denoted \(\aleph_0\), the second \(\aleph_1\), and so on. Of course, as noted above, \(\omega\) and \(\aleph_0\) are the same number.
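To see that \(\omega + 1\) is not a new cardinal, shift everything by one: pair \(1\) with the extra element \(\omega\), and pair each remaining natural number \(n + 1\) with \(n\):

\[\begin{align*} 1 &\leftrightarrow \omega \\ 2 & \leftrightarrow 1 \\ 3 & \leftrightarrow 2 \\ 4 & \leftrightarrow 3 \\ \vdots & \leftrightarrow \vdots \end{align*}\]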

Now, what about the cardinality of the real numbers? Georg Cantor, in a celebrated 1891 paper, showed that the set of real numbers is, in fact, "larger" than the set of natural numbers. Here is the proof:

Proof. Suppose there exists a one-to-one correspondence between the set of real numbers and the set of natural numbers. Since all real numbers are representable by decimals, we can then make a list of all real numbers, and enumerate them with natural numbers:

\[\begin{align*} 1 & : & 12340.12343214634563456 \ldots \\ 2 & : & 22344.13562457456234523 \ldots \\ 3 & : & 73343.35634584678545673 \ldots \\ \vdots\end{align*}\]

Now we explicitly construct a real number, which we shall call \(x\), that is not in the list. Pick a digit different from the tenths digit of the first number (in this case, different from 1), and make it the tenths digit of \(x\). Then pick a digit different from the hundredths digit of the second number (in this case, different from 3), and make it the hundredths digit of \(x\). Proceed in the same way down the diagonal, one decimal place for each number in the list. (To be safe, never pick 0 or 9, so that \(x\) does not acquire a second decimal representation, as in \(0.4999\ldots = 0.5\).)
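One concrete way to make the choices (the digits 5 and 6 here are arbitrary): let the \(n\)-th decimal digit of \(x\) be

\[x_n = \begin{cases} 5 & \text{if the } n\text{-th decimal digit of the } n\text{-th number is not } 5, \\ 6 & \text{otherwise.} \end{cases}\]

By construction \(x_n\) always differs from the \(n\)-th decimal digit of the \(n\)-th number, and neither 0 nor 9 ever appears.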

Now, \(x\) is not the first number in the list, because their tenths digits differ. Nor is \(x\) the second number, because their hundredths digits differ. Continuing in this way, \(x\) differs from every number in the list.

But \(x\) is a real number, so it should have appeared somewhere in the list. This contradicts our assumption, so no such correspondence exists: the set of real numbers has a strictly bigger cardinality than the set of natural numbers. \(\square\)

The set of real numbers, of course, has a cardinal associated with it. But which one is it? Is it \(\aleph_1\)? \(\aleph_{27}\)? \(\aleph_\omega\)? The Continuum Hypothesis (CH) asserts that this cardinal is, indeed, \(\aleph_1\). That is to say, there is no set strictly bigger than the set of natural numbers and strictly smaller than the set of real numbers.
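The cardinality of the real numbers is often written \(2^{\aleph_0}\), because a real number can be coded by a set of natural numbers and there are \(2^{\aleph_0}\) such sets. In that notation, the Continuum Hypothesis is the single equation

\[2^{\aleph_0} = \aleph_1.\]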

This was a notoriously difficult problem, and it remained unsolved until 1963, when Paul Cohen invented a technique called forcing to establish the independence of CH. (Gödel had already shown, in 1940, that CH cannot be disproved from the ZFC axioms; Cohen's forcing showed that it cannot be proved from them either.) An urban legend has it that Cohen, who was not a trained logician, was exposed to the problem when he complained to a logician friend that there weren't "enough difficult problems in mathematics," after which he promptly went ahead and settled the matter.

Some people, however, still believe that the Continuum Hypothesis is not entirely settled. Sure, it is independent of the currently accepted logical system, but does that really mean we will never know whether it is true or not? Thousands of independence results, about all kinds of statements in all fields of mathematics, have been published since the advent of forcing, but the unsettling question still lingers.