What does Intuitionistic Logic have to do with Kate Moss’ drug use? *A Neighborhood of Infinity* explains.

# Logic

Gregory Chaitin’s *Meta Maths* is a short book which meanders pleasantly through various topics: biological information (DNA), LISP, several proofs of the infinitude of the primes, and the question of whether the universe is discrete or continuous, with accompanying (slightly strange^{[1]}) arguments against the real numbers. There’s also a good amount of entertaining and idiosyncratic reflection on science, philosophy, and life (“I only really feel alive when I’m working on a new idea, when I’m making love to a woman… or when I’m going up a mountain!”)

*Meta Maths* also covers that fascinating tie-in between logic and number theory, Hilbert’s 10th problem: whether there’s a procedure which can decide whether an arbitrary polynomial equation with integer coefficients (a Diophantine equation) has integer solutions. Some really ingenious mathematics tells us that this is equivalent to the Halting Problem, which asks whether there’s a procedure which can tell whether an arbitrary computer program will halt, or continue to run for ever. The answer in both cases is no.
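The impossibility is established by a diagonal argument, which can be sketched in a few lines of Python (the names `halts`, `make_contrarian` and `pessimist` are illustrative, not from the book): hand me any candidate halting-decider, and I can build a program on which it must be wrong.

```python
def make_contrarian(halts):
    """Given any proposed halting-decider halts(f, x) -> bool,
    build a program that does the opposite of what it predicts."""
    def contrarian(x):
        if halts(x, x):
            while True:      # loop forever when the decider says "halts"
                pass
        return "halted"      # halt when the decider says "loops forever"
    return contrarian

# Any candidate decider is defeated. Take one that always answers "loops":
pessimist = lambda f, x: False
contrarian = make_contrarian(pessimist)

# The decider claims contrarian(contrarian) never halts -- but it halts:
print(contrarian(contrarian))
```

Had we instead picked a decider answering “halts”, `contrarian(contrarian)` would loop forever; either way the decider errs on the program built from it.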

The meat of the story takes us from Gödel’s Incompleteness, to Turing’s Uncomputability (where the Halting Problem arises), and on to Chaitin’s home turf: Algorithmic Information Theory. Each of these areas subsumes the last, exhibiting in ever more dramatic style the gaping unknowability at the heart of mathematics. The overwhelming majority of real numbers are completely inaccessible to us (other than through non-constructive existence results, of which Chaitin disapproves, I suppose).

The book is full of diagrams like this:

Axioms → Formal System → Theorems

Program → Computer → Output

DNA → Pregnancy → Organism

Ideas → Mind of God → Universe

In each case the first stage consists of compressed information, the middle stage is where it is unpacked, and the final stage is the result.

Chaitin’s “dangerous idea” is that almost all information (at least the sort which can be encoded in binary) is actually *incompressible*. That is to say, it cannot be deduced from any axioms simpler than itself. It *is* a striking thought, and it rams home with some force that Gödelian gaps in our mathematics are the rule, not the exception. If you pick a real number at random, then with probability 1 you have no way of writing it out, or naming it under any system you can think of, or indeed referring to it at all without hopeless ambiguity.

The basic definitions of Algorithmic Information Theory are perfectly natural: you define the *complexity* of a piece of binary information (or of anything^{[2]}, such as a mathematical theorem, which can be encoded in binary) to be the length of the shortest program which computes it. (In fact the program must be “self-delimiting”; this is the book’s most technical matter.) A real number is defined to be *random*^{[3]} if the complexity of its first *N* digits is equal to *N* (give or take a small error term).
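True algorithmic complexity is uncomputable, but any compressor gives a computable *upper bound*: the compressed output is the data part of one particular program (decompressor plus data) that reproduces the string. A minimal Python sketch, using `zlib` as a stand-in compressor (my choice of illustration, not Chaitin’s):

```python
import os
import zlib

def complexity_upper_bound(data: bytes) -> int:
    """Length of the compressed form of `data` -- one particular
    description of it, hence an upper bound on its complexity.
    (The true shortest description is uncomputable.)"""
    return len(zlib.compress(data, 9))

patterned = b"01" * 500        # 1000 bytes of obvious structure
typical = os.urandom(1000)     # 1000 bytes with (almost surely) no structure

# Structured data compresses drastically; typical data barely at all.
assert complexity_upper_bound(patterned) < complexity_upper_bound(typical)
```

Chaitin’s point is that `typical` is the norm: for almost every string, the shortest description is essentially the string itself.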

The peg on which the book hangs is the number Ω (“sometimes called Chaitin’s number”, he helpfully adds): the probability that a randomly chosen self-delimiting program halts.
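Concretely (the standard formula, not spelled out above), Ω is a sum over all halting self-delimiting programs *p*, where |*p*| is the length of *p* in bits:

```latex
\Omega \;=\; \sum_{p \text{ halts}} 2^{-|p|}
```

The self-delimiting (prefix-free) condition is what makes this work: by Kraft’s inequality the sum converges to a value between 0 and 1, so Ω really is a probability.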

I may be wrong, but it’s not obvious that you can really *do* much with Ω (so comparisons with other fundamental constants are surely hyperbole). It’s just a fact, whose interest is that it’s a random real; yet it’s strikingly exceptional among random reals, since we can pick it out uniquely (in some sense^{[4]}).

A final word about Chaitin’s punchy prose style! Lots of **bold!** And exclamation marks!! **Loads!!!** And more **sex** than most maths books!

[1] It wasn’t at all clear to me what he means when he says he’s “against” them, especially as much of the material in the book rather depends on their existence.

[2] How long can it be before the “irreducible complexity” berks mangle this stuff?

[3] Be warned, the world abounds with different definitions of randomness.

[4] Of course, Ω is incomputable, and we can only access it via something else we can’t know, namely solutions to the Halting Problem. But somehow this seems less unsatisfying when you translate it into the language of Diophantine equations: there is one such *Q(n, x_{1}, x_{2}, …)* which has infinitely many solutions if the *n*th digit of the binary expansion of Ω is 1, and finitely many solutions if it’s 0.

Matt, having read my recent articles about set-theory, feeds back:

The best “real world” example of Russell’s paradox I’ve seen (I forget where now) is this: A librarian, bored with work, sets about making two indexes of the books in her library. In book one, she lists all books which don’t reference themselves. In book two, she lists all books which do reference themselves. But then she thinks: I’ve just created two new books, so I should list them as well. Neither book references itself, so they should both be listed in book one. But by doing this, book one now does reference itself! So book one should really be listed in book two instead. But if she does this, then book one goes back to not referencing itself!

I like it. Much better than the barber one.
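The librarian can even be simulated (a toy Python model, with made-up titles): every attempt to classify book one flips its own classification, so the process never stabilises.

```python
# Toy model of the librarian's paradox: a book is a name mapped to
# the list of books it references.
library = {
    "Moby-Dick": [],                                  # doesn't reference itself
    "Godel, Escher, Bach": ["Godel, Escher, Bach"],   # famously self-referential
}

# Book one should list exactly the books that don't reference themselves.
library["Book One"] = [b for b, refs in library.items() if b not in refs]

# But book one is now itself a book in the library. Try to classify it:
history = []
for _ in range(4):
    if "Book One" in library["Book One"]:
        # It references itself, so it must NOT appear in book one.
        library["Book One"].remove("Book One")
        history.append("removed")
    else:
        # It doesn't reference itself, so it MUST appear in book one.
        library["Book One"].append("Book One")
        history.append("added")

print(history)   # each correction undoes the previous one
```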

I rewatched *Breaking the Code* the other night, the TV adaptation of Hugh Whitemore’s play about the life of Alan Turing, in turn based on Andrew Hodges’ book *Alan Turing: The Enigma*.

It’s a wonderful film, not least for Derek Jacobi’s performance as Turing. I’ve embedded below the scene where he goes to Bletchley Park for the first time, and is interviewed by Dilly Knox (Richard Johnson). I love this speech. When I first saw it, I was struck by the contrast with every other film about maths that I’d seen. It doesn’t lazily misrepresent Turing as a misunderstood genius obsessed with the incomprehensible. Instead it makes a real effort to let him say *why* he’s so passionate about his research, and to explain what it is actually about. For me, it beautifully captures the drama within mathematics itself.

I have an article in the New Scientist magazine – in the shops now.

(Unless this post is more than a week old, in which case it can be accessed here, or in the July 2007 archives of the New Scientist website (subscription required).)

[This is a sequel to my post on the Axiom of Choice]

In the late 19th century, Georg Cantor’s work in set theory opened up an exciting world: that of infinite numbers.

Paul Cohen has sadly died. The only logician ever to win the Fields Medal, he will primarily be remembered for his work in Set Theory, and in particular for proving two major independence results: those of the Axiom of Choice and the Continuum Hypothesis. He also worked in mathematical analysis, for which he was awarded the Bôcher Memorial Prize by the American Mathematical Society in 1964.

A short discussion of the Axiom of Choice is below the fold, with a sequel on the Continuum Hypothesis in the pipeline.