Gregory Chaitin’s Meta Maths is a short book, which meanders pleasantly through various topics: Biological Information (DNA), LISP, several proofs of the infinity of the primes, and the question of whether the universe is discrete or continuous, with accompanying (slightly strange[1]) arguments against the real numbers. Additionally, there’s a good amount of entertaining and idiosyncratic reflection on science, philosophy, and life (“I only really feel alive when I’m working on a new idea, when I’m making love to a woman…. or when I’m going up a mountain!”)
Meta Maths also covers that fascinating tie-in between logic and number theory, Hilbert’s 10th problem: whether there is a procedure which can decide whether an arbitrary Diophantine equation (a polynomial equation with integer coefficients) has integer solutions. Some really ingenious mathematics tells us that this is equivalent to the Halting Problem, which asks whether there’s a procedure which can tell whether an arbitrary computer program will halt, or continue to run for ever. The answer in both cases is no.
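The impossibility in the Halting Problem case can be seen via a famous self-referential trick. Here is a minimal sketch in Python, where halts() is a purely hypothetical oracle (no such function exists, which is exactly the point):

```python
def halts(program) -> bool:
    """Hypothetical oracle: True iff program() would eventually halt.
    No such function can exist: contradiction() below defeats any
    candidate implementation."""
    raise NotImplementedError

def contradiction():
    if halts(contradiction):
        while True:   # the oracle says we halt, so loop forever...
            pass
    # ...and if the oracle says we loop forever, halt immediately.
```

Whatever halts(contradiction) returned, it would be wrong, so no halting oracle can be written.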
The meat of the story takes us from Gödel’s Incompleteness, to Turing’s Uncomputability (where the Halting Problem arises), and on to Chaitin’s home turf: Algorithmic Information Theory. Each of these areas subsumes the last, in exhibiting in ever more dramatic style the gaping unknowability at the heart of mathematics. The overwhelming majority of real numbers are completely inaccessible to us (other than through non-constructive existence results, of which Chaitin disapproves, I suppose).
The book is full of diagrams like this:
Axioms → Formal System → Theorems
Program → Computer → Output
DNA → Pregnancy → Organism
Ideas → Mind of God → Universe
In each case the first stage consists of compressed information, the middle stage is where it is unpacked, and the final stage is the result.
Chaitin’s “dangerous idea” is that almost all information (at least the sort of information which can be encoded in binary) is actually incompressible. That is to say, it cannot be deduced from any axioms simpler than itself. It is a striking thought, and it rams home with some force that Gödelian gaps in our mathematics are the rule, not the exception. If you pick a real number at random, then with probability 1 you have no way of writing it out, or naming it under any system you can think of, or indeed referring to it at all without hopeless ambiguity.
The basic definitions of Algorithmic Information Theory are perfectly natural: you define the complexity of a piece of binary information (or anything[2], such as a mathematical theorem, which can be encoded in binary) to be the length of the shortest program which computes it. (In fact the program must be “self-delimiting”; this is the book’s most technical matter.) A real number is defined to be random[3] if the complexity of its first N digits is at least N (give or take a bounded error term).
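To get a feel for the definition (though not to compute it: the true complexity is uncomputable, since you can never rule out some cleverer, shorter program), here is a toy sketch using an off-the-shelf compressor as a crude, computable stand-in for “length of the shortest description”:

```python
# A crude proxy for algorithmic complexity: zlib's compressed length is
# an upper bound on "shortest description", in the same spirit if not
# the same theory. True program-length complexity is uncomputable.
import os
import zlib

def compressed_length(data: bytes) -> int:
    """Length in bytes of the zlib-compressed data."""
    return len(zlib.compress(data, level=9))

structured = b"01" * 5000        # 10,000 bytes of pure pattern
random_ish = os.urandom(10000)   # 10,000 bytes from the OS's randomness

print(compressed_length(structured))  # tiny: the pattern has a short description
print(compressed_length(random_ish))  # about 10,000: no shorter description found
```

The patterned string collapses to a few dozen bytes; the random one doesn’t compress at all, which is the (much weaker, practical) shadow of Chaitin’s incompressibility.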
The peg off which the book hangs is the number Ω (“sometimes called Chaitin’s number”, he helpfully adds): the halting probability, that is, the probability that a randomly generated self-delimiting program eventually halts.
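In symbols (this is the standard definition, though the book keeps the formalism light):

Ω = sum over all halting programs p of 2^(-|p|),

where |p| is the length of p in bits. It is the self-delimiting (prefix-free) requirement which guarantees, via the Kraft inequality, that this sum converges to something between 0 and 1, so that Ω really is a probability.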
I may be wrong, but it’s not obvious that you can really do much with Ω (so comparisons with other fundamental constants are surely hyperbole). It’s just a number, whose interest is that it is a random real; yet it is strikingly exceptional among random reals, since we can pick it out uniquely (in some sense[4]).
A final word about Chaitin’s punchy prose style! Lots of bold! And exclamation marks!! Loads!!! And more sex than most maths books!
[1] It wasn’t at all clear to me what he means when he says he’s “against” them, especially as much of the material in the book rather depends on their existence.
[2] How long can it be before the “irreducible complexity” berks mangle this stuff?
[3] Be warned, the world abounds with different definitions of randomness.
[4] Of course, Ω is incomputable, and we can only access it via something else we can’t know, namely solutions to the Halting Problem. But somehow this seems less unsatisfying when you translate it into the language of Diophantine equations: there is one such equation Q(n, x1, x2, …) which has infinitely many solutions if the nth digit of the binary expansion of Ω is 1, and finitely many solutions if it’s 0.
Hi Elwes,
Thanks for the exciting review. I thought I’d leave a comment, since this is looking rather lonely without one…
The book does sound quite intriguing and nutritious; hopefully I’ll get time to read such things one day.
Randomness is certainly very widespread, even in unexpected places. I gave some examples in a lecture today (mainly just for fun, to scare the hell out of my students… it seemed to work…). There are various problems where one wants to construct an analytic function with various weird, horrible properties; one such method is to choose a particular sequence (a_n), and then consider random power series:
sum_{n=0}^infty u_n a_n z^n,
where u_0, u_1, u_2, … are independent, identically distributed random variables taking values +1 and -1 with probability 1/2.
The funny thing is that, even though no-one knows how to find even one single specific pattern of +/- signs giving these horrible results in very simple-looking cases (I’m not sure exactly, but I think even things like a_n = 1/(n+1), and maybe even a_n = 1, fall into this category), a random pattern of +/- signs will work with probability one. Human mathematics is highly ordered and inherently non-random, I think.
A fun example from Analysis…!
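Just to make the object concrete, here is a toy sample of the construction with a_n = 1 (my own illustration: it shows the random series itself, not the horrible behaviour, which lives out on the boundary |z| = 1):

```python
# Sample random +/- sign patterns u_n and evaluate partial sums of
# sum_n u_n * z^n at a point inside the unit disc, where the series
# converges absolutely regardless of the signs.
import random

def random_partial_sum(z: complex, n_terms: int) -> complex:
    """Partial sum of sum_n u_n * z^n with independent fair signs u_n."""
    total = 0j
    power = 1 + 0j   # z^0
    for _ in range(n_terms):
        u = random.choice((+1, -1))
        total += u * power
        power *= z
    return total

z = 0.9 + 0j
for trial in range(3):
    print(random_partial_sum(z, 2000))   # a different limit each run
```

Each run is one draw of the sign pattern; almost every draw is “horrible” in the sense above, even though, as noted, exhibiting a single explicit pattern with that property is apparently hard.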
Incidentally, Terry Tao on his blog seems to think that one of the main weaknesses of mathematicians is a lack of understanding of pseudo-random behaviour, and that this is why the Navier-Stokes equations are so difficult, which maybe ties in a little with the above thoughts on randomness and the inherently horrible difficulty of maths (although it’s dangerous for me to paraphrase him, of course, in case I’ve misunderstood the point completely…)
Some crazy thoughts of my own: what is the probability that a mathematical statement chosen “at random” is a genuine Theorem? (Of course, it all depends on what measure you take, but since the set of all possible statements is countable, there are no measure-theoretic difficulties.) Is this number incomputable and “random”, similarly to Omega above? It sounds vaguely similar to Halting Problem stuff.
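For what it’s worth, here is a decidable toy version of that question: in propositional logic, “theoremhood” is just being a tautology, which a truth table can check, so one can sample random formulas and estimate the probability directly. The grammar and measure below are my own arbitrary choices:

```python
# Estimate Prob(random propositional formula is a tautology) by
# Monte Carlo. The distribution over formulas is an arbitrary choice;
# the point is only that the question makes sense.
import itertools
import random

VARS = ["p", "q", "r"]

def random_formula(depth: int):
    """A random formula as a nested tuple; leaves are variable names."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(VARS)
    op = random.choice(["not", "and", "or", "implies"])
    if op == "not":
        return ("not", random_formula(depth - 1))
    return (op, random_formula(depth - 1), random_formula(depth - 1))

def evaluate(f, env) -> bool:
    if isinstance(f, str):
        return env[f]
    if f[0] == "not":
        return not evaluate(f[1], env)
    a, b = evaluate(f[1], env), evaluate(f[2], env)
    return {"and": a and b, "or": a or b, "implies": (not a) or b}[f[0]]

def is_tautology(f) -> bool:
    """True iff f holds under all 2^len(VARS) truth assignments."""
    return all(evaluate(f, dict(zip(VARS, values)))
               for values in itertools.product([True, False], repeat=len(VARS)))

samples = 10_000
hits = sum(is_tautology(random_formula(depth=4)) for _ in range(samples))
print(hits / samples)   # a rough estimate of the "theorem probability"
```

In a strong enough system, of course, no such brute-force check exists, which is where the Omega-like incomputability would come in.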
It’s clear, though (isn’t it?), that the real numbers are totally inaccessible, just because they’re uncountable, but the total set of all statements and unambiguous formulae (not just Theorems!) is countable. I suppose this also ties in with the random power series example above – the set of all +/- sign patterns is uncountable (and actually canonically isomorphic as a measure space to the interval [0,1] with Lebesgue measure), so we’ve no hope of ever understanding it properly.
Addition to my previous comment:
My knowledge of Turing machines is very poor, but I now realise my question about the probability of random statements being theorems seems to be roughly equivalent to similar questions about the Omega number (though of course the actual numerical values are perhaps not so easily related):
Given any finite statement, you can write a computer programme which calculates all possible theorems by brute force, according to some specific order (simply applying every axiom one by one to see what happens, in a “breadth first” search), stopping only if it proves the given statement. This is then a Turing machine which halts if and only if a proof of the statement is found. (If there are countably infinitely many axioms, i.e. some kind of “axiom scheme” or whatever, it needs a little more work, but is still possible, I think.)
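As a deliberately tiny illustration of that brute-force prover, here is a sketch using Hofstadter’s MU puzzle as the stand-in formal system: the axiom and rewrite rules are just an example I’ve substituted in, but the halts-iff-provable structure is the point:

```python
# Breadth-first proof search: enumerate everything derivable from the
# axiom, halting only when the target statement is reached. For an
# underivable target the search runs forever -- the Halting Problem shape.
from collections import deque

AXIOM = "MI"

def successors(s):
    """The four rewrite rules of Hofstadter's MU puzzle."""
    if s.endswith("I"):
        yield s + "U"                      # xI -> xIU
    yield s + s[1:]                        # Mx -> Mxx
    for i in range(len(s) - 2):
        if s[i:i + 3] == "III":
            yield s[:i] + "U" + s[i + 3:]  # III -> U
    for i in range(len(s) - 1):
        if s[i:i + 2] == "UU":
            yield s[:i] + s[i + 2:]        # UU -> (nothing)

def prove(target: str) -> bool:
    """Halt with True iff target is derivable; otherwise search forever."""
    seen = {AXIOM}
    queue = deque([AXIOM])
    while queue:
        s = queue.popleft()
        if s == target:
            return True
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return False   # unreachable here: the search space is infinite

print(prove("MUI"))   # True: MI -> MII -> MIIII -> MUI
```

Calling prove("MU") would, famously, never return: from inside the search you can’t tell the difference between “not yet” and “never”.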
Conversely, given any Turing machine, the statement “this Turing machine will halt” is well-defined, and we are simply asking whether this statement is true.
Of course this is full of sloppiness and inaccuracy; I’ll leave it to you logicians to sort all the definitions out (what exactly does “self-delimiting” mean in this context? What about undecidable statements? etc. etc.)
Have fun and nutrition!
I believe that people are actually trying to calculate Ω. It was a long time ago, but I believe that the gist of it was that you write down all the minimal-length programs of length 1 and run through them all until you know how many halt (and hence the probability of any program of length 1 halting). You then do this for all minimal-length programs of length 2, 3, etc., and hope that your probability will converge on Ω, which the authors claimed it did (or at least would). Of course, one of (if not the) toughest bits is deciding whether or not a given program is equivalent to a shorter one, in order to exclude it. I believe it was in New Scientist, so you may be able to find it somewhere.
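If it helps, the structure of such a computation is roughly the sketch below. Here halts_within() is a hypothetical simulator for whatever self-delimiting machine one fixes (the real computation needed a carefully constructed universal machine and far more care than this); each program observed to halt contributes its weight 2^(-length), and the running total creeps up towards Ω from below:

```python
# Approximate Omega from below by dovetailing over (program, time budget)
# pairs. halts_within(p, t) is HYPOTHETICAL: "does program p halt within
# t steps on the chosen universal self-delimiting machine?"
from itertools import product

def halts_within(program: str, steps: int) -> bool:
    """Hypothetical simulator; supply one for your chosen machine."""
    raise NotImplementedError

def omega_estimate(max_len: int, max_steps: int) -> float:
    total = 0.0
    halted = set()
    for t in range(1, max_steps + 1):           # increasing time budgets...
        for n in range(1, max_len + 1):         # ...over all short programs
            for bits in product("01", repeat=n):
                p = "".join(bits)
                if p not in halted and halts_within(p, t):
                    halted.add(p)
                    total += 2.0 ** -len(p)     # Kraft weight of program p
    # For a genuinely self-delimiting machine only prefix-free strings
    # count as programs, which is what keeps the total <= 1.
    return total   # a lower bound for Omega
```

The incomputability bites because no computable bound tells you how close the running total is: you can never be sure that some further, longer-running program isn’t still going to halt and bump it up.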