Over on G+, Alexander Kruel pointed me towards an article by Alex Knapp entitled “Why your brain isn’t a computer”.

The point of this post is not to argue whether it is or it isn’t, but to draw attention to a salient point, which I think Alex waves away rather too quickly (and which in any case is important and interesting).

One might reason that there are plenty of different types of ‘computation’ around these days: ordinary computer programs, embedded systems, neural nets, self-modifying code, and so on. So, with all this variety, why should we expect that a human brain, being a neurochemical network, should fall into the same computational category as a laptop? Might it not simply be that the two have different capabilities? As Alex argues, *“Electric circuits simply function differently than electrochemical ones”*.

The problem with this argument is that it overlooks the great robustness of the notion of computability, specifically the Church-Turing thesis.

A bit of history: in the 1930s, Alan Turing was investigating the capabilities of *Turing machines*, funny little devices which creep along a ticker-tape, and respond to the symbols they find there. Meanwhile over in the US, Alonzo Church was exploring the semantics of a formal system he had developed, called *lambda calculus*.
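To make the model concrete: a Turing machine is nothing but a finite transition table acting on an unbounded tape. A toy interpreter (my own illustration, with a made-up unary “successor” machine, not anything from the article) fits in a dozen lines:

```python
# A minimal Turing machine interpreter: a finite state table plus an
# unbounded tape is all the model requires. (Toy illustration only;
# the machine and names here are made up for this post.)

def run(program, tape, state="start", head=0, blank="_", steps=1000):
    cells = dict(enumerate(tape))          # sparse tape: position -> symbol
    for _ in range(steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = program[(state, symbol)]
        cells[head] = write
        head += {"R": 1, "L": -1}[move]
    return "".join(cells.get(i, blank) for i in range(min(cells), max(cells) + 1))

# Successor in unary: creep right past the 1s, append one more 1, halt.
succ = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}

print(run(succ, "111"))  # → 1111
```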

On first sight, the two topics appear to have little in common. But when the two men encountered each other’s work, they quickly realised something unexpected and profoundly important: that anything which can be expressed in lambda calculus can also be computed by a Turing machine, and vice versa. Shortly afterwards, a third approach known as *recursion* was thrown into the mix. Again, it turned out that anything recursive is Turing-computable, and vice versa.

This leads to the assertion we know as the *Church-Turing thesis:* that a process which is computable by any means whatsoever, must also be computable by a Turing machine.

It is important to stress that the Church-Turing thesis has good experimental support. Every computational system we know of obeys it: cellular automata, neural networks, Post-tag systems, logic circuits, genetic algorithms, string rewriting systems, even quantum computers*. Anything that any of them can do can (in principle) be done by conventional computational means.
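As a small illustration of this mutual emulation (my own toy example, not from the article): Rule 110, a one-dimensional cellular automaton known to be Turing-complete, can itself be simulated by a few lines of ordinary code.

```python
# Rule 110, a one-dimensional cellular automaton proved Turing-complete,
# emulated by conventional code: one concrete instance of computational
# systems simulating each other. (Toy sketch, not from the article.)

RULE = 110

def step(cells):
    n = len(cells)
    out = []
    for i in range(n):
        # Each cell looks at its left neighbour, itself, and its right
        # neighbour (wrapping at the edges) to pick its next state.
        pattern = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((RULE >> pattern) & 1)
    return out

cells = [0] * 31 + [1]     # start with a single live cell
for _ in range(10):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```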

So when Alex comments that “the brain itself isn’t structured like a Turing machine”, the obvious response is, “well, no, and neither are lambda calculus, cellular automata, and the rest”. (Come to think of it my phone doesn’t much look like a Turing machine either.)

The ‘dualism’ which distinguishes software from hardware (which Alex argues fails for the human brain), is not something built in from the outset. Rather it emerges from the deep, non-obvious fact that computational systems beyond a certain complexity can all emulate each other.

Needless to say, there has been no shortage of people claiming to have developed systems of different kinds which go ‘Beyond the Turing Limit’. (See Martin Davis’s paper on *The Myth of Hypercomputation*.) And who knows, maybe our brain embodies such a process. (I have my doubts, but if we’re going to find such a system anywhere, the brain is certainly an obvious place to look.)

The bottom line here is that if you don’t want to accept that

0) The human mind is computable

then I’d say you have three positions open to you:

1) It requires an extra metaphysical ingredient;

2) It’s a hypercomputer which violates the Church-Turing thesis;

3) It relies in an essential way on a non-computable process, such as some inherent element of randomness.

Personally I’d order these 0, 3, 1, 2, in order of likelihood. (At the same time, I’d say talk of reverse-engineering the human brain is like a toddler planning a manned expedition to Mars. How about we concentrate on crossing the room without falling over first?)

*Quantum computers may be able to compute certain things quicker than conventional ones, but they won’t be able to compute essentially different things.

If the laws of physics we know are about right, the brain cannot be exactly simulated by a Turing machine, for a number of reasons.

First, even ignoring quantum mechanics, the state of the brain at any time cannot be exactly described by a finite word in a finite alphabet: physics makes use of the continuum.

Second, if we include quantum mechanics the first issue persists, but we also have to decide whether to treat quantum mechanics deterministically via Schrödinger’s equation (so that a brain with a well-specified mental state now will evolve into a superposition of mental states in the future) or keep projecting down to a specific randomly chosen ‘branch’ in which the brain has a well-specified mental state. (Of course, nobody knows enough neurobiology to do the latter: we don’t know which quantum states correspond to ‘well-specified mental states’, and the whole idea could turn out to be ambiguous.)

There is however a theory of computable functions from the real numbers to the real numbers, from Hilbert spaces to Hilbert spaces, etc. – this is covered in the book by Pour-El and Richards. In my undergrad thesis I showed that Schrödinger’s equation for charged point particles gives a way of evolving quantum states that’s computable: you get a computable function from real numbers (times) to unitary operators. Computable functions of this sort can be approximately computed by Turing machines, so one just needs to decide what approximation is ‘good enough’.
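In that spirit, a ‘computable real’ can be taken to be a program which, given any rational tolerance, returns a rational within that tolerance of the true value. A minimal sketch (my own illustration of the definition, not code from the thesis mentioned above), using e = Σ 1/n!:

```python
# A computable real number, in the spirit of Pour-El and Richards:
# a program that, handed any tolerance, returns a rational within
# that tolerance of the true value. Here the value is e = sum 1/n!.
# (Illustrative sketch of the definition only.)

from fractions import Fraction

def approx_e(eps: Fraction) -> Fraction:
    total, term, n = Fraction(0), Fraction(1), 0
    while True:
        total += term            # add 1/n!
        term /= n + 1            # next term, 1/(n+1)!
        n += 1
        # everything not yet added is smaller than 2 * term,
        # so once that bound beats eps we may stop
        if 2 * term < eps:
            return total

print(approx_e(Fraction(1, 10**6)))   # a rational within 1e-6 of e
```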

On the other hand, we’re nowhere near being able to simulate the brain in this detail, tracking the quantum state of all its elementary particles… so in practice we’d need to make a vastly coarser approximation and hope that the result acts enough like a brain to pass the Turing test.

Oh, but there’s the third objection. Brains are connected to bodies, and bodies are part of the world – a lone brain unconnected to a body, or a lone body without a world, is not much like an actual person. So unlike a classic Turing machine, we’re talking about an ‘open system’, one that interacts with its environment. And that adds extra subtleties.

In particular, for open systems, pure states evolve to mixed states. In other words, unless we want to incorporate the entire universe in our model, the portion of the universe we model will evolve in a somewhat random way.
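To see what ‘pure states evolve to mixed states’ means concretely, here is a small toy example of my own (not from the comment above): a qubit pure state fed through a depolarizing channel, whose purity Tr(ρ²) falls below 1 as the environment scrambles it.

```python
# A qubit coupled to an unobserved environment: under a depolarizing
# channel the pure state |0><0| evolves into a mixed state, and its
# purity Tr(rho^2) drops below 1. (Toy sketch; 2x2 matrices as nested
# lists, no libraries needed.)

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def purity(rho):
    # Tr(rho^2): equals 1 exactly when the state is pure.
    return sum(matmul(rho, rho)[i][i] for i in range(2)).real

def depolarize(rho, p):
    # rho -> (1-p) rho + p * I/2 : with probability p the environment
    # replaces the qubit with the maximally mixed state.
    mixed = [[0.5, 0], [0, 0.5]]
    return [[(1 - p) * rho[i][j] + p * mixed[i][j] for j in range(2)]
            for i in range(2)]

rho = [[1, 0], [0, 0]]          # the pure state |0><0|, purity 1
for step in range(3):
    print(f"step {step}: purity = {purity(rho):.3f}")
    rho = depolarize(rho, 0.2)
```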

So I’d say the human brain, or body, does some random things. A lot of small random fluctuations don’t amplify to the point of being noticeable, but my guess is that a lot do. (It happens with the weather.) It’s up to you whether you want to consider this an ‘essential’ part of the human mind, or something you’d be willing to throw out in a simulation.

Well, let’s not tangle two distinct questions: (1) is it possible to build a ‘computer’, inequivalent to a Turing machine, by leveraging the continuum (e.g. QM), and (2) can the human brain be “adequately” simulated by a Turing machine?

Personally, I believe the answer to both is yes. It is hard to imagine how a sufficiently powerful Turing machine would not be able to simulate the neurons or the biochemical reactions of a human brain. We can approximate real-valued quantities with fractions, and we can always get arbitrarily close. Likewise, no matter what the behavior, I believe that there is a Turing machine that can approximate it arbitrarily closely. (Well, clearly, anything that is strongly chaotic (positive Lyapunov exponent, etc.) will be sensitive to initial conditions, and a simulation of any specific trajectory will eventually diverge. But rather than focusing on a specific instance, we should focus on an ensemble: every differential equation can be approximated arbitrarily well by a set(?) of Turing machines. Insofar as the human mind is a mass of differential equations…)
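The sensitivity being alluded to is easy to exhibit (my own toy illustration, using the logistic map rather than anything brain-related):

```python
# Sensitivity to initial conditions in the logistic map x -> 4x(1-x):
# two starting points differing by 1e-12 are driven apart at a rate set
# by the positive Lyapunov exponent, which is why tracking one specific
# chaotic trajectory exactly is hopeless even though each individual
# step is trivially computable. (Toy illustration.)

def logistic(x, steps):
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
    return x

a, b = 0.3, 0.3 + 1e-12
for n in (0, 10, 20, 30):
    sep = abs(logistic(a, n) - logistic(b, n))
    print(f"n={n:2d}  separation = {sep:.2e}")
```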

Returning to question (1): can there be, in nature, ‘hypercomputers’ or ‘oracles’ that leverage the continuum in some way, to solve problems that Turing machines cannot? Clearly, the concept of a “quantum finite automaton”, or more generally a “geometric finite automaton”, is tractable, and is studied, and such automata can recognize languages that ordinary automata cannot. (Which reminds me… I had better start reading John Baez’s tract on this, eh? Once again, sadly, he’s more of an expert than I am…) By analogy, I expect that there are “quantum Turing machines” that can perform computations that ordinary Turing machines cannot. (Whether we can prepare them with initial states in such a way as to do something useful… is a different question.)

Let’s assume that (1) is true. Then we have two more questions: (3) is the space of Turing machines “dense” in this space of hypercomputers (e.g. the way that the rationals are dense in the reals)? And (4) would a description of the human mind be simpler/easier/faster by appealing to such hypermachines, or not?

My current opinion would be (3) yes and (4) no. I think the Penrose hypothesis can perhaps be interpreted as answering yes to (4).

A problem with social media is the duplication of conversations… thanks to John Baez for taking the trouble to repeat himself here and on G+.

If I put a link to the G+ discussion, I think that people here, even those not signed up, should be able to read it:
