The point of this post is not to argue whether it is or it isn’t, but to draw attention to a salient point, which I think Alex waves away rather too quickly (and which in any case is important and interesting).
One might reason that there are plenty of different types of ‘computation’ around these days: ordinary computer programs, embedded systems, neural nets, self-modifying code, and so on. So, with all this variety, why should we expect that a human brain, being a neurochemical network, should fall into the same computational category as a laptop? Might it not simply be that the two have different capabilities? As Alex argues, “Electric circuits simply function differently than electrochemical ones”.
The problem with this argument is that it overlooks the great robustness of the notion of computability, specifically the Church-Turing thesis.
A bit of history: in the 1930s, Alan Turing was investigating the capabilities of Turing machines, funny little devices which creep along a ticker-tape, and respond to the symbols they find there. Meanwhile over in the US, Alonzo Church was exploring the semantics of a formal system he had developed, called lambda calculus.
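To make the idea concrete, here is a minimal sketch of a Turing machine in Python. The rule format, state names, and the unary-increment task are my own illustrative choices, not anything from the historical machines; the point is just how little machinery is involved: a tape, a head position, a state, and a lookup table.

```python
# A minimal Turing machine: rules map (state, symbol) -> (new state, write, move).
# This tiny example appends a '1' to a unary number, i.e. it computes n + 1.
def run_turing_machine(rules, tape, state="start", blank="_", halt="halt"):
    tape = list(tape)
    pos = 0
    while state != halt:
        symbol = tape[pos] if 0 <= pos < len(tape) else blank
        state, write, move = rules[(state, symbol)]
        if pos >= len(tape):
            tape.append(blank)  # grow the tape on demand
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape).strip("_")

# Rules: creep right over the 1s; on the first blank, write a 1 and halt.
rules = {
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("halt", "1", "R"),
}

print(run_turing_machine(rules, "111"))  # 1111 -- unary 3 becomes unary 4
```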
On first sight, the two topics appear to have little in common. But when the two men encountered each other’s work, they quickly realised something unexpected and profoundly important: that anything which can be expressed in lambda calculus can also be computed by a Turing machine, and vice versa. Shortly afterwards, a third approach, the theory of recursive functions, was thrown into the mix. Again, it turned out that anything recursively computable is Turing-computable, and vice versa.
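Lambda calculus looks nothing like a machine, yet it computes too. A quick sketch, using Python lambdas to stand in for lambda-calculus terms (the Church-numeral encoding is standard; the helper `to_int` is my own addition for display):

```python
# Church numerals: the number n is the function that applies f to x, n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

# Decode a Church numeral back into an ordinary int, just for display.
def to_int(n):
    return n(lambda k: k + 1)(0)

two   = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))  # 5
```

Nothing here but function application, and yet arithmetic falls out, which is some flavour of why the equivalence with Turing machines, once noticed, is so striking.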
This leads to the assertion we know as the Church-Turing thesis: that a process which is computable by any means whatsoever, must also be computable by a Turing machine.
It is important to stress that the Church-Turing thesis has good experimental support. Every computational system we know of obeys it: cellular automata, neural networks, Post-tag systems, logic circuits, genetic algorithms, string rewriting systems, even quantum computers*. Anything that any of them can do can (in principle) be done by conventional computational means.
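Cellular automata make a nice example from that list: the elementary automaton known as Rule 110, in which each cell’s next value depends only on itself and its two neighbours, is known to be Turing-complete. A sketch of one update step (the rule-table trick of reading bits out of the number 110 is standard; the test configuration is arbitrary):

```python
# One step of an elementary cellular automaton on a circular row of cells.
# The neighbourhood (left, centre, right) is read as a 3-bit index into the
# rule number's binary expansion: rule 110 = 0b01101110.
def step(cells, rule=110):
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single live cell and watch the pattern grow.
row = [0] * 15 + [1] + [0] * 15
for _ in range(5):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

Despite the almost comical simplicity of the update rule, this system can (with suitable encodings) simulate any Turing machine, which is exactly the kind of evidence the thesis rests on.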
So when Alex comments that “the brain itself isn’t structured like a Turing machine”, the obvious response is, “well, no, and neither are lambda calculus, cellular automata, and the rest”. (Come to think of it, my phone doesn’t much look like a Turing machine either.)
The ‘dualism’ which distinguishes software from hardware (which Alex argues fails for the human brain), is not something built in from the outset. Rather it emerges from the deep, non-obvious fact that computational systems beyond a certain complexity can all emulate each other.
Needless to say, there has been no shortage of people claiming to have developed systems of different kinds which go ‘Beyond the Turing Limit’. (See Martin Davis’s paper “The Myth of Hypercomputation”.) And who knows, maybe our brain embodies such a process. (I have my doubts, but if we’re going to find such a system anywhere, the brain is certainly an obvious place to look.)
The bottom line here is that if you don’t want to accept that
0) The human mind is computable
then I’d say you have three positions open to you:
1) It requires an extra metaphysical ingredient;
2) It’s a hypercomputer which violates the Church-Turing thesis;
3) It relies in an essential way on a non-computable process, such as some inherent element of genuine randomness.
Personally I’d order these 0312, from most to least likely. (At the same time, I’d say talk of reverse-engineering the human brain is like a toddler planning a manned expedition to Mars. How about we concentrate on crossing the room without falling over first?)
*Quantum computers may be able to compute certain things quicker than conventional ones, but they won’t be able to compute essentially different things.