4 comments on “AI and the Church-Turing Thesis”

  1. If the laws of physics we know are about right, the brain cannot be exactly simulated by a Turing machine, for a number of reasons.

    First, even ignoring quantum mechanics, the state of the brain at any time cannot be exactly described by a finite word in a finite alphabet: physics makes use of the continuum.

    Second, if we include quantum mechanics the first issue persists, but we also have to decide whether to treat quantum mechanics deterministically via Schrodinger’s equation (so that a brain with a well-specified mental state now will evolve into a superposition of mental states in the future) or keep projecting down to a specific randomly chosen ‘branch’ in which the brain has a well-specified mental state. (Of course, nobody knows enough neurobiology to do the latter: we don’t know which quantum states correspond to ‘well-specified mental states’, and the whole idea could turn out to be ambiguous.)

    There is however a theory of computable functions from the real numbers to the real numbers, from Hilbert spaces to Hilbert spaces, and so on – this is covered in the book by Pour-El and Richards. In my undergrad thesis I showed that Schrodinger’s equation for charged point particles gives a way of evolving quantum states that’s computable: you get a computable function from real numbers (times) to unitary operators. Computable functions of this sort can be approximately computed by Turing machines, so one just needs to decide what approximation is ‘good enough’.
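    To make the ‘approximately computed’ point concrete, here is a toy finite-dimensional sketch (an illustration only, not the Pour-El–Richards construction, which deals with genuinely infinite-dimensional spaces): given a Hamiltonian H, a time t and a tolerance eps, a terminating procedure returns a matrix within eps of the unitary exp(-iHt), using nothing a Turing machine could not do with rational arithmetic. The function name and setup below are hypothetical, chosen just for the example.

    ```python
    # Toy sketch: approximate U(t) = exp(-i*H*t) for a finite-dimensional
    # Hamiltonian H by a truncated Taylor series, stopping once a rigorous
    # tail bound guarantees the error is below eps.  Floating point stands in
    # for the rational arithmetic an actual Turing machine would use.
    import math
    import numpy as np

    def approx_unitary(H, t, eps):
        """Return a matrix within eps (in operator norm) of exp(-i*H*t)."""
        dim = H.shape[0]
        U = np.eye(dim, dtype=complex)
        term = np.eye(dim, dtype=complex)
        b = np.linalg.norm(H, 2) * abs(t)      # ||Ht|| controls the tail
        k = 0
        while True:
            k += 1
            term = term @ (-1j * t * H) / k    # k-th Taylor term
            U = U + term
            # The neglected terms sum to at most b**(k+1)/(k+1)! * e**b.
            if b ** (k + 1) / math.factorial(k + 1) * math.exp(b) < eps:
                return U

    # Example: a single spin-1/2 with a Pauli-X Hamiltonian.
    H = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
    U = approx_unitary(H, t=1.0, eps=1e-12)
    print(np.round(U @ U.conj().T, 10))        # numerically the identity
    ```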

    On the other hand, we’re nowhere near being able to simulate the brain in this detail, tracking the quantum state of all its elementary particles… so in practice we’d need to make a vastly coarser approximation and hope that the result acts enough like a brain to pass the Turing test.

    Oh, but there’s a third objection. Brains are connected to bodies, and bodies are part of the world – a lone brain unconnected to a body, or a lone body without a world, is not much like an actual person. So unlike a classic Turing machine, we’re talking about an ‘open system’, one that interacts with its environment. And that adds extra subtleties.

    In particular, for open systems, pure states evolve to mixed states. In other words, unless we want to incorporate the entire universe in our model, the portion of the universe we model will evolve in a somewhat random way.
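    A minimal sketch of that statement, as a two-qubit toy model (nothing brain-specific is assumed): a ‘system’ qubit interacts with a single ‘environment’ qubit; the joint state stays pure, but tracing out the environment leaves the system in a mixed state.

    ```python
    # Two-qubit toy model: entangle a "system" qubit with an "environment"
    # qubit, then trace out the environment.  The system started pure but its
    # reduced state is maximally mixed.
    import numpy as np

    ket0 = np.array([1, 0], dtype=complex)

    # System and environment both start in the pure state |0>.
    psi = np.kron(ket0, ket0)

    # Hadamard on the system, then a CNOT, produce the Bell state (|00>+|11>)/sqrt(2).
    Had = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=complex)
    psi = CNOT @ np.kron(Had, np.eye(2)) @ psi

    # Density matrix of the whole (still pure), then partial trace over the environment.
    rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
    rho_system = np.trace(rho, axis1=1, axis2=3)

    print(np.round(rho_system, 3))                               # 0.5 * identity
    print("purity:", np.real(np.trace(rho_system @ rho_system))) # 0.5 < 1
    ```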

    So I’d say the human brain, or body, does some random things. A lot of small random fluctuations don’t amplify to the point of being noticeable, but my guess is that a lot do. (It happens with the weather.) It’s up to you whether you want to consider this an ‘essential’ part of the human mind, or something you’d be willing to throw out in a simulation.
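    For what it’s worth, the amplification claim is easy to illustrate with any chaotic map; the logistic map below is just a standard stand-in (nothing specific to brains or weather is assumed). A perturbation of size 10^-12 reaches order one within a few dozen steps.

    ```python
    # Two trajectories of the chaotic logistic map x -> 4x(1-x) that start
    # 1e-12 apart.  The separation roughly doubles each step (Lyapunov
    # exponent ln 2), so it reaches order one after about 40 iterations.
    r = 4.0
    x, y = 0.3, 0.3 + 1e-12

    for step in range(1, 61):
        x, y = r * x * (1 - x), r * y * (1 - y)
        if step % 10 == 0:
            print(f"step {step:2d}: |x - y| = {abs(x - y):.3e}")
    ```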

  2. Well, let’s not tangle two distinct questions: (1) is it possible to build a ‘computer’, inequivalent to a Turing machine, by leveraging the continuum (e.g. QM), and (2) can the human brain be “adequately” simulated by a Turing machine?

    Personally, I believe the answer to both is yes. It is hard to imagine how a sufficiently powerful Turing machine would not be able to simulate the neurons or the biochemical reactions of a human brain. We can approximate real-valued quantities with fractions, and we can always get arbitrarily close. Likewise, no matter what the behavior, I believe that there is a Turing machine that can approximate it arbitrarily closely. (Well, clearly, anything that is strongly chaotic, with a positive Lyapunov exponent and so on, will be sensitive to initial conditions and will diverge for any given input. But rather than focusing on a specific instance, we should focus on an ensemble: every differential equation can be approximated arbitrarily well by a set(?) of Turing machines. Insofar as the human mind is a mass of differential equations…)
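    To illustrate the ‘arbitrarily close’ claim with a deliberately generic sketch (plain numerical analysis, nothing about neurons is assumed): a finite, Turing-machine-style computation can integrate a differential equation with a step size chosen from the desired accuracy, and shrinking the step drives the error down. For chaotic systems the honest version of this statement holds over a fixed finite time horizon, or for ensembles, as noted above.

    ```python
    # Euler's method for dx/dt = -x with x(0) = 1, compared with the exact
    # solution e^(-t) at t = 1.  The error shrinks roughly in proportion to
    # the step size, so any desired accuracy is reachable in finite time.
    import math

    def euler(n_steps, t_final=1.0):
        dt = t_final / n_steps
        x = 1.0
        for _ in range(n_steps):
            x += dt * (-x)          # right-hand side of dx/dt = -x
        return x

    exact = math.exp(-1.0)
    for n in (10, 100, 1000, 10000):
        print(f"n = {n:5d}   error = {abs(euler(n) - exact):.2e}")
    ```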

    Returning to question (1): can there be, in nature, ‘hypercomputers’ or ‘oracles’ that leverage the continuum in some way, to solve problems that Turing machines cannot? Clearly, the concept of a “quantum finite automaton”, or more generally a “geometric finite automaton”, is tractable, and is studied, and can recognize languages that ordinary automata cannot. (Which reminds me… I had better start reading John Baez’s tract on this, eh? Once again, sadly, he’s more of an expert than I am…) By analogy, I expect that there are such ‘quantum Turing machines’ that can perform computations that ordinary Turing machines cannot. (Whether we can prepare them with initial states in such a way as to do something useful… is a different question.)

    Let’s assume that (1) is true. Then we have two more questions: (3) is the space of Turing machines “dense” in this space of hypercomputers (e.g. the way that the rationals are dense in the reals)? And (4) would a description of the human mind be simpler/easier/faster by appealing to such hypermachines, or not?

    My current opinion would be (3) yes and (4) no. I think perhaps the Penrose hypothesis can be interpreted as taking (4) to be yes.

  3. A problem with social media is the duplication of conversations… thanks to John Baez for taking the trouble to repeat himself here and on G+.

    If I put a link to the G+ discussion, I think that people here, even those not signed up, should be able to read it:
