Saturday, January 27, 2007

"Mind" -> On Turing Machines And Ghosts

In 1989, Sir Roger Penrose wrote the classic guide to the layman's understanding of Strong AI. The book was called The Emperor's New Mind. On the road to strong AI, he covers Turing machines, Gödel, Church's lambda calculus, Einstein's theories, and quantum mechanics. While science has marched forward, the book holds up remarkably well.

So what is Strong AI?

Strong AI vs. Weak AI is the real root of the question about intelligence. If you believe in Strong AI, then you are saying that a machine can become self-aware. Without going into "what is self-aware," let's just assume that everybody here knows what it is. (Although I have had certain friends who didn't seem to have strong AI at all, even though they claimed they were human.) As stated on the front side of this blog, I believe that humans are made up of three parts: body, mind, and soul. (By the way, the fact that the YMCA also calls this out has nothing to do with my blog!)

So, can a machine be self-aware? The first chapter opens by answering a question posed by David Hilbert, often called the Entscheidungsproblem. This problem asks whether there is a general procedure that can decide any mathematical question. In other words, is it always possible to find an answer?

Any schoolboy will say, "well, of course not." But on further examination, the boy will refine this to say, "well, maybe somebody could, but not me."

The whole of science is built on the idea that eventually, to any problem, we can find a solution. How did the universe come about? Science will answer "the Big Bang." How did life start? Science will answer "evolution." Now, science may not have all the answers today, but look how far it has come. So, can't it find all the answers...if not today, then some day?

This question was answered in three ways: by Gödel's incompleteness theorem, by Church's lambda calculus, and, more interesting for us today, by Turing.

Turing was the father of modern computer science. (A man rejected for his homosexuality, he received less than Christian treatment and left this earth at an early age.) To make a long story short, Turing proved that all computers that we have today are equivalent. Sometimes this is referred to as "Turing equivalent." So that Mac and that PC you have really are equivalent (not in speed, but in end result). There is no computer (given that you can have limitless storage) that cannot do what another computer can do with the right algorithms (think software). In EE talk, all hardware is a "state machine" (and normally all state machines are driven by a clock, although some architectures don't require this). Turing was also able to show that it is impossible to create a perfect algorithm (or program) that can answer every question about other programs. In particular, no algorithm can tell you, for every program and every input, whether that program will ever finish. This is the famous "halting problem," and it leads to paradoxes by virtue of the same diagonal argument Cantor used to show there are different sizes of infinity!
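
To see the flavor of Turing's argument, here is a minimal Python sketch. The halts() oracle is purely hypothetical; Turing's point is that no such general oracle can ever be written.

```python
# Hypothetical oracle: decides whether program(argument) ever finishes.
# Turing proved no general version of this function can exist.
def halts(program, argument):
    raise NotImplementedError("no general halting oracle exists")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about us.
    if halts(program, program):
        while True:      # oracle says we halt, so loop forever
            pass
    else:
        return           # oracle says we loop, so halt immediately

# Feeding paradox to itself is the contradiction: paradox(paradox)
# halts exactly when the oracle says it doesn't.
```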

Now, Turing started to think about the human mind. Turing came up with a very simple test: if a computer can fool a human into thinking that it is human, then how could we really say that it didn't have self-awareness? However, John Searle said that this wasn't enough. In today's terms, Searle said that such a computer could still be only weak AI.

Searle used the analogy of a room where you would be locked inside. You could have a list of instructions in English that you could follow when certain Chinese tiles were dumped into your room. Now, say somebody started dumping Chinese tiles into the room; if you had the right instructions in English, you could sort the tiles and send them back out another slot. Let's say that the English instructions told you how to make a joke out of the words, even though you didn't understand what you were doing. You wouldn't know what the Chinese words said, but to the person outside the room you would look like you understood Chinese, because they sent you tiles in Chinese and you ordered them into Chinese words. However, you really didn't understand them; you just followed a list of instructions. Again, this is weak AI, because you really don't understand Chinese. You aren't self-aware.
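
The whole trick is that the person inside is doing nothing but rule lookup. A toy sketch in Python (the rule book and phrases here are made up purely for illustration):

```python
# A toy "Chinese Room": the occupant mechanically matches incoming
# tiles against an English rule book and pushes the listed reply back
# out the slot, understanding nothing.
RULE_BOOK = {
    "你好吗": "我很好",   # hypothetical rule: this input maps to that reply
    "天气好": "是的",
}

def chinese_room(incoming_tiles):
    # Follow the instructions mechanically; no comprehension involved.
    return [RULE_BOOK.get(tile, "???") for tile in incoming_tiles]

# To the person outside, the room appears to "speak Chinese."
print(chinese_room(["你好吗", "天气好"]))
```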

So while Searle does a nice job of pointing out that fooling does not necessarily mean self-aware, he really didn't shut down the main idea behind Strong AI. Yes, Searle points out that it is possible to load an algorithm (software) that doesn't cause comprehension, but he hasn't proven that you couldn't load a different type of instruction that could eventually train the person inside the room to understand the Chinese tiles coming into the room. Searle solves a subset of the general problem, and therefore Searle cannot close the door on Strong AI.

Strong AI really points to the idea that if we could only learn to load the right algorithm into a computer, we could replicate the programming that we have in our own brain. So, people who believe in Strong AI would suggest that it is simply a matter of getting the human software into a computer. If you watch Ghost in the Shell, you will see this idea echoed over and over: people loading their programming into machines.

So where does Penrose come into the picture? Penrose starts digging into the nature of algorithms. He points out, through things like Gödel and Church's lambda calculus, that we cannot figure out whether all Turing machines really reach a completed state. On top of this, we dive into general "incompleteness": not only can we not determine a completed state for every Turing machine, we also find that various things in this universe (like the quantum model and relativistic models) never fully reach completion either. However, our brain deals with them just fine. Even though math is not a complete system (Gödel showed that any sufficiently rich formal system contains true statements it can never prove), we can use it with all of its problems. Thus Penrose says, "you know, I don't think that our brain really looks like a Turing machine at all!"

Some would say that Penrose then goes off the deep end by hypothesizing that our brains are actually a new type of computer: non-algorithmic. They are not state machines! In Penrose's world, since all Turing machines (and all computer architectures) are algorithmic because they are all Turing equivalent, our brains cannot be replicated by a Turing machine.

Now, if you watch any science fiction, or if you are an anime fan like myself, you will recognize that this is destructive to a common motif. Ghost in the Shell is one anime that uses the standard convention of Strong AI to a great extent. In this anime, computer programs can become self-aware. In the same way, humans can load their algorithms (or their conscious minds) into computers. The same thing happens in the Matrix movies.

If Penrose is correct, the Ghost in the Shell idea could never happen. The idea of the Matrix is just that: an idea. A Turing machine architecture cannot host non-algorithmic, quantum-style programming.

This last bit is highly controversial, and Penrose, while appreciated for his tour of all things strange, simply lacks any real evidence. Stuart Hameroff is trying to get around some of this. However, if Hameroff is correct in any of his ideas, you cannot create a quantum computer out of a Turing machine. You would need the ability to go backwards in time, and the brain, in this picture, is all about collapsing probability functions, NOT stepping through a state machine.

Now, some people have heard of neural nets, but it is important to recognize that this is NOT what Penrose is talking about. Neural nets, fuzzy logic, etc. are all algorithmic in nature. (For instance, neural nets capture the fact that the order of synaptic connections matters and is non-commutative, but this is simply a different type of algorithm.)
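
To make "algorithmic in nature" concrete, here is a minimal sketch of a single artificial neuron in Python (the weights are made up). A forward pass is nothing but ordinary arithmetic, which any Turing-equivalent computer can grind through step by step.

```python
import math

def sigmoid(x):
    # Squashing function: maps any real number into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    # One artificial neuron: weighted sum of inputs, then squash.
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(activation)

# Deterministic arithmetic in, deterministic number out: an algorithm,
# not the non-algorithmic process Penrose has in mind.
print(neuron([0.5, -1.0], weights=[0.8, 0.3], bias=0.1))
```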

What Penrose is suggesting is found in only one place in nature: our brain.

There are paradoxes in our human self-awareness, and Penrose does a brilliant job of going after a new way of thinking about them. (Did you know that your conscious perception runs on roughly a half-second delay, and your brain lies to you to make you think that it doesn't?) However, Penrose certainly hasn't proven that our brains aren't Turing machines. He points to some issues, but offers no proof.

If Penrose is eventually declared correct, he will be revered for his insight.

If wrong, then a computer can think.

3 comments:

Heresiarch said...

Well, you're right only if you confine the definition of computer to a Turing machine. But the Penrose-Hameroff model does NOT preclude actual consciousness in a machine per se. That machine would just have to include components that function as do microtubules inside neurons. More details at http://www.starlarvae.org/Star_Larvae_The_Physics_of_Subjectivity.html
--Heresiarch
www.starlarvae.org

Theologic said...

Hi Heresiarch,

The only computers man can make today are Turing machines.

Hameroff's microtubules are the quantum computers referred to in the post--and they are only a suggestion of how our brains could function. There is no indication that consciousness is found in the microtubules other than the fact that he's found sympathetic oscillations across the brain structure.

We certainly cannot make a Quantum computer.

Theo

tmm said...

Something interesting about Ghost in the Shell, however, is the dualistic title.

While AI is everywhere in the series, you still have cognitive AIs asking whether they have a "ghost" or not. So, the anime series doesn't link intelligence to... whatever it is that makes a human "human."