Monday, April 26, 2010

Philosophy of Mind and Computers Part 2

Part 2 of a 3 part series that I am running called "Philosophy of Mind and Computers"

This essay discusses two thought experiments, the Chinese Room and the Chinese Nation, and how they relate to artificial intelligence.


Searle's Chinese Room argument is a very simple thought experiment that happens to be one of the most hotly debated works in both the philosophy of mind and artificial intelligence. The thought experiment asks the listener to imagine a room with an entrance and an exit. Inside the room sit a rulebook, written in English, for manipulating Chinese symbols, and a man capable of following it. Through the entrance comes a series of Chinese characters, and the man is tasked with using the rulebook to produce an appropriate series of Chinese characters to send out the exit. The question here is: does the room have any understanding of Chinese? It becomes immediately obvious why this is such a crucial thought experiment for the field of artificial intelligence. On a very basic level, the room operates just like a computer does; it gets an input, does some process, and spits out an output. The main thrust of Searle's argument is an attack on what he calls Strong AI. "Strong AI is the view that suitably programmed computers (or the programs themselves) can understand natural language and actually have other mental capabilities similar to the humans whose abilities they mimic." (Stanford Encyclopedia of Philosophy) This basically means that Strong AI would have the ability to pass the Turing test, i.e., in a blind test it would be able to trick a human into thinking it was interacting with another human. Searle concludes that the man in the Chinese Room is not an example of Strong AI and thus does not really understand Chinese. He goes on to assert that no computer truly could understand Chinese the way a human does, because the computer would only be manipulating symbols like the man in the room and thus would not have any real understanding. Objections have been raised to the argument, but Searle seems to maneuver around them in every case. The main problem still persists, though: what is it about the human brain that allows us to understand things, beyond just manipulating an input in some way to produce an output?
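The input-process-output parallel can be made concrete in a few lines of Python. This is only an illustrative sketch, not Searle's own formulation: the rulebook entries below are invented placeholders, and the point is just that the system maps symbols to symbols by pure lookup, with no understanding anywhere in the loop.

```python
# A toy "Chinese Room": symbols come in, a rule is consulted, symbols go out.
# The rulebook contents are made up for illustration.
RULEBOOK = {
    "你好吗": "我很好",        # "How are you?" -> "I am fine"
    "你是谁": "我是一个房间",  # "Who are you?" -> "I am a room"
}

def chinese_room(symbols: str) -> str:
    """Return whatever output the rulebook dictates for the input symbols.

    Nothing here "knows" Chinese; it is lookup all the way down.
    """
    return RULEBOOK.get(symbols, "请再说一遍")  # default: "Please repeat that"

print(chinese_room("你好吗"))  # -> 我很好
```

From the outside, a fluent interlocutor and this lookup table are behaviorally indistinguishable for the inputs the table covers, which is exactly the intuition the thought experiment trades on.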
Searle seems to speak to this point in his response to the "brain simulator reply". The brain simulator reply states that if we could create a machine that emulates every neuron in the brain of a Chinese speaker and then ran a story past it, we would have to say that the machine understood the story. (Haugeland) Searle seems to take the position that we do not need to understand brain chemistry to understand how the mind works, because "there is a level of mental operations that consists in computational processes over formal elements which constitute the essence of the mental, and can be realized in all sorts of different brain processes in the same way that any computer program can be realized in different computer hardware." (Haugeland 194) Wouldn't it be true, though, that these "computational processes over formal elements" would be best represented by a computer model of the actual processes they represent? How can we hope to begin to abstract these processes when we do not yet have a full understanding of what is actually happening? There seems to be something intrinsically missing from this line of thinking. The problem is that Searle never takes a stab at explaining what is so special about the human brain that allows it to have this unique understanding that other representations of the brain do not have. We can safely say the brain is not doing anything magical. It may also be useful at this point to entertain the idea that the brain and the mind have a far more back-and-forth relationship than a computer program has with its hardware. It is true that all software can be found physically in a computer, just as all brain states will eventually be able to be picked out in a brain. The main difference is that software operates on static hardware; the hardware is just a venue for the software to run its operations.
The way that thoughts operate on brains is different in that thoughts actually influence and change the physical structure of the brain. This, in my mind, is at least one part of the secret sauce that allows human brains to experience consciousness.
Another thought experiment with similarly interesting repercussions is Block's Chinese Nation. In this thought experiment, Block imagines that a large subset of the Chinese population receives instructions to make particular phone calls at particular times. When these people make their calls, the receivers then call the people whom they had previously been instructed to call. The calls would be arranged such that the Chinese population would appear to be emulating what goes on in the brain when a particular mental state is active, such as pain. (Braddon-Mitchell, David, and Frank Jackson.) The question then is: is the Chinese Nation "in pain"? The obvious answer seems to be no, but it becomes very difficult to prove that pain is not being felt. The same patterns are present as in a person who is feeling pain, and you can think of several examples, a man in a coma, or one simply good at not showing pain, where a person feels pain but there is no way of verifying it unless you took a brain scan. It is possible, then, to say that China experiences pain, but it is then relevant to think about how we classify the different kinds of pain we see in things other than humans. We say all the time that animals are "in pain," and sometimes, depending on how closely linked the animal is to humans, we treat the pain with a percentage of the seriousness that we give to conscious human pain. It seems like what we should do, then, is treat this neural configuration of phone calls causing "pain" with exactly 0% of the seriousness that we reserve for pain inflicted on humans, down 1% from the 1% pain value evaluation we give to ants. Also, while there are only edge cases that demonstrate a person not showing any symptoms of pain when they are obviously in pain, there is almost no conceivable edge case that would show the Chinese Nation showing symptoms of being in pain.
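Block's setup can be rendered as a toy program, where each "citizen" holds one standing instruction of the form "when you receive a call carrying a value, transform it and call these people." The call graph, names, and numbers below are invented for illustration; the only point is that a population following phone instructions collectively computes the same function a small neural circuit would, which is precisely what makes the "is it in pain?" question so uncomfortable.

```python
# A toy Chinese Nation: citizens relay transformed values to each other,
# collectively realizing a tiny feed-forward "circuit". Names, weights,
# and topology are all made up for illustration.
from typing import Callable, Dict, List

class Citizen:
    def __init__(self, targets: List[str], relay: Callable[[float], float]):
        self.targets = targets  # whom to call next
        self.relay = relay      # the standing instruction for this citizen

def run_nation(citizens: Dict[str, Citizen], start: str, value: float) -> Dict[str, float]:
    """Propagate one wave of calls; record the total signal each citizen got."""
    received = {start: value}
    queue = [start]
    while queue:
        name = queue.pop(0)
        out = citizens[name].relay(received[name])
        for target in citizens[name].targets:
            received[target] = received.get(target, 0.0) + out
            queue.append(target)
    return received

# One "sensory" citizen attenuates the signal and calls the "pain" citizen.
nation = {
    "wei": Citizen(["pain_node"], lambda v: v * 0.6),
    "pain_node": Citizen([], lambda v: v),
}
signals = run_nation(nation, "wei", 1.0)
print(signals["pain_node"])  # -> 0.6
```

Nothing in this dictionary of lambdas plausibly feels anything, yet scaled up to a billion citizens the pattern of calls is, by hypothesis, isomorphic to the pattern of neural firings in a brain in pain.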
Block was obviously being slightly facetious with his thought experiment, in that he was trying to point out how silly it would be for the Chinese population to represent a mind in such a way and actually have some semblance of consciousness, but it appears that he has hit on something important. Why is it that we only allow neurons to possess brain states? Isn't it true that if we build an accurate enough model, it too can be said to experience some kind of brain state? Both of these men take a similar position: that it is impossible, outside the human brain, to create something that experiences consciousness. This alone seems to be the mistake that both of their arguments rest on. They make the classic intellectual mistake of assuming the human brain has some kind of special, magical characteristic that cannot be duplicated elsewhere. Neither ever specifies what particular brand of magic he thinks the brain operates on to create this consciousness effect, and both ultimately end up with some interesting thought experiments that live inside a box somewhere in China.

Wednesday, April 21, 2010

Philosophy of Mind and Computers Part 1

Part 1 of a 3 part series that I am running called "Philosophy of Mind and Computers"
This essay is a serious look at type physicalism, particularly drawing from Jaegwon Kim's book, Philosophy of Mind.


In Jaegwon Kim's work "Philosophy of Mind", Kim progresses, chapter by chapter, through views about the way the mind works, from views that at first glance appear obviously wrong to views that seem to be a more accurate representation of the way the universe actually operates. Starting with dualism, Kim quickly moves on to behaviorism. While both of these models may be useful in predicting some aspects of the human condition, neither can be accepted as the complete picture. Next, Kim moves on to speaking of the mind in a physical sense, where the mind is the brain. The physicalist's approach is explained in detail, and then several objections to this approach are listed. Kim differentiates between two groups of physicalists, token and type, and defines type physicalism as "Mental kinds are physical kinds; alternatively, mental properties are physical properties." (Kim, 102) Type physicalism is a much more rigid form of physicalism than token physicalism, which simply asserts that "Every event that has a mental property has also some physical property." (Kim, 102)
The first epistemological objection Kim gives against physicalism seems fairly wrapped up in semantics. The objection centers on the idea that just because a human may know that he feels pain, this does not mean he knows anything at all about the specific brain state that occurs when he is feeling pain. The argument states that he knows nothing about his brain states, but is this really true? A man feeling pain certainly knows something about his brain state: he knows that he is in pain. He may not be able to talk specifically about what the brain is doing, but he certainly has some access to what is happening. This objection simply cannot hold. There are many cases where something turns out to be more complicated than previously thought. Even looking through the eyes of a child, there are many phenomena about which one could make an observation without having a full understanding. Ultimately, just because our mind does not have direct access to how it works, it does not follow that our mind is not physical.
Objection two is slightly different, and poses a serious problem for type physicalists. The second objection suggests that just looking at a brain state is not good enough to define pain, because it is not empirical evidence of the pain-causing event happening. The particular brain state would only point to the definition of pain, not to any evidence. It goes on to say that "to make sense of the empirical character of psychoneural identities, we must acknowledge the existence of phenomenal properties distinct from neural properties." (Kim, 107) Even further, it says that when a person recognizes a pain as a pain, this act is irreducibly psychic. The first objection had a problem with the idea that someone could feel pain without fully understanding how their brain was working to create that pain; it was quickly shown to be silly by the simple realization that things go on all the time around us that we do not fully understand, and this does not rule out something more complex going on under the surface. This second objection is more difficult to disprove.
The second objection differs from the first in that it cannot be immediately falsified. Type physicalists need to find a way around it, as it stands in complete defiance of their idea that things exist only in the physical realm and instead posits two kinds of properties, mental and physical, which is unacceptable to a type physicalist. To get around this objection, Smart thinks he can recast such observations as what he calls "topic-neutral translations". An example of a topic-neutral translation would be: "When I look at the wall, something is happening in my brain similar to what happens when I look at the same or similar walls at other times." This method of topic-neutral translation seems to be a viable way to cut out any sort of irreducible mental activity that type physicalists so loathe. The problem with the topic-neutral translation, however, is painfully obvious. While it may be technically correct to phrase sentences in such a way, it is also a giant leap away from the problem at hand. The type physicalist ultimately wants to be able to identify a particular brain state for every single action, thought, belief, or emotion; the topic-neutral translation does nothing of the sort. It simply says that when x is happening, it can correspond to any brain state y, where y is a brain state that has occurred before during x. This approach seems almost as crude and beside the point as the behaviorist approach. With topic-neutral translations you can say nothing of new brain states that you discover; you can only talk about them in terms of brain states that you have already paired to previous physical states.
These two objections are not the only ones that can be leveled against type physicalists. Their theories seem to fall apart fairly quickly once you bring in newer ideas in neuroscience such as neuroplasticity. It seems obvious that type physicalism is a slightly outdated idea that does not account for the complex and beautiful nature of our very physical and not at all static brains.

Wednesday, March 24, 2010