Wednesday, January 19, 2011

The Chinese Room

Comments:
Luke Roberts
Poala Garza

Reference Information:
        Wikipedia - http://en.wikipedia.org/wiki/Chinese_room
        The picture is from - http://www.mind.ilstu.edu/curriculum/searle_chinese_room/searle_chinese_room.php

Summary:

       For anybody who, like me, did not know what the Chinese Room experiment is: it was introduced by John Searle in the paper "Minds, Brains, and Programs", published in Behavioral and Brain Sciences in 1980. The thought experiment argues that running a program cannot give an AI a mind that understands the actions it is performing. It puts Searle alone in a room while people outside slide in cards with Chinese symbols for him to respond to. Even though Searle does not understand any Chinese and the symbols look like scribbles to him, he uses the instructions of a program to write back an appropriate reply. Searle argues that, just as he does not understand Chinese, a computer running the same program does not literally think, so it cannot literally understand Chinese.

Discussion:
 
       To make the experiment easier to understand, I found a Flash version of it that takes about 5 seconds to complete.


http://www.mind.ilstu.edu/curriculum/searle_chinese_room/chinese_room_flash.php?modGUI=203&compGUI=1864&itemGUI=3258

       This experiment was introduced in 1980, and technology has advanced greatly since then, especially what is being done with AI. For the time being, though, I agree with the conclusion of the experiment: AI does not understand what it is doing. As a programmer, I feel that even in the Chinese Room example, if a computer on one side of the door could convince a Chinese speaker on the other side that it was another Chinese-speaking human, it still would not understand Chinese. Even if it tricked every Chinese speaker in the world into thinking it was a Chinese-speaking human, I don't think you could argue that it understands. The computer is just following a list of commands based on the input, as the sketch below illustrates.
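       To make that last point concrete, here is a tiny Python sketch of the kind of rule following I mean: a lookup table that maps incoming symbol cards to canned replies. The rulebook entries and names are just my own illustration, not something from Searle's paper; the point is that the program can produce sensible-looking answers while never representing what any of the symbols mean.

# A toy "Chinese Room": the program matches each incoming card of symbols
# against a rulebook and copies out the prescribed reply. The rulebook
# entries below are made up purely for illustration.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "会，一点点。",    # "Do you speak Chinese?" -> "Yes, a little."
}

def respond(card):
    # Nothing here models meaning: the program only matches the shape of
    # the card against the rulebook and copies the answer written there.
    return RULEBOOK.get(card, "对不起，我不明白。")  # fallback: "Sorry, I don't understand."

if __name__ == "__main__":
    for card in ["你好吗？", "你会说中文吗？"]:
        print(card, "->", respond(card))

       From the other side of the door the replies can look fluent, but swapping in a rulebook for a language the operator does know would not change a single line of this program, which is why I don't think following it amounts to understanding.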

3 comments:

  1. I agree with you that programs do not fully understand, but I feel the Chinese Room argument is really an argument over terminology, about how well an AI program can simulate or trick users. To me it is more an argument about describing the levels of AI than about whether it is or isn't intelligent.

  2. I really enjoy how many different ways you can look at this idea. I latched more onto the idea of whether or not a program could replace a brain and mind. As of right now it can't. However, if another human believes a program's actions to be those of a human, then that person is viewing the program as a brain with a mind. It still does not possess the essence of a human, but I suppose this is a matter of perspective at this point.

  3. @Miguel: I like your insight on the question of terminology. Searle's argument is that even if there were a protocol that successfully tricked a native Chinese speaker, that would not mean the computer understands, just as he himself would not understand Chinese. I think whether or not that would count as understanding is an incredible question. In my opinion, the terminology in question is the word "understanding".
