I am unconvinced by the Chinese Room thought experiment, which argues against the possibility of “Strong” AI. The claim is that an algorithm, no matter how sophisticated, cannot be said to be intelligent, because all it does is follow a sequence of steps rather than display true intelligence.
The Chinese Room
The experiment is as follows: There is an AI algorithm sitting inside a room. It accepts inputs in Chinese characters and responds in Chinese, the output being so perfect that it passes the Turing test. A Chinese speaker is unable to tell whether a human or a machine provided the output, no matter what input he chooses to provide. Could the AI be said to have gained intelligence? Could it be said to understand Chinese? The argument of the philosopher John Searle, who conceived of this thought experiment, is that it cannot.
The reasoning is that if, instead of a machine, Searle himself, a non-speaker of Chinese, sat inside the room with a paper copy of the algorithm and laboriously followed all its steps to generate unerring responses to Chinese inputs, one wouldn't say that he knew Chinese.
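The rule-following in the room can be pictured with a deliberately trivial sketch: a lookup table that maps input symbols to output symbols. The entries below are invented placeholders, not real dialogue, and a table is of course far cruder than the algorithm the thought experiment imagines; the point is only that the operator consults rules, never meanings.

```python
# A toy sketch of the Chinese Room: the "room" mechanically maps input
# symbols to output symbols by following fixed rules. The operator needs
# no understanding of Chinese to apply them.
# (The rulebook entries are invented for illustration.)

RULEBOOK = {
    "你好": "你好！",            # rule 1: if input matches this, emit this
    "你会说中文吗？": "会的。",   # rule 2
}

def chinese_room(message: str) -> str:
    """Follow the rulebook step by step; no meaning is ever consulted."""
    return RULEBOOK.get(message, "请再说一遍。")  # fallback rule
```

Whoever executes `chinese_room("你好")` produces a fluent reply while understanding nothing, which is exactly Searle's scenario in miniature.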
The Indian Exam Room
Why do I find this unconvincing? Consider another thought experiment, which I will call the Indian Exam Room. In this room sits a student schooled in the art of rote learning. This art has advanced since the time of my childhood, when it involved mugging up answers from the guide. Now it has grown in sophistication to encompass various techniques, or algorithms, that will get you the answer based on keywords in the question and an analysis of the question's grammar. But these techniques are carefully designed to ensure that the student gains no understanding of the subject.
You are the examiner of this student. Will you always be able to arrive at questions that expose the student's lack of understanding? Provided that there is no limit on the number or type of questions, I argue that the answer should be yes. The Indian examination system encourages rote learning because the questions are too easy. There is a set of model questions that everyone prepares for; anything outside this set is considered “out of syllabus”. If examiners had the freedom to set questions that test the ability to apply concepts rather than recite them, questions that require you to synthesise different concepts to work out the answer, even the most advanced cramming techniques would fail.
If you agree that a student who lacks understanding will always break down under your questioning, then it follows that a student who passes your exam is intelligent. Why would your answer change if you replaced the intelligent student with a machine that displays similar capabilities?
Cogito Ergo Sum
The answer is: failure of imagination. The human mind is a super-advanced technology. To us it is indistinguishable from magic. We look upon it with amazement, but we do not understand how it works. An algorithm, on the other hand, is the opposite of the kind of technology that seems magical. When you explain how a thing works, it ceases to be magical. But that is literally what an algorithm is: a sequence of steps that produces a result. The idea that such a sequence of steps can produce a result equivalent to thought causes cognitive dissonance.
The Chinese Room thought experiment is just a statement of this cognitive dissonance in philosophical terms. Tomorrow, if we could examine human thinking and understand it in a mechanistic way, it would no longer seem magical, and, who knows, we might conclude that humans too do not think.
When I think that I understand something, there is a feeling, probably best described as feeling “alive”. When we notice this kind of understanding in others, we see a similar spark. This spark, or the feeling of being alive and aware, is what we call Consciousness. It is an emergent property.
When others do not display such understanding, we say that they are just parroting words, or engaging in Duckspeak. This is true whether those others are humans or machines.
If Searle sat in the Chinese Room with the algorithm, ran the algorithm, produced the output and sent it outside the room without understanding a single word of Chinese, the fault is not with the algorithm. If the algorithm passes every test of proficiency in Chinese, it embodies the understanding of the language. The test is not that anyone who mechanically follows the steps of the algorithm thereby learns Chinese. Suppose I scan the brain of a Chinese speaker while he hears things in his language and decides how to respond; even if I understand everything the brain is doing, I gain no knowledge of Chinese. Does that mean he doesn't know Chinese?
I find all these arguments that refute intelligence dissonant for similar reasons. The truth is, we do not even know what 'understanding' means in the human context. Or rather, what we know is mechanistic: patterns of neurons firing, models of the world, and so on. Any explanation we offer in terms of 'conceptual models' applies equally well to the machine models as they exist today.