Stengah wrote: It's basically saying that a program that only does what it's programmed to do cannot be said to be conscious, or to actually be thinking when it does it.
Also Stengah wrote: A) you'd have to determine what consciousness actually is first, and then figure out how to code that.
Well then how can you claim that the program isn't conscious if you can't define consciousness in the first place? Can't have it both ways, dude.
What you're basically saying with that second point is that we wouldn't recognize consciousness if we created it anyway, thereby invalidating your first point.
The first quote is about how we do know some things that consciousness isn't, and a program like the Chinese Room falls into that category. You don't need to be able to perfectly define a thing to understand that something else isn't it.
My second quote is about how even once we do fully know what consciousness is, it doesn't mean we'll understand how it works or how to code a program to actually create it. Being able to define something and being able to create it (not just giving the appearance of replicating it) are two very different things.
The Chinese Room thought experiment doesn't say that true AI is impossible, it just says that certain kinds of programs aren't actually AI, even if they can pass the Turing test.
It's basically saying that a program that only does what it's programmed to do cannot be said to be conscious, or to actually be thinking when it does it.
It has nothing to do with the question of whether books or computer programs can actually think.
The summary I'm working from lays the argument out a bit differently. E.g.: "Searle argues that the thought experiment underscores the fact that computers merely use syntactic rules to manipulate symbol strings, but have no understanding of meaning or semantics." Or, quoting Searle himself:
I demonstrated years ago with the so-called Chinese Room Argument that the implementation of the computer program is not by itself sufficient for consciousness or intentionality ... A system, me, for example, would not acquire an understanding of Chinese just by going through the steps of a computer program that simulated the behavior of a Chinese speaker
My issue with this is that the key elements of the CRA are extraneous to the question being examined. If a computer program can pass an arbitrarily hard Turing test then there are plenty of interesting questions we can ask about it, but putting that program in a room with John Searle illuminates none of those questions. Searle thinks that the human in the room is the system we're interested in ("me, for example" above), but this is clearly not the case - whatever the program's mental capabilities are or aren't, they don't change when Searle enters or leaves the room.
To see this, try reading Searle's two quotes about the CRA (section 1 in my link), and remove the bits about the human. The result boils down to: "imagine a room containing books of data and instructions which would allow someone to pass the Turing test in Chinese. I assert without argument that those books cannot understand Chinese."
B) if it were truly conscious, it would mean it could alter or ignore its own code, at which point it's no longer only doing what you originally programmed it to do, as the program has changed.
Incidentally for reference, I don't think this is a distinction that could stand up to scrutiny. Most any nontrivial program that accepts inputs can be said to be "altering its own code" in some sense - or put another way, given some specific definition of what it would mean for a program to alter its own code, I think most programs could be written so as to meet or not meet that definition.
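As a quick illustration of why the distinction is slippery, here's a minimal sketch (my own toy example, nothing from Searle): a trivial responder whose behavior is driven by a rule table, and whose rules allow the table itself to be rewritten by its input. Whether that counts as the program "altering its own code" depends entirely on whether you call the table code or data.

```python
# Minimal toy example: does a program that rewrites its own rule table
# count as "altering its own code"? That depends on whether the table is
# "code" or "data" -- which is exactly the ambiguity in the claim above.
rules = {"hello": "hi there", "bye": "goodbye"}

def respond(message: str) -> str:
    # A "teach" message rewrites the rule table that drives all behavior.
    if message.startswith("teach "):
        key, _, value = message[len("teach "):].partition("=")
        rules[key.strip()] = value.strip()
        return "ok, learned"
    return rules.get(message, "???")

print(respond("ping"))             # ???
print(respond("teach ping=pong"))  # ok, learned
print(respond("ping"))             # pong -- the program's behavior changed after that input
```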
Keep in mind it was originally meant for a 1980s-1990s audience, who wouldn't be as familiar with how computer programs work. The human in the room is meant to make it easier for people to understand what is happening "inside" the computer program. Who the human is is inconsequential; what matters is that they are merely following the instructions of the program, written down into a paper book, and will not gain an understanding of Chinese from doing so. They do not know what the symbols they receive mean, nor what those they respond with mean.
[the human is] merely following the instructions of the program, written down into a paper book, and will not gain an understanding of Chinese from doing so.
Sure, but that's not a claim anyone disagrees with. The point of the CRA isn't to argue that the man doesn't learn Chinese; it's to make arguments about the limits of computers or programs. E.g.:
...if the man in the room does not understand Chinese on the basis of implementing the appropriate program for understanding Chinese then neither does any other digital computer solely on that basis because no computer, qua computer, has anything the man does not have.
What I'm pointing out is that that last claim - that the computer can't understand anything the man doesn't understand - isn't a conclusion Searle has reached, it's an assumption he makes but doesn't support. If he's already assuming that, why invoke the man or the room at all? He could just have said "Turing tests are flawed because functionalism is wrong" and called it a day.
The man in the example is a stand-in for the computer. The books are a stand-in for the program. He doesn't need to use them as stand-ins, but he does it this way so people in the 1980s can more easily understand what he is saying is happening. If your criticism is that the computer in the example can understand things the man cannot, then how? They are given the same inputs, and the same rules for what to respond with. How could the computer understand more than the man does?
Edit - the problem might be the summary of it on that page you linked. I think https://mind.ilstu.edu/curriculum/se... and https://iep.utm.edu/chinese-room-arg... provide better summaries of the experiment itself, while the one you posted spends more time with the various responses to it and kind of glosses over the experiment itself and what it is actually trying to say about ai and conscious thought.
Thanks for the links, but now I'm getting a weird feeling. Is Searle arguing that the human in the room is analogous to a physical computer, and the books and whatnot are analogous to a program, and that the CRA demonstrates that even if a computer-plus-program passed the Turing test, we could still conclude that the computer does not understand/think/whatever, with that claim being separate from whether or not the program might be said to this/that/t'other?
I hesitate to even offer that interpretation, but it would explain a lot of his quotes in your first link and in the link I posted.
Fenomas, can you tell us what your hypothesis is? What conclusion are you driving at about the Turing Test and the Chinese Room refutation of it?
My issue with this is that the key elements of the CRA are extraneous to the question being examined. If a computer program can pass an arbitrarily hard Turing test then there are plenty of interesting questions we can ask about it, but putting that program in a room with John Searle illuminates none of those questions. Searle thinks that the human in the room is the system we're interested in ("me, for example" above), but this is clearly not the case - whatever the program's mental capabilities are or aren't, they don't change when Searle enters or leaves the room.
To see this, try reading Searle's two quotes about the CRA (section 1 in my link), and remove the bits about the human. The result boils down to: "imagine a room containing books of data and instructions which would allow someone to pass the Turing test in Chinese. I assert without argument that those books cannot understand Chinese."
So, let's remove Searle from the room, now that we have real-time NLP. Assume that a Chinese language chatbot is in the room instead, programmed to respond plausibly to enough Chinese conversation to be judged "probably human" by someone on the other end of the connection. (Note that Turing proposed the person also has another terminal with a human on the other end, also available to chat with, and that there were many runs, with a judgement at the end of each as to which one was (more) human. If the computer convinced the questioner that it was actually a human more than 50% of the time, Turing would assess that the computer in some way "thinks like a human".)
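For concreteness, here's a rough sketch of the scoring scheme described above; everything in it (the judge function, the threshold) is just a placeholder for the protocol as summarized here, not anyone's real test harness.

```python
import random

def judge_picks_machine_as_human() -> bool:
    # Stand-in for one full run of the comparison test: the judge chats with
    # both terminals and names the one they believe is the human. The verdict
    # here is random, since only the tallying is being illustrated.
    return random.random() < 0.5

def comparison_test(runs: int = 100) -> bool:
    # Under the summary above, the machine "passes" if the judge mistakes it
    # for the human in more than half of the runs.
    wins = sum(judge_picks_machine_as_human() for _ in range(runs))
    return wins / runs > 0.5

print("machine judged human in a majority of runs:", comparison_test())
```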
(I believe Turing's flaw was imagining that only a "thinking" computer - one that replicates human thought processes in this and other ways - could pass this test, rather than the clearly non-human-thought-capable NLP systems we have today. I believe he didn't foresee those types of systems.)
Given that - and that's the assumption Searle is making - do you believe it's reasonable to believe that that chatbot is actually "thinking like a human being"? Because without your describing where you want to take your argument, that's the only goal I can see you driving towards.
What did I miss? I believe today that Searle has been long vindicated, and that no one views chatbots as evidence of the capacity for human thought in machines. (Note that if you actually are on to something here, then this is a Big Deal, and it means that every modern philosopher interested in the topic has missed something simple...)
But I still don't understand where you mean to take this. Dualism? Functionalism? Strong AI?
Thanks for the links, but now I'm getting a weird feeling. Is Searle arguing that the human in the room is analogous to a physical computer, and the books and whatnot are analogous to a program, and that the CRA demonstrates that even if a computer-plus-program passed the Turing test, we could still conclude that the computer does not understand/think/whatever, with that claim being separate from whether or not the program might be said to this/that/t'other?
I hesitate to even offer that interpretation, but it would explain a lot of his quotes in your first link and in the link I posted.
As well you should, because that assumes the dualist answer to the mind-body problem. I would quite vigorously oppose the idea that the program can be separated from the computer and still operate normally. This, again, gets into the question of what exactly it means to "think like a human"; what it takes to host human-like cognition; and so forth.
Searle views the Chinese Room as a Black Box that we are allowed to look into, but the interlocutor is not. (He constructs it with a human and a computer or books of algorithms so as to address common objections of the time, as Stengah noted above.) He simply says that the Chinese Room shows that Turing was wrong in his assumption about the test showing human-like cognition in a computer.
What did I miss? I believe today that Searle has been long vindicated, and that no one views chatbots as evidence of the capacity for human thought in machines.
There's not much (if any) dissension that the kind of programs Searle came up with the Chinese Room to address don't have the ability to understand what they are doing, but there's still a ton of dissension with his claim that his thought experiment can be expanded to cover any program we can write for modern computers.
I'm of the opinion that the hardware matters as much as the "software", if they can even be said to be separate. Anything we write to achieve human-like cognition on something other than a brain in a body would be doing it in a very different way to the way humans do it.
I also note that yes, Searle was addressing a particular paradigm that was widely held then, but we have clearly moved past that.
Went to Library and Archives Canada this afternoon to look for information on a friend's ancestor (at her request). I haven't found anything.
However, as I was looking for marriage records in Quebec City, I found a book with marriage records for my home region, and took a peek. Within minutes, I found the marriage dates of my grandparents, along with the names of my great-grandparents! (It was pretty easy: while my grandfathers had some pretty common names, both my grandmothers' names were much less common.) Fun fact: all my grandparents were married in the same Catholic church, roughly three years apart (March 1930 and April 1933).
If I were rich and had that kind of clout, I would love to be able to get Dr. Henry Louis Gates to do a "Finding Your Roots" investigation of my parents for them as a gift (and for myself, too, frankly).
Fenomas, can you tell us what your hypothesis is? What conclusion are you driving at about the Turing Test and the Chinese Room refutation of it?
Thanks for the replies! I don't have a desired conclusion, I'm just trying to understand the argument. People seem to think it's important, but as I understand it it seems shallow and tautological.
Given that - and that's the assumption Searle is making - do you believe it's reasonable to believe that that chatbot is actually "thinking like a human being"?
If a system passes a test for having some property, then by definition it has the property it was tested for. Asking about "thinking like a human being" rather rigs the game, since that's very precisely what a Turing test tries not to examine, but consider the property "understands Chinese". If you were designing the most rigorous test you could think of for whether someone understands Chinese, wouldn't that test look fairly similar to a Turing test? If so, and if a chatbot then passes the test you designed, on what basis could anyone claim that the chatbot nonetheless doesn't understand Chinese? To do so would be to claim that the property cannot be detected by tests at all - in which case it's phlogiston and we can safely dispense with it.
I guess the issue here is, as a programmer I don't have any pre-existing opinion on whether chatbots can think/understand/etc. It's like asking whether they can glorp - we need a definition for glorp before we can even consider the question, and then we resolve the question by applying the definition. But Searle seems to view it the other way around - he seems to be saying "no no, as a member of the AUPSLOPTP I already know which systems can glorp. Humans obviously can, and chatbots definitely can't - not even hypothetical future ones. I can't tell you precisely what glorping is, or what effects it has on the world, or how you can test for it yourself. But chatbots can't do it, so if a chatbot ever passes your glorping test then that proves the test was flawed."
Is that what's going on here?
Aside: does this need its own thread? Is there one already?
Proposed title: "Humans talking about AI (talking about humans (talking about AI (..)))"
Searle came up with the Chinese Room Experiment because there was a lot of talk and questions at the time about whether machines could be said to think about and understand what they're doing like humans do. As he was (and is) a philosopher who specializes in the philosophy of language and philosophy of the mind, people were coming to him specifically with those questions for an expert opinion. The example program he was thinking of when he came up with the CRE was one that could read a story and then answer questions about it (Schank and Abelson's "Script Applier Mechanism" story understanding program (SAM) from 1977). He doesn't intend for the CRE to say no program or computer can ever think like a human, just those that operate like the ones he was being asked about.

And the reason he gives for why they cannot isn't just that he decides they can't without explaining it; he explains that it's because both the man in the room and the computer he's meant to represent are operating purely syntactically, and lack any ability to understand the meaning of the Chinese symbols in their input or their output. The man in the room isn't "thinking like a human" about the input or output any more than the computer is, and he is a human. They are both following explicit rules that tell them, when they receive these specific symbols in this specific order, to respond with these other specific symbols in that specific order. There's no way for them to understand what the symbols actually mean, and they are not thinking about the question they're being asked or the answer they give at all. They don't even know the input was a question and their output was an answer, but the program is written well enough that it can fool someone into thinking there's a person in the room that does understand Chinese.
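To make "purely syntactic" concrete, here's a toy sketch (my own illustration, not Searle's setup or Schank's actual SAM program): the rule book just pairs input symbol strings with output symbol strings. Whoever wrote the table understood Chinese; whatever executes the lookup doesn't need to, because nothing in the lookup itself represents meaning.

```python
# Toy illustration of a purely syntactic responder (hypothetical, not SAM).
# The rule table pairs input symbol strings with output symbol strings;
# the lookup never consults what any symbol means.
RULES = {
    "你好吗": "我很好，谢谢",    # the table's author knew these mean "How are you?" / "I'm fine, thanks"
    "你会说中文吗": "会一点",    # "Do you speak Chinese?" / "A little"
}

def chinese_room(symbols: str) -> str:
    # Pure symbol matching: no parsing, no semantics, no model of the world.
    return RULES.get(symbols, "对不起，我不明白")  # fallback symbols ("Sorry, I don't understand")

print(chinese_room("你好吗"))  # prints the paired symbols, with zero understanding involved
```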
Edit - As for why it's important now, and why it's being talked about more recently, several months back a Google engineer went public with his claim that Google had created a sentient AI in their LaMDA project. It was essentially a Chinese Room and he let his confirmation bias and religious beliefs convince him it was sentient. He decided that in his capacity as an "ordained Christian Mystic priest," mind you, not as a software engineer. He wasn't even specialized in AI; he was just supposed to be testing it to see if it showed any prejudices in how it interacted with humans. Instead he decided the program he was talking to was sentient and might even have a soul, then set about trying to prove it by asking it some extremely leading questions. The higher-ups in the LaMDA program reviewed his concerns and spent months trying to clarify to him that it was not sentient, and finally fired him after he posted transcripts of his conversations online in an attempt to "reveal" that Google was hiding that they had accidentally created a sentient chatbot and might abuse or kill it to keep it quiet.
And the reason he gives for why they cannot isn't just that he decides they can't without explaining it; he explains that it's because both the man in the room and the computer he's meant to represent are operating purely syntactically, and lack any ability to understand the meaning of the Chinese symbols in their input or their output.
Sure, but what is his actual basis for saying that, besides that he believes it?
I mean - suppose we're standing beside two Chinese Room experiments. One room has a native Chinese speaker inside, and the other has a random guy following instructions out of a book. But - wouldn't you know it? - the latter guy has been doing the experiment for so long he's memorized the instruction book and thrown it away. Neither of them speaks any English, but we can communicate with both via our Chinese-speaking colleagues.
If there's an actual difference between these two guys, how do we tell which is which? If we can't, then what basis do we have for claiming there's a difference?
First, read the original paper, I guess? Searle's response should then be readable in context.
Stengah wrote:
And the reason he gives for why they cannot isn't just that he decides they can't without explaining it; he explains that it's because both the man in the room and the computer he's meant to represent are operating purely syntactically, and lack any ability to understand the meaning of the Chinese symbols in their input or their output.
Sure, but what is his actual basis for saying that, besides that he believes it?
I mean - suppose we're standing beside two Chinese Room experiments. One room has a native Chinese speaker inside, and the other has a random guy following instructions out of a book. But - wouldn't you know it? - the latter guy has been doing the experiment for so long he's memorized the instruction book and thrown it away. Neither of them speaks any English, but we can communicate with both via our Chinese-speaking colleagues.
If there's an actual difference between these two guys, how do we tell which is which? If we can't, then what basis do we have for claiming there's a difference?
Is the native Chinese speaker simply "following an instruction book" to hold the conversation? I think most people would argue that no, there's something more going on, involving memories of past experiences, emotions, limbic system responses, etc. (However, if you are a behaviorist, then the idea that thinking is essentially an illusion should be considered, but the physical responses are still there.) So actual response to language involves more than just speaking, even if that does not come through (and we see echoes of this in people bemoaning the fact that conversation on the Internet lacks emotional and physical context).
Is the non-native speaker doing anything more than manipulating symbols (i.e., working from syntactic rules)? I would say most people again would say not. For example, if the text he was passed asked "Does Tiananmen Square make you angry or sad?", he would have no emotional response at all, where the native Chinese speaker certainly would, *even if* he gives an answer! That answer would not be based on any emotional response; it would be arbitrary. So they would differ *at least* in biological responses. But then, importantly, neither of them is a computer, which is the basis of the test. Otherwise, the question becomes "can you identify the thinking human?", and clearly what is happening then is that one thinking human is *pretending* to think and understand in Chinese, but just as clearly he does not. But no matter how well he follows the script, he is clearly human and a priori a being that thinks like a human. The question becomes meaningless, as humans can pretend to anything. But we still do not learn whether a *machine* can think. So I think your objection gets off the point, fenomas.
This also gets back to NLP. Today, a machine can carry on very plausible conversations in many different languages. Do we assert that the machine is "thinking" about the semantics, emotions, contexts, memories referenced by the conversations? If so, we need to either be able to pinpoint the code that allows such thought, or be able to explain the process of such thought as emergent behavior. If not, then we need to explain why, which seems to be at the base of your objection.
And that's key. We may not understand fully how humans think, but we understand quite well that they *do* think, and furthermore that that thought arises through the physical operations of the brain and associated physical systems, which the machine lacks entirely. This means that a priori, humans think, and machines do not think in the same way, until we devise a test to prove that they do.
What Searle shows is that Turing's Comparison Test is not capable of making the distinction. Put another way, if they both pass the Turing Test, then we have to turn to *other* tests to continue to distinguish them.
(Now, if you can show that human thought arises from some activity completely independent of the substrate, then you could argue that machines *could* think like humans. But the evidence against that has been piling up steadily for hundreds of years. Instead, we look at the types of cognition that machines *can* do, rather than try to split the hair of "oh, it sounds human, is it then human?", because once you open the door and see the machine you know it's not - and so have you really learned anything from the language test, other than that language by itself is not enough to distinguish humans from machines? You certainly can't say in that case that the machine *thinks* "like a human". Instead, you have to say that it "thinks" like a machine of its type, distinguishable from humans by tests other than the Comparison Test.)
Stengah wrote:
And the reason he gives for why they cannot isn't just that he decides they can't without explaining it; he explains that it's because both the man in the room and the computer he's meant to represent are operating purely syntactically, and lack any ability to understand the meaning of the Chinese symbols in their input or their output.
Sure, but what is his actual basis for saying that, besides that he believes it?
Because the computer Searle is using was not programmed to understand meaning, just to "manipulate symbols and numerals". Also, Searle did not at the time understand Chinese at all.
Aside: does this need its own thread? Is there one already?
Proposed title: "Humans talking about AI (talking about humans (talking about AI (..)))"
Yes, it does. Very much. Please make it happen, someone. Please.
-BEP
the latter guy has been doing the experiment for so long he's memorized the instruction book and thrown it away. Neither of them speaks any English, but we can communicate with both via our Chinese-speaking colleagues.
If there's an actual difference between these two guys, how do we tell which is which? If we can't, then what basis do we have for claiming there's a difference?
And that's where the analogy falls down. The Chinese Room doesn't consider the long-term impact of the thought experiment, that Mr Room would learn from his translation look-ups over time.
No, he wouldn't. He has no idea what the symbols mean, nor any way to learn what they mean. The program isn't a translation book. It doesn't translate the symbols into a language he knows, it just tells him what symbols to respond with.
In everything that I've seen Aubrey Plaza in lately, she's been wearing a swimsuit.
Good thing, too, or the censors would shut her right down.
I'll stop bumping the Chinese Room thing now, but just wanted to say that I no longer think I'm missing anything about it, and thanks to Stengah and Robear for responding.
As a postscript, now that I google the guy I find that whatever John Searle lacks in the "avoiding claims for which there's no evidence" department, he more than makes up for in the "decades-long history of sexual misconduct" department. Knowing this, I (irrationally) feel even more comfortable rejecting his arguments.
I encourage you to continue to investigate, because the Chinese Room is considered very important and an underpinning element of Philosophy of Mind. Consider counter-factuals especially. For over 40 years, folks in various fields have poked at it from various angles; you'll probably find an objection you favor is already extant.
Ok, I'll take up this particular horse-beatin' stick.
In what way could there exist a "based on these symbols, send back those symbols" machine that would be indistinguishable from an actual person without understanding of the context for the symbols? Language is not a basic call-and-response.
(Also, "think like a human" is a copout because we don't really understand what that means.)
In the same way chatbots exist. The person writing the program knows the meaning of the symbols. Whatever is running it doesn't need to. A human is running the same instructions the program does, they're just in English rather than binary. As for a human doing it fast enough to fool people, it's a thought experiment, not reality. If it helps, imagine it's done by mail rather than with someone physically waiting outside the room for an answer.
I'll stop bumping the Chinese Room thing now
I reject this, let's discuss Chinese rooms more.
6/10, not sure where the flatscreen goes.
8/10