
I think you are attributing WAAY too much agency and insight to ChatGPT. It doesn’t understand what the user wants and it doesn’t understand the intent or context of requests in any meaningful way. It’s not “guessing” what you want from it when you interact with it; it just uses your input as keywords to build an answer probabilistically, based on the material it was trained on and the parameters set by its programmers. There’s no real thought or intent going on.
I really dislike that these language models are being advertised as AI because I think it’s messing with people’s heads. The word “AI” has a very strong colloquial meaning defined by decades of cultural context and these bots are closer to a phone’s predictive text algorithm than to what most people think of when they hear “AI.”
ruhk wrote:It doesn’t understand what the user wants and it doesn’t understand the intent or context of requests in any meaningful way. ... There’s no real thought or intent going on.
I see claims like this all the time, but honestly I have no idea what to make of them. What does it mean to say that there is or isn't real thought going on? If a new LLM comes out tomorrow, how would you go about deciding whether all the stuff you said here applies to that new LLM or not?
It builds responses based on probabilistic word groupings and statistical speech patterns from its training material, using the prompts as seeds, all within the constraints set by its programming to add structure and restrict or promote certain types of responses. They are extremely complex programs, but their ability to make connections between words and concepts is a shadow play generated by the human input in their training dataset. That’s why these chatbots and image generators quickly break down when you start adding “AI”-generated content to their training datasets: what they produce is essentially noise structured to look like language/images, and feeding the noise back into the system creates feedback loops of gibberish.
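To make that concrete, here’s a toy sketch of that kind of generation - a bigram Markov chain in plain Python, nothing remotely like a real transformer, with the training sentence and all names made up for illustration - just to show what “pick the next word from a distribution learned from training text, seeded by the prompt” looks like at its simplest:

```python
import random
from collections import defaultdict

# Toy illustration only: a bigram "language model" that picks each next word
# from an empirical next-word table learned from its training text.
training_text = "the cat sat on the mat and the dog sat on the rug"

table = defaultdict(list)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    table[prev].append(nxt)  # duplicates encode the learned frequencies

def generate(seed_word, length=8):
    out = [seed_word]
    for _ in range(length):
        choices = table.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))  # sampled, not "understood"
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the rug"
```

A real LLM swaps the lookup table for a transformer network over billions of parameters and subword tokens, but the generation step is still “sample the next token from a learned distribution, conditioned on the prompt”.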
I have a strong suspicion we are soon going to go through a period where tools like ChatGPT and Bard will get much worse before they get better. You will have the same people who figured out SEO well enough that they ruined search engines doing the same with AGI, trying to exploit what it likes to pick up so that it favours their ideology/brand/content/etc., you will have trolls trying to feed it bad information, and you will have it pulling a lot of content from the internet that itself was AI-generated.
I understand the argument, but nothing you've said there constrains reality. It boils down to saying: LLMs generate words differently than humans, therefore they don't think. Which is a thing you can believe, I guess, but you should be aware that it's a belief about the word "think", not about AIs.
However, the only baseline we have to even consider what "thinking" is, is human. It centers on how a *mind* represents and manipulates information. It's important if you believe that we are more than just unconscious data processors.
The mind derives from consciousness, and seems to function, in some way, as a coordinator of what you could loosely call "data processing" in the brain. No one that I'm aware of seriously claims that computers or programs instantiate a mind. When we talk about thinking machines vs non-thinking machines, it is the mind that is lurking in the background to make the distinction. And AIs, no matter how fast they can process data or what kind of tasks they do, do not seem to instantiate a mind to do it. That does not matter on a task level, any more than we worry about a pick-and-place robot having a mind because it's faster than a human at the task. It matters on the level of understanding ourselves, as you note but don't address beyond the question of "does this mean we don't have a mind?".
When you actually take that next step, it's clear that there is a distinction between us and AIs to date, and that is conceptualized as "the mind". It may be eliminated eventually by better hypotheses of what happens when we think (as the soul has vanished from serious science) but that has not happened yet; the "hard problem" of consciousness is nowhere near solved.
However, the only baseline we have to even consider what "thinking" is, is human. It centers on how a *mind* represents and manipulates information. It's important if you believe that we are more than just unconscious data processors.
I gave up on that idea when I got a job working retail in high school.
I really dislike that these language models are being advertised as AI because I think it’s messing with people’s heads. The word “AI” has a very strong colloquial meaning defined by decades of cultural context and these bots are closer to a phone’s predictive text algorithm than to what most people think of when they hear “AI.”
Totally agree.
I’ve been building and selling “AI”-powered products for more than a decade, and I tend to use the term “AI” for anything we do only when it gets the response we want from investors and customers. I think of it as a marketing term or buzzword; for most people you could swap in “magic”. As soon as people get used to how a particular AI implementation works, and they see newer, better implementations, they want to stop calling the older one AI. Remember when “expert systems” were SOTA AI? A* search for pathfinding in video game AI? So to many people it’s not that far off from “any sufficiently advanced technology”.
We could just start saying “Transformer model” instead. People would assume it is “more than meets the eye” and they would be right. But they might also assume it’s an intelligent robot and a self-driving car, and they may be right sometimes.
I wish "Big" caught on past data as a stand in for any tech that is more complex than humans can easily fathom, like AI. Big computing, big analytics, big chat, big search. I don't see the problem here!
I don't follow. Is there some experiment we can do to tell which systems have a "mind" and which don't? If not, doesn't your argument again boil down to "AIs generate words differently than we do, therefore they don't think" with extra steps?
Right now, we see the mind as intertwined inextricably with a brain. The mind is the result of thinking thoughts, which physically exist in the brain as protein structures that change the brain as they change - neuroplasticity, measurable in the lab - having feelings about those thoughts (which again causes physical changes in the brain and body), and making choices based on those thoughts and feelings.
Needless to say there is no evidence that computers host any activity similar to thoughts or a mind. Similarly, it's obvious why we use human minds as the exemplar for these tests, although we are pretty sure that many animals have minds that are less complex (possibly more, in a case or two) than humans; that's also under investigation.
We know the mind is there in addition to the brain because thinking consciously (as well as less conscious states like dreaming) causes changes in the brain that go beyond simple deterministic stimulus-response and probabilistic systems (which you could easily argue are what digital systems do at this point). It is strongly tied to consciousness, which we don't fully understand, but as with other physically based systems one can be partially conscious, unconscious, or in various other states related to damage to the brain via accident or disease (which tells us that consciousness is physically based). There are clinical measures of consciousness that reflect the practical assessments needed for medical treatments. We can even, by monitoring blood flow in the brain, infer aspects of people's thoughts.
So right now, our tests are necessarily related to brains where we can measure physical activity that indicates the presence of a mind - humans and other complex animals, with humans being the ones we are most sure have minds. We can distinguish physically between consciousness and unconsciousness to a useful degree (i.e., there are devices that monitor brain activity for exactly that purpose during surgery).
AIs definitely generate words differently than we do; that's the entire point. We can look at every step of how they do it and find no need for, or sign of, any activity that is similar to a brain-based mind. That's not a problem for the overall idea of a mind, because there is more than one way to get to a particular end-point (meaningful speech/text, in this case). And there is no *expectation* or *intention* on the part of the ChatGPT LLM authors that they have or will create a mind. So it is perfectly reasonable to argue that yes, humans have more steps going on in the process of thinking about stuff and generating linguistic responses. I don't see any requirement at all - certainly not in evolution - that living things must favor "most efficient" over "functional". Steps in a process simply don't rate as a comparison of useful function in this case.
In sum; we have strong evidence that humans (and other animals) think and have a conscious mind and all that entails, based in the brain and part of the physical universe, and no evidence at all that computers running AI programs do any of that, or that they even need it to accomplish the tasks they are programmed for.
And yet there is actual evidence that thought only comes from brains, from neural structures and related systems that AI currently lacks entirely (although some teams are working on that). And we use the term "mind" to describe that domain of thinking. In the question of thinking, yes, it doesn't matter at all how "good" the AI is; if your concern is how biological systems think, AI does not replicate that, clearly. And that's the only benchmark we have for actual thinking.
The reason people wonder whether AIs can think, or have a mind, is that it was originally thought they could be constructed in a way that would *emulate* human thought and mind. If only we could do that, we could explain how humans (and animals) think and feel, which would be quite a thing to understand. But it turned out that that is a *very* hard problem, so AI over time moved to "what can we do with computers that replicate different human capabilities, only faster and more accurately?".
There are teams working to build artificial neural structures, that try to continue the original mission, but the problem that outsiders have is that they don't realize that functional AI is now focused on an entirely different mission. It's reasonable to ask whether AI can think, but the answer so far is "no way", for many reasons, including what I've discussed.
However, it's unreasonable to assert that there is no mind that works with thought to manipulate brains - the thinking, feeling, acting triad I discussed earlier. That is no longer a reasonable question. Minds exist, and we not only have physical explanations for thoughts and other mind elements, but we can interact with them and *change* them through physical interactions. We don't have a good overall theory of consciousness yet - there are still dualists around who try to push consciousness and mind into the realm of "it's just part of reality", a kind of restating of "it's your spirit", see David Chalmers for this - but we have a lot of evidence that mind is something beyond a remnant of Descartes.
If your concern is actually whether AIs can think, and/or have a mind based on those thoughts, the only comparison you have is animal and human brains and bodies. You asked if we had any experiments we can do that show that the mind, and thinking, actually exist, and the answer is "yes". But you don't seem to want to acknowledge that.
There are many subsets of speculation on how minds work, but the *evidence* we have so far is that the existence of a mind (and consciousness) requires a brain with a perceptive network, physically based emotions, and the structures that allow conscious and unconscious manipulation of the perceived sensory input. This is not an *assumption*; it is based on the evidence to hand.
Given that information, it's easy to understand why both computational linguists and cognitive scientists simply dismiss the idea that these AIs are in any way "thinking". You're right; there is no debate among the people who do this for a living. There are a few fringe folks, who I would say are likely Dualists if you scratch them, but they are blatantly ignoring the evidence to make their claims, or - like Chalmers - appealing to the mystery of the gaps in our knowledge and asserting the answer must lie in one of those.
There's certainly little debate as to whether minds and thoughts exist. (Note that none of this excludes Functionalism, in that the only minds we have to study are present in brains, and so it is possible that the components of a mind could be created artificially, or exist in alien life. It's just that we've never gotten anywhere near that level of complexity in a model.) Even Daniel Dennett acknowledges the processes arising from the brain that we describe as "mind"; he just argues that the idea that it is in any way separate from the brain and its functions is a mistake based on our own internal perceptions. He argues that we have an illusion of mind rather than a mind itself; I'm good with that, but since it's essentially identical to a standalone mind, it's clear that his interest is simply to exclude dualism (which again I'm fine with).
So minds exist, thoughts exist, and they are tied to brains and sensory systems and hormonal systems and bodies, so far, well beyond just "output". Simulate those well enough and you could possibly end up with a conscious AI with its own mind. It's reasonable to ask whether an AI is conscious, but it's unreasonable to assume that consciousness can exist without a brain and thoughts and feelings and hence a mind (although models may get the job done, eventually). Can AIs think? Reasonable question; today's answer is no. Do minds exist? Certainly. Good evidence for that one.
Is the idea that minds exist and are inextricably tied to brain/body levels of complexity something you can accept? If not, then the discussion is moot.
Does software “think” when it generates a new Minecraft world, or compiles video, or solves an equation, or finds the best driving route between locations, or is this a question that only comes up with software that is programmed to mimic language?
Then what is even the point you’ve spent almost an entire page on?
because this whole discussion has been:
“hey, what if this thing?”
“that’s not how that works, that’s not how any of this works.”
“but what if it is, though?”
“it’s not, and here are the reasons why it’s not.”
“but what if it is, though?”
That’s why one of my first comments was about how I think the companies marketing these chatbots as AI was giving people false expectations.
I apologize for being short, it’s just a pet peeve of mine that people think ChatGPT is like one step away from being HAL9000 or Lieutenant Data when it’s just a complicated chatbot.
What I did say was that you're using "think" so as to exclude AIs by definition, and within that domain the question of whether AIs think isn't of any significance.
I do that within that domain because the only baseline we have for thinking is biological, and most AIs today are not biologically oriented and not *designed* to replicate biological systems. Further, they are not *built* to "think" in *any* sense of the word. I say they don't think because there is no *evidence* that they think, or can think, because they are not *designed and built* to think in any way. They are built to manipulate language in a mechanistic way. They don't have the systems or hardware or software to become conscious even accidentally.
That's about as evidence-based a stance as there can be, not an act of exclusion by definition alone. If you can bring yourself to understand that, you'll have a better grasp on modern cognitive science, which has shed the idea that if something *looks* like it's conscious, it is. That standard has been shown to be incorrect.
I was reacting as I did because the method you are using to question seems to presuppose ideas based on arguments rejecting modern conclusions about mind, consciousness and thought. It's hard to tell the difference in a purely Socratic exchange; context is important. And as you can see, I'm not the only one who drew wrong conclusions.
Sorry for that.
And the question of whether AIs can think *is* significant, but we *know* that ChatGPT and its ilk don't, so that question does not apply to the category of dedicated chat engines running on silicon. It applies to AIs that are designed to think, which are still in their infancy, or even zygotic stage, and eventually to anything that can replicate the complexity of beings we know do think.
Is that helpful?
And the question of whether AIs can think *is* significant
Fenomas isn't the only one, I'm also having trouble understanding where you're coming from here. How can the question be significant when you keep saying over and over again that by definition only biology can think?
(added in edit) Perhaps it would help if you could give a clear yes-or-no answer to one basic question: leaving aside the properties of any currently existing system, do you believe that in principle it is possible for a non-biological entity to be conscious?
From everything you've said so far, I would be fairly sure your answer would be "no", but whenever anyone says that's what they think you believe, you go all "no no no I didn't say that!"
Because that's the only platform that we have evidence of thinking for.
I am a Functionalist at heart, so I believe that non-biological entities that reach a certain complexity can be conscious. However, right now, we are constrained by evidence to instantiations of thinking beings that are biological. That could change, but we have a decent (not perfect) idea on the precursors for thought, and further, LLMs do not contain the components we currently understand to be necessary precursors to thought, mind and consciousness. Even Functionalists will be forced to admit that there is no *evidence* that LLMs are thinking.
I'm also a Pragmatist, so I understand that while science continually advances, we have to be led by what we understand today, rather than wait for some perfect understanding in the future. This means that we don't have to resort to ideas of "spirit" or "universal consciousness" or other "god of the gaps" arguments to explain consciousness. We can simply wait until the evidence we have improves.
Does that help? In short, LLMs have none of the characteristics we understand to be necessary for thought. We don't ask why tractors can't swim; but we are tempted to ask whether LLMs can "think" because they use language, even though they are about as different from thinking beings as tractors are from fish. We are fooled by the output into thinking there is a mind behind the screen. But that misconception has been understood since the early 60's, at least. (We discussed Searle previously.)
Well there's your problem, your assumptions are both flawed. Computers as they currently operate do not show signs of thinking, and there's no point at which they will if they just get fast or big enough. Now, I do think that artificial intelligence is possible, but I think it will require a new kind of architecture rather than what we have now.
They can do very impressive things, but it's not thought, no matter how much the PR guys want you to believe it is.
Ah, but what you have been missing in all these discussions is that with AI, we *can* measure and examine the internals. The internals *matter*, even if it was initially believed that they didn't.
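For instance - a minimal sketch, assuming the Hugging Face transformers library and the small, openly available GPT-2 checkpoint (my choice of stand-in here, not anything discussed above) - you can read out the model's entire next-token probability distribution at any step:

```python
# Hedged sketch: assumes the "transformers" library and the public GPT-2 model,
# purely to show that the internals (here, the full next-token distribution)
# can be inspected directly.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

inputs = tok("The capital of France is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits           # shape: (batch, seq_len, vocab_size)

probs = torch.softmax(logits[0, -1], dim=-1)  # distribution over the next token
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tok.decode([idx.item()])!r}: {p.item():.3f}")
```

Every weight and every intermediate activation is likewise available for inspection; how well we can *interpret* all of it is a separate question, but nothing about the mechanism is hidden from us.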
Point 1 - That's like saying "If they get big enough and fast enough, cars will eventually be rockets." But they won't. They lack essentials that rockets have. Unless those essentials are added, cars will *by definition* remain road-bound and be able to make tiny moves in small areas, both things that rockets don't do. (The fact that we *can* add parts and make them into airplanes simply speaks to the robustness of this argument, since without those physical parts, cars can't fly, much less become rockets simply by getting bigger and faster.)
The reason AIs don't think is not because they are still too small or too slow. That can be a part of it but it's missing the point. The reason is that thinking, mind, consciousness are emergent qualities of brains, perceptual systems, and a myriad of other chemical inputs and feedback loops, ALL of which are needed to produce conscious cognition as we understand it today. We have no physical examples of thinking systems - minds - without those components.
I hope this is obvious. It's the reason there is a line between General AI, which aims to replicate minds, and narrow AI, which aims to replicate functions. LLMs aim to replicate language use. They are not constructed in any way to do more than that. It's the fact that they are *built for purpose* that creates a bright line between them.
Point 2 - You'd have to go back to the 50's, because that line of thought - "if it can fool a human into thinking it's human by generating responses to speech/text inputs, then it's intelligent" - has been shown to have gaping holes. Also, "intelligence" is a rough measure of various capabilities of thought (some conscious, some not), but it is not an actual measure of whether there is a mind or not. Intelligence tests all assume that a mind is answering them. So the naive sense of "if it's convincing, it must be real" is at work in your assumptions, but that's no longer good enough.
Why? Well, for example, ChatGPT was recently trained on LSAT and Bar exam materials, and "passed" those with high scores. The problem is, there's no reason to assume it had to generate and use a mind to do so, since we are obviously capable of programming it without any elements of "mind" and still having it answer questions successfully. In that scenario it is a law-related test-taking system, but not a *General AI* that replicates what humans and animals do with their minds. It works entirely differently from what we know of biologically based cognition; it literally lacks the systems to do more than answer questions about law.
No one would call it a lawyer, in other words, because a lawyer is more than a system that answers questions about the law with 75% accuracy.
Actually, the field of AI has progressed from your way of thinking ("if we simply get a computer and program it correctly, logically we will be able to replicate human thought and minds", which was pretty much the goal of AI research in the 40's and part of the 50's) to "human-like General AI and function-oriented Narrow AI are wildly different things". Historically, General AI has moved from an entirely logic-based computational approach to one that is informed by neuroscience, endocrinology, and lots of analog electrical computation.
I've already described the bare elements of what is needed for mind. You need a substrate that is capable of cognition - taking sensory input, storing and manipulating it, changing it in certain ways as needed, and causing actions in the physical world. That means a brain, a perceptual system, and control systems for a body that houses the brain.
Next, you need the feedback and monitoring mechanisms that can fire off "tasks", for lack of a better word, that can model future states of both the world and the body, and select what actions will be taken when those states change. These tasks constantly compete for attention from the monitoring functions. The monitoring functions need to be aware of the state of the tasks - some people argue that this is the basis of what we perceive as consciousness. And the monitoring system needs to be able to modify itself based on, well, all the different interacting components, from "unconscious" brain data to emotions (chemical stimuli) to thoughts (protein structures affecting and affected by neurons) to neuronal firing states (did we just trip over something? Better deal with that!) and many more interactions of parts.
Further, the tasks and decision makers can be affected strongly by feelings and emotions, which are internal chemical states that can cause the mind, brain and body to change states through *internal* inputs as well as being caused by external ones. This is yet another system that affects the brain, but is not thinking or based in the mind.
And don't forget the input and data from senses - touch, taste, smell, sight, hearing, proprioception, and probably others that are less well known. All that goes into the mix.
Minds appear to be emergent from all these components. Model them, and you're going to get somewhere in your experiment. But - the big issue - the only models we have for how all this incredibly complex stuff comes together is biological. So we need a really amazing set of models, an analog and digital and symbolic and electrical and chemical and physical and physiological simulation with all parts firing together, in order to even get a hint of mind.
Why do we do it that way? Because coming up with a system that creates a mind a priori is a far, far harder task - as the early General AI researchers found out. Biological systems are our best guide to how to create a mind. Any tool today, no matter what sort of output it creates, that does not *attempt* to replicate a brain and body and all the other bits, is not going to create what we would call a mind, in any way.
General AI cannot be instantiated on a Narrow AI system, today, or in the near to medium future, for these reasons. That's why we say there is a bright line between the two. The idea that LLMs in some way "think" is simply a cognitive error based on the way our "theory of mind" works. We see something that *looks* alive, we treat it as alive until we can learn otherwise. But that filter is wide enough to turn waving leaves into monsters at night, teddy bears into beloved child companions, and shifting shadows into mysterious people moving in the back of dark alleys. It's no wonder we naively thought that if a machine could talk "like a human" for ten minutes without being detected, that we'd be on the way to human-like intelligence.
But we were soooo wrong.
Like if you went back to 1990 and asked random smart people what bar a computer should clear before it can be considered intelligent, you'd almost certainly get some answers that LLMs meet.
This argument is the equivalent of the trope where someone from "the past" gets shown a recording of music and thinks there's a tiny group of people in the playback device actually playing the music. All it means is that those random smart people's definitions were incorrect. It's essentially an appeal to false authority with the Dunning-Kruger effect mixed in. Being smart in their own field does not make them experts on the one in question, so their definitions don't matter.
And as I noted, in the field itself, by 1990 that stance had been obsolete for around 30 years... Not that it kept some folks from trying into the late 80's...