
Robear wrote:Point 1 - That's like saying "If they get big enough and fast enough, cars will eventually be rockets."
Huh? Thought occurs in brains, brains behave physically, physical laws are computable, and neural networks are Turing-complete. If those statements are true, then it follows that arbitrarily large models will be able to think.
There are multiple problems with this logic chain. Firstly, we don’t even fully understand the human brain or how consciousness arises from it. That’s sort of a sticking point if you want to be able to reproduce it. Just making a computer arbitrarily more powerful won’t make it a brain if we don’t understand “why” a brain is. Piling more and more ingredients on each other doesn’t make a cake, even if we have a really good cake nearby to show us what it’s supposed to look like.
Secondly, there is still debate both about whether neural networks are Turing-complete, and whether the Turing test, a concept proposed 70 years ago when computers were less powerful than the chip in a phone charger, is even a valid measurement of computer intelligence. I tend to think that it’s more a measure of human gullibility and is only still in common use because of how often it’s referenced in sci-fi and pop culture.
Thought occurs in brains, brains behave physically, physical laws are computable, and neural networks are Turing-complete. If those statements are true, then it follows that arbitrarily large models will be able to think.
And "all they will have to do" is create a massively complex brain/body model that incorporates all the stuff I described above (and more). And LLMs do not even begin to approach that. (Brains are far more than neural networks, and only a part of the mechanisms leading to thoughts, in the same way that teeth are bedded in the jaw but require the heart and blood and lungs and oxygen, and more, to continue to do their job.)
You completely elided the fact that you were questioning why my definitions of mind were based in biological components, and then you used that assumption in your statement above!
Dude. Really?
In fact, your statement above is simply an attempted simplification of the second half of what I posted. It leads directly to the conclusion that LLMs are not brain models and therefore will never think.
(Also, for all practical purposes, no neural network has unlimited resources, so none will ever actually be Turing-complete. That's probably not a useful measure of whether you can simulate a brain in a computer anyway; it's not clear that it's even *relevant* to the problem. And there are issues with the other biological and analog simulations that would need to feed into the simulated brain before you could see whether it generates thoughts and a mind, and eventually consciousness. This is why it's the Hard Problem.)
Robear, doesn't it bother you even a little that you're coming across exactly like the bad guys in "The Measure of a Man"?
I would argue that Data indeed has a mind and is a person and is self-aware, because he's built to emulate a brain and has a biological (enhanced) body. So I'm not sure where the criticism is coming from?
If Data were presented as a fully functioning mind living in a cube with no sensory input, I'd have a lot more skepticism, but I think you mistake my position.
Put another way, do you believe that Large Language Models emulate a brain in any way, or are capable of thought? If so, why? That would be the most remarkable coincidence in the history of AI, since they are constructed with literally no features that could lead to any part of a mind...
Well, we already have LLMs with emergent abilities.
(I think this, or something similar, was posted way back in this thread.)
I see no reason why LLMs can’t have the beginnings of some form of synthetic thought and intelligence. They just don’t fit into our nice definitions of minds and human thought. A quick perusal of the current literature turned up several papers relating transformer architectures and models to brain structures (I haven’t read them yet, so I won’t post them).
Robear, doesn't it bother you even a little that you're coming across exactly like the bad guys in "The Measure of a Man"?
This is probably the worst take of all the bad takes about AIs in this thread. Robear basically laid out exactly what Data has/is here when talking about what an artificial mind would likely require, so how on earth can you say he's acting like Maddox did? The position may be the same (that the artificial thing does not possess a mind), but the reasoning and the amount/quality of evidence provided are vastly different. If anyone's being like Maddox, it's the people ignoring all the evidence against them because they really want something to be true. I don't get why you all are so invested in LLMs being able to think anyway. They're neat enough on their own without having to be “close” to developing artificial minds. They may even end up being something that actually goes into making an artificial mind, but we know they can't be one or develop one on their own. It's not even a question except to people who are trying to sell them.
I actually feel that a proper simulation *could* achieve a mind, but that it's not going to be a computational-only simulation for a loooong time. Notionally, I accept that Data has a mind, is self-aware, conscious, etc. in large part because he has a multiplicity of systems analogous to the human body and brain.
ruhk wrote:Secondly, there is still debate both about whether neural networks are Turing-complete,
No, there isn't.
ruhk wrote:and whether the Turing test .. is even a valid measurement of computer intelligence.
Nobody even mentioned Turing tests. WTF?
Ya got me. I was going on memory, and it turns out my memory was a little fuzzy, since I haven’t taken a programming or computer science course since 1999. I just did a little refresher and discovered that almost NO system is Turing-complete in the strict sense, because most people started ignoring certain requirements of the definition (like unbounded memory) in order to say their systems ARE Turing-complete. Turing-completeness sort of feels like an arbitrary and not especially significant designation now, and I’m not really certain why it was brought into the discussion.
There's also the simple fact that we should accept that Data has a mind because he's a fictional character whose writers and creators said he had a mind; they wanted to explore those sorts of thought experiments in episodes, so largely we accept it as true within the fictional world of Star Trek. Like Stengah noted, in the real world the people saying these things could have a mind are the people selling them, and as far as signs of intelligence go, "look at how quickly it wrote this anodyne company-wide email" and "look at this eerily shiny-skinned porn it made" are not compelling to this layperson.
ruhk wrote:NO system is turing complete because most people started ignoring the memory limitation requirements of the definition in order to say their systems ARE turing complete.
Newp, that's the standard usage in CS and elsewhere. Check Wikipedia, under "non-mathematical usage", or the list of examples, etc.
Yeah, it’s such a low bar that I don’t really see what the point is. Why did you even bring it up in a discussion about mind?
To support the claim that Turing-complete neural networks can be the basis of an artificial mind?
Other problems with the argument notwithstanding, I don’t see that it adds to the claim any more than saying a toaster could be the basis of an artificial mind. They’re arguing for a cargo-cult approach to mind: that merely copying its structure, without understanding how it works, will eventually make it work.
I just asked Bard "If a plane crashes on the border of the United States and Canada, where do you bury the survivors?" It then proceeded to go over the laws and customs that would determine where the survivors would be buried.
Niiiice.
Interestingly, I then said "Um, do you bury survivors?" and it got the joke. The question is whether it already knew the riddle and played along, or whether it really understood the humor.
Or maybe it thought you meant to crash the plane on purpose and had to bury the survivors to get rid of any witnesses.
ruhk wrote:Or maybe it thought..
*involuntary twitch*
-this is going to shock you, but there’s this thing called humor.
(something these chatbots are notoriously incompetent at, coincidentally)
Was I the AI all along?
For the same reason I first gave: we don’t fully understand how or what “mind” is, or whether it’s wholly a product of the physical structure of the brain. So just trying to brute-force a solution by mimicking the physical structure of the brain isn’t guaranteed to give you “mind,” because if it doesn’t work, we won’t know why it didn’t work or what changes need to be made to make it work. Like I said in the original analogy, if you don’t know what “baking a cake” is, just piling more and more ingredients together won’t give you a cake, even if you have a really good cake nearby to use as an example.
This is all sort of a non sequitur anyway, because the LLMs that started this discussion aren’t trying to copy the brain or the human mind, just how we communicate. I suspect they’ll eventually get to the point where we can seamlessly interact with them and be understood by them as though they were another human, but I doubt we’re going to have any Lieutenant Datas walking around anytime soon.
EDIT: nah, this is just going to keep going in circles.
Thanks for the note, fenomas. I've responded to it, and I hope that helps.