It is just SO stupid!
;P
Though I will admit that Andi being a nickname for Cassandra makes no sense...
They should just let Daniel Craig do a new "up to 11" accent for every new Knives Out film. No explanation, no-one even acknowledges it, now Daniel Craig is talking like a gangster from Brooklyn in 1931. Must-see filmmaking.
I vote that he does the same accent but it keeps getting thicker and thicker with each film until you have to turn on subtitles to understand anything he’s saying.
Blanc: “Ah wah mer fer, bee nar wah may? Bee may? Nef foo doo wahr?”
Murderer: “I’m sorry, what is he saying?”
I mean, these films are about Blanc, right? Like Poirot stories?
Sounds like a job for ChatGPT:
Write a scene for a new Knives Out movie, but Benoit Blanc's accent is so thick no one can understand him.
The scene opens on Benoit Blanc standing in the grand library of the Thrombey estate, surrounded by the suspects in the murder case. He clears his throat and begins to speak, his thick southern accent making it difficult for anyone to understand what he is saying.
"Well now, it seems we have quite a puzzle on our hands," he drawls, pacing back and forth. "I've been examining all the evidence, and I believe I have a theory on who might be responsible for this heinous crime."
The suspects exchange puzzled looks, straining to decipher Blanc's words. Marta, the nurse, takes a step forward and asks hesitantly, "I'm sorry, Mr. Blanc, could you please speak more clearly? We're having a hard time understanding you."
Blanc chuckles good-naturedly and holds up a hand. "Ah, my apologies. I tend to get a little carried away with my accent at times. Let me start again." He takes a deep breath and enunciates each word carefully. "I believe I have identified the culprit in this case. It was someone who had a motive, opportunity, and the means to commit the crime."
The suspects lean in, hanging on Blanc's every word as he lays out his theory. Despite the difficulties of understanding him, they are all eager to hear his conclusions and see if his deductive skills can unravel the mystery. As Blanc continues to speak, the tension in the room grows, building to a dramatic reveal that will leave everyone shocked and amazed.
Again, accidentally hilarious because of what a terrible job it did of understanding the assignment.
They should just let Daniel Craig do a new "up to 11" accent for every new Knives Out film. No explanation, no-one even acknowledges it,
If the IMDB trivia page is trustworthy (and I'd hate to live in a world where it wasn't), the director briefly considered doing exactly that.
I mean, these films are about Blanc, right? Like Poirot stories?
Ostensibly yes, but my hot take is that they actually aren't. Rian Johnson has a nice formula set up, where in both movies the actual main character is the ingenue wrapped up in the mystery, and Blanc is a kind of arbitrarily smart side character. There's an unspoken premise that he's such a great detective that he'll automatically deduce anything that can be deduced from the information he has available - but the movies aren't about him and his deductions, they're about the ingenue and how she drives the plot to its resolution while Blanc is fluttering about deducing things.
Personally I loved them both; the second was fun and the first one was almost perfect.
Rian Johnson's movies are actually about
Joseph Gordon-Levitt.
Rian Johnson's movies are actually about
two hours fifteen minutes.
Fenomas, I think you nailed it.
Remember, if you start watching Lawrence of Arabia at 10:15 PM New Year's Eve, by 12:00 AM you will have been asleep for 36 minutes.
I don't know why, but I find the insult "eat shit" hilarious. It's non-gendered, non-racial, and (for me at least) not overly harsh.
If someone said that to me I'd probably laugh rather than get upset. I've gotta start using it more in my day-to-day life.
There's always room for coprophagia.
I don't know why, but I find the insult "eat shit" hilarious. It's non-gendered, non-racial, and (for me at least) not overly harsh.
I’ve never heard that insult without “…and die.” tacked on. I’m not so sure a “not overly harsh” description applies.
PaladinTom wrote: I don't know why, but I find the insult "eat shit" hilarious. It's non-gendered, non-racial, and (for me at least) not overly harsh.
I’ve never heard that insult without “…and die.” tacked on. I’m not so sure a “not overly harsh” description applies.
Growing up in Québec, "Va chier" ("go shit") and "Mange d'la marde" ("eat shit") were the two most common insults.
Also:
I'm doing my planning for the coming week, and writing "Week of January 1st to January 7th, 2023" feels weird.
Oh, I am totally gonna f*ck up writing the date on my rent check tomorrow (yes, my landlord still makes us write checks).
I don't understand the "Chinese Room" thought experiment, or why it's famous or why anyone pays attention to it. It transparently assumes its conclusion - it boils down to: "assume that a strong AI can be emulated with books, and assume by definition that books cannot think, and voila - strong AI cannot think."
It's basically saying that a program that only does what it's programmed to do cannot be said to be a consciousness, or be actually thinking when it does it.
It doesn't preclude the possibility of AI existing, it just lets us understand that no matter how complex the algorithm is, it is not an actual artificial intelligence so long as it can only do whatever it was programmed to do.
It transparently assumes its conclusion - it boils down to: "assume that a strong AI can be emulated with books, and assume by definition that books cannot think, and voila - strong AI cannot think."
No, you've created that tautology with your framing. Searle's thought experiment asserts that a blind interaction test cannot provide enough information to judge whether the responder is a thinking being. It has nothing to do with the question of whether books or computer programs can actually think (although certainly it argues against that stance, but so do many, many other lines of evidence). It is not unreasonable to believe that books cannot think; but no one believes that books are "Strong AI". (The functionality of the substrate that produces thoughts does matter, but that's a follow-on to Searle's argument.)
Searle was addressing the Turing Test, specifically. The Turing Test asserts - from the perspective of the '40s and '50s - that a person alone in a room with a teletype, exchanging messages with something on the other end of the connection, can judge whether that other entity is human-equivalent or not. Therefore, a computer program that could simulate human knowledge well enough to convince another human that it is indeed human is functionally equivalent to a human intelligence itself. For years, this was accepted wisdom: write a complicated enough language response program and it could be considered, well, artificially (humanly) intelligent.
But Searle saw a problem. He imagined himself, a non-Chinese speaker, as the respondent in the room (hence the name, the Chinese Room). He is undoubtedly a thinking human being, but he has absolutely no idea what even one ideogram that pops up on the teletype means. However, he has a set of algorithms in a computer that say "when you see these ideograms, reply with these other ideograms". (Don't worry about the actual programming, this is philosophy.) He uses a computer to keep the speed of response at a level that emulates a human.
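(Not from Searle, just a toy sketch to make that rule-following concrete: the room's "algorithm" is nothing more than a lookup from input symbols to output symbols, and the phrases and replies below are invented placeholders.)

# Toy Chinese Room in Python: reply by pure symbol lookup, no understanding involved.
# All phrases here are made-up placeholders, not real conversational rules.
RULES = {
    "你好吗": "我很好，谢谢",
    "你叫什么名字": "我叫王小明",
}

def chinese_room_reply(message: str) -> str:
    # The operator just matches the incoming symbols against the rule book.
    return RULES.get(message, "对不起，我不明白")

print(chinese_room_reply("你好吗"))  # plausible answer, zero comprehension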
So Searle, with absolutely no knowledge of Chinese, can convince a Chinese speaker that there is another Chinese speaker on the other end - and it's not Searle, it's the program, which surely does not "understand" Chinese in any useful sense of the term.* Thus, Turing's Test is *not* a good way to tell if a computer program is intelligent, *even if* there is a human intelligence in the loop. The questioner cannot tell whether there is an intelligent human fronting the computer or whether a computer alone is providing the answers. Searle does not even know if good answers are going out.
This does not mean that AI cannot be useful, or even human-like intelligent, in and of itself. It simply argues that a Turing Test-like evaluation is not enough to tell. However, it's proven far simpler to produce human-like responses in narrow areas (like chat, or symptom evaluation in medicine) than to replicate the human process of thinking. This is the distinction between Narrow AI (where we are today) and General AI (where we want to go), with the proviso that we could easily find a path to General AI without replicating *human* thought, as there are presumably many other ways to reach that goal than the one humans evolved based on our wetware.
It is often used as an argument against mind-brain duality, which is increasingly facing problems as neuroscience shows more and more that the mind is inextricably tied to a complex processing substrate (the brain, in our case) and is not a free-floating element of human experience.
*Note that if you believe that computer programs that just algorithmically address language responses are actually human-like intelligences, you have a huge amount of work to do to show that that is a reasonable stance. That is not and never has been considered a reasonable belief in the philosophy of mind, for reasons that should be obvious.
It's basically saying that a program that only does what it's programmed to do cannot be said to be a consciousness, or be actually thinking when it does it.
It doesn't preclude the possibility of AI existing, it just lets us understand that no matter how complex the algorithm is, it is not an actual artificial intelligence so long as it can only do whatever it was programmed to do.
Ok, what about a program being programmed to be conscious?
Stengah wrote: It's basically saying that a program that only does what it's programmed to do cannot be said to be a consciousness, or be actually thinking when it does it.
It doesn't preclude the possibility of AI existing, it just lets us understand that no matter how complex the algorithm is, it is not an actual artificial intelligence so long as it can only do whatever it was programmed to do.
Ok, what about a program being programmed to be conscious?
A) you'd have to determine what consciousness actually is first, and then figure out how to code that.
B) if it was truly conscious, it would mean it could alter or ignore its own code, at which point it's no longer only doing what you originally programmed it to do, as the program has changed.
There is the thought that we should call what we're currently developing simulated intelligences rather than artificial ones, as that bypasses the question of whether or not they are truly intelligent minds that can actually think for themselves.
Stengah beat me to it. If you want a Nobel Prize, and any number of cash incentive prizes, figure out *how* to "program consciousness". Your name will go down in history.
It's basically saying that a program that only does what it's programmed to do cannot be said to be a consciousness, or be actually thinking when it does it.
A) you'd have to determine what consciousness actually is first, and then figure out how to code that.
Well then how can you claim that the program isn't conscious if you can't define consciousness in the first place? Can't have it both ways, dude.
What you're basically saying with that second point is that we wouldn't recognize consciousness if we created it anyway, thereby invalidating your first point.
It's entirely possible to know what a car is, and distinguish cars from other things in spite of the incredible variety of them, without knowing (for example) what kind of engine the car uses, how the differential gearing works, all sorts of things. Consciousness is similar.
We are not perfect at defining it, but there's a lot of work going on in neuroscience and medicine to narrow down what consciousness looks like. Thought experiments like the Chinese Room do the important work of evaluating whether a particular *test* of intelligence will work; in this case, the Turing Test is not sufficient.
One thing we can say, though, is that people are very good at distinguishing consciousness from unconsciousness (and yes, there are edge cases that show that there's a gradation, like sleepwalking, comas, or partial consciousness under anesthesia). So it would be wrong to say that we don't know what consciousness looks like in humans, but also wrong to say we can precisely define it in every way.
Note that it's entirely possible that consciousness and thinking are not related in the way we think they are. We may not even be conscious in the way that most people believe, with a little guy in our brain directing our actions (in fact, another thought experiment shows easily that that is not the answer, since that little guy would presumably have another little guy inside *their* head, and so is not explanatory). So we're kind of bleeding into a wider discussion pool by adding consciousness to the mix.
Note that NLP researchers laugh at the idea that a chatbot is in any sense intelligent or conscious. Hysterically. They regard that as a category error.
We're excellent at distinguishing consciousness in people and awful at it everywhere else. It's why anthropomorphism exists. It's why people talk to their plants and pets. It's why Thor exists - we assigned consciousness to the weather.
An intangible piece of software? Forget about it. We aren't going to have a clue.
And the fact that some chatbots arguably pass the Turing Test shows that.
There is some thought on whether quantum theory applies to consciousness, which, if true, might mean a classical computer could never attain true "consciousness", artificial or otherwise.