News updates on the development and ramifications of AI. Obvious header joke is obvious.
Let's put GPT in charge at Twitter and see what happens.
It couldn't do much worse than what's running it right now.
Tech companies have agreed to an AI ‘kill switch’ to prevent Terminator-style risks
Fortunately, Internet articles and discussion forums are never used as AI training data, so the AIs shouldn't see this coming...
Skynet knowing an attempt to kill it would be made is why it launched Judgment Day in the first place. That and they gave it unrestricted access to nuclear weapon systems.
Really though, all this agreement is meant to do is once again give the general public the false impression that LLMs are far more powerful and capable than they actually are.
https://futurism.com/the-byte/ceo-go...
Google CEO Sundar Pichai says hallucinations are still an unsolved problem.
I think it looks increasingly like they are an unsolvable problem.
I'm starting to think even the word "hallucination" is a marketing term, meant to make these things seem more human and powerful than they are.
An LLM throwing Reddit spam at you isn't a hallucination. It's pretty much what it's designed to do. Garbage in, garbage out.
Another way of looking at it is that those hallucinations (which I also agree is the wrong word to use) are the flickerings of AI doing the thing it hasn't done yet: examples of actual hypothesis creation beyond recombining existing ideas.
Most of the thoughts I have on any given day are just recombinations of ideas and thoughts and things I've experienced before. Even the few truly original ideas I have could easily be called crazy hallucinations, and are just as batshit stupid as what I've seen AI come up with (sometimes I even have the same level of confidence as the AI).
This idea, like most of mine, is not original. It’s taking data I’ve heard before and regurgitating it here. (relevant bit at 30:00)
I've taken to calling A.I. in its current form "Computer Mad Libs"
Had an interesting AI interaction yesterday. I was installing a cabinet door and wanted to double check some basic math with fractions, specifically 31 1/2 - 2 7/8, before drilling any holes. Google didn't show a result without clicking through to a page, so I opened up the perplexity app and used speech-to-text to ask the question.
The result, complete with every step shown, was 28 11/16… which is wrong. The steps correctly reduced the answer down to 28 5/8, and then it just took the reduction one step further.
Goes to show how fundamentally unreliable an LLM is for any question with a single, correct answer.
Especially math. It's a word salad shooter and for some reason they're not bothering to install a math module and use it when it recognizes a math problem.
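(To illustrate the point: the exact arithmetic is trivial for a real math module. Here's a minimal sketch in Python, purely my own illustration of what bolting one on would look like, not anything these products actually run:)

```python
from fractions import Fraction

# 31 1/2 - 2 7/8, computed exactly rather than predicted word-by-word
result = (31 + Fraction(1, 2)) - (2 + Fraction(7, 8))

whole = result.numerator // result.denominator
frac = result - whole
print(whole, frac)  # 28 5/8 -- the answer the LLM walked right past
```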
The LLMs did successfully solve a computer problem I was having. I basically treated it like a tech support agent and gave it a problem I've been having with this PC for a while, where it mysteriously boots up to a black screen for 20 minutes after a Windows update, then continues like nothing was wrong. For years I researched the problem without success, and just got used to booting the PC first thing and going for a morning walk. I think because the LLM is better with keywords and synonyms, it could find the answer and give it to me. The answer was out there, but Google couldn't find it with me driving.
The solution was a conversation too. Use Event Viewer, ask about some of the specific errors, try this, then this. That didn't work, what about this? Some scan I've never heard of found a few files to replace, which was promising enough to call it good and wait. It's been several months and several updates without it happening, so I think it's fixed.
jowner wrote: An LLM throwing Reddit spam at you isn't a hallucination. It's pretty much what it's designed to do. Garbage in, garbage out.
I also don't like the term "hallucination" and I agree that it's not a hallucination when an AI just repeats what it saw on Reddit. The LLM isn't making something up when it does that; it's repeating something it was taught that was just wrong. And this is solvable by not training it on nonsense.
Hallucinating is when it invents court cases in a legal brief or says an author wrote books that don't exist or invents a fictional biography for someone. I don't know if this is solvable, but it's certainly unsolved.
Agreed. These results from Google's AI aren't hallucinations, just cases of garbage-in-garbage-out. It's actually doing a decent job of what it's supposed to do, which is to amalgamate and summarize the top search results. The problem is that it doesn't tell you that this is all it's doing. That, and the way it presents the summary: it will take things that are supposed to be jokes, lies, and sarcasm, strip out all the surrounding context that'd normally let you identify them as such, and then present it all as factual information.
I think too many people ascribe too much agency to these chatbots and don’t realize that they don’t actually have any contextual understanding of what you’re asking them or what they’re replying, and are just constructing the most likely arrangement of words from the data available to them. They can recite the definitions of “true” and “false” but don’t actually have any concept of what is true or false.
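(A toy sketch of what "most likely arrangement of words" means in practice. This is my own illustration with a made-up bigram table, nothing like a production model's scale, but the generation loop is the same idea: pick a statistically likely next word, append, repeat. No notion of truth ever enters into it.)

```python
import random

# Hypothetical toy "language model": each word maps to plausible next
# words with weights. A real LLM does the same next-token step, just
# with billions of learned parameters instead of a lookup table.
bigrams = {
    "the": [("cat", 0.6), ("dog", 0.4)],
    "cat": [("sat", 0.7), ("ran", 0.3)],
    "dog": [("ran", 0.8), ("sat", 0.2)],
    "sat": [("down", 1.0)],
    "ran": [("away", 1.0)],
}

def generate(word, steps=3):
    out = [word]
    for _ in range(steps):
        options = bigrams.get(out[-1])
        if not options:
            break
        words, weights = zip(*options)
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat down" -- fluent, but no concept of true or false
```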
That's why they were branded as "AI" in the first place, so people would make that mistaken ascription.
Philip K Dick really would have dug the idea of computers "hallucinating" and fabricating lies.
jowner wrote: An LLM throwing Reddit spam at you isn't a hallucination. It's pretty much what it's designed to do. Garbage in, garbage out.
I also don't like the term "hallucination" and I agree that it's not a hallucination when an AI just repeats what it saw on Reddit. The LLM isn't making something up when it does that; it's repeating something it was taught that was just wrong. And this is solvable by not training it on nonsense.
Hallucinating is when it invents court cases in a legal brief or says an author wrote books that don't exist or invents a fictional biography for someone. I don't know if this is solvable, but it's certainly unsolved.
I can accept that.
When it spits out something so nonsensical that people go, "wow, that's not real at all, but good job on being uniquely weird."
I think the recent Google stuff has been good for laughs, but how quickly it was traced back to terrible sources just proved how meh their LLM is. Which also makes it pretty clear why Google was holding back. People kept calling on Google to release something and compete with OpenAI, but its blunders so far make it clear why it hesitated.
I don't think ChatGPT would have done any better. Despite how they made the responses look, Google's LLM wasn't actually trying to use what it was trained on to answer the things it was asked; it was just summarizing the contents of the top search results.
Edit- It certainly does make Google look like it doesn't have a clue how to use its LLM, though, and is just in the AI race so it isn't left behind, not because it has any actual goal to pursue with it.
Philip K Dick really would have dug the idea of computers "hallucinating" and fabricating lies.
So would the movie adaptations.
DECKARD: "Memories... you're talking about memories."
TYRELL: "The term we use is 'training data'".
RACHAEL: "Is this testing whether I'm a replicant or a lesbian, Mr. Deckard? Replicants were originally created on the Greek island of Replicos."
I just watched a morning news spot talking about an AI that allows a school system to optimize bus & van routes for pick up and drop off. Running over the weekend, it comes up with a preliminary set of routes for Monday given the available resources.
What am I missing here? This sounds like an algorithm brute-force solving a huge travelling salesman problem. Perhaps what I'm failing to understand is that any problem solved by computers is called AI to attract investors.
This sounds like an algorithm brute-force solving a huge travelling salesman problem. Perhaps what I'm failing to understand is that any problem solved by computers is called AI to attract investors.
That's accurate. In this case, it's the news, so there's also the possibility that they decided among themselves to refer to it that way to make the story more "sensational".
Pretty soon they'll be calling calculators AI, too (a calcul-AI-tor?), because you plug in the formula and the computer goes through all the steps and figures out the answer for you. Obviously because it does that it means the computer knows how to think, and not that it's just following an algorithm.
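(To make the brute-force point concrete, here's a toy sketch of exhaustively solving a tiny routing instance. The stops and distances are entirely made up; it just shows why nobody brute-forces a real district's routes, since the number of orderings grows factorially, and why actual route planners use heuristics.)

```python
from itertools import permutations

# Made-up distances between a depot and three stops. A real district
# has hundreds of stops, and checking every ordering grows factorially,
# which is why production route planners use heuristics instead.
dist = {
    ("depot", "A"): 4, ("depot", "B"): 7, ("depot", "C"): 3,
    ("A", "B"): 2, ("A", "C"): 5, ("B", "C"): 6,
}

def d(x, y):
    # Distances are symmetric; look up whichever direction is stored.
    return dist[(x, y)] if (x, y) in dist else dist[(y, x)]

stops = ["A", "B", "C"]

def route_length(order):
    # Total length of depot -> stops in this order -> back to depot.
    legs = zip(("depot",) + order, order + ("depot",))
    return sum(d(a, b) for a, b in legs)

best = min(permutations(stops), key=route_length)
print(best, route_length(best))  # optimal visiting order for this toy case
```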
Helen Toner has some things to say about Sam Altman.
The board didn't know of ChatGPT until they read about it on Twitter after it was already released.
Despite claiming numerous times that he was an independent board member with no financial interest in OpenAI, he actually owned the OpenAI startup fund (its governance structure was changed just this April to finally remove his ownership).
He gave the board "inaccurate information about the small number of formal safety processes that the company did have in place, meaning that it was basically impossible for the board to know how well those safety processes were working or what might need to change."
Two OpenAI executives told the board that Altman engaged in "psychological abuse," that he was creating a toxic work environment, and was unfit to lead the company.
She thinks a number of the employees who signed the letter supporting Altman after he was fired did so out of a fear of retaliation should he win and discover that their names were not on it.
She doesn't say why she wasn't making these arguments against Altman after his dismissal, which is when they would have done the most good and could have possibly prevented his reinstatement.
This image? Who cares? It's hardly surprising or even worth noting. If it wasn't ai generated I'd have expected it to have been entirely created in photoshop. Were there people out there who actually thought it was a picture of an actual camp? This honestly sounds more like an attempt to dismiss or discredit concerns over Rafah rather than a legitimate grievance. Like, they can't counter the outrage with facts so they're trying to insinuate that the pictures coming out of Rafah are also ai generated, since people are using this ai generated image.
As if they would let any camp sit long enough without bombing it for them to plan that out in the first place, let alone actually do it.
If it's being used to help Palestinians, there'll be a whole lot of governments suddenly becoming very anti-AI.
Stengah wrote: This image? Who cares? It's hardly surprising or even worth noting. If it wasn't ai generated I'd have expected it to have been entirely created in photoshop. Were there people out there who actually thought it was a picture of an actual camp? This honestly sounds more like an attempt to dismiss or discredit concerns over Rafah rather than a legitimate grievance. Like, they can't counter the outrage with facts so they're trying to insinuate that the pictures coming out of Rafah are also ai generated, since people are using this ai generated image.
The criticism, from Palestinians, is none of that:
@bluepashminas wrote: palestinian journalists have been risking their lives for months to document every single massacre and instead people are reposting an ai-generated “art” that says “all eyes on rafah” and tells us nothing about what is actually happening on the ground or gives us any action items
@folkoftheshelf wrote: the resharing of that AI art insta template with “all eyes on Rafah” is so odd to me, there are very real and very tangible pictures and images taken BY Palestinians on the ground of the incredibly corporeal horrors they are seeing and facing, using AI is just disingenuous
I think, ultimately, the image is fine, but I think this critique is fair as well.
Sure, but there's a very valid reason to choose a bland image like this one. Using real pictures of the horrors certainly gets the reality of the situation across better, but it also makes it much easier to flag and remove those images for "violent" content, which is definitely a tactic that gets used more and more to hide provocative protest images. This image isn't being used for news reports either, just to show that the poster supports Palestinians, so it's not meant to convey any information about the situation on the ground. It's basically the equivalent of a temporary Facebook profile image in that regard, and it really doesn't matter how it was made. Complaints focusing on it being AI generated just sound calculated to get people fighting about AI art rather than the actual genocide, as the same issues would apply equally if it was made by a human using Photoshop rather than by AI.
Edit - to put it another way, the complaint that the image is pure slacktivism is very legitimate and I agree entirely. It's the news stories framing the complaint as if the problem is that it was AI generated that I feel are being pushed to distract from the real issue.