[News] The AI Thread!

News updates on the development and ramifications of AI. Obvious header joke is obvious.

Top_Shelf wrote:

Let's put GPT in charge at Twitter and see what happens.

It couldn't do much worse than what's running it right now.

Tech companies have agreed to an AI ‘kill switch’ to prevent Terminator-style risks

Fortunately, Internet articles and discussion forums are never used as AI training data, so the AIs shouldn't see this coming...

*Legion* wrote:

Tech companies have agreed to an AI ‘kill switch’ to prevent Terminator-style risks

Fortunately, Internet articles and discussion forums are never used as AI training data, so the AIs shouldn't see this coming...

Skynet knowing an attempt to kill it would be made is why it launched Judgment Day in the first place. That and they gave it unrestricted access to nuclear weapon systems.

Really though, all this agreement is meant to do is once again give the general public the false impression that LLMs are far more powerful and capable than they actually are.

https://futurism.com/the-byte/ceo-go...

Google CEO Sundar Pichai says hallucinations are still an unsolved problem.

I think it looks increasingly like they are an unsolvable problem.

Bruce wrote:

https://futurism.com/the-byte/ceo-go...

Google CEO Sundar Pichai says hallucinations are still an unsolved problem.

I think it looks increasingly like they are an unsolvable problem.

I'm starting to think even the word "hallucination" is a marketing term to make these things seem more human and powerful than they are.

An LLM throwing Reddit spam at you isn't a hallucination. It's pretty much what it's designed to do. Garbage in, garbage out.

Another way of looking at it is that those hallucinations (which I also agree is the wrong word to use) are the flickerings of ai doing the thing it hasn’t done yet: examples of actual hypothesis creation beyond recombining existing ideas.

Most of the thoughts I have on any given day are just recombinations of ideas and thoughts and things I’ve experienced before. Even the few truly original ideas I have could easily be called crazy hallucinations, and are just as batshit stupid as what I’ve seen ai come up with (sometimes I even have the same level of confidence as ai).

This idea, like most of mine, is not original. It’s taking data I’ve heard before and regurgitating it here. (relevant bit at 30:00)

I've taken to calling A.I. in its current form "Computer Mad Libs"

Had an interesting AI interaction yesterday. I was installing a cabinet door and wanted to double-check some basic math with fractions, specifically 31 1/2 - 2 7/8, before drilling any holes. Google didn't show a result without clicking through to a page, so I opened up the Perplexity app and asked the question by voice.

The result, complete with every step shown, was 28 11/16… which is wrong. It correctly reduced the answer down to 28 5/8, and then just took it one step too far.

Goes to show how fundamentally unreliable an LLM is for any question with a single, correct answer.
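If you want to double-check a cut like that without trusting a chatbot, Python's built-in fractions module does exact rational arithmetic. A quick sketch (confirming 28 5/8, not 28 11/16):

```python
from fractions import Fraction

# 31 1/2 minus 2 7/8, computed exactly.
answer = (Fraction(31) + Fraction(1, 2)) - (Fraction(2) + Fraction(7, 8))

whole, num = divmod(answer.numerator, answer.denominator)
print(answer)                                 # 229/8
print(f"{whole} {num}/{answer.denominator}")  # 28 5/8
```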

Especially math. It's a word salad shooter, and for some reason they're not bothering to install a math module and use it whenever the system recognizes a math problem.
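For what it's worth, that kind of routing isn't exotic. Here's a minimal sketch of the idea, with everything invented for illustration (the ask_llm stub is a hypothetical stand-in for the actual model call): if the question parses as plain arithmetic, hand it to a deterministic evaluator instead of letting the model guess.

```python
import ast
import operator
import re

# Operators we allow in "calculator" questions.
OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
}

def eval_arithmetic(expr):
    """Safely evaluate +-*/ arithmetic by walking Python's parsed AST."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("not plain arithmetic")
    return walk(ast.parse(expr, mode="eval"))

def ask_llm(question):
    # Hypothetical stand-in for the chatbot fallback.
    return "plausible-sounding words"

def answer(question):
    # Crude trigger: if the question is only digits, operators, and
    # parentheses, route it to the math module instead of the model.
    if re.fullmatch(r"[\d\s+\-*/().]+", question.strip()):
        return str(eval_arithmetic(question))
    return ask_llm(question)

print(answer("31.5 - 2.875"))  # 28.625 -- computed, not guessed
```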

The LLMs did successfully solve a computer problem I was having. I basically treated one like a tech support agent and gave it a problem I've been having with this PC for a while, where it mysteriously boots up to a black screen for 20 minutes after a Windows update, then continues like nothing was wrong. For years I researched the problem without success and just got used to booting the PC first thing and going for a morning walk. I think because the LLM is better with keywords and synonyms it could find the answer and give it to me. The answer was out there, but Google couldn't find it with me driving.

The solution was a conversation too. Use Event Viewer, ask about some of the specific errors, try this, then this. That didn't work, what about this? Some scan I've never heard of found a few files to replace, which was promising enough to call it good and wait. It's been several months and several updates without it happening, so I think it's fixed.


jowner wrote:

An LLM throwing Reddit spam at you isn't a hallucination. It's pretty much what it's designed to do. Garbage in, garbage out.

I also don't like the term "hallucination," and I agree that it's not a hallucination when an AI just repeats what it saw on Reddit. The LLM isn't making something up when it does that; it's repeating something it was taught that was just wrong. And this is solvable by not training it on nonsense.

Hallucinating is when it invents court cases in a legal brief or says an author wrote books that don't exist or invents a fictional biography for someone. I don't know if this is solvable, but it's certainly unsolved.

Agreed. These results with Google AI aren't hallucinations, just cases of garbage-in, garbage-out. It's actually doing a decent job of what it's supposed to do, which is to amalgamate and summarize the top search results. The problem is that it doesn't tell you that this is all it's doing. That, and the way it presents the summary: it will take things that are supposed to be jokes, lies, and sarcasm, strip out all the surrounding context that'd normally let you identify them as jokes, lies, and sarcasm, and then present it all as factual information.
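To make the context-stripping point concrete, here's a toy sketch (the snippets are invented): once a summarizer keeps only the words and drops where they came from, a joke is indistinguishable from advice.

```python
# Invented search "results": the source/context fields are exactly the
# signals a human reader uses to decide how seriously to take a snippet.
results = [
    {"text": "Eating one small rock a day is good for you.",
     "source": "satire site", "context": "joke"},
    {"text": "Adults should drink plenty of water every day.",
     "source": "health org", "context": "medical guidance"},
]

def naive_overview(results):
    # The failure mode: keep the words, drop source and context,
    # and present everything in one confident voice.
    return " ".join(r["text"] for r in results)

print(naive_overview(results))
# Both sentences now read as equally factual information.
```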

I think too many people ascribe too much agency to these chatbots and don’t realize that they don’t actually have any contextual understanding of what you’re asking them or what they’re replying with, and are just constructing the most likely arrangement of words from the data available to them. They can recite the definitions of “true” and “false” but don’t actually have any concept of what is true or false.
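A toy bigram generator makes that concrete. This is a deliberately tiny caricature, not how a real LLM is built, but the core point survives scaling: the model below picks each next word purely from co-occurrence counts, and nowhere is there a slot for "is this true?".

```python
import random
from collections import Counter, defaultdict

# Tiny "training data". The model will happily learn falsehoods,
# because it only ever sees word adjacency, never truth values.
corpus = "the moon is made of rock . the moon is made of cheese .".split()

# Count which word follows which.
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def generate(word, n=6):
    out = [word]
    for _ in range(n):
        nxt = follows.get(out[-1])
        if not nxt:
            break
        # Sample the next word in proportion to how often it followed.
        words, counts = zip(*nxt.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the moon is made of cheese ."
```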

That's why they were branded as "AI" in the first place, so people would make that mistaken ascription.

Philip K Dick really would have dug the idea of computers "hallucinating" and fabricating lies.

Quintin_Stone wrote:
jowner wrote:

An LLM throwing Reddit spam at you isn't a hallucination. It's pretty much what it's designed to do. Garbage in, garbage out.

I also don't like the term "hallucination," and I agree that it's not a hallucination when an AI just repeats what it saw on Reddit. The LLM isn't making something up when it does that; it's repeating something it was taught that was just wrong. And this is solvable by not training it on nonsense.

Hallucinating is when it invents court cases in a legal brief or says an author wrote books that don't exist or invents a fictional biography for someone. I don't know if this is solvable, but it's certainly unsolved.

I can accept that.

When it spits out something so nonsensical that people go, "Wow, that's not real at all, but good job on being uniquely weird."

I think the recent Google stuff has been good for laughs, but how quickly it was traced back to terrible sources just proved how meh their LLM is. Which also makes it pretty clear why Google was holding back. People kept calling on Google to release and compete with OpenAI, but their blunders so far make it clear why they held back.

I don't think ChatGPT would have done any better. Despite how they made the responses look, Google's LLM wasn't actually trying to use what it was trained on to answer the things it was asked; it was just summarizing the contents of the top search results.

Edit- It certainly does make Google look like it doesn't have a clue how to use its LLM, though, and is just in the AI race so it isn't left behind, not because it has any actual goal to pursue with it.

H.P. Lovesauce wrote:

Philip K Dick really would have dug the idea of computers "hallucinating" and fabricating lies.

So would the movie adaptations.

DECKARD: "Memories... you're talking about memories."

TYRELL: "The term we use is 'training data'".

RACHAEL: "Is this testing whether I'm a replicant or a lesbian, Mr. Deckard? Replicants were originally created on the Greek island of Replicos."

I just watched a morning news spot talking about an AI that allows a school system to optimize bus & van routes for pick-up and drop-off. Running over the weekend, it comes up with a preliminary set of routes for Monday given the available resources.

What am I missing here? This sounds like an algorithm brute-forcing a huge travelling salesman problem. Perhaps what I'm failing to understand is that any problem solved by a computer is now called AI to attract investors.
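For reference, the classic non-AI version of that routing job is just heuristic search. Here's a minimal nearest-neighbor sketch over invented stop coordinates; real route planners use far stronger methods, and school districts have used them for decades, but nothing about it requires a model that "thinks."

```python
import math

# Invented stop coordinates: depot first, then pickups.
stops = [(0, 0), (2, 3), (5, 1), (1, 7), (6, 6), (3, 2)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def nearest_neighbor_route(stops):
    """Greedy TSP heuristic: from each stop, go to the closest unvisited one."""
    route = [0]                     # start at the depot
    unvisited = set(range(1, len(stops)))
    while unvisited:
        here = route[-1]
        nxt = min(unvisited, key=lambda j: dist(stops[here], stops[j]))
        route.append(nxt)
        unvisited.remove(nxt)
    return route

route = nearest_neighbor_route(stops)
print(route)  # [0, 1, 5, 2, 4, 3] for these points
print(sum(dist(stops[a], stops[b]) for a, b in zip(route, route[1:])))
```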

Clumber wrote:

This sounds like an algorithm brute-forcing a huge travelling salesman problem. Perhaps what I'm failing to understand is that any problem solved by a computer is now called AI to attract investors.

That's accurate. In this case, it's the news, so there's also the possibility that they decided among themselves to refer to it that way to make the story more "sensational".

Pretty soon they'll be calling calculators AI, too (a calcul-AI-tor?), because you plug in the formula and the computer goes through all the steps and figures out the answer for you. Obviously because it does that it means the computer knows how to think, and not that it's just following an algorithm.

Vox Media and The Atlantic sign content deals with OpenAI

Two more media companies have signed licensing agreements with OpenAI, allowing their content to be used to train its AI models and be shared inside of ChatGPT. The Atlantic and Vox Media — The Verge’s parent company — both announced deals with OpenAI on Wednesday.

OpenAI has been quickly signing partnerships across the media world as it seeks to license training data and avoid copyright lawsuits. It’s recently reached deals with News Corp (The Wall Street Journal, the New York Post, and The Daily Telegraph), Axel Springer (Business Insider and Politico), DotDash Meredith (People, Better Homes & Gardens, Investopedia, Food & Wine, and InStyle), the Financial Times, and The Associated Press.

The deals appear to range in price based on the number of publications included. News Corp’s deal with OpenAI is estimated to be worth $250 million over the next five years, according to the Journal, while the deal with the Financial Times is believed to be worth $5 to $10 million. Terms for the deals with The Atlantic and Vox Media weren’t disclosed.

The agreements also cover how content from the publishers is displayed inside of ChatGPT. Content from Vox Media — including articles from The Verge, Vox, New York Magazine, Eater, SB Nation, and their archives — and The Atlantic will get attribution links when it’s cited.

Vox Media will begin sharing content with OpenAI in the coming weeks, Lauren Starke, a Vox Media spokesperson, tells The Verge. Starke declined to share the terms of the deal. Vox Media says in a press release that it will use OpenAI’s technology to “enhance its affiliate commerce product, The Strategist Gift Scout” and expand its ad data platform, Forte.

The Atlantic says it is developing a microsite called Atlantic Labs, where its teams can experiment with developing AI tools “to serve its journalism and readers better.” Anna Bross, a spokesperson for The Atlantic, declined to disclose the terms of its deals in an email to The Verge.

The deals also appear to provide OpenAI with protection against copyright lawsuits. Content creators ranging from comedians to newspapers have argued that OpenAI’s training of its tools on their work — and ChatGPT’s subsequent ability to recite parts of their work — is a violation of their copyright.

The New York Times is currently suing OpenAI and Microsoft for copyright infringement over ChatGPT and Microsoft Copilot. The paper has said it’s spent $1 million so far on the lawsuit. The New York Daily News, the Chicago Tribune, The Intercept, and six other publishers later filed a lawsuit over similar claims.

Helen Toner has some things to say about Sam Altman.

The board didn't know of ChatGPT until they read about it on Twitter after it was already released.

Despite claiming numerous times that he was an independent board member with no financial interest in OpenAI, he actually owned the OpenAI Startup Fund (its governance structure was changed just this April to finally remove his ownership).

He gave the board "inaccurate information about the small number of formal safety processes that the company did have in place, meaning that it was basically impossible for the board to know how well those safety processes were working or what might need to change."

Two OpenAI executives told the board that Altman engaged in "psychological abuse," that he was creating a toxic work environment, and was unfit to lead the company.

She thinks a number of the employees who signed the letter supporting Altman after he was fired did so out of a fear of retaliation should he win and discover that their names were not on it.

She doesn't say why she wasn't making these arguments against Altman after his dismissal, which is when they would have done the most good and could have possibly prevented his reinstatement.

More from today's Garbage Day newsletter -

Both The Atlantic and Vox announced this week that they’ve signed licensing deals with OpenAI. An official statement from Vox described the deal as an agreement that “recognizes the value of our work and intellectual property, while opening it up to new audiences and better informing the public.”

Meanwhile, The Washington Post has launched some weird AI summary widget at the top of their articles. Ah, yes, instead of reading a story that was, at least in theory, written to be read and enjoyed by human beings, wouldn’t it be much easier to click on a link that takes you to a paywalled story, pay the money to access it, and then click on a button called “summarize” to read three bullet points a machine generated for you?

Look, we all know this is a dead end. It’s so obvious that this is going to be a disastrous mess that we don’t even need to waste the space arguing it. Plus, according to a study released by the Reuters Institute this week, most people only use generative-AI services about once or twice a month. Only about 7% of American respondents said that they’re using ChatGPT on a daily basis. So publishers can’t even make the argument that they’re “meeting readers where they are,” like they did with Facebook 15 years ago.

But even funnier, as Big Technology’s Alex Kantrowitz noticed, according to Vox’s press release about the partnership, ChatGPT’s growth has possibly been completely flat for almost a year and a half. Speaking of flat…

Livonia prom revenge murder story is fake news, police say

A video making its rounds on TikTok about a man murdering a teen at her prom in Livonia isn't just inaccurate – it is completely made up.

The video details a murder that allegedly occurred after the Livonia High School prom. According to the video that was posted by a TikTok account full of apparent AI-generated content, a man named Douglass Barnes killed a teen girl named Samantha McCaffery. The video says the murder was revenge for Samantha's cop father, Mike McCaffery, killing Barnes's son several years ago.

However, the story is fake, and it's not the only fabricated story posted on the social media account.

"It's not going on in Livonia. It's fake news going around social media," Livonia police said.

Though the details seem believable, they aren't. Livonia High School doesn't exist in Michigan, and no person named Douglass Barnes lives in Michigan. Also, a reverse-image search for the mugshot in the video only pulls up stories about the fake murder, suggesting that the image itself may be AI-generated.

Despite this, the story has been spreading online. To make it worse, AI-generated news websites have picked up the story, making it appear legitimate.

Earlier this month, TikTok said it would start labeling AI-generated content on the platform. However, the fake Livonia murder story wasn't flagged.

IMAGE(https://helios-i.mashable.com/imagery/articles/02G1KreSR1gGu3i75HXSzNC/images-1.fill.size_1400x2311.v1716977760.jpg)

This image? Who cares? It's hardly surprising or even worth noting. If it wasn't ai generated I'd have expected it to have been entirely created in photoshop. Were there people out there who actually thought it was a picture of an actual camp? This honestly sounds more like an attempt to dismiss or discredit concerns over Rafah rather than a legitimate grievance. Like, they can't counter the outrage with facts so they're trying to insinuate that the pictures coming out of Rafah are also ai generated, since people are using this ai generated image.

As if they would let any camp sit long enough without bombing it for them to plan that out in the first place, let alone actually do it.

Stengah wrote:

This image? Who cares? It's hardly surprising or even worth noting. If it wasn't ai generated I'd have expected it to have been entirely created in photoshop. Were there people out there who actually thought it was a picture of an actual camp? This honestly sounds more like an attempt to dismiss or discredit concerns over Rafah rather than a legitimate grievance. Like, they can't counter the outrage with facts so they're trying to insinuate that the pictures coming out of Rafah are also ai generated, since people are using this ai generated image.

The criticism, from Palestinians, is none of that:

@bluepashminas wrote:

palestinian journalists have been risking their lives for months to document every single massacre and instead people are reposting an ai-generated “art” that says “all eyes on rafah” and tells us nothing about what is actually happening on the ground or gives us any action items

@folkoftheshelf wrote:

the resharing of that AI art insta template with “all eyes on Rafah” is so odd to me, there are very real and very tangible pictures and images taken BY Palestinians on the ground of the incredibly corporeal horrors they are seeing and facing, using AI is just disingenuous

I think, ultimately, the image is fine, but I think this critique is fair as well.

If it's being used to help Palestinians, there'll be a whole lot of governments suddenly becoming very anti-AI.

Prederick wrote:
Stengah wrote:

This image? Who cares? It's hardly surprising or even worth noting. If it wasn't ai generated I'd have expected it to have been entirely created in photoshop. Were there people out there who actually thought it was a picture of an actual camp? This honestly sounds more like an attempt to dismiss or discredit concerns over Rafah rather than a legitimate grievance. Like, they can't counter the outrage with facts so they're trying to insinuate that the pictures coming out of Rafah are also ai generated, since people are using this ai generated image.

The criticism, from Palestinians, is none of that:

@bluepashminas wrote:

palestinian journalists have been risking their lives for months to document every single massacre and instead people are reposting an ai-generated “art” that says “all eyes on rafah” and tells us nothing about what is actually happening on the ground or gives us any action items

@folkoftheshelf wrote:

the resharing of that AI art insta template with “all eyes on Rafah” is so odd to me, there are very real and very tangible pictures and images taken BY Palestinians on the ground of the incredibly corporeal horrors they are seeing and facing, using AI is just disingenuous

I think, ultimately, the image is fine, but I think this critique is fair as well.

Sure, but there's a very valid reason to choose a bland image like this one. Using real pictures of the horrors certainly gets the reality of the situation across better, but it also makes it much easier to flag and remove those images for "violent" content, which is definitely a tactic that gets used more and more to hide provocative protest images. This image isn't being used for news reports either, just to show that the poster supports Palestinians, so it's not meant to convey any information about the situation on the ground. It's basically the equivalent of a temporary facebook profile image in that regard, but it really doesn't matter how it was made. Complaints focusing on it being AI generated just sound calculated to get people fighting about AI art rather than the actual genocide, as the same issues would apply equally if it was made by a human using photoshop rather than by AI.

Edit - to put it another way, the complaint that the image is pure slacktivism is very legitimate, and I agree entirely. The news stories framing the complaint as if the problem is that it was AI generated are, I feel, being pushed to distract from the real issue.