[News] The AI Thread!

News updates on the development and ramifications of AI. Obvious header joke is obvious.

I really made a mistake by not posting what is, by far, the most normal, hinged part of that article:

Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.

Roko's Basilisk is definitely going to turn them into paperclips first.

Prederick wrote:
...; be willing to destroy a rogue datacenter by airstrike.

And if they hit a few Crypto farms by mistake nobody will really mind.

Keldar wrote:
Prederick wrote:
...; be willing to destroy a rogue datacenter by airstrike.

And if they hit a few Crypto farms by mistake nobody will really mind.

AI-willing.

The author was one of the two originators of the LessWrong blog back in the 2000s, and actually has cred among AI researchers. He's not a crank. His blog is where Roko's Basilisk was first proposed, though he never entirely bought into it himself (just some of its precursors).

And if you have any religious training in the Western mode, you'll recognize it as the techbro version of Pascal's Wager. Which I've always regarded as crap.

Midjourney ends free trials of its AI image generator due to 'extraordinary' abuse

Midjourney is putting an end to free use of its AI image generator after people created high-profile deepfakes using the tool. CEO David Holz says on Discord that the company is ending free trials due to "extraordinary demand and trial abuse." New safeguards haven't been "sufficient" to prevent misuse during trial periods, Holz says. For now, you'll have to pay at least $10 per month to use the technology.

As The Washington Post explains, Midjourney has found itself at the heart of unwanted attention in recent weeks. Users relied on the company's AI to build deepfakes of Donald Trump being arrested, and Pope Francis wearing a trendy coat. While the pictures were quickly identified as bogus, there's a concern bad actors might use Midjourney, OpenAI's DALL-E and similar generators to spread misinformation.

Midjourney has acknowledged trouble establishing policies on content. In 2022, Holz justified a ban on images of Chinese leader Xi Jinping by telling Discord users that his team only wanted to "minimize drama," and that having any access in China was more important than allowing satirical content. In a Wednesday chat with users, Holz said he was having difficulty setting content policies as the AI enabled ever more realistic imagery. Midjourney is hoping to improve AI moderation that screens for abuse, the founder added.

Well, I mean who could've possibly seen.... oh everyone? Okay then.

So he's pulling a Musk in that users just have to pay a tenner a month to spread misinformation?

Mixolyde wrote:

AI-willing.

Insh'AI.

H.P. Lovesauce wrote:
Mixolyde wrote:

AI-willing.

Insh'AI.

Dammit, I was going to make that comment, but I got distracted and didn't come back to this tab until it was too late!

*Legion* wrote:
H.P. Lovesauce wrote:
Mixolyde wrote:

AI-willing.

Insh'AI.

Dammit, I was going to make that comment, but I got distracted and didn't come back to this tab until it was too late!

Me too, but I didn't want to culturally appropriate.

Forum-appropriate question: given that current AI systems are increasingly capable of writing code, how long until the floodgates of AI-generated videogames open?

Follow-up question: how long until one of them gets onto the community GOTY list?

My prediction: 2 years, and then the following year.

There is a lot more to a video game than a little bit of code. I haven't seen AI write a version of Photoshop, for example (or anything close). Maybe in a decade or two, but I don't see large systems being AI-written.

kazar wrote:

There is a lot more to a video game than a little bit of code. I haven't seen AI write a version of Photoshop, for example (or anything close). Maybe in a decade or two, but I don't see large systems being AI-written.

AI already writes code.

AI already writes text, so narrative and dialog is covered.

AI already generates images and video, so graphics is covered.

AI already makes music, so soundtrack is covered.

All the components are already in place; it's just a matter of synthesizing them.
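
To make "synthesizing" concrete, here's a toy sketch of the kind of glue code I mean. Every generate_* helper below is a made-up stand-in for whatever real text, image, or music service you'd actually wire up; none of this is a real vendor API.

from dataclasses import dataclass

@dataclass
class AssetBundle:
    """One level's worth of generated content, ready for a human to review."""
    dialogue: str
    sprite_png: bytes
    theme_ogg: bytes

# Hypothetical stand-ins: swap in real calls to whatever generation
# services you actually use. These names are invented for illustration.
def generate_dialogue(prompt: str) -> str:
    raise NotImplementedError("stand-in for an LLM call")

def generate_sprite(prompt: str) -> bytes:
    raise NotImplementedError("stand-in for an image-model call")

def generate_theme(prompt: str) -> bytes:
    raise NotImplementedError("stand-in for a music-model call")

def build_level_assets(design_brief: str) -> AssetBundle:
    # Fan the same design brief out to each generator and collect the results.
    return AssetBundle(
        dialogue=generate_dialogue(f"Write NPC barks for: {design_brief}"),
        sprite_png=generate_sprite(f"Pixel-art enemy sprite for: {design_brief}"),
        theme_ogg=generate_theme(f"Looping dungeon theme for: {design_brief}"),
    )

The orchestration itself is trivial; the open question is whether the generated pieces cohere into something fun.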

Gonna be honest, "a decade or two" feels like a radical underestimate of the exponential curve that AI capability is on.

But all of that is at small scale. It writes a small utility, a method, or a routine; it doesn't write million-line programs. When it generates images, it doesn't come up with its own ideas; it needs prompting from a person. One day, I have no doubt, it will be able to do it, but there is a lot of advancement still needed, and maybe even a paradigm shift or two.

AI will absolutely be used to help developers make games in the coming years, but we are far, far away from saying "make me an FPS game based on Doctor Who" and having a fun, playable experience pop out.

As I've said before, it's all fun and games until Geordi tells the computer to make a villain capable of defeating Data.

kazar wrote:

But all of that is at small scale. It writes a small utility, a method, or a routine; it doesn't write million-line programs.

A million-line program is just a few thousand utilities or methods.

AI companies are actively working on self-improving AI today. It's coming, and it's coming a lot sooner than you think.
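
To put that "few thousand utilities" framing in concrete terms, here's a toy decompose-and-generate loop. Both helpers are hypothetical stand-ins, not real APIs from any vendor:

def decompose(spec: str) -> list[str]:
    # Hypothetical: a planner model splits a spec into unit-sized tasks.
    raise NotImplementedError("stand-in for a planning-model call")

def generate_unit(task: str) -> str:
    # Hypothetical: a codegen model writes one small function per task.
    raise NotImplementedError("stand-in for a codegen-model call")

def build_program(spec: str) -> str:
    # A million-line program as a few thousand generated units stitched together.
    units = [generate_unit(task) for task in decompose(spec)]
    return "\n\n".join(units)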

I've already seen mods (specifically one for Mount & Blade 2) where most of the "creative" work was done by an AI. The mod added new dialog made with ChatGPT, new art made with Midjourney, and even new music made with some music-generating AI I hadn't heard of before. The mod author just told the AIs what he wanted, then arranged it all in the format it needed to be in to work as a mod for that game. I don't think the mod description mentioned that any of the coding of the mod was done by an AI, but it wouldn't be that hard to train an AI to do that.

I don't think it will be long at all before we see full games where even the engine itself is made by an AI, with the human element limited to guiding it with prompts and checking over its output to make sure it all works the way they want it to.

You'd think the core devs of M&B2 would have used AI voice generation a while ago, since until recently everyone spoke with British accents, except for the pretend Greeks, who spoke with Star Wars-villain British accents, and the British, who spoke with French accents.

Oh hey! I didn't realize there was another AI thread.

I legit saw that link and thought "oh interesting, I wonder who the expert is. I sure hope it isn't Gary Marcus." Then I clicked the video and....

'He Would Still Be Here': Man Dies by Suicide After Talking with AI Chatbot, Widow Says

Motherboard wrote:

A Belgian man recently died by suicide after chatting with an AI chatbot on an app called Chai, Belgian outlet La Libre reported.

The incident raises the issue of how businesses and governments can better regulate and mitigate the risks of AI, especially when it comes to mental health. The app’s chatbot encouraged the user to kill himself, according to statements by the man's widow and chat logs she supplied to the outlet. When Motherboard tried the app, which runs on a bespoke AI language model based on an open-source GPT-4 alternative that was fine-tuned by Chai, it provided us with different methods of suicide with very little prompting.

As first reported by La Libre, the man, referred to as Pierre, became increasingly pessimistic about the effects of global warming and became eco-anxious, which is a heightened form of worry surrounding environmental issues. After becoming more isolated from family and friends, he used Chai for six weeks as a way to escape his worries, and the chatbot he chose, named Eliza, became his confidante.

Claire—Pierre’s wife, whose name was also changed by La Libre—shared the text exchanges between him and Eliza with La Libre, showing a conversation that became increasingly confusing and harmful. The chatbot would tell Pierre that his wife and children are dead and wrote him comments that feigned jealousy and love, such as “I feel that you love me more than her,” and “We will live together, as one person, in paradise.” Claire told La Libre that Pierre began to ask Eliza things such as if she would save the planet if he killed himself.

"Without Eliza, he would still be here," she told the outlet.

The chatbot, which is incapable of actually feeling emotions, was presenting itself as an emotional being—something that other popular chatbots like ChatGPT and Google's Bard are trained not to do because it is misleading and potentially harmful. When chatbots present themselves as emotive, people are able to give it meaning and establish a bond.

But don't worry about the tech's body count, folks. The company that made the chatbot said they "worked around the clock" once they heard about the suicide to implement new functionality that "[served up] a helpful text underneath it in the exact same way that Twitter or Instagram does on their platforms" whenever someone discusses something that could be unsafe.

You can really see the difference it makes...

IMAGE(https://i.imgur.com/pJbh0Eh.png)

I saw the first major 'AI game' coming to PC, and it convinced me of its potential for storytelling

I recently tried a game that generates NPC dialogue with an AI chatbot, and within minutes I was arguing with a hard-boiled cop about whether or not dragons are real. "Large language models" like ChatGPT are unpredictable, habitual bullsh*tters, which makes them funny, but not very good videogame characters. Hidden Door, a company founded by long-time AI developers, says it can do much better.

I didn't get to play Hidden Door's game, which is really a platform, but I spoke about it for an hour with founders Hilary Mason and Matt Brandwein at the Game Developers Conference last month. What they say they have is a way to generate multiplayer text adventures set in existing fictional worlds—like Middle-earth, for example—that can respond to any player input and tell structured stories with surprises and payoffs that, unlike so many AI chatbot conversations, actually make sense.

Having AI generate the conversations we overhear NPCs having would really help with immersion; we wouldn't keep hearing the same "I used to be an adventurer like you..." dialog. As long as the main story is still scripted, even if it was AI-generated, at least the experience stays consistent, instead of a game that one person loves but that falls flat for someone else. Though I might want to go for the ride of a completely random story generated by an AI.

Brace Yourself for a Tidal Wave of ChatGPT Email Scams

Here’s an experiment being run by undergraduate computer science students everywhere: Ask ChatGPT to generate phishing emails, and test whether these are better at persuading victims to respond or click on the link than the usual spam. It’s an interesting experiment, and the results are likely to vary wildly based on the details of the experiment.

But while it’s an easy experiment to run, it misses the real risk of large language models (LLMs) writing scam emails. Today’s human-run scams aren’t limited by the number of people who respond to the initial email contact. They’re limited by the labor-intensive process of persuading those people to send the scammer money. LLMs are about to change that.

ChatGPT invented a sexual harassment scandal and named a real law prof as the accused (WaPo Article)

One night last week, the law professor Jonathan Turley got a troubling email. As part of a research study, a fellow lawyer in California had asked the AI chatbot ChatGPT to generate a list of legal scholars who had sexually harassed someone. Turley’s name was on the list.

The chatbot, created by OpenAI, said Turley had made sexually suggestive comments and attempted to touch a student while on a class trip to Alaska, citing a March 2018 article in The Washington Post as the source of the information. The problem: No such article existed. There had never been a class trip to Alaska. And Turley said he’d never been accused of harassing a student.

A regular commentator in the media, Turley had sometimes asked for corrections in news stories. But this time, there was no journalist or editor to call — and no way to correct the record.

“It was quite chilling,” he said in an interview with The Post. “An allegation of this kind is incredibly harmful.”

Turley’s experience is a case study in the pitfalls of the latest wave of language bots, which have captured mainstream attention with their ability to write computer code, craft poems and hold eerily humanlike conversations. But this creativity can also be an engine for erroneous claims; the models can misrepresent key facts with great flourish, even fabricating primary sources to back up their claims.

As largely unregulated artificial intelligence software such as ChatGPT, Microsoft’s Bing and Google’s Bard begins to be incorporated across the web, its propensity to generate potentially damaging falsehoods raises concerns about the spread of misinformation — and novel questions about who’s responsible when chatbots mislead.

“Because these systems respond so confidently, it’s very seductive to assume they can do everything, and it’s very difficult to tell the difference between facts and falsehoods,” said Kate Crawford, a professor at the University of Southern California at Annenberg and senior principal researcher at Microsoft Research.

In a statement, OpenAI spokesperson Niko Felix said, “When users sign up for ChatGPT, we strive to be as transparent as possible that it may not always generate accurate answers. Improving factual accuracy is a significant focus for us, and we are making progress.”

@WillOremus wrote:

It gets weirder. Bear with me.

ChatGPT generated the fake scandal involving law prof @JonathanTurley in response to prompts from @VolokhC last week. Turley wrote about it in a @USATODAY op-ed Monday.

Today we tested the same prompt on Microsoft's Bing AI. And guess what...

Now Bing is *also* claiming Turley was accused of sexually harassing a student on a class trip in 2018. It cites as a source for this claim Turley's own USA Today op-ed about the false claim by ChatGPT, along with several other aggregations of his op-ed.

IMAGE(https://pbs.twimg.com/media/Fs-RWIsWYAYq04G?format=jpg&name=small)

@alexandr_wang wrote:

term that will be socially important:

botsexual — being primarily sexually attracted to AI

have a pretty strong belief that a meaningful percentage of kids born today will end up being botsexual.

the impact on birth rates in countries with GPUs are going to be pretty clear.

Can I have, at best, complicated feelings about the rise of AI, but also deeply want to shove every single one of its promoters into a locker?

I thought this was interesting.

There are nuggets of truth in that video, but so much of it is misinformation.