News updates on the development and ramifications of AI. Obvious header joke is obvious.
Roko's Basilisk is definitely going to turn them into paperclips first.
...; be willing to destroy a rogue datacenter by airstrike.
And if they hit a few Crypto farms by mistake nobody will really mind.
Prederick wrote: ...; be willing to destroy a rogue datacenter by airstrike. And if they hit a few Crypto farms by mistake nobody will really mind.
AI-willing.
The author was one of the two originators of the LessWrong blog back in the 2000s, and actually has cred among AI researchers. He's not a crank. His blog is where Roko's Basilisk was first proposed. He also did not entirely buy into it (just some of the precursors).
And if you have any religious training in the Western mode, you'll recognize it as the techbro version of Pascal's Wager. Which I've always regarded as crap.
So he's pulling a Musk in that users just have to pay a tenner a month to spread misinformation?
AI-willing.
Insh'AI.
Mixolyde wrote: AI-willing.
Insh'AI.
Dammit, I was going to make that comment, but I got distracted and didn't come back to this tab until it was too late!
H.P. Lovesauce wrote: Mixolyde wrote: AI-willing.
Insh'AI.
Dammit, I was going to make that comment, but I got distracted and didn't come back to this tab until it was too late!
Me too, but I didn't want to culturally appropriate.
Forum-appropriate question: given that current AI systems are increasingly capable of writing code, how long until the floodgates of AI generated videogames open?
Follow up question, how long until one of them gets onto the community GOTY list?
My prediction: 2 years, and then the following year.
There is a lot more to a video game than a little bit of code. I haven't seen AI write a version of Photoshop, for example (or anything close). Maybe in a decade or two, but I don't see large systems being AI-written.
There is a lot more to a video game than a little bit of code. I haven't seen AI write a version of Photoshop, for example (or anything close). Maybe in a decade or two, but I don't see large systems being AI-written.
AI already writes code.
AI already writes text, so narrative and dialog is covered.
AI already generates images and video, so graphics is covered.
AI already makes music, so soundtrack is covered.
All the components are already in place, it's just a matter of synthesizing them.
Gonna be honest, "a decade or two" feels like a radical underestimate of the exponential curve that AI capability is on.
But all of that is at small scale. It writes a small utility or a method or routine. It doesn't write million-line programs. When it generates images it doesn't come up with its own ideas; it needs prompting from a person. One day, I have no doubt, it will be able to do it, but there is a lot of advancement still needed, and maybe even a paradigm shift or two.
AI will absolutely be used to help developers make games in the coming years, but we are far, far away from saying "Make me an FPS game based on Doctor Who" and having a fun, playable experience pop out.
As I've said before, it's all fun and games until Geordi tells the computer to make a villain capable of defeating Data.
But all of that is at small scale. It writes a small utility or a method or routine. It doesn't write million-line programs.
A million-line program is just a few thousand utilities or methods.
AI companies are actively working on self-improving AI today. It's coming, and it's coming a lot sooner than you think.
I've already seen mods (specifically one for Mount & Blade 2) where most of the "creative" work was done by an AI. The mod added new dialog made with ChatGPT, new art made with Midjourney, and even new music made with some music-generating AI I hadn't heard of before. The mod author just told the AIs what he wanted, then arranged it all into the format it needed to be in to work as a mod for that game. I don't think the mod description mentioned that any of the coding of the mod was done by an AI, but it wouldn't be that hard to train an AI to do it.
I don't think it will be long at all before we see full games where even the engine itself is made by an AI, with the human element limited to guiding it with prompts and checking its output to make sure it all works the way they want it to.
You'd think the core devs of M&B2 would have used AI voice generation a while ago, since until recently everyone spoke with British accents, except for the pretend Greeks, who spoke with Star Wars villain British accents, and the British, who spoke with French accents.
Oh hey! I didn't realize there was another AI thread.
I legit saw that link and thought "oh interesting, I wonder who the expert is. I sure hope it isn't Gary Marcus." Then I clicked the video and....
'He Would Still Be Here': Man Dies by Suicide After Talking with AI Chatbot, Widow Says
A Belgian man recently died by suicide after chatting with an AI chatbot on an app called Chai, Belgian outlet La Libre reported.
The incident raises the issue of how businesses and governments can better regulate and mitigate the risks of AI, especially when it comes to mental health. The app’s chatbot encouraged the user to kill himself, according to statements by the man's widow and chat logs she supplied to the outlet. When Motherboard tried the app, which runs on a bespoke AI language model based on an open-source GPT-4 alternative that was fine-tuned by Chai, it provided us with different methods of suicide with very little prompting.
As first reported by La Libre, the man, referred to as Pierre, became increasingly pessimistic about the effects of global warming and became eco-anxious, which is a heightened form of worry surrounding environmental issues. After becoming more isolated from family and friends, he used Chai for six weeks as a way to escape his worries, and the chatbot he chose, named Eliza, became his confidante.
Claire—Pierre’s wife, whose name was also changed by La Libre—shared the text exchanges between him and Eliza with La Libre, showing a conversation that became increasingly confusing and harmful. The chatbot would tell Pierre that his wife and children are dead and wrote him comments that feigned jealousy and love, such as “I feel that you love me more than her,” and “We will live together, as one person, in paradise.” Claire told La Libre that Pierre began to ask Eliza things such as if she would save the planet if he killed himself.
"Without Eliza, he would still be here," she told the outlet.
The chatbot, which is incapable of actually feeling emotions, was presenting itself as an emotional being—something that other popular chatbots like ChatGPT and Google's Bard are trained not to do because it is misleading and potentially harmful. When chatbots present themselves as emotive, people are able to give it meaning and establish a bond.
But don't worry about the tech's body count, folks. The company that made the chatbot said they "worked around the clock" once they heard about the suicide to implement new functionality that "[served up] a helpful text underneath it in the exact same way that Twitter or Instagram does on their platforms" whenever someone discusses something that could be unsafe.
You can really see the difference it makes...
Having AI generate the conversations we overhear NPCs have would really help with immersion. We wouldn't keep hearing the same dialog ("I used to be an adventurer like you..."). As long as the main story is still scripted, even if AI-generated, at least we wouldn't end up with a game that one person loves but that falls flat for someone else. Though I might want to go for the ride of a completely random story generated by an AI.
I thought this was interesting.
There are nuggets of truth in that video, but so much of it is misinformation.