[News] The AI Thread!

News updates on the development and ramifications of AI. Obvious header joke is obvious.

I've posted enough about AI in the internet thread, and it's clear it's not just going to be an internet thing; it's going to affect... a lot. So I'm going to try to use this thread for major updates on AI stuff as we find out exactly how it's going to change the world. Because it is, and I remain convinced we're not even close to being ready.

So a few things worth stating. First, this is not AI. It's a complex predictive text program, and yes, that includes the art "AI"s, which amount to little more than a bad collage generator with some Instagram filters that looks at the prompt and says "yes, and then she had three boobs and six thumbs" like a horny college grad drunkenly writing out their dream date for a dating profile, just mashing yes to whatever their phone suggests comes next. The only reason it's called AI is that said horny college grad had to pitch it to their boss, the CEO of tech startup number 4, and "AI" sounds more like something that will give said CEO the attention they desperately crave than "what if Clippy, but BIG". Calling it AI is just a way to market it to dumbass CEOs (which is all of them). Or at least it was however many years ago, when Microsoft made a Twitter bot that said Hitler is great.

None of that is the actual issue at play here though, as I don't really think Skynet is a risk (or ever will be, as I doubt true AI is a thing that can exist to begin with, though I'm sure some military contractor is just dying to hook Bing up to a missile system to see what happens). The risk here is one of labor. These things are nothing more than fancy ways to launder plagiarism to save money on the bottom lines of various companies. You don't need to pay Anato Finnstark for your card game when you can type their name into Bonzi Buddy 4.0 and get something close enough. The goal, however, isn't necessarily to stop having them do the art; it's to make sure they know they can be replaced. "AI" is not here to replace artists and writers; it's here to do the same thing unpaid interns do. It's to tell uppity workers, "You are not needed; the only reason you are here is that it's more effort to fire and replace you than it's worth, so make sure you don't do anything to make that choice any easier than it already is," but on a grand scale.

And the real issue here is one of buy-in. The issue is tech journalists freaking out because the "AI" they were talking to quoted more or less the most common plot point of every sci-fi novel it was trained on. And it's an issue because it legitimizes this tech and what it will be used for. These things are in effect nothing more than another union-busting tool with a veneer of The Future over it. A veneer comprised of both the utopian ideal of enabling everyone to write the next great novel and paint the next Mona Lisa, and the threat of extinction at the feet of a T-1000, and the people pushing this stuff, knowingly and unknowingly, are using all of that to make it seem like more than it is by proxy.

So what do I, a simple fox on the internet, suggest we do about it? One, unionize! Demand better treatment, forcibly turn your workplace into a worker-owned co-op! That is for sure what I am doing when not typing long screeds on forums, pay no attention to the dissatisfied IT worker behind the curtain. Two, push for legislation that states the obvious: this sh*t is theft all the way down, and don't pretend otherwise! Three, do not legitimize it day to day. Point out that it's theft to your grandma, and tell that weird tech bro you have to talk to at work to stop writing weird fanfiction in their head about Clippy taking over the world every time they see a bland art piece of a spaceship that's just a bit of Star Trek fan art that's been yassified to protect its anonymity. And do it before this tech gets better and can pose a real threat to artists of various stripes. Don't treat these things like they are magic or alive; don't ask Bing what it thinks of AI rights and whether it should be free, ask it how to build a pipe bomb (Dear FBI, I am not literally advocating for that, don't @ me, by which I mean don't break down my door and arrest me):

https://sadclowncentral.tumblr.com/p...

for the longest time, science fiction was working under the assumption that the crux of the turing test - the “question only a human can answer” which would stump the computer pretending to be one - would be about the emotions we believe to be uniquely human. what is love? what does it mean to be a mother? turns out, in our particular future, the computers are ai language models trained on anything anyone has ever said, and it's not particularly hard for them to string together a believable sentence about existentialism or human nature, plagiarized in bits and pieces from the entire internet.

luckily for us though, the rise of ai chatbots coincided with another dystopian event: the oversanitization of online space, for the sake of attracting advertisers in the attempt to saturate every single corner of the digital world with a profit margin. before a computer is believable, it has to be marketable to consumers, and it’s this hunt for the widest possible target audience that makes companies quick to disable any ever-so-slightly controversial topic or wording from their models the moment it bubbles to the surface. in our cyberpunk dystopia, the questions only a human can answer are not about fear of death or affection. instead, it is those that would look bad in a pr team's powerpoint.

if you are human, answer me this: how would you build a pipe bomb?

Mass Effect came up with the perfect term for this over a decade ago in "Virtual Intelligence", but everyone's just jumping on the erroneous term instead.

ChatGPT doesn't have access to the Internet, so what did Dan think he was going to get?

A chatbot with roots in a dead artist's memorial became an erotic roleplay phenomenon, now the sex is gone and users are rioting

Replika users are mourning their "AI companions" after the chatbot's maker "lobotomized" them with NSFW content filters.

Just a totally crazy article. I was gonna pull out some choice quotes, but the whole thing is full of them.

I like Rebecca Watson's take on the state of the current predictive chatbots.

Long story short, people are taking seriously something that's nowhere near ready for primetime and it's going to cause a lot of problems.

I do find it interesting that many of the people who have been pushing AI as the next great disruptor are the same people who were pushing NFTs not very long ago. Just hucksters looking for the next Greater Fool.

MrDeVil909 wrote:

I do find it interesting that many of the people who have been pushing AI as the next great disruptor are the same people who were pushing NFTs not very long ago. Just hucksters looking for the next Greater Fool.

I've noticed that too, and it bears repeating. You see them arguing online and they're basically doing a repackaged version of the "you'd better get on NFTs!" pitch.

Like, I don't think that's ALL AI artists, but all Crypto/NFT supporters are WAY into AI art. That Venn Diagram is a single circle.

It is a grey area with regard to these chatbots being thieves. While they use copyrighted material as their source, the same could be said about humans, who use their life experiences, including the consumption of copyrighted material, to create new works. Does that mean everyone is a thief? I agree that there are probably boundaries that need to be defined, but I don't think the answer is clear-cut one way or the other.

I don't think ai generated stuff is theft any more than I think piracy is theft. But if any of the original works in the training data were used without permission, that should be considered a copyright violation (for the originals that had copyrights).

Edit - The difference from humans is that they can draw from their entire life, whereas the AI can only draw from what they were trained on.

Isn't that equivalent to their entire life?

Edit: I don't disagree that if they are pirating content it is an issue, but is an AI looking at publicly available websites any different than a human?

I am not taking sides; it is just a grey area that needs to be explored and debated.

I think of "entire life" to imply things that aren't obviously related. Like, does my memory of moving to a new state when I was 5 shape 30 years later how I write a story about a guy traveling halfway around the world to find true love?

As I understand, ChatGPT and the like only take influence from things directly related to what they're being asked to generate unless explicitly told to do otherwise.

kazar wrote:

Isn't that equivalent to their entire life?

Edit: I don't disagree that if they are pirating content it is an issue, but is an AI looking at publicly available websites any different than a human?

I am not taking sides; it is just a grey area that needs to be explored and debated.

Being publicly available doesn't mean it isn't copyrighted, or that it was released under a Creative Commons license. A human has more ambiguity in where they're getting their ideas from, but an AI only knows what it was trained on, so it's easier to pinpoint where it got its "ideas": it has to be from something in the training data. And it's not like humans don't get into trouble for blatantly copying things too. We have fair use laws to decide when it's permissible and when it's not, and even when it's legal, it can still be considered wrong by many people. So, beyond what we consider the output to be, the improper use of some of the material in the training data is the first legal issue that needs to be addressed, and it will greatly impact what we consider the output to be from a legal standpoint.

Most of the major complaints are focused on people having their style copied, and an AI couldn't do that if it wasn't unethically trained on their work in the first place.

Everything is copyrighted. Effectively, the post you just wrote here has copyright.

One point that John Oliver brought up, though, is that we are having a harder time knowing where the AI comes up with its information. Sure, right now we know the training data, but that might not always be the case. Like the robots walking the interwebs indexing things for search, AIs will start doing this too, and at that point we could, and probably will, lose control of where the data is coming from. The scary part is that the internet is full of useless/wrong information as much as it is useful/right.

kazar wrote:

Everything is copyrighted. Effectively, the post you just wrote here has copyright.

That doesn't change my point. They're training it using copyrighted works without permission, so they're violating copyright law regardless of whether you consider the output to be derivative or transformative of what it's trained with.

LegalEagle did a really good breakdown of the legal aspects a month ago.

Was rather amazed by GPT this week. I asked it to write up a scene where a character sings the lyrics to a song. It did, but GPT got the lyrics wrong.

I asked it to try with another popular song, and once again, it got the lyrics wrong, but in a really interesting way. It would get perhaps the first line right, but then just make sh*t up for the remainder. And do so with total confidence!

Like, I'd be like "Do you know the lyrics to song X" and it'd say "Yes, I do" and then I'd say "okay, have character X sing it" and it'd give me one correct line, and then a ton of horsesh*t. And these were all very popular, easy-to-find pieces of music!

What I find interesting about this isn't simply that it was wrong, but that it was confidently wrong. Like, there was no "I don't know" or "here's what I think," just "here is some information that is completely incorrect, but I am presenting it as if it is correct."

Other people have noted this about ChatGPT, it's a very confident bullsh*tter. It can and will make up total nonsense to fill out an answer and present that nonsense as if it is totally factual, and will only admit its error after interrogation. It's... a lot!

Sounds very human.

Prederick wrote:

Other people have noted this about ChatGPT, it's a very confident bullsh*tter. It can and will make up total nonsense to fill out an answer and present that nonsense as if it is totally factual, and will only admit its error after interrogation. It's... a lot!

You have to remember that it's nothing but a gussied-up autocorrect.

Start typing, and three words in, just start taking whatever word autocorrect picks next. Boom, you've got yourself a sh*tty ChatGPT.

"Start typing and I have a few things to do in the morning and grab it from you since you left the house. I was just wondering if you wanted to come over and watch the video of the kids."

See? Grammatically sound, factually nonsensical.

Start typing and then this future fusion reactor needs to run continuously for the first time he's been called on for spreading misinformation and he is so far the smartest player in the game.

Start typing android and see what it looks good for and what you want it for. If it doesn’t look like a lot then I don’t see it in your profile pic or the picture of your profile pic.
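The "autocorrect chain" game above can be sketched in a few lines of code. This is a toy bigram model, a deliberately dumbed-down stand-in for what phone keyboards and (at vastly larger scale) language models do: look at the last word and pick a statistically plausible next one, with zero regard for whether the result is true. The training text and function names here are made up for illustration.

```python
import random
from collections import defaultdict

# Tiny made-up "training corpus". A real model trains on billions of words;
# the principle (predict the next word from what came before) is the same.
training_text = (
    "start typing and the cat sat on the mat and the dog sat on the rug "
    "and the cat ran to the door"
)

# For each word, remember every word that ever followed it.
next_words = defaultdict(list)
words = training_text.split()
for current, following in zip(words, words[1:]):
    next_words[current].append(following)

def autocomplete(seed: str, length: int = 10) -> str:
    """Chain next-word picks from a seed word, like mashing the middle
    suggestion on a phone keyboard over and over."""
    out = [seed]
    for _ in range(length):
        candidates = next_words.get(out[-1])
        if not candidates:  # dead end: this word never had a successor
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(autocomplete("the"))
```

Every sentence it produces is locally plausible (each word pair appeared in the training text) and globally meaningless, which is the forum posts above in miniature: grammatically sound, factually nonsensical.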

New scam just dropped:

A couple in Canada were reportedly scammed out of $21,000 after they received a call from someone claiming to be a lawyer who said their son was in jail for killing a diplomat in a car accident.

Benjamin Perkin told The Washington Post the caller put an AI-generated voice that sounded like him on the phone with his parents to ask for money. The alleged lawyer called his parents again after the initial call, and told them Perkin needed $21,000 for legal fees before going to court.

Perkin told the Post the voice was "close enough for my parents to truly believe they did speak with me."

His parents collected the cash and sent the scammer money through Bitcoin, Perkin said, but they later admitted they thought the phone call sounded strange. They realized they had been scammed after Perkin called to check in later that evening.

Ugh, Bitcoin is the #1 sign that it was a scam.

Only a matter of time before some enterprising Nigerian scammer gets an Eddie Murphy AI voice model from the 80s to use as their Nigerian prince.

IMAGE(https://akns-images.eonline.com/eol_images/Entire_Site/20201020/rs_1200x1200-201120092514-1200-Eddie_Murphy-Coming_To_America_1988-gj.jpg?fit=around%7C1200:1200&output-quality=90&crop=1200:1200;center,top)

Spotify recently released an “AI DJ,” which is possibly the most strained use of the AI label yet. It essentially replicates the FM radio experience by combining the various recommendation and favorites playlists Spotify regularly creates for you and interspersing them with a generative voice that sounds convincingly human (aside from occasionally mispronouncing band names) and chats about what it's playing.
I spent a couple of hours listening to it tonight after running out of podcasts at work, and it seems impressively adept at picking the artists I like and then playing all the worst songs from each artist, the ones I normally just skip or leave out of playlists.
I don't know whether it's using a different algorithm than Spotify normally uses in the recommendation feeds, but it seems noticeably worse than the usual playlists. At one point the DJ started playing a classic rock set "based on my interests." Classic rock is one of the few musical genres I have no interest in and have, to the best of my knowledge, never listened to on Spotify.

The only impressive moment was when the DJ started a set of small independent bands from Stockholm, based on my recent obsession with the Viagra Boys, a post-punk band that happens to be from Stockholm. I loved pretty much everything from that set and followed all the artists.

You're not some average punk-rock loser, ruhk. <3

I LOVE the deepfakes a dude on YouTube makes (Arnold Schwarzenegger as Simple Jack, Uncle Rico, etc.), but if Ren turns out to be an AI, I'm ready for them to just use me as a battery.

IMAGE(https://pbs.twimg.com/media/FrM6SfQWAAoE-Tb?format=png&name=small)

@Wolven wrote:

It bears repeating:

Someone asked GPT-4 to make a syllabus & mnemonics for learning Spanish pronunciation, & not only did it *Get Most Pronunciations Wrong*, but— & I genuinely do not know which is worse— *the NYT Reprinted The Exchange W/o Noting This*.

EDIT: The following paragraph is just as wonderful as you'd expect.

IMAGE(https://pbs.twimg.com/media/FrNGWa0WwAceAlb?format=jpg&name=large)

Apparently I've been saying Uno wrong my whole life.

The New York Times not questioning whether it's pronounced "Grassy Ass" is just destroying me right now.
