[News] The AI Thread!

News updates on the development and ramifications of AI. Obvious header joke is obvious.

It's not that the emperor has no clothes, it's just that the emperor is strutting around in a jockstrap with a body built by McDonald's.

*Legion* wrote:

strutting around in a jockstrap with a body built by McDonald's

Wait, what's wrong with that?

iaintgotnopants wrote:
*Legion* wrote:

strutting around in a jockstrap with a body built by McDonald's

Wait, what's wrong with that?

Needs to ditch the jock

You ask a hallucination generation engine to hallucinate some information for you, and then are surprised it's not real?

iaintgotnopants wrote:
*Legion* wrote:

strutting around in a jockstrap with a body built by McDonald's

Wait, what's wrong with that?

I'm not about to argue the merits of pants with someone with your username.

AI’s quiet creep into music punctuated by ‘SpongeBob’ voices and a secretive artist called Glorb

At first, the YouTube videos look like scenes from Nickelodeon’s popular “SpongeBob SquarePants” cartoon.

SpongeBob, the show’s cheery yellow title character, appears outside his pineapple-shaped home, while Mr. Krabs, SpongeBob’s cranky boss, is at the Krusty Krab restaurant he runs. But unlike in the show, the characters in the videos aren’t singing jolly songs about life in the underwater city of Bikini Bottom. Instead, they’re rapping about drugs and guns.

The mastermind behind the raps is an artist named Glorb. Their music, which has been streamed millions of times on Spotify and YouTube, appears to use artificial intelligence to replicate the iconic characters’ voices.

As AI tools continue to evolve rapidly, it has become easier for artists like Glorb to make music using generative AI — and become successful in their own right. However, experts who focus on AI and music said questions surrounding copyright and ownership still linger as a new era of technology dawns in the music industry.

“It opens up so many more possibilities for someone, you know, to essentially have, like, a fan fiction version of a song because they love the artist,” said Josh Antonuccio, an associate professor and the director of the School of Media Arts & Studies at the Ohio University Scripps College of Communication.

The SpongeBob-inspired tracks have turned Glorb — who keeps their identity anonymous — into an online sensation. On Spotify, Glorb averages just under a million listeners a month — their most popular song, “The Bottom 2,” has amassed more than 11 million streams. The artist’s music videos, which feature character models from the show, have also racked up millions of views on YouTube.

Glorb, who declined to be interviewed, isn’t publicly affiliated with Nickelodeon. A spokesperson for the Paramount-owned network didn’t immediately respond to a request for comment. Representatives for YouTube and Spotify also didn’t immediately respond to requests for comment.

The music industry could see an influx of artists who use some kind of AI, especially as technology continues to advance, said Tracy Chan, the CEO of Splash, a generative AI music company. Already, generative AI music programs like Suno, which allows users to enter prompts and generate songs based on the text suggestions, have been hailed as the ChatGPT of music.

“I think it’s important that we figure out how to both, as an industry … how do you kind of balance that we’re creating more and more content, which is ultimately good, but also kind of rewarding the folks that are, you know, kind of the source material, so to speak,” Chan said.

Glorb isn’t the first to use the technology to create original music. In some cases, major artists have been involved with AI renditions of their work. In June, Paul McCartney announced The Beatles would release one final record, “Now and Then,” using AI technology to extract the voice of the late John Lennon. The singer Grimes, a champion of AI, released elf.tech, a platform on which artists can use an AI replication of Grimes’ voice in their music. The terms of the agreement include that Grimes receives part of the royalties earned from any music that includes the AI version of her voice.

But in other instances, AI-generated music using artists’ work has sparked some concerns from those in the music industry.

In April 2023, an artist named Ghostwriter went viral for the track “heart on my sleeve,” which used AI voice replications of the rapper Drake and the singer The Weeknd. The song was quickly removed from several platforms, including YouTube, where a message read: “This video is no longer available due to a copyright claim by Universal Music Group.”

Shortly before the Ghostwriter song circulated online, UMG (which has no relationship to NBCUniversal, the parent company of NBC News) had urged streaming services to prohibit AI programs from using its copyrighted music to train themselves.

“We have a moral and commercial responsibility to our artists to work to prevent the unauthorized use of their music and to stop platforms from ingesting content that violates the rights of artists and other creators,” UMG, which is considered one of the so-called Big Three global music companies, said in a statement to The Financial Times. “We expect our platform partners will want to prevent their services from being used in ways that harm artists.”

Part of the problem stems from the fact that music streaming platforms have few tools to detect and track how much AI music is on their apps, Chan said.

He compared traditionally created music to a fingerprint — streaming platforms can compare other songs against that fingerprint, and when they find a track that matches it, they can assess the upload and remove it if necessary. AI-generated music doesn’t have that hypothetical fingerprint. Therefore, it’s much harder to track and remove. Because there is limited technology to track AI music uploaded to various platforms, it’s hard to know how much of it is out there, Chan said.

“You’ve got to believe that it exists, but, again, is it reaching mass consumption? Probably not yet,” he said. “Because once it kind of hits the culture, so to speak, that’s where I think kind of a lot of the rights holders like labels and such [will] take action against those platforms and ask them to take it down.”

Lawmakers are already beginning to consider how to regulate AI-generated voices in music.

Last month, Tennessee Gov. Bill Lee signed the Ensuring Likeness Voice and Image Security Act — also known as the “ELVIS Act.” The law, which claims to be the first of its kind, “build[s] upon existing state rule protecting against the unauthorized use of someone’s likeness by adding ‘voice’ to the realm it protects,” Lee’s office said in a news release in January.

Many in the industry, including the Recording Academy and Warner Music Group CEO Robert Kyncl, praised the legislation.

Antonuccio, the Ohio University associate professor, said a wave of technology-infused music should both excite and frighten the industry and consumers.

Even if more laws are introduced, Antonuccio said, trying to curb the tsunami of content that uses generative AI voices will remain almost impossible.

“There is an extreme generative remix culture that we are just beginning to enter into,” he said. “And I think there are some exciting parts of that, but frankly, I think there’s, there’s many things that should concern all of us with that.”
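An aside on the fingerprint analogy Chan uses in the article: content-identification systems generally work by hashing distinctive features of the audio (often spectral peaks) and checking uploads against a catalog of those hashes. Here's a toy sketch of that matching step in Python. The function names, the one-peak-per-frame landmarking, and the 0.6 threshold are all invented simplifications for illustration, not any platform's actual detection pipeline.

```python
# Toy spectral-peak fingerprinting, loosely in the style of Shazam-type
# landmark hashing. Everything here is a simplified illustration.
import numpy as np
from scipy.signal import stft

def fingerprint(samples, rate=22050):
    """Hash pairs of loudest-frequency bins from adjacent STFT frames."""
    _, _, spec = stft(samples, fs=rate, nperseg=1024)
    peaks = np.argmax(np.abs(spec), axis=0)  # loudest bin in each frame
    # Each (this_peak, next_peak) pair becomes one integer hash.
    return {int(a) << 16 | int(b) for a, b in zip(peaks, peaks[1:])}

def match_score(upload, reference):
    """Fraction of the catalog track's hashes that appear in the upload."""
    return len(upload & reference) / max(len(reference), 1)

# Usage: compare a (noisy) re-upload against the catalog original.
rng = np.random.default_rng(0)
original = rng.standard_normal(5 * 22050)            # stand-in for a song
reupload = original + 0.01 * rng.standard_normal(original.size)
score = match_score(fingerprint(reupload), fingerprint(original))
if score > 0.6:  # threshold chosen arbitrarily for this sketch
    print(f"likely match ({score:.0%}) - flag for rights review")
```

As Chan notes, the catch is that this only works when there's a known original to match against; a wholly AI-generated track has no catalog fingerprint to collide with.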

Ok so isn't Glorb now liable to be sued by the SpongeBob voice actors?

Jonman wrote:

Ok so isn't Glorb now liable to be sued by the SpongeBob voice actors?

The quote near the bottom suggests the answer might be "why does it matter?"

trying to curb the tsunami of content that uses generative AI voices will remain almost impossible.

The estimated amount of AI content that will be produced will overwhelm the legal system if everyone tries to sue over infringement.

kazar wrote:

The estimated amount of AI content that will be produced will overwhelm the legal system if everyone tries to sue over infringement.

Easy solution - AI lawyers!

I'm only mostly joking, because that, as sure as shit, is coming.

kazar wrote:
Jonman wrote:

Ok so isn't Glorb now liable to be sued by the SpongeBob voice actors?

The quote near the bottom suggests the answer might be "why does it matter?"

trying to curb the tsunami of content that uses generative AI voices will remain almost impossible.

The estimated amount of AI content that will be produced will overwhelm the legal system if everyone tries to sue over infringement.

You don't need to have everyone suing for every instance. The lawsuits will start whenever the person behind an instance tries profiting off it in some way. So I'd expect Glorb to be sued if they've tried to monetize their creations, but if they haven't yet, Nickelodeon might be willing to let Glorb slide so long as they make it clear that nothing they've made is officially affiliated with SpongeBob.

Stengah wrote:

So I'd expect Glorb to be sued if they've tried to monetize their creations,

quoted article wrote:

The mastermind behind the raps is an artist named Glorb. Their music, which has been streamed millions of times on Spotify and YouTube

Jonman wrote:
Stengah wrote:

So I'd expect Glorb to be sued if they've tried to monetize their creations,

quoted article wrote:

The mastermind behind the raps is an artist named Glorb. Their music, which has been streamed millions of times on Spotify and YouTube

No shit, but are they monetized streams or not?

Stengah wrote:
Jonman wrote:
Stengah wrote:

So I'd expect Glorb to be sued if they've tried to monetize their creations,

quoted article wrote:

The mastermind behind the raps is an artist named Glorb. Their music, which has been streamed millions of times on Spotify and YouTube

No shit, but are they monetized streams or not?

AFAIK, the only two hurdles to monetization on Spotify are being signed to a digital media distribution company (which is a requirement to have music on Spotify in the first place) and hitting a minimum streaming threshold (which is only 1000 streams per month).

It used to be a lower threshold, but they did just change it this month.

Tons of stuff has been demonetized as a result, but to be fair, a non-zero chunk of that was the equivalent of Twitter engagement-bait accounts and those awful YouTube channels trying to game the payment system:

Spotify says that “tens of millions” of the 100 million tracks in its library have been streamed at least once but fewer than 1,000 times annually, representing 0.5% of the streamer’s stream-share royalty pool. Based on Spotify’s current per-stream rate, 1,000 annual streams generates around $3, often below the minimum that many distributors require before making payouts to artists. Under the status quo, the money Spotify pays out on those songs remains with the distributor until the threshold for payment to the artists is reached. Under the new policy, Spotify will withhold those royalties and roll them into the stream-share pool, now limited to songs with more than 1,000 streams.

The other major policy changes announced today are targeted at practices the company considers fraudulent: streaming bots and short-form “functional noise” content. Spotify currently removes songs from its library when it detects artificial streams generated from bots or scripts; starting in 2024, the company plans to penalize labels and distributors with per-track penalties when “flagrant” artificial streaming is detected, but did not specify what those penalties would be, or how its tools detect such activity.

The company also seeks to target “bad actors” that publish short-form noise tracks such as whale sounds, ASMR, and white noise, then stack them in playlists to generate what it deems “outsized payments.”

In the outgoing system, Spotify has paid the same royalty rate for a five-minute track and a 30-second track, meaning that a 100-track playlist of 30-second tracks could generate significantly more royalties than a 10-track playlist of five-minute tracks, despite being the same 50 minutes of content.

The company plans to increase the minimum length of “functional noise recordings” required to generate royalties from 30 seconds to two minutes, as well as “work with licensors to value noise streams at a fraction of the value of music streams.”

Spotify did not specify what fraction of the current rate it seeks, what criteria it will use to determine which tracks are functional noise, or by what means it would determine which tracks met such criteria.
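Putting rough numbers on the two policy changes quoted above: the excerpt's "1,000 annual streams generates around $3" implies a per-stream rate of roughly $0.003. The sketch below uses that inferred rate for both the payout threshold and the noise-playlist arithmetic; it's a back-of-the-envelope reading of the policy, not Spotify's actual royalty formula.

```python
# Rough numbers for the two policy changes quoted above. The per-stream
# rate (~$0.003) is inferred from "1,000 annual streams generates around
# $3" in the excerpt; it is an approximation, not an official figure.
PER_STREAM_USD = 3.00 / 1000  # ~$0.003

def annual_payout(streams, threshold=1000):
    """Simplified new-policy payout: tracks below the stream threshold
    earn nothing; that money rolls back into the shared royalty pool."""
    return streams * PER_STREAM_USD if streams >= threshold else 0.0

print(annual_payout(999))    # 0.0 -> rolled into the pool
print(annual_payout(1_000))  # 3.0 -> right at the cutoff

# The noise-playlist arithmetic: per-play royalties ignore track length,
# so 50 minutes of 30-second tracks out-earns 50 minutes of songs 10x.
noise = 100 * PER_STREAM_USD  # 100 tracks x 30 s  = 50 min -> $0.30
songs = 10 * PER_STREAM_USD   # 10 tracks x 5 min  = 50 min -> $0.03
print(noise / songs)          # 10.0
```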

Anyway, yada-yadda-yadda here come the pedos:

Paedophiles are being urged to use artificial intelligence to create nude images of children to extort more extreme material from them, according to a child abuse charity.

The Internet Watch Foundation (IWF) said a manual found on the dark web contained a section encouraging criminals to use “nudifying” tools to remove clothing from underwear shots sent by a child. The manipulated image could then be used against the child to blackmail them into sending more graphic content, the IWF said.

“This is the first evidence we have seen that perpetrators are advising and encouraging each other to use AI technology for these ends,” said the IWF.

The charity, which finds and removes child sexual abuse material online, warned last year of a rise in sextortion cases, where victims are manipulated into sending graphic images of themselves and are then threatened with the release of those images unless they hand over money. It also flagged the first examples of AI being used to create “astoundingly realistic” abuse content.

The anonymous author of the online manual, which runs to nearly 200 pages, boasts about having “successfully blackmailed” 13-year-old girls into sending nude imagery online. The IWF said the document had been passed to the UK’s National Crime Agency.

Last month the Guardian revealed that the Labour party was considering a ban on nudification tools that allow users to create images of people without their clothes on.

The IWF has also said 2023 was “the most extreme year on record”. Its annual report said the organisation found more than 275,000 webpages containing child sexual abuse last year, the highest number recorded by the IWF, with a record amount of “category A” material, which can include the most severe imagery including rape, sadism and bestiality. The IWF said more than 62,000 pages contained category A content, compared with 51,000 in the prior year.

The IWF found 2,401 images of self-generated child sexual abuse material – where victims are manipulated or threatened into recording abuse of themselves – taken by children aged between three and six years old. Analysts said they had seen abuse taking place in domestic settings including bedrooms and kitchens.

Susie Hargreaves, the chief executive of the IWF, said opportunistic criminals trying to manipulate children were “not a distant threat”. She said: “If children under six are being targeted like this, we need to be having age-appropriate conversations now to make sure they know how to spot the dangers.”

Hargreaves added that the Online Safety Act, which became law last year and imposes a duty of care on social media companies to protect children, “needs to work”.

Tom Tugendhat, the security minister, said parents should talk to their children about using social media. “The platforms you presume safe may pose a risk,” he said, adding that tech companies should introduce stronger safeguards to prevent abuse.

According to research published last week by the communications regulator, Ofcom, a quarter of three- to four-year-olds own a mobile phone and half of under-13s are on social media. The government is preparing to launch a consultation in the coming weeks that will include proposals to ban the sale of smartphones to under-16s and raise the minimum age for social media sites from 13 to as high as 16.

Stengah wrote:

You don't need to have everyone suing for every instance. The lawsuits will start whenever the person behind an instance tries profiting off it in some way. So I'd expect Glorb to be sued if they've tried to monetize their creations, but if they haven't yet, Nickelodeon might be willing to let Glorb slide so long as they make it clear that nothing they've made is officially affiliated with SpongeBob.

I am not arguing against Glorb being sued, or even about whether the plaintiffs would win. Just that in the long run, there will be so many cases of people trying to profit off of it that the legal system won't be able to keep up. So my point was, why does it matter?

Jonman wrote:

Easy solution - AI lawyers!

I'm only mostly joking, because that, as sure as shit, is coming.

IMAGE(https://y.yarn.co/5452ad24-047b-4477-bf04-effdc8b082ad_text.gif)

kazar wrote:
Stengah wrote:

You don't need to have everyone suing for every instance. The lawsuits will start whenever the person behind an instance tries profiting off it in some way. So I'd expect Glorb to be sued if they've tried to monetize their creations, but if they haven't yet, Nickelodeon might be willing to let Glorb slide so long as they make it clear that nothing they've made is officially affiliated with SpongeBob.

I am not arguing against Glorb being sued, or even about whether the plaintiffs would win. Just that in the long run, there will be so many cases of people trying to profit off of it that the legal system won't be able to keep up. So my point was, why does it matter?

I mean, if that's the argument you're going with, why does anything matter? Seriously though, it matters because that's how they protect their copyright/trademark/whatever. They play whac-a-mole with the bigger direct infringers while they wait for the more meaningful cases, the ones against the tools that enable those infringers, to work their way through the system. A lot of companies are likely waiting to see how the NYT vs OpenAI case goes before filing suits of their own.

I did say "my point was" but I really should have said "my question was".

I expect to see a lot more of this before November. The truth eventually came out, but it took several months, and it helped that the perpetrator was an idiot who searched for voice cloning tools while on the school's network. If all you want to do is short-term damage, or, say, wreck someone's faith in their community, it's trivially easy.

Microsoft’s heavy bet on AI pays off as it beats expectations in latest quarter

Profits at Microsoft beat Wall Street’s expectations as its heavy bet on artificial intelligence continued to bear fruit in the latest quarter.

The technology giant has invested billions of dollars into AI in a bid to turbocharge its growth, particularly of its cloud computing services. Its cloud computing revenue surged by more than 20%.

Microsoft’s AI tools “are orchestrating a new era of AI transformation, driving better business outcomes across every role and industry,” said Satya Nadella, the chief executive of Microsoft.

As the group races to integrate AI across its software and services, Nadella said its Azure cloud computing business saw the pace of deals worth $100m and $10m increase by double-digit percentages. It has also started selling its Copilot AI software add-on to small businesses.

Total revenue at Microsoft increased 17% to $61.86bn during the first three months of 2024, the third quarter of its financial year, surpassing analyst expectations of some $60.88bn. Earnings per share increased 20% to $2.94, ahead of the expected $2.83.

Shares in the group rose 4% during after-hours trading in New York on Thursday.

With a stock market value of nearly $3tn, Microsoft is the world’s largest public company. Shares in the firm have increased by more than 30% over the past year – an impressive rise, although less than the rallies of Amazon and Google, whose stocks have risen by more than 60% and 40%, respectively.

With a multibillion-dollar investment in the ChatGPT developer OpenAI, Microsoft has sought to position itself at the heart of AI’s rise – and the destination for the industry’s top talent. In March, the company announced it had hired a co-founder of Google’s DeepMind AI subsidiary, Mustafa Suleyman, as well as a co-founder of $4bn startup Inflection AI and several of its employees.

In November, after OpenAI’s board ousted its co-founder and chief executive, Sam Altman, he briefly said he would decamp to Seattle and join Microsoft in a major coup for Nadella. The majority of OpenAI’s employees said they would join Altman, which would effectively obliterate the company. Altman was reinstated as chief executive days later.

Microsoft is now trying to monetize its dominance in this space. The company reported in the final months of 2023 that AI contributed 6% of the revenue growth within its powerhouse Azure cloud computing division, up from 3% the previous quarter, according to Yahoo Finance.

Was just trying to search Instagram to see if the Nick the Greek (Greek fast food chain) location that’s soon to open in Santa Maria has an Instagram page yet.

Apparently the Instagram search bar has been taken over by “Meta AI”, and so it piped my search to it and gave me this bullsh*t:

IMAGE(https://i.imgur.com/4ObaigH.jpg)

That bio, of course, belongs to Jimmy the Greek, who was neither named Nick nor even remotely from Santa Maria.

Tell me again how “AI” is the next big thing.

The Department of Homeland Security has announced the foxes that will be in charge of henhouse security, I mean the list of people on the Artificial Intelligence Safety and Security Board.

Sam Altman, CEO, OpenAI
Dario Amodei, CEO and Co-Founder, Anthropic
Ed Bastian, CEO, Delta Air Lines
Rumman Chowdhury, Ph.D., CEO, Humane Intelligence
Alexandra Reeve Givens, President and CEO, Center for Democracy and Technology
Bruce Harrell, Mayor of Seattle, Washington; Chair, Technology and Innovation Committee, United States Conference of Mayors
Damon Hewitt, President and Executive Director, Lawyers’ Committee for Civil Rights Under Law
Vicki Hollub, President and CEO, Occidental Petroleum
Jensen Huang, President and CEO, Nvidia
Arvind Krishna, Chairman and CEO, IBM
Fei-Fei Li, Ph.D., Co-Director, Stanford Human-centered Artificial Intelligence Institute
Wes Moore, Governor of Maryland
Satya Nadella, Chairman and CEO, Microsoft
Shantanu Narayen, Chair and CEO, Adobe
Sundar Pichai, CEO, Alphabet
Arati Prabhakar, Ph.D., Assistant to the President for Science and Technology; Director, the White House Office of Science and Technology Policy
Chuck Robbins, Chair and CEO, Cisco; Chair, Business Roundtable
Adam Selipsky, CEO, Amazon Web Services
Dr. Lisa Su, Chair and CEO, Advanced Micro Devices (AMD)
Nicol Turner Lee, Ph.D., Senior Fellow and Director of the Center for Technology Innovation, Brookings Institution
Kathy Warden, Chair, CEO and President, Northrop Grumman
Maya Wiley, President and CEO, The Leadership Conference on Civil and Human Rights.

There are a few good names in there, but it's an overwhelmingly corporate board.

This is what's known as Regulatory Capture.

Stengah wrote:

The Department of Homeland Security has announced the foxes that will be in charge of henhouse security, I mean the list of people on the Artificial Intelligence Safety and Security Board.

Sam Altman, CEO, OpenAI
Dario Amodei, CEO and Co-Founder, Anthropic
Ed Bastian, CEO, Delta Air Lines
Rumman Chowdhury, Ph.D., CEO, Humane Intelligence
Alexandra Reeve Givens, President and CEO, Center for Democracy and Technology
Bruce Harrell, Mayor of Seattle, Washington; Chair, Technology and Innovation Committee, United States Conference of Mayors
Damon Hewitt, President and Executive Director, Lawyers’ Committee for Civil Rights Under Law
Vicki Hollub, President and CEO, Occidental Petroleum
Jensen Huang, President and CEO, Nvidia
Arvind Krishna, Chairman and CEO, IBM
Fei-Fei Li, Ph.D., Co-Director, Stanford Human-centered Artificial Intelligence Institute
Wes Moore, Governor of Maryland
Satya Nadella, Chairman and CEO, Microsoft
Shantanu Narayen, Chair and CEO, Adobe
Sundar Pichai, CEO, Alphabet
Arati Prabhakar, Ph.D., Assistant to the President for Science and Technology; Director, the White House Office of Science and Technology Policy
Chuck Robbins, Chair and CEO, Cisco; Chair, Business Roundtable
Adam Selipsky, CEO, Amazon Web Services
Dr. Lisa Su, Chair and CEO, Advanced Micro Devices (AMD)
Nicol Turner Lee, Ph.D., Senior Fellow and Director of the Center for Technology Innovation, Brookings Institution
Kathy Warden, Chair, CEO and President, Northrop Grumman
Maya Wiley, President and CEO, The Leadership Conference on Civil and Human Rights.

There are a few good names in there, but it's an overwhelmingly corporate board.

Stengah wrote:

foxes that will be in charge of henhouse security

Wow. Just, wow. You think you know a guy...

(Jokes aside, what a farce that list is, dear lord)

All of the women on that board are on the bottom of the list. Did they make their list, realize it's all guys, go 'aw f*ck, let's find some ladies'?

polypusher wrote:

All of the women on that board are on the bottom of the list. Did they make their list, realize it's all guys, go 'aw f*ck, let's find some ladies'?

looks like it's alphabetical by middle/last name, so weird coincidence maybe?

True, and I missed an Alexandra and Vicki nearer the top.

*slavers in Mormon* Binders... full...