[News] The AI Thread!

News updates on the development and ramifications of AI. Obvious header joke is obvious.

That's the problem, really. Our "ai" is arriving while we're still neck-deep in an end-stage capitalist hellscape, so it's naturally being used to funnel more wealth to the wealthy while threatening everyone else's ability to earn a living. Star Trek's "ai" came well after they'd moved past capitalism, so the concept of having to earn a living wasn't even a thing anymore; their "ai" was just a new tool to help them create what they wanted to create, and it didn't put anyone out of a job. It's not an inherently bad technology, it's just not a good time for that particular advancement. It's going to cause more harm than good due to our current societal problems.

The problem with AI is that only corporations and governments can amass the financial resources to develop these systems, and to do so they are often training them on existing material.

Two externalised problems arise from this. One, they aren't paying for the material used to train the AI, even when its output is a mimicry of the original work; two, any benefits of that output will belong to the tech companies pumping vast sums of money into their AI platforms.

So you can see this for what it really is - they get a new product without paying the sources of the material, in much the same way the Australian Government's CSIRO division, which invented wifi, struggled to sue everyone globally for breaching its patent and invention rights. I think the government collected about 1.5 billion globally? But for something as impactful as wifi, that was nothing compared to what should have been paid in royalties. Yet on the AI side of things they are probably paying even less than that.

Stengah wrote:

Man, I know you don't know how collective bargaining works. It's very obvious now. It's not hand-waving, it was an attempt to explain to you how badly you're misjudging the power dynamics and how that has screwed up your perception of whether the proposed agreement is a good one, or a bad one.

Useful. (But I guess this wasn't really meant for me.)

Stengah wrote:

This is some great fallacious reasoning, that it must be a fair enough deal...

I've explained my position to you about five times now, and I've literally never mentioned fairness. Sooo... yeah.

Bfgp wrote:

Fenomas, I'm curious but are you financially involved/invested in AI? It seems you're inherently biased towards the proliferation of AI and freedom to train and exploit existing material. I'm not casting judgement either way - just feels like you may be inferring things which aren't necessarily realistic based on market power.

Good faith question: are you basing that on something I said, or on stuff that was said to me?

Not throwing shade, I know it's tedious to read verbose arguments between other people, and one tends to skim. But the only thing I've been trying to argue here is that I don't think the SAG-AFTRA agreement is bad just because it doesn't have an AI ban. That's not an anti-union position - obviously I want the artists to get the best deal they can, because just like everybody else on earth I despise the studios. I just don't think holding out for an AI ban would have gotten the union a better deal, and I don't think the union negotiators are bad for not getting one.

That's basically my position here. If you think I said something wrong or biased it would be useful if you quoted it so I'd know what you're referring to, but please keep in mind that just because someone replied to say I'm wrong to believe both parties have equal bargaining power doesn't mean that I believe that (and may not have even mentioned it).

Bfgp wrote:

A creator's work is their lifeblood... It's just that copyright has become hard to enforce because lawyers cost a lot and infringement is everywhere thanks to digital tools and the ability to disseminate material online.

Sure, I think everyone understands and sympathizes with those things. But my deal on AI is basically that I'm very interested in understanding what the tech and legal realities are, and most people seem more interested in what they wish the realities would be.

Like on copyright for example - a lot of people seem very committed to the idea that training AI, or using it, ought to violate copyright. My view is that it probably doesn't - because I've heard the arguments that it does, and they seem very weak. But no matter how carefully I say that here, somebody seems to interpret it to mean that I also want it to be true, or I'm biased towards it, or I'm also pro-AI or anti-artist in ways that haven't been mentioned. None of those are true, I literally just think the arguments against the premise are a lot stronger than the arguments for it.

As for how that affects artists - I mean, there's at least a possibility that no matter what we do, generative AI will do to artists what cars did to the horse and buggy. I don't want that, and I also don't think it's likely, but if that's what the reality is then I do want to understand it. OTOH generative AI could conceivably force society to change how it views intellectual property, maybe even in ways that leave artists better off than they are now. If there are arguments for that I want to understand them too. I have no idea how it will play out, but I try very hard not to think in pro-vs-anti-AI terms, because I strongly suspect that such arguments will someday look as absurd as camera-vs-painting arguments look to us now.

Genuinely sorry to go on so long, but I hope that clears a few things up.

fenomas wrote:
Stengah wrote:

Man, I know you don't know how collective bargaining works. It's very obvious now. It's not hand-waving, it was an attempt to explain to you how badly you're misjudging the power dynamics and how that has screwed up your perception of whether the proposed agreement is a good one, or a bad one.

Useful. (But I guess this wasn't really meant for me.)

I know, talking with you on anything related to ai rarely is.

kazar wrote:

Sometimes people see all the cool things that something can lead to. For me, AI opens so many doors to cool things.

As an example, imagine an AI watching Star Wars A New Hope, and then creating a fully 3D representation of all the scenes including the characters. Then you put on a VR headset and experience the movie as if you are standing shoulder to shoulder with Luke, Han, Leia, Obi-wan and Chewie. And then, you decide to jump into a ship and fly somewhere and the AI just creates that somewhere on the fly.

Honestly, it might be interesting as a novelty for a bit, but after that novelty wears off, I can't see it having much appeal compared to things that have been crafted with intention by people, trying to convey something.

UK will refrain from regulating AI 'in the short term'

Besides the obvious point that this approach offers no protections to anyone against any of the possible abuses/risks of AI, I also agree with the quote from the article here:

The article wrote:

Critics of the government’s hands-off approach have questioned whether an under-regulated AI sector would deter investors that seek transparency and security.

“The UK had an ambition of becoming an international standard-setter in AI governance,” said Greg Clark, chair of the House of Commons science, innovation and technology select committee. “But without legislation [ . . . ] to give expression to its preferred regulator-led approach, it will most likely fall behind both the US and the EU.”

On the basis that the U.K. hasn’t gotten around to regulating electric scooters, I’m not 100% sure the U.K. has the state capacity to regulate AI.

State failure is real. About half the world got to experience what it's like to be ruled by the arbitrary whims of Etonians[1]; now the English do too.

[1] Yes, I am aware Sunak went to Westminster but that’s just Pepsi to Coke. There is one brand leader.

YouTube Shorts Challenges TikTok With Music-Making AI for Creators

TikTok’s tools for adding music to short videos helped turn short-form video into a phenomenon. Now Google is giving some YouTube Shorts creators an AI feature called Dream Track that can generate songs, including lyrics, melody, and accompaniment, in the styles of seven different artists, including Charlie Puth, Demi Lovato, Sia, and T-Pain.

To whip up a 30-second clip with Dream Track, a creator just has to enter a prompt, such as “a ballad about how opposites attract, upbeat acoustic,” then select which artist the song should be styled on.

The new AI capabilities might help Google lure users from TikTok, where AI tools for adding visual or audio effects are hugely popular. YouTube says it is looking into how artists whose work helped train its music-generating algorithms will receive a cut of future ad revenue generated by videos featuring AI-generated audio. That would represent a test of a novel way for artists to profit from AI built in part on their work.

Dream Track uses an AI algorithm called Lyria developed by Google DeepMind, the unit charged with keeping the company at the cutting edge of AI. YouTube’s global head of music, veteran music mogul Lyor Cohen, who helped launch the careers of artists including Public Enemy, Run-DMC, and the Beastie Boys, told WIRED on Wednesday that he was blown away after hearing a demo of its output at Google DeepMind’s London headquarters in May. “I knew we not only had something unique and special, but something that I believed that the music industry would dig and want to work with,” Cohen says.

Cohen says the seven artists who opted to let Dream Track replicate their styles did so out of a desire to embrace generative AI on their terms. “Our partners, many of whom lived the Napster days, didn't want to play defense, they wanted to play offense, and they were excited about the possibilities,” he says. In August, YouTube announced that it was creating an incubator to engage with artists on ways of using generative AI.

Lyria has also been used to build a second tool announced today called Music AI, which lets artists in YouTube’s incubator program conjure, remix, and modify tracks in new ways. Demis Hassabis, Google DeepMind CEO, says that this software can automatically convert a song from one genre to another—say from hip hop to country. It can also generate a full instrumental melody and backing track from a whistled tune, and convert abstract text input such as “sunshine” into a musical interpretation.

Hassabis says that this last trick is a good example of the kind of “multimodal” AI capabilities that powerful models increasingly exhibit. The latest version of OpenAI’s ChatGPT can work with audio and images in addition to text. Google DeepMind is developing a powerful AI model of its own, called Gemini, that is rumored to have multimodal capabilities.

The recent proliferation of AI tools capable of creating images, passages of text, and music has sparked protest from some artists and authors who feel that the inclusion of their work in AI systems’ training data without permission or payment is unfair. A growing movement involves blocking companies from scraping web content to feed to generative AI programs or trying to have copyrighted material removed from common datasets.

Some musicians are embracing the AI revolution despite such issues. The artist Grimes told WIRED recently that she plans to open source her musical persona so that anyone can replicate her style with AI.

The musicians involved with YouTube’s latest AI experiments seem less troubled, no doubt because they have some say about how their work is being repurposed—and may see a cut of the spoils in time. "I'm extremely excited and inspired by the realm of musical possibilities that come from allowing the human mind to collaborate with the nonhuman mind,” a statement from Charlie Puth says. “I am open-minded and hopeful that this experiment with Google and YouTube will be a positive and enlightening experience,” says another from Demi Lovato.

Google says it is using a technology called SynthID to add watermarks inaudible to the human ear to music generated using Lyria, so that it can be identified as such. The company says Lyria was trained on “a broad set of music content,” so it will be interesting to see whether it can figure out a way to credit every artist who contributed.
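As an aside, Google hasn't said much about how SynthID's audio watermarking actually works under the hood. For the curious, here's a toy sketch of the general idea behind detect-later watermarks - everything here (function names, parameters) is made up for illustration and has nothing to do with the real SynthID: mix a low-amplitude pseudo-random pattern keyed by a secret seed into the audio, then check for it later by correlating against that same pattern.

```python
# Toy "inaudible watermark" sketch - illustrative only, NOT how SynthID works.
# Idea: add a low-amplitude pseudo-random pattern keyed by a secret seed,
# then detect it later by correlating the audio against that same pattern.
import numpy as np

SAMPLE_RATE = 44100

def embed_watermark(audio: np.ndarray, seed: int, strength: float = 0.02) -> np.ndarray:
    """Mix a keyed pseudo-random pattern into the audio at low amplitude."""
    rng = np.random.default_rng(seed)
    pattern = rng.standard_normal(audio.shape)
    return audio + strength * pattern

def detect_watermark(audio: np.ndarray, seed: int, threshold: float = 5.0) -> bool:
    """Correlate against the keyed pattern; a large z-score means 'watermarked'."""
    rng = np.random.default_rng(seed)
    pattern = rng.standard_normal(audio.shape)
    z = pattern @ audio / np.linalg.norm(audio)  # roughly N(0, 1) if unwatermarked
    return z > threshold

# Demo: one second of a 440 Hz tone.
t = np.linspace(0, 1, SAMPLE_RATE, endpoint=False)
tone = 0.5 * np.sin(2 * np.pi * 440 * t)
print(detect_watermark(embed_watermark(tone, seed=42), seed=42))  # True
print(detect_watermark(tone, seed=42))                            # False
```

A real system would shape the watermark psychoacoustically so it's genuinely inaudible and make it robust to compression, pitch shifts, and re-recording; this sketch does neither. But the detection side is presumably the point for YouTube: given the key, a platform can flag Lyria-generated audio at upload time without needing the original file.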

Wired also has an op-ed about this.

On Thursday, Google DeepMind announced Lyria, which it calls its “most advanced AI music generation model to date” and a pair of “experiments” for music making. One is a set of AI tools that allow people to, say, hum a melody and have it turn into a guitar riff, or transform a keyboard solo into a choir. The other is called Dream Track, and it allows users to make 30-second YouTube Shorts using the AI-generated voices and musical styles of artists like T-Pain, Sia, Demi Lovato, and—yes—Troye Sivan almost instantly. All anyone has to do is type in a topic and pick an artist off a carousel, and the tool writes the lyrics, produces the backing track, and sings the song in the style of the musician selected. It’s wild.

My freak-out about this isn’t a fear of a million fake Troye Sivans haunting my dreams; it’s that the most creative work shouldn’t be this easy, it should be difficult. To borrow from A League of Their Own’s Jimmy Dugan, “It’s supposed to be hard. If it wasn’t, everyone would do it. The hard is what makes it great.” Yes, asking a machine to make a song about fishing in the style of Charli XCX is fun (or at least funny), but Charli XCX songs are good because they’re full of her attitude, something that comes through even when she writes for other people, like she did on Icona Pop’s “I Love It.” To borrow again, from a sign hoisted during the Hollywood writers strike, “ChatGPT doesn’t have childhood trauma.”

Not that these tools have no use. They are, more than anything, meant to help cultivate ideas and, for Dream Track, “test new ways for artists to connect with their fans.” It’s about making new experimental noises for YouTube, rather than Billboard chart-toppers. As Lovato, who, along with other artists, allowed DeepMind to use their music for this project, said in a statement, AI is upending how artists work and “we need to be a part of shaping what that future looks like.”

Google’s latest AI music toy comes at a tricky time. Generative AI creates something of a digital minefield when it comes to copyright, and YouTube, which Google owns, has been trying to handle both an influx of AI-made music and the fact that it has agreements with labels to pay when artists’ work shows up on the platform. A few months ago, when “Heart on My Sleeve”—an AI-generated song by “Drake” and “The Weeknd”—went viral, it was ultimately pulled from several streaming services following complaints from the artists’ label, Universal Music Group.

But even if, say, the manager of Johnny Cash’s estate isn’t seeking to stop AI-generated covers of “Barbie Girl,” the technology still presents a conundrum for artists: They can either work with companies like Google to create AI tools using their music, make their own tools (like Holly Herndon and Grimes have), push back and see whether copyright law applies to music made from AI models trained on their work, or do nothing. It’s a question seemingly every artist is thinking about right now, or at least getting asked about.

AI chief quits over 'exploitative' copyright row

A senior executive at the tech firm Stability AI has resigned over the company's view that it is acceptable to use copyrighted work without permission to train its products.

Ed Newton-Rex was head of audio at the firm, which is based in the UK and US.

He told the BBC he thought it was "exploitative" for any AI developer to use creative work without consent.

But many large AI firms, including Stability AI, argue that taking copyrighted content is "fair use".

The "fair use" exemption to copyright rules means the permission of the owners of the original content is not required.

The US copyright office is currently conducting a study about generative AI and policy issues.

Mr Newton-Rex stressed that he was talking about all AI firms which share this view - and the majority of them do.

Replying to his former member of staff in a post on X (Twitter), Stability AI founder Emad Mostaque said the firm believed fair use "supports creative development".

AI tools are trained using vast amounts of data, much of which is often taken, or "scraped", from the internet without consent.

Generative AI - products which are used to create content like images, audio, video and music - can then produce similar material or even directly replicate the style of an individual artist if requested.

Mr Newton-Rex, who is also a choral composer, said that he "wouldn't jump" at the chance to offer his own music to AI developers for free.

"I wouldn't think 'yes, I'll definitely give my compositions to a system like this'. I don't think I'd consent," he said.

He added that plenty of people create content "often for literally no money, in the hope that one day that copyright will be worth something".

But, ultimately, without consent their work was instead being used to create their competitors and even potentially replace them entirely, he said.

He built an AI audio creator called Stable Audio for his former employer, but said he had chosen to license the data it was trained on and to share revenue from it with rights holders. He acknowledged that this model would not work for everybody.

"I don't think there's a silver bullet," he said.

"I know many people on the rightsholder side who are who are excited about the potential agenda today and want to work with it, but they want to do it under the right circumstances."

He said he remained optimistic about the benefits of AI and was not planning to leave the industry.

"I think that ethically, morally, globally, I hope we'll all adopt this approach of saying, 'you need to get permission to do this from the people who wrote it, otherwise, that's not okay'," he said.

The use of copyright material to train AI tools is controversial.

Some creatives, including the US comedian Sarah Silverman and Game of Thrones writer George RR Martin, have initiated legal action against AI firms, arguing that they have taken their work without permission and then used it to train products which can recreate content in their style.

A track featuring AI-generated voices of music artists Drake and The Weeknd was removed from Spotify earlier this year after it was discovered that it had been created without their consent.

But the boss of Spotify later said he would not ban AI from the platform completely.

Earlier this year, Stability AI faced legal action from Getty Images, which claimed the firm had scraped 12 million of its pictures and used them in the training of its AI image generator, Stable Diffusion.

Some news organisations, including the BBC and The Guardian, have blocked AI firms from lifting their material from the internet.

Sam Altman out as CEO.

I was catching up on an old Verge podcast yesterday, which makes this even weirder, as OpenAI just recently had their event.

The highlight for me in the podcast was them telling developers to build anything and not worry - OpenAI has the lawyers to figure stuff out. I'm sure that will work out well.

More:

OpenAI fires co-founder and CEO Sam Altman for lying to company board

OpenAI CEO and co-founder Sam Altman was fired for lying to the board of his company, according to an announcement issued Friday.

A statement from the company reads, “Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.” Mira Murati, OpenAI’s CTO, will become interim CEO in his place, according to the statement.

Altman became one of the most important people in the artificial intelligence world after his company released ChatGPT in November 2022, a generative AI chatbot that accrued more than 100 million users in less than a year.

Hi Fenomas, thanks for responding to me. I may have formed that impression across this thread, and I think there's one in the Everything Else subforum as well. Apologies if it felt accusatory from my end; sadly, posting by text can lack context and come off surprisingly cold.

I'm a lawyer, and I've always been on the copyright-enforcement side of things. As an example, one of my best clients is considered a market leader, and everyone keeps copying their works. It gets tiring having to chase and shut down infringement, and it can have a far bigger financial cost than you might think. Thus I have no doubt I am biased to some degree against AI, and I don't mind disclosing this.

It's quite ironic, because for a lawyer one of the biggest forms of flattery is another lawyer, or perhaps a judge, copying your work (there's an exception in copyright for use in litigation... it wouldn't extend to someone copying your contract, for example). But in my line of work we want to copy the best work, because it makes the rule of law better.

Would it make humanity better generally to copy the best work? Perhaps, but I think there is a limit: if you consider how humanity flourished in the Renaissance and beyond, you needed private patrons to earn your crust; and if AI develops to the point where there is no value in the creative arts, then we are possibly bound for a terminal state of reiterating rather than creating.

I mean I'm excited too for AI and what it could do to make society better. But I fear it will more readily become a tool of economic and political suppression.

Wow, the Altman news is pretty spicy. The prevailing rumor in the mill seems to be that it was primarily a split between Altman and Sutskever, with other board members siding with one or the other, and the "lied to the board" doesn't mean much. Plenty of theories for what the split was about, but no evidence for any of them yet.

Bfgp, I certainly don't envy you for working with copyright law - it's always seemed to me like one of the most vaguely-specified areas of law, even compared to other kinds of IP. Personally I'm a programmer, and we have our own unique relationship with copying and IP - there's no shortage of programmers who doubt that copyright should apply to code at all.

But you skipped past the elephant in the room: as a lawyer, do you think training or using generative AI violates copyright? Not in the "ought-to" sense, but in the practical sense of whether you think the various AI copyright lawsuits will succeed on their merits. I guess they may not be happening where you practice, but even so I'd be interested to hear your views.

That's an elephant I'm leaving to the courts haha!

Copyright principles are fairly well settled internationally. I'm content to see how it plays out, because each case is going to be different due to different inputs and outputs. For example, do we stop people writing high fantasy because Tolkien's works popularised things like orcs, elves, and the like? Or for using magical items like rings?

Copyright principles may have their limit though; a human can only copy something to a human limit. Machines don't think like our brains do, and heck, all brains are wired uniquely. That's what makes digital life so troubling for copyright - the ability to replicate the original work with 100% fidelity if desired.

From the Ars article

According to reporting from Kara Swisher and The Information, it's looking like the ouster of Altman stemmed from an internal disagreement over the direction of the company with regard to a focus on profits over safety, with Chief Scientist Ilya Sutskever apparently behind the board maneuvering.

They link to a paywalled article, but it has this before asking for a subscription.

OpenAI’s ouster of CEO Sam Altman on Friday followed internal arguments among employees about whether the company was developing artificial intelligence safely enough, according to people with knowledge of the situation.

Such disagreements were high on the minds of some employees during an impromptu all-hands meeting following the firing. Ilya Sutskever, a co-founder and board member at OpenAI who was responsible for limiting societal harms from its AI, took a spate of questions.

At least two employees asked Sutskever—who has been responsible for OpenAI’s biggest research breakthroughs—whether the firing amounted to a “coup” or “hostile takeover,” according to a transcript of the meeting. To some employees, the question implied that Sutskever may have felt Altman was moving too quickly to commercialize the software—which had become a billion-dollar business—at the expense of potential safety concerns.

Ferret wrote:

What on earth's going on over there?

It sounds like a factional split, with one side framing the split as over safety vs profit - the other would presumably put it differently. But one of the people who quit in solidarity with Altman has been described as one of the company's biggest safety proponents, so it's a bit murky.

I reckon there are enough people involved that all the details will be out before long, so probably not much use speculating.

Ferret wrote:

What on earth's going on over there?

Maybe they asked their own AI to devise a cunning stratagem that would increase the visibility of their company, and this is what it came up with.

Yeah, I can understand there being a difference in priorities easily enough. I guess I'm just surprised at the form the resolution of those differences took. It also seems like it was pretty sudden... but it may be only that the execution of the firing was sudden, and that the dispute had long been boiling behind the scenes.

But, as you say, useless to speculate, and doubtless we'll learn more sooner or later, probably sooner.

Ferret wrote:

What on earth's going on over there?

OpenAI board in discussions with Sam Altman to return as CEO

The article wrote:

Altman holding talks with the company just a day after he was ousted indicates that OpenAI is in a state of free-fall without him. Hours after he was axed, Greg Brockman, OpenAI’s president and former board chairman, resigned, and the two have been talking to friends and investors about starting another company. A string of senior researchers also resigned on Friday, and people close to OpenAI say more departures are in the works.

Ferret wrote:
Ferret wrote:

What on earth's going on over there?

OpenAI board in discussions with Sam Altman to return as CEO

The article wrote:

Altman holding talks with the company just a day after he was ousted indicates that OpenAI is in a state of free-fall without him. Hours after he was axed, Greg Brockman, OpenAI’s president and former board chairman, resigned, and the two have been talking to friends and investors about starting another company. A string of senior researchers also resigned on Friday, and people close to OpenAI say more departures are in the works.

Unsurprisingly, the move upset a bunch of the investors, and got a more negative reaction from the rest of the tech world than the board had expected.

It's very much looking like it was a long-running disagreement over the speed at which they commercialize their ai that came to a head and caused the firing. Because we live in the worst timeline, I expect that either Altman returns to OpenAI with the condition that he gets more complete control over everything, or he takes all the investors and a bunch of the talent and starts a new ai company where he has complete control over things. Either way, the lure of money will win out, and Altman will continue to dismiss or ignore concerns about developing ai too fast.

Ferret wrote:

I guess I'm just surprised at the form the resolution of those differences took. It also seems like it was pretty sudden... but it may be only that the execution of the firing was sudden, and that the dispute had long been boiling behind the scenes.

Oh yeah, it was definitely very unusual and sudden - basically a coup. The language in the announcement is well outside the norm, as is not having investors or partners on board beforehand, and even the timing of announcing it during market hours.

I guess the board here is mostly people who have been there since the whole thing was a small nonprofit, so it sounds like there's an element of all this happening by playground rules compared to what would be normal for the board of a $10-100B tech company.

Microsoft is reported to be leading the pressure campaign to get Altman reinstated. No one on the board has actually said anything about considering taking him back; everyone's just citing unnamed sources or "people close to Altman," which doesn't exactly make them neutral parties with no interest in trying to control the narrative.

I'll be highly disappointed if the board relents, but not terribly surprised.

If the board was being principled and really did think Altman was choosing profits over safety, or had some other legitimate reason, I think they need to get that out there sooner rather than later. Without letting people know the exact reason, they're just letting Altman and his supporters control the story, and those supporters are doing a great job of making the board seem incompetent. And honestly, the board not giving a concrete and specific reason for such an abrupt and unusual firing in the first place is pretty incompetent in and of itself, so who knows. I'd like to think it was a principled firing, but it could easily have been a more personal spat between Altman and Sutskever that just blew up, with neither side actually thinking they're moving too fast.

  • OpenAI fires interim CEO Mira Murati, apparently for trying to re-hire Altman
  • OpenAI hires cofounder/ex-CEO of Twitch as interim CEO
  • He says Altman's removal was handled badly, and had nothing to do with AI safety
  • OpenAI and Microsoft still BFFs, Altman and the staff who left with him all joining MS
  • 500+ OpenAI employees threaten to leave unless board resigns and reinstates Altman
  • Sutskever is among them

So that's all tidy and wrapped up with a bow, we probably won't hear any more about it.

(edited to add more bullet points)

fenomas wrote:
  • OpenAI hires cofounder/ex-CEO of Twitch as interim CEO
  • He says Altman's removal was handled badly, and had nothing to do with AI safety
  • OpenAI and Microsoft still BFFs, Altman and the staff who left with him all joining MS

So that's all tidy and wrapped up with a bow, we probably won't hear any more about it.

So much *insert hand-waving "nothing to see here" meme*.

I'm sure the news will be buzzing about this for a while, poking and prodding, but they have all retreated to the castle and raised the drawbridge against any follow-up questions.

Finally, with the hiring of Sam Altman, Microsoft has done the impossible! They have ushered in the year of Linux on the desktop!

*sobs quietly*

Feels like a precursor to a Microsoft acquisition

TheGameguru wrote:

Feels like a precursor to a Microsoft acquisition

Not wrong here is my guess. Microsoft put a second tranche of equity investment into OpenAI. It was a big injection - y'know, the kind of number you expect for the next YouTube/Instagram. Turmoil in the upper ranks would suppress the valuation; then they'll pick over the carcass and take the engineers who further their profit motive.

OpenAI is a non-profit org though. Not sure how that plays into being acquired by a for-profit company.