
News updates on the development and ramifications of AI. Obvious header joke is obvious.
That's the problem, really. Our "ai" is coming while we're still neck-deep in an end-stage capitalist hellscape, so it's naturally being used to funnel more wealth to the wealthy while threatening everyone else's ability to earn a living. Star Trek's "ai" came well after they'd moved past capitalism, so the concept of having to earn a living wasn't even a thing anymore; their "ai" was just a new tool to help them create what they wanted to create, and didn't put anyone out of a job. It's not an inherently bad technology, it's just not a good time for that particular advancement. It's going to cause more harm than good due to our current societal problems.
The problem with AI is that only corporations and governments can amass the financial resources to develop it, and to do so they are often training it on existing material.
Two externalised problems arise from this. One, they aren't paying for the material used to train the AI, even when its output is a mimicry of the original work; two, any benefits of that output will belong to the tech companies pumping vast sums of money into their AI platforms.
So you can see this for what it really is - they get a new product without paying the source of the material, in much the same way the Australian Government's CSIRO agency, which invented wifi, struggled to sue everyone globally for breaching its patent and invention rights. I think the government collected about 1.5 billion globally? But for something as impactful as wifi, that was nothing compared to what should have been paid in royalties. Yet on the AI side of things they are probably paying even less than that in royalties.
Man, I know you don't know how collective bargaining works. It's very obvious now. It's not hand-waving, it was an attempt to explain to you how badly you're misjudging the power dynamics and how that has screwed up your perception of whether the proposed agreement is a good one, or a bad one.
Useful. (But I guess this wasn't really meant for me.)
This is some great fallacious reasoning, that it must be a fair enough deal...
I've explained my position to you about five times now, and I've literally never mentioned fairness. Sooo... yeah.
Fenomas, I'm curious: are you financially involved/invested in AI? It seems you're inherently biased towards the proliferation of AI and the freedom to train on and exploit existing material. I'm not casting judgement either way - it just feels like you may be inferring things which aren't necessarily realistic based on market power.
Good faith question: are you basing that on something I said, or on stuff that was said to me?
Not throwing shade, I know it's tedious to read verbose arguments between other people, and one tends to skim. But the only thing I've been trying to argue here is that I don't think the SAG-AFTRA agreement is bad just because it doesn't have an AI ban. That's not an anti-union position - obviously I want the artists to get the best deal they can, because just like everybody else on earth I despise the studios. I just don't think holding out for an AI ban would have gotten the union a better deal, and I don't think the union negotiators are bad for not getting one.
That's basically my position here. If you think I said something wrong or biased it would be useful if you quoted it so I'd know what you're referring to, but please keep in mind that just because someone replied to say I'm wrong to believe both parties have equal bargaining power doesn't mean that I believe that (and may not have even mentioned it).
A creator's work is their lifeblood... It's just that copyright has become hard to enforce, because lawyers cost a lot and infringement is everywhere thanks to digital tools and the ability to disseminate material online.
Sure, I think everyone understands and sympathizes with those things. But my deal on AI is basically that I'm very interested in understanding what the tech and legal realities are, and most people seem more interested in what they wish the realities would be.
Like on copyright for example - a lot of people seem very committed to the idea that training AI, or using it, ought to violate copyright. My view is that it probably doesn't - because I've heard the arguments that it does, and they seem very weak. But no matter how carefully I say that here, somebody seems to interpret it to mean that I also want it to be true, or I'm biased towards it, or I'm also pro-AI or anti-artist in ways that haven't been mentioned. None of those are true, I literally just think the arguments against the premise are a lot stronger than the arguments for it.
As for how that affects artists - I mean, there's at least a possibility that no matter what we do, generative AI will do to artists what cars did to the horse and buggy. I don't want that, and I also don't think it's likely, but if that's what the reality is then I do want to understand it. OTOH generative AI could conceivably force society to change how it views intellectual property, maybe even in ways that leave artists better off than they are now. If there are arguments for that I want to understand them too. I have no idea how it will play out, but I try very hard not to think in pro-vs-anti-AI terms, because I strongly suspect that such arguments will someday look as absurd as camera-vs-painting arguments look to us now.
Genuinely sorry to go on so long, but I hope that clears a few things up.
Stengah wrote:Man, I know you don't know how collective bargaining works. It's very obvious now. It's not hand-waving, it was an attempt to explain to you how badly you're misjudging the power dynamics and how that has screwed up your perception of whether the proposed agreement is a good one, or a bad one.
Useful. (But I guess this wasn't really meant for me.)
I know, talking with you on anything related to ai rarely is.
Sometimes people see all the cool things that something can lead to. For me, AI opens so many doors to cool things.
As an example, imagine an AI watching Star Wars A New Hope, and then creating a fully 3D representation of all the scenes including the characters. Then you put on a VR headset and experience the movie as if you are standing shoulder to shoulder with Luke, Han, Leia, Obi-wan and Chewie. And then, you decide to jump into a ship and fly somewhere and the AI just creates that somewhere on the fly.
Honestly, it might be interesting as a novelty for a bit, but after that novelty wears off, I can't see it having much appeal compared to things that have been crafted with intention by people, trying to convey something.
UK will refrain from regulating AI 'in the short term'
Besides the obvious point that this approach offers no protections to anyone against any of the possible abuses/risks of AI, I also agree with the quote from the article here:
Critics of the government’s hands-off approach have questioned whether an under-regulated AI sector would deter investors that seek transparency and security.
“The UK had an ambition of becoming an international standard-setter in AI governance,” said Greg Clark, chair of the House of Commons science, innovation and technology select committee. “But without legislation [ . . . ] to give expression to its preferred regulator-led approach, it will most likely fall behind both the US and the EU.”
On the basis that the U.K. hasn’t gotten around to regulating electric scooters, I’m not 100% sure the U.K. has the state capacity to regulate AI.
State failure is real. About half the world got to experience what it's like to be ruled by the arbitrary whims of Etonians[1]; now the English do too.
[1] Yes, I am aware Sunak went to Westminster but that’s just Pepsi to Coke. There is one brand leader.
Sam Altman out as CEO.
I was catching up on an old Verge podcast yesterday, which makes this even weirder, as they just recently had their event.
The highlight for me in the podcast was them telling developers to build anything and not worry, because OpenAI has the lawyers to figure stuff out. I'm sure that will work out well.
Hi Fenomas, thanks for responding to me. I may have formed that impression across this thread, and I think there's one in the Everything Else subforum as well. Apologies if it felt accusatory from my end; sadly, posting by text can lack context and come off surprisingly cold.
I'm a lawyer and I've always been on the enforcement of copyright side of things. As an example, one of my best clients is considered a market leader and everyone keeps copying their works. It gets tiring having to chase and shut down infringement. It can have a really big financial cost beyond what you might think. Thus I have no doubt I am biased to some degree against AI and I don't mind disclosing this.
It's quite ironic because as a lawyer, one of the biggest forms of flattery is another lawyer or perhaps a judge copying your work (there's an exception in copyright for use in litigation...it wouldn't extend to someone copying your contract for example). But in my line of work, we want to copy the best work because it makes the rule of law better.
Would it make humanity better generally to copy the best work? Perhaps so, but there is a limit, I think - because if you consider how humanity flourished in the Renaissance and beyond, you needed private patrons to earn your crust; and I guess if AI develops to the point where there is no value in the creative arts, then we are possibly bound for a terminal point of reiterating rather than creating.
I mean, I'm excited too for AI and what it could do to make society better. But I fear it will more readily become a tool of economic and political suppression.
Wow, the Altman news is pretty spicy. The prevailing rumor in the mill seems to be that it was primarily a split between Altman and Sutskever, with other board members siding with one or the other, and the "lied to the board" doesn't mean much. Plenty of theories for what the split was about, but no evidence for any of them yet.
Bfgp, I certainly don't envy you for working with copyright law - it's always seemed to me like one of the most vaguely-specified areas of law, even compared to other kinds of IP. Personally I'm a programmer, and we have our own unique relationship with copying and IP - there's no shortage of programmers who doubt that copyright should apply to code at all.
But you skipped past the elephant in the room: as a lawyer, do you think training or using generative AI violates copyright? Not in the "ought-to" sense, but in the practical sense of whether you think the various AI copyright lawsuits will succeed on their merits. I guess they may not be happening where you practice, but even so I'd be interested to hear your views.
That's an elephant I'm leaving to the courts haha!
Copyright principles are fairly well settled internationally. I'm content to see how it plays out, because each case is going to be different due to different inputs and outputs. For example, do we stop people writing high fantasy because Tolkien's works popularised things like orcs, elves and the like? Or for using magical items like rings?
Copyright principles may have their limit though; a human can copy something to a human limit. Machines don't think like our brains do, and heck, all brains are wired uniquely. That's what makes digital life so troubling for copyright - the ability to replicate the original work at 100% fidelity if desired.
Whew boy.
OpenAI President Greg Brockman quits as shocked employees hold all-hands meeting
3 senior OpenAI researchers resign in the wake of Sam Altman's shock dismissal as CEO, report says
What on earth's going on over there?
From the Ars article
According to reporting from Kara Swisher and The Information, it's looking like the ouster of Altman stemmed from an internal disagreement over the direction of the company with regard to a focus on profits over safety, with Chief Scientist Ilya Sutskever apparently behind the board maneuvering.
They link to a paywalled article, but it has this before asking for a subscription.
OpenAI’s ouster of CEO Sam Altman on Friday followed internal arguments among employees about whether the company was developing artificial intelligence safely enough, according to people with knowledge of the situation.
Such disagreements were high on the minds of some employees during an impromptu all-hands meeting following the firing. Ilya Sutskever, a co-founder and board member at OpenAI who was responsible for limiting societal harms from its AI, took a spate of questions.
At least two employees asked Sutskever—who has been responsible for OpenAI’s biggest research breakthroughs—whether the firing amounted to a “coup” or “hostile takeover,” according to a transcript of the meeting. To some employees, the question implied that Sutskever may have felt Altman was moving too quickly to commercialize the software—which had become a billion-dollar business—at the expense of potential safety concerns.
What on earth's going on over there?
It sounds like a factional split, with one side framing the split as over safety vs profit - the other would presumably put it differently. But one of the people who quit in solidarity with Altman has been described as one of the company's biggest safety proponents, so it's a bit murky.
I reckon there are enough people involved that all the details will be out before long, so probably not much use speculating.
What on earth's going on over there?
Maybe they asked their own AI to devise a cunning stratagem that would increase the visibility of their company, and this is what it came up with.
Yeah, I can understand there being a difference in priorities easily enough. I guess I'm just surprised at the method the resolution of those differences took. It also seems like it was pretty sudden... but it may be only that the execution of the firing was sudden and that the dispute had been long boiling behind the scenes.
But, as you say, useless to speculate, and doubtless we'll learn more sooner or later, probably sooner.
What on earth's going on over there?
OpenAI board in discussions with Sam Altman to return as CEO
Altman holding talks with the company just a day after he was ousted indicates that OpenAI is in a state of free-fall without him. Hours after he was axed, Greg Brockman, OpenAI’s president and former board chairman, resigned, and the two have been talking to friends and investors about starting another company. A string of senior researchers also resigned on Friday, and people close to OpenAI say more departures are in the works.
Ferret wrote:What on earth's going on over there?
OpenAI board in discussions with Sam Altman to return as CEO
The article wrote:Altman holding talks with the company just a day after he was ousted indicates that OpenAI is in a state of free-fall without him. Hours after he was axed, Greg Brockman, OpenAI’s president and former board chairman, resigned, and the two have been talking to friends and investors about starting another company. A string of senior researchers also resigned on Friday, and people close to OpenAI say more departures are in the works.
Unsurprisingly, the move upset a bunch of the investors, and got a more negative reaction from the rest of the tech world than the board had expected.
It's very much looking like it was a long-running disagreement over the speed at which they commercialize their ai that came to a head and caused the firing. Because we live in the worst timeline, I expect that either Altman returns to OpenAI with the condition that he gets more complete control over everything, or he takes all the investors and a bunch of the talent and starts a new ai company where he has complete control over things. Either way, the lure of money will win out, and Altman will continue to dismiss or ignore concerns about developing ai too fast.
I guess I'm just surprised at the method the resolution of those differences took. It also seems like it was pretty sudden... but it may be only that the execution of the firing was sudden and that the dispute had been long boiling behind the scenes.
Oh yeah, it was definitely very unusual and sudden, basically a coup. And the language in the announcement is well out of the norm, as is not having investors or partners on board, and even the timing of announcing it during market hours, etc.
I guess the board here is mostly people who have been there since the whole thing was a small nonprofit, so it sounds like there's an element of all this happening by playground rules compared to what would be normal for the board of a $10-100B tech company.
Microsoft is reported to be leading the pressure campaign to get Altman reinstated. No one on the board has actually said anything about considering taking him back, everyone's just citing unnamed sources or "people close to Altman" which doesn't exactly make them neutral parties with no interest in trying to control the narrative.
I'll be highly disappointed if the board relents, but not terribly surprised.
If the board was being principled and really did think Altman was choosing profits over safety, or had some other legitimate reason, I think they need to get that out there sooner rather than later. Without letting people know the exact reason, they're just letting Altman and his supporters control the story, and they're doing a great job of making the board seem incompetent. And honestly, the board not giving a concrete and specific reason for such an abrupt and unusual firing in the first place is pretty incompetent in and of itself, so who knows. I'd like to think it was a principled firing, but it could easily have been a more personal spat between Altman and Sutskever that just blew up, and neither side actually thinks they're moving too fast.
- OpenAI fires interim CEO Mira Murati, apparently for trying to re-hire Altman
- OpenAI hires cofounder/ex-CEO of Twitch as interim CEO
- He says Altman's removal was handled badly, and had nothing to do with AI safety
- OpenAI and Microsoft still BFFs, Altman and the staff who left with him all joining MS
- 500+ OpenAI employees threaten to leave unless board resigns and reinstates Altman
- Sutskever is among them
So that's all tidy and wrapped up with a bow, we probably won't hear any more about it.
(edited to add more bullet points)
- OpenAI hires cofounder/ex-CEO of Twitch as interim CEO
- He says Altman's removal was handled badly, and had nothing to do with AI safety
- OpenAI and Microsoft still BFFs, Altman and the staff who left with him all joining MS
So that's all tidy and wrapped up with a bow, we probably won't hear any more about it.
So much *insert hand-waving "nothing to see here" meme*
I'm sure the news will be buzzing about this for a while, poking and prodding, but I'm sure they have all retreated to the castle and raised the drawbridge for any follow-up questions.
Finally, with the hiring of Sam Altman, Microsoft has done the impossible! They have ushered in the year of Linux on the desktop!
*sobs quietly*
Feels like a precursor to a Microsoft acquisition
Feels like a precursor to a Microsoft acquisition
Not wrong here is my guess. Microsoft put a second-tranche equity investment into OpenAI. It was a big injection - y'know, the kind of number you expect for the next YouTube/Instagram. Turmoil in the upper ranks would suppress the stock valuation; then they'll pick over the carcass and take the engineers who further their profit motive.
OpenAI is a non-profit org though. Not sure how that plays into being acquired by a for-profit company.