
News updates on the development and ramifications of AI. Obvious header joke is obvious.
Another take on building trusted AI systems and avoiding hallucinatory results, from Palantir.
This one is also based on human-in-the-loop design, as well as making the inner workings and process flows transparent.
Critics argue that if the only way your system can function is by using work against people’s wishes, then perhaps the system itself is fundamentally morally flawed.
There's really no "perhaps" about it.
The Fanfic Sex Trope That Caught a Plundering AI Red-Handed
Sudowrite, a tool that uses OpenAI's GPT-3, was found to have understood a sexual act known only to a specific online community of Omegaverse writers.
Very embarrassed that I realized upon reading the subhead that I knew exactly what trope they were talking about. I'm going to declare this article moderately NSFW, due to the topic being discussed, even if it's a perfectly SFW article in a reputable publication.
I don't get that Wired article - isn't a popular fandom trope exactly the sort of thing you'd expect LLMs to know about? It seemed like the author wanted to write about the ethics of training on open sources but stuck in a bunch of stuff about smut fandom to be salacious.
I don't get your complaint. The trope isn't just something popular in fanfic; it's something found only in certain kinds of fanfic. The article focuses on that specific trope because of the recent Reddit post that goes into how Sudowrite knowing about it means OpenAI included specific fanfic sites in GPT-3's training set. OpenAI has not publicly detailed any of what it used in its training set, so that's news in itself, but that knowledge is also making lots of fanfic writers rethink how freely they share their non-commercial work, which matters culturally.
How do I switch to the future where robots do all the manual labor and people do all the writing and art jobs and not the other way around?
OpenAI has not publicly detailed any of what it used in its training set, so that's news in itself,
GPT's pretraining datasets are listed on its Wikipedia page. It's mostly crawled web content, so it's safe to assume that any notable fandom is in there somewhere, but the lists are openly available if you want to check.
(If you're feeling any deja vu it's because you and I had this same conversation about stable diffusion, in the other AI thread.)
----
Edit: if anyone besides me was struggling to figure out why this Omegaverse stuff sounds familiar, it finally came to me. There was a big thing a couple of years ago where some of the fandom authors were suing each other over rights to certain parts of the trope; then a video essayist covered it and went super viral; then one of the fandom authors tried to sue her, etc. Here's the rather epic video; it's a pretty wild ride.
There's a worthwhile difference between knowing that something could have been scraped and knowing that something certainly has been scraped.
No part of Snow Crash, Neuromancer or Cyberpunk 2077 was this depressing.
Au contraire - every bit of every one of those pieces of work is suffused with the omnipresence of advertising and corporatism.
"Whether anyone asked for them or not" seems to sum up a vast swathe of tech
Huh, is my hatred of fascism from personal insecurities? Or hatred of macaroni and cheese? The 1988 Los Angeles Dodgers?
Huh, is my hatred of fascism from personal insecurities? Or hatred of macaroni and cheese? The 1988 Los Angeles Dodgers?
No.
So, the latest debacle in overly relying on AI is apparently a US attorney facing a show-cause order for using GPT-3 to draft submissions, with the AI citing non-existent cases as authority for those submissions.
I was in a tax appeal case last week and, it would seem, neither our revenue authority nor we as the opposing party could find a judicially decided case on point. I say this with some relief, because months ago I'd been researching with no success and was doubting myself. But hearing that the government's legal team (which outnumbered us more than 2:1 in the courtroom) also had no luck was highly comforting. Judges rely on lawyers to do the right thing in this regard, and in any event, any false authority should be picked up pretty quickly, as experienced practitioners immediately search for citations and references.
The other stupid part about this is that hearing preparation requires the lawyers to compile copies of the cases cited and relied upon; at that point, it would become patently obvious if a phantom case had been cited.
"Artificial intelligence is an extinction-level event!" say people who stand to make a lot of money by pushing how powerful AI is.
If all it takes to end humanity is Clippy: Deluxe Edition, then honestly it's time. Hell, all of those listed bullet points are just features OpenAI wants to sell companies on, dressed up to be scary and therefore seem worth the cash they want for it. And of them, only the top two seem all that likely to be real issues, and the third is laughable: "This could end life on the planet! So we have to make sure everyone has it so that that doesn't happen," which we all know is true and is the reason we all have nuclear warheads stored in our closets. To make sure no one accidentally sets one off.
The latest iteration of AI bros not understanding anything is the artistic boondoggle of "What if you could see the rest of the Mona Lisa?", because these bros don't even understand that composition is part of what makes art.
The results are... something. Goggle-eyed wonder at a picture that doesn't even answer the question of what you'd see in the rest of the Mona Lisa. Unless she actually was a disembodied torso floating in a barren landscape.
Others in the series include: What if Botticelli forgot how light worked?
See this nonsense here: https://twitter.com/heykody/status/1...
Artificial intelligence could lead to extinction, experts warn
Not if human intelligence gets us there first.
I have been wondering what would happen if you told one of these AI things to "kill all humans" every day. Like, how long before it actually does it? What method would it use?
I have been wondering what would happen if you told one of these AI things to "kill all humans" every day. Like, how long before it actually does it? What method would it use?
Wait long enough.