[News] The AI Thread!

News updates on the development and ramifications of AI. Obvious header joke is obvious.

Drake’s AI clone is here — and Drake might not be able to stop him

A certain type of music has been inescapable on TikTok in recent weeks: clips of famous musicians covering other artists’ songs, with combinations that read like someone hit the randomizer button. There’s Drake covering singer-songwriter Colbie Caillat, Michael Jackson covering The Weeknd, and Pop Smoke covering Ice Spice’s “In Ha Mood.” The artists don’t actually perform the songs — they’re all generated using artificial intelligence tools. And the resulting videos have racked up tens of millions of views.

For Jered Chavez, a college student in Florida, the jump from messing around with AI tools one night to having a wildly viral hit came in late March. He posted a video featuring Drake, Kendrick Lamar, and Ye (formerly Kanye West) singing “Fukashigi no Karte,” a theme song of an anime series. It’s collected more than 12 million views in the month since.

Chavez has been generating new clips at a steady rate since then, getting millions more views across dozens of videos by running a cappella versions of songs through AI models that are trained to sound like the most recognizable musicians in the world. TikTok loves them, and they are cheap, quick, and simple to make.

“I was very surprised [at] how easy it was. Right out of the AI, it sounds pretty good. It sounds real,” Chavez says of the process. “It’s honestly kind of scary how easy these things are to do.”

So far, platforms haven’t removed Chavez’s videos, but the threats could be coming soon — if big artists and labels can figure out how to stop him.

Music industry power players are already getting other AI-generated music pulled from streaming services by citing copyright infringement. But that argument is far from straightforward, legal experts say. There’s no precedent for whether real Drake can stop robot Drake on the basis of copyright — and yet copyright has once again become the most effective way to yank something off the internet that someone doesn’t like.

“It’s easy to use copyright as a cudgel in this kind of circumstance to go after new creative content that you feel like crosses some kind of line, even if you don’t have a really strong legal basis for it, because of how strong the copyright system is,” Nick Garcia, policy counsel at Public Knowledge, says.

That’s the case with perhaps the most notable AI-generated song so far, “Heart on My Sleeve,” which went viral earlier this month for its somewhat convincing pantomime of a Drake and The Weeknd song. The song, posted by an anonymous TikToker going by the name of Ghostwriter, amassed millions of streams before Spotify, Apple Music, TikTok, and YouTube removed it. In the case of YouTube, the culprit for removal was what felt like an unforced error: the otherwise original song inexplicably included a Metro Boomin production tag at the beginning. Universal Music Group claimed it was an unauthorized sample and successfully got the song pulled. In this case, a copyright claim worked — but just barely. Other original songs, like an AI Drake song called “Winter’s Cold,” have been pulled from streaming platforms, too, based on alleged copyright infringement.

And here's a roundup of three other stories.

Wendy’s, Google Train Next-Generation Order Taker: an AI Chatbot

Wendy’s is automating its drive-through service using an artificial-intelligence chatbot powered by natural-language software developed by Google and trained to understand the myriad ways customers order off the menu.

With the move, Wendy’s is joining an expanding group of companies that are leaning on generative AI for growth.

The Dublin, Ohio-based fast-food chain’s chatbot will be officially rolled out in June at a company-owned restaurant in Columbus, Ohio, Wendy’s said. The goal is to streamline the ordering process and prevent long lines in the drive-through lanes from turning customers away, said Wendy’s Chief Executive Todd Penegor.

Wendy’s didn’t disclose the cost of the initiative beyond saying the company has been working with Google in areas like data analytics, machine learning and cloud tools since 2021.

“It will be very conversational,” Mr. Penegor said about the new artificial intelligence-powered chatbots. “You won’t know you’re talking to anybody but an employee,” he said.

AI startup Anthropic wants to write a new constitution for safe AI

Anthropic is a bit of an unknown quantity in the AI world. Founded by former OpenAI employees and keen to present itself as the safety-conscious AI startup, it’s received serious funding (including $300 million from Google) and a space at the top table, attending a recent White House regulatory discussion alongside reps from Microsoft and Alphabet. Yet the firm is a blank slate to the general public; its only product is a chatbot named Claude, which is primarily available through Slack. So what does Anthropic offer, exactly?

According to co-founder Jared Kaplan, the answer is a way to make AI safe. Maybe. The company’s current focus, Kaplan tells The Verge, is a method known as “constitutional AI” — a way to train AI systems like chatbots to follow certain sets of rules (or constitutions).

Creating chatbots like ChatGPT relies on human moderators (some working in poor conditions) who rate a system’s output for things like hate speech and toxicity. The system then uses this feedback to tweak its responses, a process known as “reinforcement learning from human feedback,” or RLHF. With constitutional AI, though, this work is primarily managed by the chatbot itself (though humans are still needed for later evaluation).
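The contrast between the two training loops can be sketched in toy form. This is purely an illustrative assumption, not Anthropic's or OpenAI's actual code: both schemes boil down to generating candidate replies, scoring them, and reinforcing the best one; they differ in whether the score comes from human raters or from the model judging its own output against written rules.

```python
# Toy sketch of the two feedback loops described above -- an illustrative
# assumption, not a real implementation. Both reduce to: generate
# candidates, score them, reinforce the highest scorer.

CONSTITUTION = [
    "avoid toxic language",
    "avoid hateful language",
]

def human_feedback_score(response: str) -> float:
    """RLHF stand-in: pretend a human rater flags bad output."""
    return 0.0 if "toxic" in response else 1.0

def self_critique_score(response: str) -> float:
    """Constitutional-AI stand-in: the model critiques its own output
    against written rules. A real system would prompt the model to judge
    each rule; here we just keyword-match the rule's key term."""
    banned = {rule.split()[1] for rule in CONSTITUTION}  # {"toxic", "hateful"}
    violations = sum(word in response for word in banned)
    return max(0.0, 1.0 - violations)

def pick_best(candidates, score):
    """The reinforcement step in miniature: keep the highest-scoring reply."""
    return max(candidates, key=score)

candidates = ["a toxic reply", "a polite reply"]
print(pick_best(candidates, human_feedback_score))  # a polite reply
print(pick_best(candidates, self_critique_score))   # a polite reply
```

Either way the feedback signal steers the model's future outputs; constitutional AI just moves most of the rating work from human moderators to the model itself.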

Humane’s new wearable AI demo is wild to watch — and we have lots of questions

Another take on building trusted AI systems and avoiding hallucinatory results, from Palantir.

This is also based on a human-in-the-loop approach, as well as making the inner workings and process flows transparent.

The Fanfic Sex Trope That Caught a Plundering AI Red-Handed

Sudowrite, a tool that uses OpenAI’s GPT-3, was found to have understood a sexual act known only to a specific online community of Omegaverse writers.

Very embarrassed that I realized upon reading the subhead that I knew exactly what trope they were talking about. I'm going to declare this article moderately NSFW, due to the topic being discussed, even if it's a perfectly SFW article in a reputable publication.

Critics argue that if the only way your system can function is by using work against people’s wishes, then perhaps the system itself is fundamentally morally flawed.

There's really no "perhaps" about it.


AI is Silicon Valley's desperate, last-ditch attempt to avoid a stock market wipeout

Silicon Valley has entered the Hail Mary phase of its business cycle — the stage of a tech-industry downturn where desperation can turn into recklessness.

The biggest players of the last decade are facing an existential crisis as their original products lose steam and seismic shifts in the global economy force them to search for new sources of growth. Enter generative AI — algorithms like the viral program ChatGPT that seem to mimic human intelligence by spitting out text or images. While everyone in Silicon Valley is suddenly, ceaselessly talking about this new tech, it is not the kind of artificial intelligence that can power driverless cars, or Jetson-like robot slaves, or bring about the singularity. The AI that companies are deploying is not at that world-changing level yet, and candidly, experts will tell you it's unclear if it ever will be. But that hasn't stopped the tech industry from trying to ride the wave of excitement and fear of this new innovation.

As soon as it was clear that OpenAI, the creator of ChatGPT, had a cultural hit, it was off to the races. Hoping to cash in on the craze, Microsoft poured $10 billion into OpenAI in January and launched an AI-powered version of its search engine, Bing, soon after. Google has scrambled to keep up, launching its own AI chatbot, Bard, in March. Nearly every other major tech company has followed suit, insisting that its business will be at the forefront of the AI revolution. Venture capitalists — who've been miserly with their money since the market turned last year — have started writing checks for AI startups. And in a surefire sign that something has exploded beyond recognition, Elon Musk started claiming the whole thing was his idea all along.

All of this hype is more of a billionaire ego brawl than a real revolution in technology, according to an AI startup consultant and longtime researcher who spoke on condition of anonymity in order to discuss products in development candidly. "I hate to frame the story as another gang of bros, but that's what OpenAI is," they said. "They're going through riffs and tiffs." To get a piece of that sweet AI-craze money, even the most powerful tech moguls are trying to make it seem as if their company is the real leader in AI, embracing the timeless truth passed down by Will Ferrell's fictional race car driver Ricky Bobby: "If you ain't first, you're last."

Wall Street, never one to miss a trend, has also embraced the AI hype. But as Daniel Morgan, a senior portfolio manager at Synovus Trust, said in an interview with Bloomberg TV, "This AI hype doesn't really trickle down into any huge profit growth. It's just a lot of what can happen in the future." AI-driven products are not bringing in big bucks yet, but the concept is already pumping valuations.

That is what makes the hype cycle a Hail Mary: Silicon Valley is hoping and praying that AI hype can keep customers and investors distracted until their balance sheets can bounce back. Sure, rushing out an unproven new technology to distract from the problems of the tech industry and global economy may be a bit ill-advised. But, hey, if society suffers a little along the way, well — that's what happens when you move fast and break things.

I don't get that Wired article - isn't a popular fandom trope exactly the sort of thing you'd expect LLMs to know about? It seemed like the author wanted to write about the ethics of training on open sources but stuck in a bunch of stuff about smut fandom to be salacious.

I don't get your complaint. The trope isn't just something popular in fanfic; it's found only in certain kinds of fanfic. The article focuses on that specific trope because of the recent Reddit post laying out how Sudowrite knowing about it means OpenAI included specific fanfic sites in GPT-3's training set. OpenAI has not publicly detailed any of what it used in its training set, so that's news in itself, but that knowledge is also making lots of fanfic writers rethink how freely they share their non-commercial work, which is important culturally.

How do I switch to the future where robots do all the manual labor and people do all the writing and art jobs and not the other way around?

Stengah wrote:

OpenAI has not publicly detailed any of what it used in its training set, so that's news in itself,

GPT's pretraining datasets are listed on its Wikipedia page. It's mostly crawled web content, so it's safe to assume that any notable fandom is in there somewhere, but they're openly available if you want to check.

(If you're feeling any deja vu it's because you and I had this same conversation about stable diffusion, in the other AI thread.)


Edit: if anyone besides me was struggling to figure out why this Omegaverse stuff sounds familiar, it finally came to me - there was a big thing a couple years ago where some of the fandom authors were suing each other over rights to certain parts of the trope, then a video essayist covered it and went super viral, then one of the fandom authors tried to sue her, etc. Here's the rather epic video, it's a pretty wild ride.

There's a worthwhile difference between knowing that something could have been scraped and knowing that something certainly has been scraped.

Remember Timbaland? The music producer who got famous in the '90s? Been wondering what he's been up to?

Timbaland Hatches AI Startup That Will Give Music Stars Like Biggie Life After Death

Fresh off his controversial collaboration with the late rapper Biggie Smalls, which he previewed on Instagram earlier this month, the Grammy-winning music producer Timbaland told Forbes he has a plan to commercialize artificial intelligence software that will revolutionize how songs are made.

“It’s going to really be a new way of creating and a new way of generating money with less costs,” the influential beatmaker told Forbes in an exclusive interview. “I’m already here. This is what I’m doing. I’m going to lead the way.”

Timbaland, born Timothy Mosley, said he believes AI voice filters — which allow an artist to assume the voice of another artist — will open up an unprecedented world of creativity in music. Up-and-coming artists with a good cadence or flow, but not a great voice, could use filters to achieve more success. Established artists will be able to share AI replicas of their voices with each other to test collaborations and save time. And a producer could get exclusive rights to use the voice of “a music legend who’s no longer with us,” he said, and fans will eagerly wait for the project to drop.

Mosley said there are a host of legal issues centering on copyright and revenue-sharing to resolve before the future of music can happen, but he’s already got a startup and AI voice filter technology that he wants to sell to usher in the new era.



Mosley said part of his motivation for embracing AI voice filters so early is because Black America far too often isn’t represented in the bounty of wealth that comes with creating or investing in technologies that end up changing the world.

“We’re the culture man, so I at least got to come in the door,” he said. “Usually someone else gets to it and it blows up.”

Yes, Google’s AI-infused search engine will have ads

Google Ads is getting into the generative AI game. Today, the company unveiled products that it says will inject generative AI into its advertising business, like copywriting tools and image generators.

Perhaps most notably, it also released further details on how ads will fit into its new generative-AI search engine, something it’s calling the Search Generative Experience, which is currently available via waitlist. These ads will largely appear above or below the generative text spit out by the search engine, all labeled with a “sponsored” tag. At the moment, advertisers also won’t be able to opt in or out of the new search inventory, and the kind of ads users see will depend on the specific search query, Dan Taylor, Google’s VP of global ads, said during a press briefing.

Search is no slouch for Google—the company’s “search and other” category raked in nearly $40 billion last quarter, and its search engine commands a 91% market share in the US, according to SimilarWeb. Google first announced its search engine’s generative-AI facelift during the company’s I/O conference earlier this month, on the heels of its first real search competitor in decades: Microsoft and its ChatGPT-charged Bing.

For now, search ads within its conversational AI search engine are largely “experiments within an experiment,” Taylor said, alluding to a new program called Search Labs, where Google is testing this tech.

Taylor compared AI’s impact on advertising to the shift to mobile advertising. The company is still testing what kinds of searches merit the “generative experience” and whether it would make sense to place an ad there.

No part of Snow Crash, Neuromancer or Cyberpunk 2077 was this depressing.

Prederick wrote:

No part of Snow Crash, Neuromancer or Cyberpunk 2077 was this depressing.

Au contraire - every bit of every one of those pieces of work is suffused with the omnipresence of advertising and corporatism.

Generative AI Podcasts Are Here. Prepare to Be Bored

Here's the thing about podcasts: there are too many of them.

More than 4 million, to be precise, according to the database Podcast Index. In the past three days alone, nearly 103,000 individual podcast episodes were published online, a deluge of audio content so voluminous that listeners need never run out of options. You could spend the rest of your life working through the existing true crime catalog on Apple Podcasts or the sports chat shows on Spotify and end up dying of old age in 2070 while Michael Barbaro reads an ad for Mailchimp to your corpse.

In the ongoing generative AI gold rush, though, opportunistic entrepreneurs are looking for entry into even the most saturated markets. A wave of startups, including ElevenLabs, WondercraftAI, and Podcastle, have introduced easy-to-use tools to generate AI voices in minutes. So, as if on cue, AI podcasts are here, whether anyone asked for them or not.

In these early days, nobody’s keeping track of how many listeners this strange new genre of podcast has. Major hubs like Apple Podcasts and Spotify don’t have separate charts for robot hosts. There are, however, a few individual AI podcasts that have clearly found audiences, at least for their first crop of episodes.

The first AI-generated podcast to take off cheated a little—it used the cloned voice of the world’s most popular human podcast host. The Joe Rogan AI Experience is a series of simulations of Rogan gabbing with (equally fake) guests like OpenAI CEO Sam Altman and former president Donald Trump. Shortly after the first episode came out, the real Rogan tweeted a link to it. “This is going to get very slippery, kids,” he wrote.

On YouTube, the dupe racked up more than half a million views. Some listeners didn’t even care that it was AI. “This is actually good enough for me. Good stuff,” one wrote.

The Joe Rogan AI Experience was created by a Rogan fan named Hugo. (He declined to give WIRED his full name because he does not want to be professionally associated with the project.) He has a Patreon to support production of the show and recently turned on monetization on YouTube, but he doesn’t expect to make any real income off it—especially as he’s aware that he doesn’t have consent to use Rogan’s voice or likeness, and that podcasting platforms may end up banning this type of impersonation.

Hugo created the series because he wanted to showcase what AI voice tools can do. Although he carefully edits the episodes to make them flow for listeners—they can take days or weeks to get right—he doesn’t think the conversations themselves are particularly enthralling, even if they’re reasonably accurate imitations. “Apart from listening to the podcast because of its technological advancement, there’s no point,” Hugo says. “It’s just wasted time.”

It’s unclear whether the audience will hang around, or if they simply wanted to check out something unusual and new; Hugo has released four episodes, and each subsequent installment has pulled a smaller audience than the last.

WIRED spoke with several other creators of AI-generated podcasts who echoed Hugo's take. They enjoyed playing around with the technology, but they consider the end results a byproduct of experimentation. Israel-based sound engineer Lior Sol, for example, created a trippy podcast called Myself, I Am and That using ElevenLabs’ tools. He made a clone of his voice and then a clone of that clone in an extremely meta conversation. “I’m definitely having fun with it,” he says. But that doesn’t mean he’s chasing big audiences. Right now, his listeners number in the dozens. His friends like it, he likes it—it’s an art project, and a chance to fiddle around with new tech, not an attempt to make something commercial.

This makes me chuckle because my podcast subscriptions are already like my Netflix queue: 4 I actually listen to regularly, and like 25 I'm "getting around to."

Like sure, toss in some garbage AI churn in there, I'll get to it in 2035. Maybe. If I'm off that day. And I'm not busy. And I don't get distracted by doing literally anything else.

"Whether anyone asked for them or not" seems to sum up a vast swathe of tech.

‘Those who hate AI are insecure’: inside Hollywood’s battle over artificial intelligence

On the picket lines outside Los Angeles film studios, artificial intelligence has become a central antagonist of the Hollywood writers’ strike, with signs warning studio executives that writers will not let themselves be replaced by ChatGPT.

That hasn’t stopped tech industry players from selling the promise of a future in which AI is an essential tool for every part of Hollywood production, from budgeting and concept art, to script development, to producing a first cut of a feature film with a single press of a button.

The writers’ strike has put the spotlight on escalating tensions over whether an AI-powered production process will be a dream or a nightmare for most Hollywood workers and for their audiences.

Los Angeles’s AI boosters tout the latest disruptive technology as a democratising force in film, one that will liberate creators by taking over dull and painstaking tasks like motion capture, allowing them to turn their ideas into finished works of art without a budget of millions or tens of millions of dollars. They envision a world in which every artist has a “holographic vision board”, which will enable them to instantly see any possible idea in action.

Critics say that studio executives simply want to replace unionized artists with compliant robots, a process that can only lead to increasingly mediocre, or even inhuman, art.

All these tensions were on display last week when tech companies that specialise in AI, including Dell, Hewlett-Packard Enterprise and Nvidia, were among the sponsors of an “AI on the Lot” conference in Hollywood, which attracted an estimated 400 people to overflowing sessions about how artificial intelligence was disrupting every facet of film production. One tech investor described the mood as both high energy and high anxiety.

The day before the conference, a crowdfunded plane had flown over multiple studios with a banner message: “Pay the writers, you AI-holes.” But several speakers at the AI on the Lot conference argued that fear of artificial intelligence is for the weak.

“The people who hate it or are fearful of it are insecure about their own talent,” said Robert Legato, an Academy Award-winning visual effects expert who has worked on films like Titanic, the Jungle Book and the Lion King.

“It’s like a feeling amplifier,” said Pinar Seyhan Demirdag, an artist turned AI entrepreneur. “If you feel confident, you will excel. If you feel inferior –,” she paused. The tech crowd laughed.

Huh, is my hatred of fascism from personal insecurities? Or hatred of macaroni and cheese? The 1988 Los Angeles Dodgers?

Rat Boy wrote:

Huh, is my hatred of fascism from personal insecurities? Or hatred of macaroni and cheese? The 1988 Los Angeles Dodgers?


So, the latest debacle in over-relying on AI is apparently a US attorney getting a show cause order for using ChatGPT to draft submissions, with the AI citing non-existent cases as authority for those submissions.

I was in a tax appeal case last week and, it would seem, neither our revenue authority nor we as the opposing party could find a judicially decided case on point. I say this with some relief, because months ago I'd been researching with no success and was doubting myself. But to hear that the government's legal team (which outnumbered us more than 2:1 in the courtroom) also had no luck was highly comforting. Judges rely on lawyers to do the right thing in this regard, and in any event, any false authority should be picked up pretty quickly, as experienced practitioners should immediately search for citations and references.

The other stupid part about this is that hearing preparation requires the lawyers to compile copies of cases cited and relied upon; at that point, it would become patently obvious if a phantom case is cited.

Artificial intelligence could lead to extinction, experts warn

Artificial intelligence could lead to the extinction of humanity, experts - including the heads of OpenAI and Google DeepMind - have warned.

Dozens have supported a statement published on the webpage of the Centre for AI Safety.

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," it reads.

But others say the fears are overblown.

Sam Altman, chief executive of ChatGPT-maker OpenAI; Demis Hassabis, chief executive of Google DeepMind; and Dario Amodei of Anthropic have all supported the statement.

The Centre for AI Safety website suggests a number of possible disaster scenarios:

- AIs could be weaponised - for example, drug-discovery tools could be used to build chemical weapons

- AI-generated misinformation could destabilise society and "undermine collective decision-making"

- The power of AI could become increasingly concentrated in fewer and fewer hands, enabling "regimes to enforce narrow values through pervasive surveillance and oppressive censorship"

- Enfeeblement, where humans become dependent on AI "similar to the scenario portrayed in the film Wall-E"

"Artificial intelligence is an extinction-level event!" say people who stand to make a lot of money by pushing how powerful AI is.

If all it takes to end humanity is Clippy: Deluxe Edition, then honestly it's time. Hell, all of those listed bullet points are just features OpenAI wants to sell companies on, dressed up to be scary and therefore seem worth the cash they want for it. And of them, only the top two seem all that likely to be real issues, and the third one is laughable. "This could end life on the planet! So we have to make sure everyone has it so that that doesn't happen," which we all know is true and is the reason we all have nuclear warheads stored in our closets: to make sure no one accidentally sets one off.

The latest iteration of AI bros not understanding anything is the artistic boondoggle of "What if you could see the rest of the Mona Lisa?" - because these bros don't even understand that composition is something that makes art.

The results are... something. Goggle-eyed wonder at a picture that doesn't even answer the question of what it would look like to see the rest of the Mona Lisa. Unless she actually was a disembodied torso floating in a barren landscape.

Others in the series include: What if Botticelli forgot how light worked?

See this nonsense here: https://twitter.com/heykody/status/1...

Not if human intelligence gets us there first.

Bruce wrote:

The latest iteration of AI bros not understanding anything is the artistic

The last one I saw was a dude just totally not understanding why movies are shot and framed the way they are and just going "what if we could expand the frame in every movie?"

Like, I do think a bunch of these dudes know what they're doing and that this is just outrage bait for engagement, but simultaneously, I don't doubt that they are so lacking even the most basic understanding of art that they genuinely think they're improving things.

EDIT: Yeah, they're just baiting outrage now.

I have been wondering what would happen if you told one of these AI things to "kill all humans" every day. Like, how long before it actually does it? What method would it use?

iaintgotnopants wrote:

I have been wondering what would happen if you told one of these AI things to "kill all humans" every day. Like, how long before it actually does it? What method would it use?

Wait long enough.