[News] The AI Thread!

News updates on the development and ramifications of AI. Obvious header joke is obvious.

The Editors Protecting Wikipedia from AI Hoaxes

A group of Wikipedia editors have formed WikiProject AI Cleanup, “a collaboration to combat the increasing problem of unsourced, poorly-written AI-generated content on Wikipedia.”

The group’s goal is to protect one of the world’s largest repositories of information from the same kind of misleading AI-generated information that has plagued Google search results, books sold on Amazon, and academic journals.

“A few of us had noticed the prevalence of unnatural writing that showed clear signs of being AI-generated, and we managed to replicate similar ‘styles’ using ChatGPT,” Ilyas Lebleu, a founding member of WikiProject AI Cleanup, told me in an email. “Discovering some common AI catchphrases allowed us to quickly spot some of the most egregious examples of generated articles, which we quickly wanted to formalize into an organized project to compile our findings and techniques.”
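
For a sense of how that kind of screening works in practice, here's a minimal sketch of a catchphrase check in Python. The phrase list is an assumption for illustration only; the project's actual catalogue of tells is maintained by its editors.

```python
# Minimal sketch of the catchphrase heuristic described above.
# The AI_CATCHPHRASES list is an assumed, illustrative sample --
# WikiProject AI Cleanup maintains its own catalogue of tells.
AI_CATCHPHRASES = [
    "as an ai language model",
    "as of my last knowledge update",
    "it's important to note that",
    "rich cultural heritage",
    "stands as a testament to",
]

def flag_ai_catchphrases(text: str) -> list[str]:
    """Return the known AI catchphrases that appear in the text."""
    lowered = text.lower()
    return [phrase for phrase in AI_CATCHPHRASES if phrase in lowered]

# Example: a sentence in the style the editors say they learned to spot.
sample = ("As of my last knowledge update, the village stands as a "
          "testament to the region's rich cultural heritage.")
print(flag_ai_catchphrases(sample))
# ['as of my last knowledge update', 'rich cultural heritage',
#  'stands as a testament to']
```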

IMAGE(https://cdn.bsky.app/img/feed_thumbnail/plain/did:plc:7p7g2rae47jmgxg3ncwslczi/bafkreihbsb422yvewxu2mvkkvnfuykuh44e5e5vkhokddtie76foacoh44@jpeg)

And if you look for the opinions of teachers and professors online, they agree: things are great!

IMAGE(https://pbs.twimg.com/media/GZ80LO8asAAIA8y?format=jpg&name=large)

Elizabeth Laraki, a design partner at Electric Capital, shared a story on X this week about how her speaker photo, when shared on social media, looked slightly different than it did when she provided it to the conference she was attending. Laraki contacted the conference and discovered that their social media manager had fed the photo into an AI, which basically invented a hint of a bra underneath her clothes.

We've known for a while that many popular image generators are trained on pornographic material, as well as child sexual abuse material. We also know that generators have all sorts of biases built into them and will over-sexualize photos of women and certain races. So this is not a huge surprise.

But the thing that I find the most interesting here is that what happened to Laraki came about because the conference's social media manager wanted to better format her photo to share on social platforms. I've tried to articulate this point a few different ways over the years, but I've never been quite satisfied with it. I am continually amazed at how much of the supposed utility of generative AI is based around solving completely made-up problems created by social platforms. Can't post words online anymore because a platform like Instagram wants an image? AI can make you one. Can't edit your image to fit the aspect ratio that the platform currently wants? AI can do it. Now the platform wants video because it's decided video is worth more to advertisers? The AI can spit out a video. It's all totally arbitrary. Just bots being forced to make content to please other bots.

FFS

I didn't even notice the "hint of a bra" at first until reading the blurb. I thought it was going to be about the "hint" of the Star of David that is suddenly missing in the second picture.

iaintgotnopants wrote:

I didn't even notice the "hint of a bra" at first until reading the blurb. I thought it was going to be about the "hint" of the Star of David that is suddenly missing in the second picture.

The more I look at it, the more confused I get. Why get rid of the pockets?

Why run it through an AI at all?

That number seems high.

If the unrestrained spending continues, it could eventually result in a tech stock crash – referred to by Acemoglu as “AI winter” – as the technology falls out of favor with executives.

Blockchain, metaverse, AI... and here in reality, the world's still being run on decades-old UNIX code, Perl scripts, and Excel macros.

*Legion* wrote:

That number seems high.

If the unrestrained spending continues, it could eventually result in a tech stock crash – referred to by Acemoglu as “AI winter” – as the technology falls out of favor with executives.

Blockchain, metaverse, AI... and here in reality, the world's still being run on decades-old UNIX code, Perl scripts, and Excel macros.

My workplace just transitioned away from IBM Personal Communications, which was introduced in the late '70s.

The software we use to write reports is written in Visual Basic or Excel macros.

*Legion* wrote:

Blockchain, metaverse, AI... and here in reality, the world's still being run on decades-old UNIX code, Perl scripts, and Excel macros.

I can't remember the sci-fi series (maybe Vinge's A Deepness in the Sky?) that leverages that exact point, where interstellar spaceships run on a kludge of 200-year-old code that's been constantly patched that entire time and is a giant mess. I loved that concept.

Generative artificial intelligence is built by gobbling up mass amounts of photos, videos and text. The people who take the photos, create the videos and write the text generally hope to get paid directly for their work, and aren’t always thrilled when their work ends up in training datasets.

Some companies seem to have accepted creator outrage as a cost of doing business. Others, like software maker Adobe Inc., can’t risk alienating their core bases of customers and contributors.

For those whose work is used to build its generative AI, Adobe has been handing out cash bonuses. Photographers, illustrators and videographers who contribute to Adobe Stock, its marketplace of images from which it harvests training data, saw a second annual AI-earmarked payment hitting their accounts in recent weeks.

Payouts ranged from barely enough to cover breakfast to a few thousand dollars for major video contributors, according to posts on industry forums. Sums were based on whose work was most useful in building the model and on the success of the stock business broadly, Adobe Chief Strategy Officer Scott Belsky said in an interview.

Much of the stock content comes from a small share of professional contributors for whom totals are “actually pretty significant,” Belsky said. He compared it to the way Taylor Swift probably dominates streaming revenue on a service such as Spotify.

It’s all part of a difficult balancing act for Adobe in rapidly developing AI features without creative professionals feeling like their skills are becoming useless or their jobs are at risk.

At its annual product conference this week, Adobe announced free AI training courses and additional tools for artists to credential their work, and argued that job opportunities in the industry are increasing. The company calls its steps in developing AI features for apps like Photoshop, which have been used billions of times, “the most creator-friendly approach in the industry.”

Other major media marketplaces also have cut contributors in on some AI revenue. Shutterstock Inc. has licensed its library for AI training to firms like OpenAI and Meta Platforms Inc. Getty Images Holdings Inc. has struck a deal to help provide training data for Canva Inc., an emerging startup rival of Adobe.

Getty is seen as a leader in protecting artists’ rights and is comfortable in the value it offers creators, said Chief Product Officer Grant Farhall. He pointed toward the company’s ongoing copyright infringement lawsuit against startup Stability AI, which makes Stable Diffusion, a popular AI image-generation tool. Shutterstock declined to comment.

Adobe’s Belsky said feedback he receives “suggests that we're paying better” than the other marketplaces pay creators.

Still, not all Adobe contributors are thrilled. You can’t opt out of having submissions to Adobe Stock used for AI training, and some wonder about the long-term viability of their careers as an increasing number of images can be created by prompting an AI tool. Today, there are more than 152 million pieces of AI-generated content on Adobe Stock, amounting to 28% of the total library.

As creators shared their payouts online, one photographer who has been on the forum for nearly a decade said he wasn’t sure how to feel. “It’s still disappointing, because I am training my competition.”—Brody Ford

Apropos of nothing, I was in Manhattan on Monday, and walked past an ad for a tattoo shop that had CLEARLY been made with AI, and... well... future's gonna future.

I mean, anyone paying attention saw this coming.

Microsoft introduces ‘AI employees’ that can handle client queries

Microsoft is introducing autonomous artificial intelligence agents, or virtual employees, that can perform tasks such as handling client queries and identifying sales leads, as the tech sector strives to show investors that the AI boom can produce indispensable products.

The US tech company is giving customers the ability to build their own AI agents as well as releasing 10 off-the-shelf bots that can carry out a range of roles including supply chain management and customer service.

Early adopters of the Copilot Studio product, which launches next month, include the blue chip consulting firm McKinsey, which is building an agent to process new client inquiries by carrying out tasks such as scheduling follow-up meetings. Other early users include law firm Clifford Chance and retailer Pets at Home.

Microsoft is flagging AI agents, which carry out tasks without human intervention, as an example of the technology’s ability to increase productivity – a measure of economic efficiency, or the amount of output generated by a worker for each hour worked.

The Microsoft chief executive, Satya Nadella, who unveiled the AI agents at a company event in London, said the tool would reduce “drudgery” and raise productivity by freeing up time to carry out more valuable tasks.

“These tools are fundamentally changing outsourcing, increasing value and reducing waste,” he said.

Nadella described Copilot Studio, which does not require coding expertise from its users, as a “no-code way for you to be able to build agents”. Microsoft is powering the agents with several AI models developed in-house and by OpenAI, the developer of ChatGPT.

Microsoft is also developing an AI agent that can carry out transactions on behalf of users. The company’s head of AI, Mustafa Suleyman, has said he has seen “stunning demos” where the agent makes a purchase independently, but that it has also suffered “car crash moments” in development. Suleyman added, nonetheless, that an agent with these capabilities will emerge “in quarters, not years”.

Asked about fears of AI’s impact on employment, Charles Lamanna, a corporate vice-president at Microsoft, told the Guardian that agents would do away with the “mundane, monotonous” aspects of a job.

“I think it’s much more of an enabler and an empowerment tool than anything else,” he said.

Lamanna said the advent of AI tools such as agents in the modern office environment is comparable to the arrival of personal computers several decades ago.

“The personal computer didn’t show up on every desk to begin with but eventually it was on every desk because it brought so much capability and information to the fingertips of every employee,” he said.

“We think that AI is going to have the same type of journey. It’s showing up in a subset of departments and processes, but it’s only a matter of time till it shows up to all parts of an organisation.”

Andrew Rogoyski, a director at the Institute for People-Centred AI at the University of Surrey, said AI agents could help tech companies produce a return for investors who backed the technology strongly. In June, Goldman Sachs asked whether a $1tn investment in AI over the next few years would “ever pay off”.

“AI companies have consumed a lot of investment money and need to generate some returns,” said Rogoyski. “Assistive agents is a way of showing everyday benefits, although how much revenue these will generate is an open question.”

However, he issued a warning that agents have been discussed as a concept for years but that “we’ve yet to deliver an agent that is as capable as a human worker”.

They need to be really, really careful about AI chatbots providing info.

Airline held liable for its chatbot giving passenger bad advice - what this means for travellers

Spoiler:

When Air Canada's chatbot gave incorrect information to a traveller, the airline argued its chatbot is "responsible for its own actions".

Artificial intelligence is having a growing impact on the way we travel, and a remarkable new case shows what AI-powered chatbots can get wrong – and who should pay. In 2022, Air Canada's chatbot promised a discount that wasn't available to passenger Jake Moffatt, who was assured that he could book a full-fare flight for his grandmother's funeral and then apply for a bereavement fare after the fact.

According to a civil resolution tribunal decision last Wednesday, when Moffatt applied for the discount, the airline said the chatbot had been wrong – the request needed to be submitted before the flight – and it wouldn't offer the discount. Instead, the airline said the chatbot was a "separate legal entity that is responsible for its own actions". Air Canada argued that Moffatt should have gone to the link provided by the chatbot, where he would have seen the correct policy.

The British Columbia Civil Resolution Tribunal rejected that argument, ruling that Air Canada had to pay Moffatt $812.02 (£642.64) in damages and tribunal fees. "It should be obvious to Air Canada that it is responsible for all the information on its website," read tribunal member Christopher Rivers' written response. "It makes no difference whether the information comes from a static page or a chatbot." The BBC reached out to Air Canada for additional comment and will update this article if and when we receive a response.

Gabor Lukacs, president of the Air Passenger Rights consumer advocacy group based in Nova Scotia, told BBC Travel that the case is being considered a landmark one that potentially sets a precedent for airline and travel companies that are increasingly relying on AI and chatbots for customer interactions: Yes, companies are liable for what their tech says and does.

"It establishes a common sense principle: If you are handing over part of your business to AI, you are responsible for what it does," Lukacs said. "What this decision confirms is that airlines cannot hide behind chatbots."

Air Canada is hardly the only airline to dive head-first into AI – or to have a chatbot go off the rails. In 2018, a WestJet chatbot sent a passenger a link to a suicide prevention hotline, for no obvious reason. This type of mistake, in which generative AI tools present inaccurate or nonsensical information, is known as "AI hallucination". Beyond airlines, more major travel companies have embraced AI technology, ChatGPT specifically: In 2023, Expedia launched a ChatGPT plug-in to help with trip planning.

Lukacs expects the recent tribunal ruling will have broader implications for what airlines can get away with – and highlights the risks for businesses leaning too heavily on AI.

How air travellers can protect themselves

In the meantime, how can passengers stand guard against potentially wrong information or "hallucinations" fed to them by AI? Should they be fact-checking everything a chatbot says? Experts say: Yes, and no.

"For passengers, the only lesson is that they cannot fully rely on the information provided by airline chatbots. But, it’s not really passengers' responsibility to know that," says Marisa Garcia, an aviation industry expert and senior contributor at Forbes. "Airlines will need to refine these tools further [and] make them far more reliable if they intend for them to ease the workload on human staff or ultimately replace human staff."

Garcia expects that, over time, chatbots and their accuracy will improve, "but in the meantime airlines will need to ensure they put their customers first and make amends quickly when their chatbots get it wrong," she says – rather than let the case get to small claims court and balloon into a PR disaster.

Travellers may want to consider the benefits of old-fashioned human help when trip-planning or navigating fares. "AI has advanced rapidly, but a regulatory framework for guiding the technology has yet to catch up," said Erika Richter of the American Society of Travel Advisors. "Passengers need to be aware that when it comes to AI, the travel industry is building the plane as they're flying it. We're still far off from chatbots replacing the level of customer service required – and expected – for the travel industry."

Globally, protections for airline passengers are not uniform, meaning different countries have different regulations and consumer protections. Lukacs notes that Canadian passenger regulations are particularly weak, while the UK, for example, has the Civil Aviation Authority and inherited regulations from the 2004 European Council directive.

"It's important to understand that this is not simply about the airlines," he said. Lukacs recommends passengers who fall victim to chatbot errors take their cases to small claims court. "They may not be perfect, but overall a passenger has a chance of getting a fair trial."

There’s something off about this year’s “fall vibes”

A rain-soaked street at dusk, pictured through the window of a coffee shop. String lights hang between old brick buildings, a church steeple in the distance. In the foreground, a candlelit table with mugs of coffee, tea, and … a corked glass jug of beige liquid? Next to a floating hunk of sourdough? And also the table is covered in water?

This is the platonic ideal of “autumn,” according to one photo that’s gone viral both on X, where it’s been seen almost 12 million times, and Pinterest, where it’s the first picture that comes up when you search “fall inspo.” At first glance, you’d be forgiven for thinking it’s a tiny street in Edinburgh or the part of Boston that looks like Gilmore Girls. But like so many other viral autumnal vibes photos this year, the image, with its nonsensical details and uncanny aura, appears to be AI-generated.

AI “autumn vibes” imagery makes up a ton of the most popular fall photos on Pinterest right now, from a moody outdoor book display on yet another rain-soaked street to a sunlit farmers market to several instances of coffee cups perched on tousled bedspreads. All of them appear normal until you zoom in and realize the books don’t contain actual letters and the pillows are actually made of bath mat material.

It’s not just limited to Pinterest or “vibes”: AI-generated content is now infiltrating social media in ways that have a meaningful impact on people’s lives. Knitters and crocheters hoping to craft fall sweaters are being inundated with nonsensical AI patterns and inspo images on Reddit. A fake restaurant has gained 75,000 followers on Instagram by claiming to be “No. 1 in Austin” and posting over-the-top seasonal food items like a croissant shaped like Moo Deng. Meanwhile, folks hoping to curl up with a cozy fantasy novel or a bedtime story for their kids are confronted with a library of ChatGPT-generated nonsense “written” by nonexistent authors on the Kindle bookstore, while their YouTube algorithms serve them bot-generated fall ambiance videos. Autumn, it seems, is being eaten by AI.

Not everyone is — and please excuse the following pun — falling for it. When the fake cafe photo went viral on X, it caused a deluge of quote-tweets asking why the hell anyone needed to use AI when you could just as easily post one of the many actual photos taken in real cities that do, in fact, look like this.

Colloquially, all this garbage is widely considered “slop,” a term for the spammy AI-generated images, text, and videos that clog up internet platforms and make it more difficult and unpleasant than ever to be online. In reality, this moment of peak slop is the natural culmination of platforms that incentivize virality and engagement at all costs — no matter how low-quality the content happens to be. But the crux of the issue now is the sheer scale of it: Scammers and spammers can unleash a barrage of text and images with the click of a button, so searches for legitimate information or a casual scroll through social media require even more time and effort to bypass the junk. Misinformation about crucial news events and election coverage is spreading on platforms. Academic and literary publications are being spammed with low-quality submissions, making it harder to suss out genuine creative or scholarly work.

Speaking as a beginner artist, the sheer amount of AI slop clogging the internet now is obvious and frustrating, in that it's made finding good reference increasingly difficult, if not impossible for certain search terms. And it's extra frustrating because the shit can't even churn out an actual decent reference image if I asked it to.

The time it would take to get Dall-E to actually spit out a good, dynamic pose is exponentially longer than just googling it is. Or was.

Yeah I saw AI art used in Japanese restaurants while abroad this month. Some of it was kind of cool but there were definitely uncanny elements that really detracted from the impression once you noticed them.

AI images are generating outrage amongst students in our high school examinations this year. Guardian link here, although the outrage is that examiners can use AI but students cannot.

Prederick wrote:

The time it would take to get Dall-E to actually spit out a good, dynamic pose is exponentially longer than just googling it is. Or was.

Not sure if it will be helpful, but https://www.adorkastock.com/ is a good place to look for poses for reference.

Can A.I. Be Blamed for a Teen’s Suicide?

(NY Times Paywall)

The article explains that the teen became emotionally attached to the AI chatbot; however, the chatbot never directly pushed the teen toward suicide.

It's the lack of proper guidance/intervention from the platform that is being challenged.

It's good to get the occasional reminder that my bubble online and Real Life are not the same, as I'm currently overhearing one of the Union guys at my job gush about how awesome he thinks AI is.

EDIT: This was made worse by doing a YT search for "Latin/Cuban Jazz" today to listen to while in the office, and getting recommended a bunch of AI slop. At least it does make me more appreciative of the people actually willing to do the work and find the actual music made by the actual artists for us all to enjoy and learn about.

Disney Poised to Announce Major AI Initiative | Exclusive

The Walt Disney Co. is poised to announce a major AI initiative that will transform its creative output, individuals with knowledge told TheWrap on Thursday.

The initiative is said to involve “hundreds” of people at the company and will primarily focus on post-production and visual effects.

One of the individuals said it would also involve parks and experiences, though nothing customer-facing.

A Disney spokesman declined to comment for this story.

A company insider told TheWrap that Disney was working on its own AI initiatives but not as expansively as suggested by the other sources. The insider said it was “too early” to say when an announcement was coming.

The news was quickly traveling through tech circles and Wall Street, even with few details immediately available.

Disney follows Lionsgate, which struck a deal with AI company Runway in September to aid in “augmenting” work in the pre- and post-production process on films and television series.

“Disney has always leaned into technology partnerships,” LightShed Ventures analyst Rich Greenfield told TheWrap. “It makes a tremendous amount of sense that Disney is heavily focused on this, but also putting substantial resources behind it.”

The announcement will mark a sea change in the industry, as one of Hollywood’s biggest companies embraces AI at a time when Hollywood is grappling with how to use the new technology amid pushback from the creative community.

Disney CEO Bob Iger has previously said AI is a tool like any other. “Walt Disney himself was a big believer in using technology in the early days to tell better stories. And he thought that technology in the hands of a great storyteller was unbelievably powerful,” Iger said at the Canva Create showcase in May.

“Don’t fixate on its ability to be disruptive — fixate on [tech’s] ability to make us better and tell better stories. Not only better stories, but to reach more people,” Iger said at the time.

He continued: “You’re never going to get in the way of it. There isn’t a generation of human beings that has ever been able to stand in the way of technological advancement,” Iger said. “What we try to do is embrace the change that technology has created, and use it as the wind behind our backs instead of wind in our faces.”

Disney is uniquely poised to integrate AI into its operations, as one of the most diversified and data-intensive entertainment companies on earth, producing countless models and collecting tons of data about everything from the way that guests of its theme parks spend their money (and time) to what you’re watching on Disney+, the company’s direct-to-consumer streaming platform.

An imminent announcement of an AI partnership will surely produce blowback from the creative community, especially if the initiative will mean cuts to creative departments that are already feeling the pinch. The company eliminated more than 4,000 staff members (“cast members” in Disney-speak) in the spring of 2023 and increased its target to 8,000. (It ended up with about 7,000 layoffs by the end of the year.) The last round of layoffs happened in September and impacted roughly 300 people.

While AI is commonly utilized in Disney productions – everything from calculating the way that Ember’s fire moved in “Elemental” to creating a more lifelike young Luke Skywalker in “The Mandalorian” – a concerted effort from Disney to use the technology in all aspects of production is a significant shift.

What makes this announcement even more of a hot-button topic is the move to unionize various visual effects departments, from the Marvel Studios team to the group behind “Avatar,” with AI being a sticking point.

Man who made 'depraved' child images with AI jailed

A student who used AI technology "in the worst possible way" to turn photographs of real children into "depraved" indecent images has been jailed for 18 years.

Hugh Nelson, 27, from Bolton, used a computer programme to generate the images, which he shared and sold online to other paedophiles over an 18-month period, making £5,000.

Bolton Crown Court heard the graphic design student, who pleaded guilty to 16 child sexual abuse offences, also encouraged the rape of children in online chatrooms.

Detective Chief Inspector Jen Tattersall, of Greater Manchester Police, said Nelson was "an extremely dangerous man who thought he could get away with what he was doing by using modern technology".

Nelson pleaded guilty to various counts of making, possessing and distributing indecent images of children and three counts of encouraging the rape of a child under the age of 13.

He also admitted to a count of attempting to cause a child under 16 to engage in sexual activity and one of publishing an obscene article.

Nelson, of Briggsfold Road, was sentenced to 18 years in jail, including six years on licence, and was placed on the sex offenders register.

Nelson's parents sat in the court's public gallery as he appeared via video link from HMP Forest Bank.

His mother wept into the crook of her arm as her son was jailed.

"There appears to have been no limit to the depth of depravity exhibited in the images that you were prepared to create and to distribute to others," Judge Martin Walsh said as he passed the sentence.

"The nature and content of the communications which you entered into is utterly chilling."

Jeanette Smith, from the Crown Prosecution Service, warned that those thinking of using AI "in the worst possible way" should be "aware that the law applies equally to real indecent photographs and AI or computer-generated images of children".

Took commissions

Nelson used a computer programme to create the images and sold or exchanged them in an encrypted chatroom for paedophiles.

The court heard he took requests from people who wanted him to create explicit images of children being harmed both sexually and physically.

His offences were uncovered when he began speaking to an undercover police officer in May last year.

Nelson told the officer he took commissions from customers for the images with some requests coming from France, Italy and the United States.

Prosecutor David Toal said: "The defendant said he had over 60 characters in total, ranging from six months to middle-aged, and he charged £80 to create a new character."

Nelson was arrested at his home in Egerton in June 2023 and his devices were seized and examined.

He told officers his offending "had got out of control" after he had met other paedophiles online.

His defence lawyer Bob Elias said he was a "lonely, socially isolated" man who had "plunged down the rabbit hole to this sort of fantasy life and became completely engrossed in it".

Derek Ray-Hill, of the Internet Watch Foundation, said: “Technology is now enabling previously unthought of violations of innocent children.

"We are discovering more and more synthetic and AI images of child sexual abuse and they can be disturbingly life-like.

“That Nelson profited from making this material to order after clients sent him images to manipulate is on another horrifying level.

"I hope this drives home the message. This material, even synthetic versions of it, is criminal.

"If you make or possess it, you are breaking the law."

Prederick wrote:

EDIT: This was made worse by doing a YT search for "Latin/Cuban Jazz" today to listen to while in the office, and getting recommended a bunch of AI slop. At least it does make me more appreciative of the people actually willing to do the work and find the actual music made by the actual artists for us all to enjoy and learn about.

Most lo-fi is already derivative "lullabies for adults" crap, and I'm annoyed by this.

JFC!!

https://www.youtube.com/live/twGSTXL...

Watch Elon try to describe the current and future state of AI. Unfounded predictions and warnings. Just utter nonsense from start to finish.

It's fun to technically be on the same side of an issue as someone you dislike, but you have to be like "all of his reasoning is stupid and shitty though."

Adorkastock is great - except for their Martial Arts and weaponry poses, which are just ridiculous for the most part. Why they can't find a friend with MA experience, I don't know. It's painful to see.

Aww, I feel bad for them. Macnas, the theatre company they claimed was doing the parade, are awesome, and I would definitely have come out for them. Macnas have been doing these for decades now and have got really good at putting them on.

On Sunday they did do a parade in Galway. My brother was there, said it was class.

Instagram offered me a host of AI characters to chat with today. My favorite was the busty, female, vaguely Sonic the Hedgehog-looking bot, whose text bubble started with "Hi, Let's chat daddy."

TV Writers Found 139,000 of Their Scripts Trained AI. Hell Broke Loose

Whenever AI came up during last year’s WGA and SAG-AFTRA strikes, it was a contentious issue, but one that seemed to exist as an abstraction, fodder for pithy picket signs.

But last year’s theoretical fear became a real, deeply personal one with last week’s discovery by The Atlantic of more than 139,000 TV and film scripts in a data set being used to train AI. It set writer group chats aflame, and apparently no one was safe from having their work hoovered up by AI, with the search function built by The Atlantic revealing that AI had used 508 scripts credited to Shonda Rhimes, 346 from Ryan Murphy and 742 of Matt Groening’s episodes of Futurama and The Simpsons. (Showrunners and writers spent much of this past week frantically typing their names into the search field, coming back horrified.)

The training data isn’t uploaded scripts but rather subtitles from those TV episodes and movies, sourced from a site called OpenSubtitles.org. If you’re wondering if your show or film’s script is floating around in this data set, search here.

Writer and programmer Alex Reisner, who built The Atlantic’s search tool to examine the data, wrote:

I can now say with absolute confidence that many AI systems have been trained on TV and film writers’ work. Not just on The Godfather and Alf, but on more than 53,000 other movies and 85,000 other TV episodes: Dialogue from all of it is included in an AI-training data set that has been used by Apple, Anthropic, Meta, Nvidia, Salesforce, Bloomberg, and other companies. I recently downloaded this data set, which I saw referenced in papers about the development of various large language models (or LLMs). It includes writing from every film nominated for Best Picture from 1950 to 2016, at least 616 episodes of The Simpsons, 170 episodes of Seinfeld, 45 episodes of Twin Peaks, and every episode of The Wire, The Sopranos, and Breaking Bad.

(If you’re wondering why you can’t find any newer films or titles from newer streaming services like Apple TV+ or Disney+, the subtitles were extracted in 2018, Reisner tells me.)

“I’m livid. I’m completely outraged. It’s disgusting,” Teen Titans’ David Slack tells me after discovering 42 of his credited scripts in the database, including ones for Person of Interest, Lie to Me and In Plain Sight. “It’s a huge amount of my work . . . These are things that I poured my heart and soul into.”