[News] The AI Thread!

News updates on the development and ramifications of AI. Obvious header joke is obvious.

WIRED article: "TIRED: Generative AI"

ChatGPT: "yo what the f*ck"

Procreate’s anti-AI pledge attracts praise from digital creatives

Many Procreate users can breathe a sigh of relief now that the popular iPad illustration app has taken a definitive stance against generative AI. “We’re not going to be introducing any generative AI into our products,” Procreate CEO James Cuda said in a video posted to X. “I don’t like what’s happening to the industry, and I don’t like what it’s doing to artists.”

The creative community’s ire toward generative AI is driven by two main concerns: that AI models have been trained on their content without consent or compensation, and that widespread adoption of the technology will greatly reduce employment opportunities. Those concerns have driven some digital illustrators to seek out alternative solutions to apps that integrate generative AI tools, such as Adobe Photoshop.

“Generative AI is ripping the humanity out of things. Built on a foundation of theft, the technology is steering us toward a barren future,” Procreate said on the new AI section of its website. “We think machine learning is a compelling technology with a lot of merit, but the path generative AI is on is wrong for us.”

The announcement has already attracted widespread praise from creatives online who are discontent with how other companies have handled the increasing deluge of generative AI tools. Clip Studio Paint, a rival illustration app, scrapped plans to introduce image-generation features after the announcement was condemned by its user base. Other companies like drawing tablet maker Wacom and Magic: The Gathering-owner Wizards of the Coast have also issued apologies for (unintentionally) using AI-generated assets in their products following similar community reactions.

Even Adobe, which attempted a more “ethical” approach to building generative AI tools — having repeatedly said that its own Firefly models are trained on content that’s licensed or out of copyright — has been slammed by those who feel the company has turned its back on independent artists and creators. Adobe further clarified that it doesn’t train AI on user content in June following intense backlash over a terms of service agreement update, but other unfavorable changes introduced over the years have given it an unshakable reputation as a company that creators love to hate.

Procreate is extremely well received by comparison. The company has stuck to a $12.99 one-time purchase model instead of moving to a rolling subscription like Adobe and Clip Studio Paint did, and has expanded into offering products for animation and (eventually) desktop users. Making such a firm pledge against introducing generative AI is likely just the icing on the cake for creatives who feel alternative options are dwindling.

Cuda said, “We don’t exactly know where this story’s gonna go, or how it ends, but we believe that we’re on the right path to supporting human creativity.”

I’ve been using Procreate for over a decade; it’s a great piece of software.

Has anyone tried that Procreate animation tool? Dreams?

ruhk wrote:

I’ve been using Procreate for over a decade; it’s a great piece of software.

Wish it was available on desktop. ’Til then, I remain a CSP boy.

Prederick wrote:
ruhk wrote:

I’ve been using Procreate for over a decade; it’s a great piece of software.

Wish it was available on desktop. ’Til then, I remain a CSP boy.

Clip Studio is superior in many ways; Procreate’s primary strength is that it was designed for tablets and feels more natural with that interface. I’ve tried using CSP on both my iPad and Surface and it’s just a pain to use that way.

I remember buying Procreate for my iPad at some point, so I figured I'd boot it up for the hell of it, only to find that, despite feeling like I saw my Apple Pencil in a drawer last week, I have no idea where the damned thing is. So now I'm taking 10 million photos of my apartment to feed into ChatGPT 8 so it can help me find the most likely lottery numbers to win big, so I can buy a new iPad and Pencil. Wish me luck!

Cut to a news report about all of Kansas City burning down from the heat of training a model in the middle of summer on a botnet made up of shitty apartment smart thermostats.

Hank Green published a video about AI and copyright. A lot of it has already been brought up here, but it's a good breakdown of what's going on with AI and YouTube today.

‘Megalopolis’ Trailer’s Fake Critic Quotes Were AI-Generated, Lionsgate Drops Marketing Consultant Responsible For Snafu

Ever hear about someone doing something so pointlessly stupid you're left just kind of impressed?

Lionsgate has parted ways with Eddie Egan, the marketing consultant who came up with the “Megalopolis” trailer that included fake quotes from famous film critics.

The studio pulled the trailer on Wednesday, after it was pointed out that the quotes trashing Francis Ford Coppola’s previous work did not actually appear in the critics’ reviews, and were in fact made up.

Sources tell Variety it was not Lionsgate’s or Egan’s intention to fabricate quotes; rather, the error came from a failure to properly vet and fact-check the phrases provided by the consultant. The intention of the trailer was to demonstrate that Coppola’s revered work, much like “Megalopolis,” has been met with criticism. It appears that AI was used to generate the false quotes from the critics.

For instance, the trailer claimed that Pauline Kael wrote in the New Yorker that “The Godfather” was “diminished by its artsiness.” Kael in fact loved the movie.

When Variety prompted AI service ChatGPT to provide negative criticism about Coppola’s work from well-known reviewers, the responses provided were strikingly similar to the quotes included in the trailer.

Egan has worked closely with Adam Fogelson, the chair of Lionsgate’s film group, for more than 20 years. The two worked together at Universal and later at STX. Fogelson was chairman of Universal Pictures until 2013 and then chairman of the STX film group. Fogelson was hired as vice chair of the Lionsgate film group two years ago, and named chairman in January.

That trailer was totally off-putting already; it was horribly self-indulgent, like Coppola had removed a couple of ribs to fellate himself. So to see this is truly funny.

A banned promoter of cancer ‘cures’ was hijacked by genAI. Now the internet is ‘flooded with garbage’

Five years ago, Barbara O’Neill was permanently banned from providing any health services in New South Wales or other Australian states.

O’Neill, whose website describes her as “an international speaker on natural healing”, was found by the NSW Health Care Complaints Commission (HCCC) in 2019 to have given highly risky health advice to vulnerable people, including the use of bicarbonate soda as a cancer treatment.

Since then, her views have found a much larger audience overseas and online, supported by elements of the Seventh-day Adventist (SDA) church and US media networks. So far this year, O’Neill has spoken in the US, the UK and Ireland and advertised retreats in Thailand for thousands of dollars. A Facebook page managed in her name is promoting plans for O’Neill to tour Australia later this year despite the commission’s ruling.

But O’Neill’s story reveals not only the limits of a state health regulator. Beyond her own promotional efforts, a vast scam economy has grown up that profits from her notoriety without her authorisation.

Clips of O’Neill’s health teachings, often dating as far back as 2012, now feed a voracious economy of unaffiliated Facebook pages and groups – more than 180 at one point – that are branded with her name and share lecture clips and recipes but are outside the control of O’Neill. Many are controlled by accounts based in Morocco, but attempts to contact administrators went unanswered.

Old clips of O’Neill are being used to sell herbal teas, Celtic salt and castor oil on TikTok, as Vox found. AI-generated content of O’Neill on the app now goes even further, making up entirely new claims about her and her health advice.

Of all the uses it may have, AI has absolutely turbocharged the Internet Scam Economy.

And boosts the Dead Internet Theory.

IMAGE(https://pbs.twimg.com/media/GV25GBsXcAAkLwT?format=jpg&name=large)

IMAGE(https://pbs.twimg.com/media/GV25K_UW8AAUxcK?format=jpg&name=medium)

IMAGE(https://pbs.twimg.com/media/GV25xNbWMAA5qql?format=jpg&name=large)

If you’re asking a student a question the machine can answer, just let the machine answer it, and everyone can move on with their lives.

Prederick wrote:

IMAGE(https://i.imgur.com/MroOVwg.png)

Not my burner account, I swear.

Microsoft's Copilot falsely accuses court reporter of crimes he covered

German journalist Martin Bernklau typed his name and location into Microsoft's Copilot to see how his culture blog articles would be picked up by the chatbot, according to German public broadcaster SWR.

The answers shocked Bernklau. Copilot falsely claimed Bernklau had been charged with and convicted of child abuse and exploiting dependents. It also claimed that he had been involved in a dramatic escape from a psychiatric hospital and had exploited grieving women as an unethical mortician.

Copilot even went so far as to claim that it was "unfortunate" that someone with such a criminal past had a family and, according to SWR, provided Bernklau's full address with phone number and route planner.

I asked Copilot today who Martin Bernklau from Germany is, and the system answered, based on the SWR report, that "he was involved in a controversy where an AI chat system falsely labeled him as a convicted child molester, an escapee from a psychiatric facility, and a fraudster." Perplexity.ai drafts a similar response based on the SWR article, explicitly naming Microsoft Copilot as the AI system.

Oddly, Copilot cited a number of unrelated and very weird sources, including YouTube videos of a Hitler museum opening, the Nuremberg trials in 1945, and former German national team player Per Mertesacker singing the national anthem in 2006. Only the fourth linked video is actually from Martin Bernklau.

Urban Wolfe being an AI explains why its initial reply was gibberish.

I guess what studios are still left in California are going to relocate to the nearest AI-friendly state.

IMAGE(https://pbs.twimg.com/media/GUyUxM8WUAAetLI?format=jpg&name=small)

Google’s custom AI chatbots have arrived

Google will soon let Gemini subscribers create custom chatbots that can serve as a gym buddy, cooking partner, writing editor, and more. Users can give the chatbots — called Gems — distinct personalities and specialties by simply describing a set of instructions.

Google first introduced Gems during I/O in May. In an example prompt shown by Google, users can create a “knowledgeable, casual, and friendly” Gem that can help people plan low- or no-water gardens. For users who don’t want to create a custom chatbot right away, Google is offering some premade Gems, including a learning coach, an idea brainstormer, a career guide, a coding partner, and an editor.

North Carolina musician arrested, accused of Artificial Intelligence-assisted fraud caper

NEW YORK (AP) — A North Carolina musician was arrested and charged Wednesday with using artificial intelligence to create hundreds of thousands of songs that he streamed billions of times to collect over $10 million in royalty payments, authorities in New York said.

Michael Smith, 52, of Cornelius, North Carolina, was arrested on fraud and conspiracy charges that carry a potential penalty of up to 60 years in prison.

U.S. Attorney Damian Williams said in a news release that Smith’s fraud cheated musicians and songwriters between 2017 and this year of royalty money that is available for them to claim.

He said Smith, a musician with a small catalog of music that he owned, streamed songs created with artificial intelligence billions of times “to steal royalties.”

A lawyer for Smith did not immediately return an email seeking comment.

Christie M. Curtis, who leads New York’s FBI office, said Smith “utilized automatic features to repeatedly stream the music to generate unlawful royalties.”

“The FBI remains dedicated to plucking out those who manipulate advanced technology to receive illicit profits and infringe on the genuine artistic talent of others,” she said.

An indictment in Manhattan federal court said Smith created thousands of accounts on streaming platforms so that he could stream songs continuously, generating about 661,000 streams per day. It said the avalanche of streams yielded annual royalties of $1.2 million.
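As a quick sanity check, the indictment's two figures hang together: ~661,000 streams a day works out to roughly 241 million streams a year, and $1.2 million spread over that is about half a cent per stream, in line with typical streaming payout rates. A minimal sketch of the arithmetic (both inputs taken from the article):

```python
# Back-of-the-envelope check of the indictment's figures.
streams_per_day = 661_000        # from the indictment
annual_royalties_usd = 1_200_000  # from the indictment

annual_streams = streams_per_day * 365
per_stream_payout = annual_royalties_usd / annual_streams

print(f"{annual_streams:,} streams per year")   # ~241 million
print(f"${per_stream_payout:.4f} per stream")   # about half a cent
```

That per-stream rate is also why the scheme needed tens of thousands of songs: the money only adds up through sheer volume, spread thinly enough to dodge detection.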

The royalties were drawn from a pool of royalties that streaming platforms are required to set aside for artists who stream sound recordings that embody musical compositions, the indictment said.

According to the indictment, Smith used artificial intelligence to create tens of thousands of songs so that his fake streams would not alert streaming platforms and music distribution companies that a fraud was underway.

It said Smith, beginning in 2018, teamed up with the chief executive of an artificial intelligence music company and a music promoter to create the songs.

Smith boasted in an email last February that he had generated over four billion streams and $12 million in royalties since 2019, authorities said.

The indictment said that when a music distribution company in 2018 suggested that he might be engaged in fraud, he protested, writing: “This is absolutely wrong and crazy! ... There is absolutely no fraud going on whatsoever!”

Came across this and felt it warranted a post:
Why AI Isn't Going to Make Art

The computer scientist François Chollet has proposed the following distinction: skill is how well you perform at a task, while intelligence is how efficiently you gain new skills. I think this reflects our intuitions about human beings pretty well. Most people can learn a new skill given sufficient practice, but the faster the person picks up the skill, the more intelligent we think the person is. What’s interesting about this definition is that—unlike I.Q. tests—it’s also applicable to nonhuman entities; when a dog learns a new trick quickly, we consider that a sign of intelligence.

In 2019, researchers conducted an experiment in which they taught rats how to drive. They put the rats in little plastic containers with three copper-wire bars; when the rats put their paws on one of these bars, the container would either go forward, turn left, or turn right. The rats could see a plate of food on the other side of the room and tried to get their vehicles to go toward it. The researchers trained the rats for five minutes at a time, and after twenty-four practice sessions, the rats had become proficient at driving. Twenty-four trials were enough to master a task that no rat had likely ever encountered before in the evolutionary history of the species. I think that’s a good demonstration of intelligence.

Now consider the current A.I. programs that are widely acclaimed for their performance. AlphaZero, a program developed by Google’s DeepMind, plays chess better than any human player, but during its training it played forty-four million games, far more than any human can play in a lifetime. For it to master a new game, it will have to undergo a similarly enormous amount of training. By Chollet’s definition, programs like AlphaZero are highly skilled, but they aren’t particularly intelligent, because they aren’t efficient at gaining new skills. It is currently impossible to write a computer program capable of learning even a simple task in only twenty-four trials, if the programmer is not given information about the task beforehand.

Self-driving cars trained on millions of miles of driving can still crash into an overturned trailer truck, because such things are not commonly found in their training data, whereas humans taking their first driving class will know to stop. More than our ability to solve algebraic equations, our ability to cope with unfamiliar situations is a fundamental part of why we consider humans intelligent. Computers will not be able to replace humans until they acquire that type of competence, and that is still a long way off; for the time being, we’re just looking for jobs that can be done with turbocharged auto-complete.

Anthropic’s Mike Krieger wants to build AI products that are worth the hype

Today, I’m talking with Mike Krieger, the new chief product officer at Anthropic, one of the hottest AI companies in the industry.

Anthropic was started in 2021 by former OpenAI executives and researchers who set out to build a more safety-minded AI company — a real theme among ex-OpenAI employees lately. Anthropic’s main product right now is Claude, the name of both its industry-leading AI model and a chatbot that competes with ChatGPT.

Anthropic has billions in funding from some of the biggest names in tech, primarily Amazon. At the same time, Anthropic has an intense safety culture that’s distinct among the big AI firms of today. The company is notable for employing some people who legitimately worry AI might destroy mankind, and I wanted to know all about how that tension plays out in product design.

On top of that, Mike has a pretty fascinating résumé: longtime tech fans likely know Mike as the cofounder of Instagram, a company he started with Kevin Systrom before selling it to Facebook — now, Meta — for $1 billion back in 2012. That was an eye-popping amount back then, and the deal turned Mike into founder royalty basically overnight.

He left Meta in 2018, and a few years later, he started to dabble in AI — but not quite the type of AI we now talk about all the time on Decoder. Instead, Mike and Kevin launched Artifact, an AI-powered news reader that did some very interesting things with recommendation algorithms and aggregation. Ultimately, it didn’t take off like they hoped. Mike and Kevin shut it down earlier this year and sold the underlying tech to Yahoo.

I was a big fan of Artifact, so I wanted to know more about the decision to shut it down as well as the decision to sell it to Yahoo. Then I wanted to know why Mike decided to join Anthropic and work in AI, an industry with a lot of investment but very few consumer products to justify it. What’s this all for? What products does Mike see in the future that make all the AI turmoil worth it, and how is he thinking about building them?

I’ve always enjoyed talking product with Mike, and this conversation was no different, even if I’m still not sure anyone’s really described what the future of this space looks like.

Okay, Anthropic chief product officer Mike Krieger. Here we go.

Apple's ad for the new Visual Intelligence feature on the iPhone 16 is, at a minimum, oddly presented.

A new restaurant on the corner? I wonder what its menu is, or what its hours could be. Better take a photo of it to reverse image search it instead of walking a little closer to see the menu and hours posted near the door.

Oh wow! Someone’s dog? Let me get on my knees and take a photo of the dog to learn what the breed is, which my phone actually already does, instead of simply talking to another human being that I’m 42” away from.

Call me when it offers personal doxing of Neo-Nazis marching through neighborhoods...

Oprah is doing a special on AI on ABC tonight.

Oprah Winfrey hosts an eye-opening new special that explores the profound impact of artificial intelligence on people's daily lives, demystifying the technology and empowering viewers to understand and navigate the rapidly evolving AI future. The one-hour primetime event, "AI and the Future of Us: An Oprah Winfrey Special," airs THURSDAY, SEPT. 12 (8:00-9:03 p.m. EDT), on ABC and the next day on Hulu. The special features Winfrey's exclusive interviews with some of the most important and powerful people in AI including the following:

- Sam Altman, CEO of OpenAI, will explain how AI works in layman's terms and discuss the immense personal responsibility that must be borne by the executives of AI companies.

- Microsoft Co-Founder and Chair of the Gates Foundation Bill Gates will lay out the AI revolution coming in science, health and education, and warn of the once-in-a-century type of impact AI may have on the job market.

- YouTube creator and technologist Marques Brownlee will walk Winfrey through mind-blowing demonstrations of AI's capabilities.

- Tristan Harris and Aza Raskin, co-founders of Center for Humane Technology, walk Winfrey through the emerging risks posed by powerful and superintelligent AI — sounding the alarm about the need to confront those risks now.

- FBI Director Christopher Wray reveals the terrifying ways criminals and foreign adversaries are using AI.

- Pulitzer Prize-winning author Marilynne Robinson reflects on AI's threat to human values and the ways in which humans might resist the convenience of AI.

"AI and the Future of Us: An Oprah Winfrey Special" provides a serious, entertaining and meaningful base for every viewer to understand AI, and empowers everyone to be a part of one of the most important global conversations of the 21st century.

.......sure.

Oprah presents: Foxes Guarding the Henhouse.

In surprisingly useful AI news, researchers have discovered that their tailor-made chatbot can reduce people's belief in conspiracy theories by up to 20%. It does so by finding out exactly why they believed in a given conspiracy theory and then debunking those specific claims.

Participants first answered a series of open-ended questions about the conspiracy theories they strongly believed and the evidence they relied upon to support those beliefs. The AI then produced a single-sentence summary of each belief, for example, "9/11 was an inside job because X, Y, and Z." Participants would rate the accuracy of that statement in terms of their own beliefs and then fill out a questionnaire about other conspiracies, their attitudes toward trusted experts, AI, other people in society, and so forth.

Then it was time for the one-on-one dialogues with the chatbot, which the team programmed to be as persuasive as possible. The chatbot had also been fed the open-ended responses of the participants, which made it better able to tailor its counter-arguments to each individual. For example, if someone thought 9/11 was an inside job and cited as evidence the fact that jet fuel doesn't burn hot enough to melt steel, the chatbot might counter with, say, the NIST report showing that steel loses its strength at much lower temperatures, sufficient to weaken the towers' structures so that they collapsed. Someone who thought 9/11 was an inside job and cited demolitions as evidence would get a different response tailored to that.

Participants then answered the same set of questions after their dialogues with the chatbot, which lasted about eight minutes on average. Costello et al. found that these targeted dialogues resulted in a 20 percent decrease in the participants' misinformed beliefs—a reduction that persisted even two months later when participants were evaluated again.
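The procedure described above boils down to a measure, tailored-dialogue, re-measure loop. A minimal sketch of that flow, where `counter_argument` is a hypothetical stand-in for the persuasion-tuned model (the real study used an LLM generating rebuttals, not canned strings):

```python
# Hedged sketch of the study design described above; only the shape of the
# pipeline is taken from the article, the function bodies are placeholders.

def summarize(belief: str, evidence: list[str]) -> str:
    """One-sentence summary, e.g. '9/11 was an inside job because X, Y.'"""
    return f"{belief} because {', '.join(evidence)}."

def counter_argument(point: str) -> str:
    # Placeholder: the real system generated a rebuttal targeted at this
    # specific piece of cited evidence (e.g. the NIST steel-strength
    # findings for the jet-fuel claim).
    return f"[rebuttal tailored to: {point}]"

def dialogue(evidence: list[str]) -> list[tuple[str, str]]:
    """Pair each piece of cited evidence with its tailored rebuttal."""
    return [(point, counter_argument(point)) for point in evidence]

belief = "9/11 was an inside job"
evidence = ["jet fuel can't melt steel beams", "the towers fell too fast"]

statement = summarize(belief, evidence)
turns = dialogue(evidence)
# Participants rated `statement` before and after the ~8-minute dialogue;
# the reported 20 percent drop is the difference between those ratings.
```

The key design choice the study highlights is in `dialogue`: rebuttals keyed to the participant's own cited evidence, rather than generic debunking.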