[News] The AI Thread!

News updates on the development and ramifications of AI. Obvious header joke is obvious.

*Legion* wrote:

At least until the AIs start generating nazi rage bait. Then we’re finished.

You haven't been on Twitter. Bless.

You can tell those pizzas are fake because the pieces aren't cut into squares, like they should be.

AI Spending to Surpass $13 Billion by 2028, Media Analysts Predict

AI spending is expected to crest above $13 billion by 2028, with the spread falling fairly evenly across analytics, development/delivery and customer experiences like personalization and discovery, media analysts announced at a Series Mania presentation on Thursday.

However, the analysts do not anticipate the content creation apocalypse that has underscored much AI coverage of late.

Leading off a daylong series of panels that confronted those two troubling vowels on everyone’s mind from a panoply of industry perspectives, research directors from Omdia and Plum Research instead sought to give context – to assuage fears and misconceptions by framing machine learning more as a tool than as a weapon.

“AI will not replace humans,” said Omdia’s Maria Rua Aguete, echoing a common refrain. “But humans that know how to use AI will replace those who don’t, because they will be more efficient, more creative, and [better] prepared.”

Tellingly, very few of the anticipated uses involved OpenAI’s text-to-video generator Sora. The two analysts offered a more sanguine view of the recently launched model, pointing out strengths and weaknesses of a tool they predicted would be more useful for short-form advertising, web clips and test videos than for film and television production, due to the difficulties in controlling the quality and reliability of its output.

Instead, Plum Research’s Jonathan Broughton cited concrete ways generative learning could both ease and assist workflows in storyboarding and previz, while 3D models could better facilitate location scouting and lighting tests. By way of actual production, the analyst saw more uses for the tech in live sports given AI’s aptitude for object tracking and real-time prediction.

If anything, the postproduction sector should see the most pronounced effect, as assistive tools – and not generative – assume greater prominence in image grading, color correction and frame-rate enhancement, while real-time dubbing and captioning technology grows ever more accurate and cost-effective.

“In terms of business, AI is going to be really useful for making the inaccessible accessible and for solving problems we’re having with talent [shortages],” Broughton explained. “[Which means that] the main challenge right now is on the management side. It’s up to business leaders to understand how to deploy this within their organizations and to create processes within existing workflows.”

"talent [shortages]"? Corp speak for not wanting to pay people what they're worth.

Ok but how much is it going to cost?

NathanialG wrote:

You can tell those pizzas are fake because the pieces aren't cut into squares, like they should be.

I prefer all of my pizza slices to have handles, TYVM.

Chatbot letdown: Hype hits rocky reality

What they're saying: Gary Marcus, a scientist who penned a blog post last year titled "What if generative AI turned out to be a dud?" tells Axios that, outside of a few areas such as coding, companies have found generative AI isn't the panacea they once imagined.

- "Almost everybody seemed to come back with a report like, 'This is super cool, but I can't actually get it to work reliably enough to roll out to our customers,'" Marcus said.

AI ethics expert Rumman Chowdhury tells Axios that the challenges are numerous and significant.

- "No one wants to build a product on a model that makes things up," says Chowdhury, CEO and co-founder of AI consulting firm Humane Intelligence.

- "The core problem is that GenAI models are not information retrieval systems," she says. "They are synthesizing systems, with no ability to discern from the data it's trained on unless significant guardrails are put in place."

- And, even when such issues are addressed, Chowdhury says the technology remains a "party trick" unless and until work is done to mitigate bias and discrimination.

Yes, but: This isn't the end of the road for generative AI, by any means. Every major new technology — even, or especially, a world-changing one — goes through this phase.

- The "trough of disillusionment" was first named and defined by consulting firm Gartner in 1995 as part of its theory of hype cycles in tech.

- Usable speech recognition was famously always five years away from reality until it finally arrived, and today is remarkably good even under less-than-ideal conditions.

- VR has famously entered several troughs, and its entry into the mainstream remains an open question.

The other side: Much of the industry remains very optimistic, envisioning years of sustained investment in ever more gigantic models requiring ever more enormous data centers powered by ever-more advanced chips.

- The continued enthusiasm despite the setbacks was palpable at Nvidia's GTC conference in Silicon Valley last week, according to Chetan Sharma, a longtime telecom industry consultant who was at the event.

- Researchers at the biggest tech companies and leaders from other industries touted promising work on how generative AI can aid lofty goals, such as curing cancer.

- Some tasks, like customer service and employee training, are seeing meaningful improvement from today's generative AI, while there's an emerging consensus that benefits in other areas will require better models and more refined data sets.

- "I think we are in that kind of mushy phase," Sharma told Axios.

Bruce Schneier has written quite a few posts about AI recently, but this short piece comparing the dangers of AI to what we have learned about the rise and damage of social media is really interesting.

In particular, five fundamental attributes of social media have harmed society. AI also has those attributes. Note that they are not intrinsically evil. They are all double-edged swords, with the potential to do either good or ill. The danger comes from who wields the sword, and in what direction it is swung. This has been true for social media, and it will similarly hold true for AI. In both cases, the solution lies in limits on the technology’s use.
In January 2024, OpenAI took a big step toward monetization of this user base by introducing its GPT Store, wherein one OpenAI customer can charge another for the use of its custom versions of OpenAI software; OpenAI, of course, collects revenue from both parties. This sets in motion the very cycle Doctorow warns about.

Pretty certain at this point that Sam Altman took Doctorow's description of the cycle of ensh*ttification as a blueprint, a la the Torment Nexus meme.
IMAGE(https://i.kym-cdn.com/photos/images/newsfeed/002/386/534/fd2.jpg)

Yeah, I have a feeling a lot of companies are going to find out AI is not going to be cheap, as the cost for it will likely rise substantially, eating away at the savings. That said, certain areas will likely still be hit hard, artists in particular. As one person I watch roughly put it: there are plenty of big wigs who would gladly pay more for AI art over a human artist, just so they can have a little bit more control and power.

Usable speech recognition was famously always five years away from reality until it finally arrived, and today is remarkably good even under less-than-ideal conditions.

Great, now every time I say something to a robot today and it gets it wrong and I have to re-issue the command in my loud, annoyed voice (roughly 70% of the time), I’m gonna think about this quote and be confronted with the enormous gulf between my definition of “remarkably good” and this author’s.

It’s not even 6 am and Alexa couldn’t even get my flash briefing on the first try.

Billie Eilish, Nicki Minaj, Stevie Wonder and more musicians demand protection against AI

A group of over 200 high-profile musicians have signed an open letter calling for protections against the predatory use of artificial intelligence that mimics human artists’ likenesses, voices and sound. The signatories span musical genres and eras, ranging from A-list stars such as Billie Eilish, J Balvin and Nicki Minaj to Rock and Roll Hall of Famers like Stevie Wonder and REM. The estates of Frank Sinatra and Bob Marley are also signatories.

The letter, which was issued by the Artist Rights Alliance advocacy group, makes the broad demand that technology companies pledge not to develop AI tools that undermine or replace human songwriters and artists.

“This assault on human creativity must be stopped. We must protect against the predatory use of AI to steal professional artists’ voices and likenesses, violate creators’ rights, and destroy the music ecosystem,” the letter states.

The letter does not call for an outright ban on the use of AI in music or production, saying that responsible use of the technology could have benefits for the industry. Music producers have used artificial intelligence tools in a variety of ways in recent years, in one case employing AI to isolate John Lennon’s vocals from an old demo track and use them to create a “new” Beatles song which was released last year.

The Artist Rights Alliance letter is part of an industry-wide pushback from artists and creators against the use of generative artificial intelligence, as the technology continues to present ethical and legal issues surrounding copyright infringement and labor rights. Artist unions and advocacy organizations have sought to pressure lawmakers and tech companies to regulate the use of AI, while studios have become interested in its potential for reducing production costs.

Concern over AI being used to write songs and scripts, or produce images and video of actors and entertainers, was at the center of several contract negotiations and entertainment industry union strikes in 2023. The spread of pornographic AI-made images of Taylor Swift also drew additional attention to the malicious use of deepfakes, and earlier this year prompted lawmakers to introduce a bill aimed at criminalizing non-consensual, AI-generated sexualized imagery. Just last week, ChatGPT-maker OpenAI delayed the release of a program that can mimic voices over concerns of responsible use.

In March, Tennessee became the first US state to enact legislation that directly intends to protect musicians from having their vocal likeness AI generated for commercial purposes. The Ensuring Likeness, Voice, and Image Security Act or “Elvis Act” goes into effect on 1 July, and makes it illegal to replicate an artist’s voice without their consent. That legislation did not address artists’ work being used as data to train AI models, a practice that has resulted in several lawsuits against companies such as OpenAI and is mentioned in the letter.

“Some of the biggest and most powerful companies are, without permission, using our work to train AI models,” the letter states. “These efforts are directly aimed at replacing the work of human artists with massive quantities of AI-created ‘sounds’ and ‘images’ that substantially dilute the royalty pools that are paid out to artists.”

The Artist Rights Alliance is a non-profit organization run by music industry veterans, such as board member Rosanne Cash – daughter of Johnny Cash. It’s unclear how the organization recruited the artists who signed the letter, which include country stars like Kacey Musgraves, rappers such as Q-Tip and younger indie pop stars like Chappell Roan. The Artist Rights Alliance did not immediately return a request for comment.

Estates representing deceased artists are also among the signatories to the letter. There has been an increased debate within the entertainment industry over how artists’ likenesses can be used after their death, with AI tools demonstrating a growing ability to create realistic video based on old footage. Several AI versions of dead actors and musicians have appeared in film, video games and television in recent years, prompting controversy and ethical debates.

As AI tools become more publicly available and pervasive, musicians have increasingly been forced to stake out a position on what is a permissible use of artificial intelligence. A few artists, such as Grimes, have viewed generative AI‘s ability to create simulacrums of their work as an opportunity to experiment or to encourage fans to make songs using their vocal likeness.

Other musicians have expressed more negative feelings about imitations of their musical stylings. In January of last year, a fan asked ChatGPT to generate lyrics in the style of singer-songwriter Nick Cave and asked the artist what he thought of the result.

“This song is bullsh*t, a grotesque mockery of what it is to be human,” Cave responded.

Mmm, curious. Interested to see if this will make any waves in terms of legislation, because, whatever OpenAI et al may say, the interviews with the guy in charge of Nvidia, whose tech is powering this stuff, are the most truthful: he's been saying "Suck it up, it's coming whether you like it or not."

Amazon Ditches 'Just Walk Out' Checkouts at Its Grocery Stores

Amazon is phasing out its checkout-less grocery stores with “Just Walk Out” technology, first reported by The Information Tuesday. The company’s senior vice president of grocery stores says they’re moving away from Just Walk Out, which relied on cameras and sensors to track what people were leaving the store with.

Just over half of Amazon Fresh stores are equipped with Just Walk Out. The technology allows customers to skip checkout altogether by scanning a QR code when they enter the store. Though it seemed completely automated, Just Walk Out relied on more than 1,000 people in India watching and labeling videos to ensure accurate checkouts. The cashiers were simply moved off-site, and they watched you as you shopped.

Prederick wrote:

Amazon Ditches 'Just Walk Out' Checkouts at Its Grocery Stores

Amazon is phasing out its checkout-less grocery stores with “Just Walk Out” technology, first reported by The Information Tuesday. The company’s senior vice president of grocery stores says they’re moving away from Just Walk Out, which relied on cameras and sensors to track what people were leaving the store with.

Just over half of Amazon Fresh stores are equipped with Just Walk Out. The technology allows customers to skip checkout altogether by scanning a QR code when they enter the store. Though it seemed completely automated, Just Walk Out relied on more than 1,000 people in India watching and labeling videos to ensure accurate checkouts. The cashiers were simply moved off-site, and they watched you as you shopped.

Wha..? But why on earth? Was that whole setup even cheaper for them than just doing things the old way? I'd usually suppose it must have been if they tried it, but it seems so convoluted that it's hard to accept at face value. Did they just want it to seem amazing and special, maybe? Weird.

@swiftonsecurity says:
1.) We’re gonna do AI
2.) For now we’ll have humans do it to train the AI
3.) Oh no the AI can’t do it
4.) Keep paying the humans

But with this one they added:
5) It's never going to work with AI. Shut it down.

Bruce wrote:

@swiftonsecurity says:
1.) We’re gonna do AI
2.) For now we’ll have humans do it to train the AI
3.) Oh no the AI can’t do it
4.) Keep paying the humans

But with this one they added:
5) It's never going to work with AI. Shut it down.

Pretty much.

The amount of 'AI companies' that are just larping until they figure it out is probably very high.

Too damn high.

Ah. Notsomuch.

A Gizmodo story went viral this week, claiming that Amazon’s “Just Walk Out” checkout system, which the company is sunsetting, “relied on more than 1,000 people in India watching and labeling videos to ensure accurate checkouts.” And went further, explaining that “the cashiers were simply moved off-site, and they watched you as you shopped.”

It’s become a huge meme because it’s, frankly, an outrageous claim. But it’s also not true and would have been functionally impossible. The Information has covered “Just Walk Out” pretty extensively and has a much clearer description of how this worked.

“Amazon had more than 1,000 people in India working on Just Walk Out as of mid-2022 whose jobs included manually reviewing transactions and labeling images from videos to train Just Walk Out’s machine learning model,” The Information reported, quoting an unnamed source who had worked on the tech that powered the service. “The reliance on backup humans explains in part why it can take hours for customers to receive receipts after walking out of a store.”

The word “train” is very important there. And if you remove it, as Gizmodo did, you end up describing something very different!!!

I know I harp on this a lot, but understanding how these AI models work — and how human workers interact with them — is extremely important right now. Because confusion and backlash to AI is just as useful to AI companies as evangelism is. Yes, a bulk of what tech companies are now calling AI is heavily supported by, if not completely masking, work being done for pennies a day in the Global South. But, no, entire supermarkets can not be run by 1,000 people in India watching security cameras.

Feels a bit like splitting hairs to me. What's missing from the story is how accurate did the machine learning become?

Google considering charge for internet searches with AI, reports say

Google is reportedly drawing up plans to charge for AI-enhanced search features, in what would be the biggest shake-up to the company’s revenue model in its history.

The radical shift is a natural consequence of the vast expense required to provide the service, experts say, and would leave every leading player in the sector offering some variety of subscription model to cover its costs.

Google’s proposals, first reported by the Financial Times, would entail the company exclusively offering its new search feature to users of its premium subscription services, which customers already have to sign up to if they want to use artificial intelligence assistants in other Google tools such as Gmail and its office suite.

With that search experience, being trialled in beta for selected users, Google’s generative AI is used to respond to queries directly with a single answer, in a similar style to the conversational approach of ChatGPT and competitors.

“AI search is more expensive to compute than Google’s traditional search processes. So in charging for AI search Google will be seeking to at least recoup these costs,” said Heather Dawe, chief data scientist at the digital transformation consultancy UST.

Much of the focus within AI is on the huge expense of the computing power used to train cutting-edge generative models. In the last year Amazon ran a single training run that cost $65m (£51m), according to James Hamilton, an engineer who expects the company to break the $1bn mark in the near future.

Last week, OpenAI and Microsoft announced plans to build a $100bn datacentre for AI training, while in January Mark Zuckerberg said his goal was to spend at least $9bn just on Nvidia GPUs alone.

But the cost of training AI is just a tenth of the total cost of the sector, according to the analyst Brent Thill at the investment firm Jefferies. Thill wrote in a briefing note: “The majority of AI compute spend today is directed to the running, not training, of models, and 90%+ of AI compute spend today is being directed towards inferencing [the process by which an AI model is queried], as inferencing spend has been growing much faster than training as more models and tools get put into production.”

He added: “Some have priced new Gen AI features at a monthly rate, betting that higher charges will cover usage expenses, while others have priced on a per-usage basis to protect themselves on the cost-side. Some have also incorporated into existing plans, hoping to drive [user] growth.”

Competitors in AI search offer similar subscription plans. Perplexity, an AI-powered search engine, runs no adverts but offers a $20 monthly “pro” tier that provides access to more powerful AI models and unlimited use.

Others, though, continue to offer their products at a loss. The AI features in Microsoft’s Bing are free to use but tied to the company’s Edge browser. The browsing and search startup Arc offers its products free to users and says it intends to raise revenue in future by charging companies for business features.

The only way they could actually get people to pay for AI searches would be by deliberately degrading the normal searches. I wouldn't put that past Google, but I also suspect it may be the one thing that could make Bing popular. (Yes, I know Bing also uses AI, but as far as I know they have no plans to charge for it.)

Google has already intentionally degraded services to advantage the business customers that pay for ads on the site. It's not a leap to think they will continue to do so.

How Hollywood’s Most-Feared AI Video Tool Works — And What Filmmakers May Worry About

As generative artificial intelligence marches on the entertainment industry, Hollywood is taking stock of the tech and its potential to be incorporated into the filmmaking process. No tool has piqued the town’s interest more than OpenAI’s Sora, which was unveiled in February as capable of creating hyper-realistic clips in response to a text prompt of just a couple of sentences. In recent days, the Sam Altman-led firm released a series of videos from beta testers who are providing feedback to improve the tech. The Hollywood Reporter spoke with some of those Sora testers about what it can, and can’t, really do.

IMAGE(https://cdn.discordapp.com/attachments/1059663396557029447/1225586531947905055/qfbmmc748hsc1.png?ex=6621ab67&is=660f3667&hm=dc010c58fc8070bee3dd41dc1184c0aad8c57b383b0833235400b0611432ef22&)

IMAGE(https://cdn.bsky.app/img/feed_thumbnail/plain/did:plc:qc6xzgctorfsm35w6i3vdebx/bafkreieestw6roy5noxbf2ega5hzhvyifwu6ofa4yvvei5qwnkvobkp55e@jpeg)

TIL Sam Altman doesn't know what a video game is.

Jonman wrote:

TIL Sam Altman doesn't know what a video game is.

OR he's only ever played E.T.

"Movies are going to become video games" sounds the absolute worst.

And books will become kaleidoscopes!