[News] The AI Thread!

News updates on the development and ramifications of AI. Obvious header joke is obvious.

PSA: If you're still paying TurboTax to file your taxes for you, either go to a proper professional if you need the help, or use a free direct filing service like https://directfile.irs.gov/ or https://www.freetaxusa.com/ (which I have used for the past 2 years and it works great)

Will K-pop's AI experiment pay off?

There’s an issue dividing K-pop fans right now - artificial intelligence.

Several of the genre’s biggest stars have now used the technology to create music videos and write lyrics, including boy band Seventeen.

Last year the South Korean group sold around 16 million albums, making them one of the most successful K-pop acts in history. But it’s their most recent album and single, Maestro, that’s got people talking.

The music video features an AI-generated scene, and the record might well include AI-generated lyrics too. At the launch of the album in Seoul, one of the band members, Woozi, told reporters he was "experimenting" with AI when songwriting.

“We practised making songs with AI, as we want to develop along with technology rather than complain about it," he said.

"This is a technological development that we have to leverage, not just be dissatisfied with. I practised using AI and tried to look for the pros and cons.”

On K-pop discussion pages, fans were torn, with some saying more regulations need to be in place before the technology becomes normalised.

Others were more open to it, including super fan Ashley Peralta. "If AI can help an artist overcome creative blocks, then that’s OK with me," says the 26-year-old.

Her worry though, is that a whole album of AI generated lyrics means fans will lose touch with their favourite musicians.

"I love it when music is a reflection of an artist and their emotions," she says. "K-pop artists are much more respected when they’re hands on with choreographing, lyric writing and composing, because you get a piece of their thoughts and feelings.

"AI can take away that crucial component that connects fans to the artists."

Ashley presents Spill the Soju, a K-pop fan podcast, with her best friend Chelsea Toledo. Chelsea admires Seventeen for being a self-producing group, which means they write their own songs and choreograph them too, but she’s worried about AI having an impact on that reputation.

“If they were to put out an album that’s full of lyrics they hadn’t personally written, I don’t know if it would feel like Seventeen any more and fans want music that is authentically them”.

For those working in K-Pop production, it’s no surprise that artists are embracing new technologies.

Chris Nairn is a producer, composer and songwriter working under the name Azodi. Over the past 12 years he’s written songs for K-pop artists including Kim Woojin and leading agency SM Entertainment.

Working with K-pop stars means Chris, who lives in Brighton, has spent a lot of time in South Korea, whose music industry he describes as progressive.

“What I've learned by hanging out in Seoul is that Koreans are big on innovation, and they're very big on ‘what's the next thing?’, and asking, ‘how can we be one step ahead?’ It really hit me when I was there,” he says.

“So, to me, it's no surprise that they're implementing AI in lyric writing, it's about keeping up with technology.”

Is AI the future of K-pop? Chris isn’t so sure. As someone who experiments with AI lyric generators, he doesn’t feel the lyrics are strong enough for top artists.

“AI is putting out fairly good quality stuff, but when you're at the top tier of the songwriting game, generally, people who do best have innovated and created something brand new. AI works by taking what’s already been uploaded and therefore can’t innovate by itself.”

If anything, Chris predicts AI in K-pop will increase the demand for more personal songs.

"There's going to be pressure from fans to hear lyrics that are from the artist's heart, and therefore sound different to any songs made using AI”.

Seventeen aren’t the only K-pop band experimenting with AI. Girl group Aespa, who have several AI members as well as human ones, also used the technology in their latest music video. Supernova features generated scenes where the faces of band members remain still as only their mouths move.

Podcaster and super-fan Chelsea says it "triggered" a lot of people.

“K-pop is known for amazing production and editing, so having whole scenes made of AI takes away the charm," she adds.

Chelsea also worries about artists not getting the right credit. “With AI in videos it’s harder to know if someone’s original artwork has been stolen, it’s a really touchy subject”.

Arpita Adhya is a music journalist and self-described K-pop superfan. She believes the use of AI in the industry is demonstrative of the pressure artists are under to create new content.

“Most recording artists will put out an album every two years, but K-pop groups are pushing out albums every six to eight months, because there’s so much hype around them.”

She also believes AI has been normalised in the industry, with the introduction of AI covers which have exploded on YouTube. The cover tracks are created by fans and use technology to mimic another artist's voice.

It's this kind of trend that Arpita would like to see regulated, something western artists are calling for too.

Just last month megastars including Billie Eilish and Nicki Minaj wrote an open letter calling for the "predatory" use of AI in the music industry to be stopped.

They called on tech firms to pledge not to develop AI music-generation tools "that undermine or replace the human artistry of songwriters and artists, or deny us fair compensation for our work".

For Arpita, a lack of regulations means fans feel an obligation to regulate what is and isn’t OK.

“Whilst there are no clear guidelines on how much artists can and can’t use AI, we have the struggle of making boundaries ourselves, and always asking ‘what is right and wrong?’”

Thankfully she feels K-pop artists are aware of public opinion and hopes there will be change.

“The fans are the biggest part and they have a lot of influence over artists. Groups are always keen to learn and listen, and if Seventeen and Aespa realise they are hurting their fans, they will hopefully address that.”

I begrudgingly joined TikTok recently. I've mostly found it pretty "blah," but I got an ad today for an AI tool called Livensa that lets you "Relive your memories with AI" by creating animations of photos.

I know this thing is just slop trying to make a quick buck, but there's just so much stuff like this now, and the basic conceit is just so fundamentally wrong. Because I am not reliving the memory of that moment; an AI is making some shit up.

I really hope this stays content farming slop with no more influence than late-night infomercials used to have, and doesn't become a mainstream thing.

AI prompts can boost writers’ creativity but result in similar stories, study finds

Once upon a time, all stories were written solely by humans. Now, researchers have found AI might help authors tell a tale.

A study suggests that ideas generated by the AI system ChatGPT can help boost the creativity of writers who lack inherent flair – albeit at the expense of variety.

Prof Oliver Hauser, a co-author of the research from the University of Exeter, said the results pose a social dilemma.

“It may be individually beneficial for you to use AI, but as a society if everyone used AI, we might all lose out on the diversity of unique ideas,” he said. “And, arguably, for creative endeavours we might sometimes need the ‘wild’ and ‘unusual’ ideas.”

The team asked 293 people to name 10 words that differed as much as possible from each other, allowing them to probe participants’ inherent creativity.

The researchers then randomly assigned participants one of three topics – an adventure in the jungle, on the open seas or on a different planet – and asked them to write an eight-sentence story appropriate for teenagers and young adults.

While a third of participants were offered no assistance, the others were split between those allowed to have one three-sentence starting idea pre-generated by ChatGPT, and those who could request five such ideas.

Overall, 82 of 100 participants took up the offer of a single AI-generated idea, while 93 of 98 participants offered access to five such ideas took at least one – and almost a quarter requested all five.

A further 600 participants, unaware of whether AI-generated ideas were used, read the resulting stories, and rated factors relating to novelty and usefulness – such as whether the story was publishable – on a nine-point scale.

The results, published in the journal Science Advances, reveal access to AI boosted these scores, with greater access associated with a larger effect: people with the option of five AI-generated ideas had an 8.1% increase, on average, in novelty ratings for their stories compared with people lacking the option of such help, while usefulness ratings rose by 9% on average.

“The effect sizes are not very large, but they were statistically significant,” said Dr Anil Doshi, a co-author of the study from University College London.

Stories written by people with the option of AI-generated ideas were also deemed more enjoyable, more likely to have plot twists and be better written.

However, it was writers with low inherent creativity who benefited most.

“We do not find that the most inherently creative people’s stories are being “supercharged” from AI ideas – this group of people is highly creative with and without the use of AI,” said Doshi.

The team also found participants with access to AI-generated ideas produced stories with greater similarity, something Doshi suggested is down to AI generating relatively predictable story ideas.

Hauser said such studies are important. “Evaluating the use of AI will be essential in making sure that we reap the benefits of this potentially transformative technology without falling prey to potential shortcomings,” he said.

Prederick wrote:

AI prompts can boost writers’ creativity but result in similar stories,

Does it, though? Because it looks to me like the second part invalidates the first.

EDIT: DAMMIT NHK.

The video is "Can AI solve global manga market challenge?"

The global craze for manga keeps on going, with publishers unable to release translated editions as quickly as demand requires. This has fueled piracy as well as innovation in translation. NHK World's John LaDue went to find out what challenges lie ahead.

I won't post the link, but a Steam game/demo managed to get into New & Trending with what appears to be a bunch of ChatGPT-written reviews. Super obvious, to the point that people started posting funny reviews about it. Interesting, although I don't know how Valve would fight this.

I don't know if they'll even try. Steam's content moderation record is... spotty, at best.

Google’s corporate parent still prospering amid shift injecting more AI technology in search

SAN FRANCISCO (AP) — Google’s corporate parent Alphabet Inc. delivered another quarter of steady growth amid an AI-driven shift in the ubiquitous search engine that is the foundation of its internet empire.

The second-quarter report released Tuesday indicated Google is still reeling in advertisers on the heels of the May introduction of an artificial intelligence feature that produces conversational responses to people’s search queries while downplaying its traditional display of related links to other websites.

Although the change sparked fear and outrage among online publishers worried their traffic will plummet, Google is still thriving and propelling Alphabet’s success.

Alphabet’s revenue for the April-June period climbed 14% from the same time last year to $84.74 billion. The Mountain View, California, company earned $23.62 billion, or $1.89 per share, a 29% increase from the same time last year. It marked the fourth consecutive quarter that Alphabet’s year-over-year revenue growth has surpassed 10%, although the pace during the April-June period slowed slightly from the January-March span.

OpenAI to make its own search engine, which I'm sure will be great.

This isn't NFTs, but this feels very much like NFTs in that the vibe is that they are trying SO HARD to get us all to buy into this wholesale, because otherwise this very quickly becomes unprofitable.

OpenAI is announcing its much-anticipated entry into the search market, SearchGPT, an AI-powered search engine with real-time access to information across the internet.

The search engine starts with a large textbox that asks the user “What are you looking for?” But rather than returning a plain list of links, SearchGPT tries to organize and make sense of them. In one example from OpenAI, the search engine summarizes its findings on music festivals and then presents short descriptions of the events followed by an attribution link.

In another example, it explains when to plant tomatoes before breaking down different varieties of the plant. After the results appear, you can ask follow-up questions or click the sidebar to open other relevant links. There’s also a feature called “visual answers,” but OpenAI didn’t get back to The Verge before publication on exactly how this works.

SearchGPT is just a “prototype” for now. The service is powered by the GPT-4 family of models and will only be accessible to 10,000 test users at launch, OpenAI spokesperson Kayla Wood tells The Verge. Wood says that OpenAI is working with third-party partners and using direct content feeds to build its search results. The goal is to eventually integrate the search features directly into ChatGPT.

It’s the start of what could become a meaningful threat to Google, which has rushed to bake in AI features across its search engine, fearing that users will flock to competing products that offer the tools first. It also puts OpenAI in more direct competition with the startup Perplexity, which bills itself as an AI “answer” engine. Perplexity has recently come under criticism for an AI summaries feature that publishers claimed was directly ripping off their work.

Just great.

Video game performers will go on strike over artificial intelligence concerns

LOS ANGELES (AP) — For more than a year and a half, leaders of Hollywood’s actors union have been negotiating with video game companies over a new contract that covers the performers who bring their titles to life.

But while negotiators with the Screen Actors Guild-American Federation of Television and Radio Artists have made gains in bargaining over wages and job safety in their video game contract, or interactive media agreement, leaders say talks have stalled over a key issue: protections over the use of artificial intelligence.

“It is the major obstacle to having an agreement, and this contract area has been for quite some time,” said Duncan Crabtree-Ireland, SAG-AFTRA’s executive director. “The fundamental issue is, at this moment, an unwillingness by this bargaining group to provide an equal level of protection from the dangers of AI for all our members.”

Union leaders say they aren’t “anti-AI altogether.” But voice actors and other video game performers are worried that unchecked use of AI could provide game makers with a means to displace them — by training an AI to replicate an actor’s voice, or to create a digital replica of their likeness without consent.

In some cases, the role of an AI voice is invisible: it is used to clean up a recording in the later stages of production or to make a character sound older or younger at a different stage of their virtual life.

“Our concern is the idea that all of this work translates into grist for the mill that displaces us,” said Sarah Elmaleh, chair of the interactive negotiating committee. “They do not have to call us back, you do not have to be informed of what they’ve used your material to create.”

The union has held onto one last option in their battle over a contract: calling a strike. Crabtree-Ireland said that the union hopes to avoid a work stoppage, but will “do what it takes to make sure that our members are treated fairly.”

Prederick wrote:

OpenAI to make its own search engine, which I'm sure will be great.

The search engine starts with a large textbox that asks the user “What are you looking for?”

Pizza recipes.

I mean, that was Google, but the idea still sticks.

Prederick wrote:

OpenAI to make its own search engine, which I'm sure will be great.

This isn't NFTs, but this feels very much like NFTs in that the vibe is that they are trying SO HARD to get us all to buy into this wholesale, because otherwise this very quickly becomes unprofitable.

Yeah, OpenAI is burning through insane amounts of money. The longer they stay unprofitable, the more compelling the thing they're promising needs to be to keep investor money flowing - but mostly what they can deliver is "what if we replaced this person and were worse at their job"?

It feels like this bubble is going to pop pretty soon.

AI Video Generator Runway Trained on Thousands of YouTube Videos Without Permission

A highly-praised AI video generation tool made by multi-billion dollar company Runway was secretly trained by scraping thousands of videos from popular YouTube creators and brands, as well as pirated films, according to a massive internal spreadsheet of training data obtained by 404 Media.

The model—initially codenamed Jupiter and released officially as Gen-3—drew widespread praise from the AI development community and technology outlets covering its launch when Runway released it in June. Last year, Runway raised $141 million from investors including Google and Nvidia, at a $1.5 billion valuation.

When Techcrunch asked Runway co-founder Anastasis Germanidis in June where the training data for Gen-3 came from, he would not offer specifics.

“We have an in-house research team that oversees all of our training and we use curated, internal datasets to train our models,” Germanidis told Techcrunch.

Alien Love Gardener wrote:
Prederick wrote:

OpenAI to make its own search engine, which I'm sure will be great.

This isn't NFTs, but this feels very much like NFTs in that the vibe is that they are trying SO HARD to get us all to buy into this wholesale, because otherwise this very quickly becomes unprofitable.

Yeah, OpenAI is burning through insane amounts of money. The longer they stay unprofitable, the more compelling the thing they're promising needs to be to keep investor money flowing - but mostly what they can deliver is "what if we replaced this person and were worse at their job"?

It feels like this bubble is going to pop pretty soon.

What really disappoints me is there are areas where AI could definitely be useful. I am thinking of things like medical imagery analysis for one example. Instead of taking a narrow set of scans, because the person looking at them is limited by time, I can see a time when you take a shitload of scans, and AI goes through them looking for a specific set of markers or patterns, and then flags the scans that are useful for the human who has to make the final determination.

But because these AI TechBros are doing **waves hands wildly**, this is not getting the attention (and $$$) it deserves because all people see is this.

That tech, as far as I know, is being developed by entirely different groups with much more sensible goals, and it already exists anyway. I wouldn't worry about any of this dragging that down, because it actually does something useful and thus can show actual results.

Even the generative stuff is potentially useful for some things... but like every other application of AI so far, it requires human review and editing by someone familiar with the subject area to end up with usable results in the end. The corporate insistence on trying to use it standalone with no or very very little human involvement is what is driving all their efforts into the ditch, though I suppose there's a certain justice in that.

If they could be satisfied by the modest gains available by treating it as another tool they'd probably have achieved some positive results, but being satisfied by "modest gains" has never been part of the script for these folks.

mudbunny wrote:

What really disappoints me is there are areas where AI could definitely be useful. I am thinking of things like medical imagery analysis for one example. Instead of taking a narrow set of scans, because the person looking at them is limited by time, I can see a time when you take a shitload of scans, and AI goes through them looking for a specific set of markers or patterns, and then flags the scans that are useful for the human who has to make the final determination.

But because these AI TechBros are doing **waves hands wildly**, this is not getting the attention (and $$$) it deserves because all people see is this.

Machine learning has been used in medical analysis for decades. You just don't hear much about them because they existed long before the current LLM hype train -- and obviously the existing machine learning systems for medical analysis don't use Large Language Models because that would be like using a jackhammer to drive a screw.

My company has been using machine learning for a long time now to predict perinatal complications.

Also used extensively in manufacturing, automotive for a long time... Generative AI and LLMs are just the glitzy "AIs" of the moment.

You need to be careful of using AI for medical image analysis.

It's more likely to pick up a malignant skin cancer if there's a ruler in the photo.

https://www.bdo.com/insights/digital...

Imagine this: A predictive AI program is integrated into a dermatology lab to aid doctors in diagnosing malignant skin lesions. The AI was built using thousands of diagnostic photos of skin lesions already known to be cancerous, training it to know with great exactness what a malignant lesion looks like. The potential benefits are obvious — an AI might be able to notice details that human doctors could miss and could potentially accelerate treatment for patients with cancer.

But there’s a problem. Patient outcomes do not improve as expected, and some even tragically get worse. Upon reviewing the AI’s training materials, programmers discover that the AI wasn’t making its diagnostic decisions based on the appearance of the lesions it was shown. Instead, it was checking whether a ruler was present in the picture. Because diagnostic photos, like those shown to the AI during its training, contain a ruler for scale, the AI identified rulers as a defining characteristic of malignant skin lesions. The AI saw a pattern that its designers hadn’t accounted for and was consequently rendered useless for its intended purpose.

Which is not to say it won't be useful, but it's necessary to be very careful about what data is used for training.
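The ruler story is a classic example of a model latching onto a spurious correlate instead of the intended signal, and it's easy to reproduce on toy data. Below is a minimal, purely illustrative sketch (all data and variable names invented, not from any real diagnostic system): a plain logistic regression where a "ruler present" flag perfectly tracks the label in the training photos but is absent at deployment. The model aces training and then collapses to roughly chance accuracy.

```python
import math
import random

random.seed(0)

def make_data(n, ruler_tracks_label):
    """Synthetic 'lesion photos' as (lesion_score, ruler_present) pairs.
    lesion_score is pure noise here; in the training set a ruler appears
    in exactly the malignant photos -- the confound from the article."""
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        lesion = random.gauss(0.0, 1.0)              # no real signal
        ruler = label if ruler_tracks_label else 0   # deployment: no rulers
        data.append(((lesion, ruler), label))
    return data

def train(data, epochs=200, lr=0.1):
    """Plain logistic regression trained by stochastic gradient descent."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x, label) in data:
            z = w[0] * x[0] + w[1] * x[1] + b
            p = 1.0 / (1.0 + math.exp(-z))           # predicted P(malignant)
            g = p - label                            # gradient of log-loss w.r.t. z
            w[0] -= lr * g * x[0]
            w[1] -= lr * g * x[1]
            b -= lr * g
    return w, b

def accuracy(data, w, b):
    correct = sum(
        1 for (x, label) in data
        if ((w[0] * x[0] + w[1] * x[1] + b) > 0) == (label == 1)
    )
    return correct / len(data)

train_set = make_data(400, ruler_tracks_label=True)    # rulers confounded with labels
deploy_set = make_data(400, ruler_tracks_label=False)  # real-world photos: no rulers

w, b = train(train_set)
train_acc = accuracy(train_set, w, b)
deploy_acc = accuracy(deploy_set, w, b)
print(f"training accuracy:   {train_acc:.2f}")   # near-perfect, via the ruler
print(f"deployment accuracy: {deploy_acc:.2f}")  # roughly chance
```

The learned weight on the ruler feature dwarfs the weight on the (useless) lesion score, which is exactly the failure mode described above: the validation numbers look great as long as the confound is present, and the problem only surfaces once the model sees data without it.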

MrDeVil909 wrote:

You need to be careful of using AI for medical image analysis.

It's more likely to pick up a malignant skin cancer if there's a ruler in the photo.

Upon reviewing the AI’s training materials, programmers discover that the AI wasn’t making its diagnostic decisions based on the appearance of the lesions it was shown. Instead, it was checking whether a ruler was present in the picture. Because diagnostic photos, like those shown to the AI during its training, contain a ruler for scale, the AI identified rulers as a defining characteristic of malignant skin lesions. The AI saw a pattern that its designers hadn’t accounted for and was consequently rendered useless for its intended purpose.

Prompt: "Ensure there is peace and prosperity on Earth, where there is no thirst or starvation or homelessness or suffering for any human."

Primary Objective: Eliminate thirst, starvation, homelessness, suffering for humans.

Assessment: Thirst, starvation, homelessness, suffering for humans all require humans to occur.

Conclusion: Eliminate all humans.

Tech's AI message to Wall Street: Stop fretting

Big Tech's message to investors on back-to-back earnings calls this week was "Stop worrying about the billions we're spending on AI — everything's going to be just great."

The big picture: Over the past month, Wall Street began questioning the growing cost of buying high-end chips and building data centers, asking when these investments would start to show up in revenue and profit growth.

- Google, Microsoft and Meta, each in different ways, told Wall Street to relax.

By the numbers: These companies all continue to grow and remain jaw-droppingly profitable, providing an enviable financial foundation for their most ambitious dreams.

- Last week, Google parent Alphabet reported $24 billion net profit on $85 billion revenue.
- On Tuesday Microsoft reported $22 billion net profit on $65 billion revenue.
- On Wednesday Meta reported $13.5 billion net profit on $39 billion revenue.

Apple and Amazon — the two tech giants that aren't as directly in the "build the biggest and best foundation model" race but are both making their own significant AI investments — are no slouches, either.

- On Thursday Apple reported $21 billion net profit on $86 billion revenue.
- The same day, Amazon reported $13.5 billion net profit on $148 billion revenue.

Remember, all these numbers are for a single quarter — multiply by four for a very rough annual picture.

Follow the money: When an industry is geysering profits like this, shareholders can try to demand higher dividend payouts. Tech firms have traditionally shunned dividends as a mark of corporate old age, but some of the giants, like Apple, have warmed up to them.

- The other thing you can do with profit rivers and the cash reservoirs they build is plow them into the Next Big Thing — and for now, most investors seem happy to let CEOs take that approach.

Between the lines: Wall Street has already given all these companies massive valuations based on their current businesses and their long-term prospects.

- As with previous platform shifts in tech, though, the AI wave could crown new winners and drown current leaders.

CEOs hastened to reassure investors that the best is yet to come, and that their incumbent firms are well-positioned to control the future.

- AI will improve "almost every" existing Meta product and "make a whole lot of new ones possible," Meta CEO Mark Zuckerberg said on his call Wednesday. "So it's why there are all the jokes about how all the tech CEOs get on these earnings calls and just talk about AI the whole time. It's because it's actually super exciting."

As for doubts about the wisdom of plowing so many billions into "capex" (capital expenditures) to build AI capacity, Microsoft CEO Satya Nadella said the company is carefully tuning into the "demand signals" from customers.

- Those signals — like massive growth in use of GitHub Copilot, which is now a bigger business than GitHub itself was when Microsoft acquired it in 2018 — are flashing green, Nadella said.
- Executives also argued that if they overbuild data centers and buy more servers than the AI buildout ultimately needs, the companies will find other good uses for that infrastructure.

The bottom line: "When you go through a curve like this, the risk of underinvesting is dramatically greater than the risk of overinvesting for us here," Google CEO Sundar Pichai told analysts last week.

I see a lot of people online who I may broadly agree with in critiques of AI hyping themselves into believing this stuff is on the verge of collapse any day now based on one critical Goldman Sachs report, and I'm just like... no.

I have no idea what form it will ultimately take, but AI, even GenAI, is here to stay.

I'm not sure if 'AI' will generate revenue/profits that can be directly attributed to it in the short term. It might help tech start eating itself, which wouldn't be a bad thing.

Google search is just terrible compared to when I first found GWJ.

If we get back to actually useful search engines that don't tell us to do ridiculous things scraped from Reddit, that's a win. Not sure how that gets monetized besides eating Google's lunch plus that pool of money growing. The masses are not signing up to pay $5-10 a month for better search. So it will come from ads and our eyeballs.

Meta in Talks to Use Voices of Judi Dench, Awkwafina and Others for A.I.

(NYT Paywall)

Meta is in discussions with Awkwafina, Judi Dench and other actors and influencers for the right to incorporate their voices into a digital assistant product called MetaAI, according to three people with knowledge of the talks, as the company pushes to build more products that feature artificial intelligence.

Apart from Ms. Dench and Awkwafina, Meta is in talks with the comedian Keegan-Michael Key and other celebrities, said the people, who spoke on the condition of anonymity because the discussions are private. They added that all of Hollywood’s top talent agencies were involved in negotiations with the tech giant.

The talks remain fluid, and it is unclear which actors and influencers, if any, may sign on to the project, the people said. If the parties come to an agreement, Meta could pay millions of dollars in fees to the actors.

A Meta spokesman declined to comment. The discussions were reported earlier by Bloomberg.

Meta, which owns Facebook, Instagram and WhatsApp, has invested heavily in artificial intelligence, which the biggest tech companies are racing to develop and lead. Meta has plowed billions into weaving the technology into its social networking apps and advertising business, including by creating artificially intelligent characters that could chat through text across its messaging apps.

On Wednesday, Mark Zuckerberg, Meta’s chief executive, increased how much his company would spend on A.I. and other expenses this year to at least $37 billion, up from $30 billion at the beginning of 2024. Mr. Zuckerberg said he would rather build too fast “rather than too late” to prevent his competitors from gaining an edge in the A.I. race.

One area of A.I. that is rapidly emerging is chatbots with voice abilities, which act as virtual assistants. In May, OpenAI, a leading A.I. company, unveiled a version of its ChatGPT chatbot that could receive and respond to voice commands, images and videos. It was part of a wider effort to combine conversational chatbots with voice assistants like the Google Assistant and Apple’s Siri.

OpenAI later suspended the release of its voice-related ChatGPT after the actress Scarlett Johansson, who had provided the voice for an A.I. system in Spike Jonze’s 2013 movie, “Her,” accused the company of using a voice “eerily similar to mine” despite her refusals to participate in the product.

Meta is angling to strike deals with celebrities in a way that avoids ticking off top talent. Under the terms of the proposed contract, Meta would record the voices of these celebrities for potential use in MetaAI, which users could interact with and ask questions across Facebook, Instagram, WhatsApp and Messenger, as well as Meta’s Ray-Ban augmented reality glasses, the people said. Any deal would be for a set period and could be renewed or terminated when the contract was up. Actors would not release the rights to their voices indefinitely.

Meta is trying to lock down the deals in time for its Connect technology conference in late September, when the company plans to debut new A.I.-focused products. At last year’s conference, Meta introduced digital chatbots that had the likenesses of Snoop Dogg, Tom Brady and MrBeast, but few people used the text-based characters. The company wound down the initiative last month.

Last year, the SAG-AFTRA union, which represents more than 150,000 television and movie actors, went on strike as negotiations over a new agreement with the Hollywood studios — including how to deal with the rise of A.I. — stalled. Actors eventually secured a three-year deal with a provision that says the studios cannot use A.I. tools to create digital replicas of performers without payment or approval.

Still, many union members remain dissatisfied with the provision. Other subsets of the entertainment industry, like editors, animators and voice actors, are also concerned that A.I. will put them out of work.

Actors with the union initiated a separate strike last week against video game companies that use actors’ images and voices in their games.

MS will bring back Clippy just so it can be voiced by Awkwafina.

Don't make me hate Awkwafina.

MrDeVil909 wrote:

You need to be careful of using AI for medical image analysis.

It's more likely to pick up a malignant skin cancer if there's a ruler in the photo.

https://www.bdo.com/insights/digital...

Imagine this: A predictive AI program is integrated into a dermatology lab to aid doctors in diagnosing malignant skin lesions. The AI was built using thousands of diagnostic photos of skin lesions already known to be cancerous, training it to know with great exactness what a malignant lesion looks like. The potential benefits are obvious — an AI might be able to notice details that human doctors could miss and could potentially accelerate treatment for patients with cancer.

But there’s a problem. Patient outcomes do not improve as expected, and some even tragically get worse. Upon reviewing the AI’s training materials, programmers discover that the AI wasn’t making its diagnostic decisions based on the appearance of the lesions it was shown. Instead, it was checking whether a ruler was present in the picture. Because diagnostic photos, like those shown to the AI during its training, contain a ruler for scale, the AI identified rulers as a defining characteristic of malignant skin lesions. The AI saw a pattern that its designers hadn’t accounted for and was consequently rendered useless for its intended purpose.

Which is not to say it won't be useful, but you have to be very careful about what data is used for training.

Yeah, that's an old problem in machine learning. Back when I was an undergraduate comp-sci student (near the dawn of the millennium, in 2001*), a professor told us about a neural network trained to spot camouflaged tanks in forested images (or something along those lines). It passed testing, then proved to be entirely useless in practice. Why? Because in the training data all the images with tanks were taken at one time of day, and all the images without were taken at another, so all the neural network did was track the position of the sun.

* And I'm pretty sure this story was old even then.
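Both the ruler and the tank stories are the same failure mode, often called shortcut learning: the model latches onto a feature that happens to correlate with the label in the training set (ruler present, time of day) rather than the feature you actually care about. Here's a minimal sketch with made-up synthetic data: a plain logistic regression is trained on "photos" where a ruler flag perfectly tracks malignancy, then evaluated on "clinic" photos with no rulers. All names and numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, ruler_tracks_label):
    """Two features per image: a weak, noisy 'lesion appearance' signal
    and a binary 'ruler present' flag. Labels: 1 = malignant, 0 = benign."""
    y = rng.integers(0, 2, n)
    lesion = y + rng.normal(0, 2.0, n)   # real signal, but very noisy
    if ruler_tracks_label:
        ruler = y.astype(float)          # ruler in every malignant photo
    else:
        ruler = np.zeros(n)              # clinic photos: no rulers at all
    return np.column_stack([lesion, ruler]), y

def train_logreg(X, y, steps=2000, lr=0.1):
    """Plain batch gradient descent on logistic loss."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        g = p - y
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

def accuracy(w, b, X, y):
    return (((X @ w + b) > 0) == y).mean()

# Train where the ruler is a perfect shortcut...
X_tr, y_tr = make_data(2000, ruler_tracks_label=True)
w, b = train_logreg(X_tr, y_tr)

# ...then deploy where the shortcut is gone.
X_te, y_te = make_data(2000, ruler_tracks_label=False)
print(f"training accuracy: {accuracy(w, b, X_tr, y_tr):.2f}")  # high
print(f"clinic accuracy:   {accuracy(w, b, X_te, y_te):.2f}")  # near chance
print(f"weight on ruler={w[1]:.2f}, on lesion={w[0]:.2f}")
```

The model puts most of its weight on the ruler flag because it explains the training labels far more cleanly than the noisy lesion signal does, so accuracy collapses the moment the rulers disappear. The fix, as the posts above note, is in the data: hold out a test set drawn from the deployment distribution, and audit what the model is actually keying on.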

The head of chatbot maker Replika discusses the role AI will play in the future of human relationships.

Replika’s basic pitch is pretty simple: what if you had an AI friend? The company offers avatars you can curate to your liking that basically pretend to be human, so they can be your friend, your therapist, or even your date. You can interact with these avatars through a familiar chatbot interface, as well as make video calls with them and even see them in virtual and augmented reality.

The idea for Replika came from a personal tragedy: almost a decade ago, a friend of Eugenia’s died, and she fed their email and text conversations into a rudimentary language model to resurrect that friend as a chatbot. Casey Newton wrote an excellent feature about this for The Verge back in 2015. Even back then, that story grappled with some of the big themes you’ll hear Eugenia and me talk about today: what does it mean to have a friend inside the computer?

That all happened before the boom in large language models, and Eugenia and I talked a lot about how that tech makes these companions possible and what the limits of current LLMs are. Eugenia says Replika’s goal is not to replace real-life humans. Instead, she’s trying to create an entirely new relationship category with the AI companion, a virtual being that will be there for you whenever you need it, for potentially whatever purposes you might need it for.

Right now, millions of people are using Replika for everything from casual chats to mental health, life coaching, and even romance. At one point last year, Replika removed the ability to exchange erotic messages with its AI bots, but the company quickly reinstated that function after some users reported the change led to mental health crises.

That’s a lot for a private company running an iPhone app, and Eugenia and I talked a lot about the consequences of these ideas. What does it mean for people to have an always-on, always-agreeable AI friend? What does it mean for young men, in particular, to have an AI avatar that will mostly do as it’s told and never leave them? Eugenia insists that AI friends are not just for men, and she pointed out that Replika is run by women in senior leadership roles. There’s an exchange here about the effects of violent video games that I think a lot of you will have thoughts about, and I’m eager to hear them.

Of course, it’s Decoder, so along with all of that, we talked about what it’s like to run a company like this and how products like this get built and maintained over time. It’s a ride.

OpenAI strikes search deal with Condé Nast

OpenAI has struck a multi-year licensing deal with Condé Nast, the companies announced Tuesday.

Why it matters: Condé Nast is home to some of the world's biggest tech, lifestyle and culture brands, including Vogue, The New Yorker, Bon Appétit, Vanity Fair and Wired.

Details: The arrangement gives OpenAI license to display content from Condé Nast brands within OpenAI's products, including ChatGPT and its SearchGPT prototype.

- While neither company disclosed deal terms, OpenAI's statement suggests the deal with Condé Nast is similar in structure to the search deals struck with The Atlantic and News Corp. last month.
- "We're collaborating with our news partners to collect feedback and insights on the design and performance of SearchGPT, ensuring that these integrations enhance user experiences and inform future updates to ChatGPT," the firm said.