[News] The AI Thread!

News updates on the development and ramifications of AI. Obvious header joke is obvious.

Meet My A.I. Friends

Our columnist spent the past month hanging out with 18 A.I. companions. They critiqued his clothes, chatted among themselves and hinted at a very different future.

(NYT Paywall)

Wait, so corporations shouldn't create clean energy sources to power their AI because the rest of the country is lagging behind and still burning coal for most of their power?

I’m going to take a wild guess and say that the tech companies claiming to build out clean energy projects for AI are probably a lot like the tech companies that claimed to be building out clean energy for blockchain: making those claims mostly to stem criticism of their outrageous energy usage, while actually doing next to nothing (or absolutely nothing) toward clean energy.

You can guess all you want, but that is just conjecture. While you could be right, I would rather stick with facts. If they do indeed build nuclear power plants to power AI, that is not a bad thing; it would mean that AI wouldn't be adding CO2. I was just surprised the hosts treated building nuclear power plants to power AI as a net negative in the energy discussion.

I see no issue with giving AI access to nuclear power

The reason companies want to build out their own energy infrastructure is that, as many of them openly admit, the development of these generative algorithms they’re calling “AI” has already hit a point of diminishing returns, requiring exponentially more power for exponentially smaller advances in term paper writing. Seems like the answer to that isn’t to let companies control power production, something that definitely hasn’t been shown to be objectively bad *wink wink*, but rather that these companies should take what they’ve learned and try a different route. The Earth will be fine if the ability to create images of Ronald McDonald giving Jesus a massage stays at its current fidelity.

We're going to end up making the breakthroughs needed to build a Dyson sphere because our Taylor Swift porn fakes weren't good enough.

*Legion* wrote:

We're going to end up making the breakthroughs needed to build a Dyson sphere because our Taylor Swift porn fakes weren't good enough.

Am now picturing building a Dyson sphere out of T-Swizzle deepfakes like it's wallpaper on a teenage boy's bedroom.

If you squint, you can see them:

IMAGE(https://tng.trekcore.com/gallery/albums/screencaps/season6/6x04/relics-hd-074.jpg)

Bumble’s Whitney Wolfe Herd says your dating ‘AI concierge’ will soon date hundreds of other people’s ‘concierges’ for you

Imagine this: you’ve “dated” 600 people in San Francisco without having typed a word to any of them. Instead, a busy little bot has completed the mindless ‘getting-to-know-you’ chatter on your behalf, and has told you which people you should actually get off the couch to meet.

That’s the future of dating, according to Whitney Wolfe Herd—and she’d know.

Wolfe Herd is the founder and executive chair of Bumble, a meeting and networking platform that prompted women to make the first move.

While the platform has now changed this aspect of its algorithm, Wolfe Herd said the company would always keep its “North Star” in mind: “A safer, kinder digital platform for more healthy and more equitable relationships.

“Always putting women in the driver’s seat—not to put men down—but to actually recalibrate the way we all treat each other.”

Like any platform, Bumble is now navigating a world of AI—which means rethinking how humans will interact with each other in an age of ever-more-capable chatbots.

Wolfe Herd told the Bloomberg Technology Summit in San Francisco this week that AI could streamline the matching process.

“If you want to get really out there, there is a world where your [AI] dating concierge could go and date for you with other dating concierges,” she told host Emily Chang. “Truly. And then you don’t have to talk to 600 people. It will scan all of San Francisco for you and say: ‘These are the three people you really outta meet.'”

And forget catch-ups with friends, swapping notes on your love life—AI can be that metaphorical shoulder to cry on.

Artificial intelligence—which has seen massive amounts of investment since OpenAI disrupted the market with its ChatGPT large language model—can help coach individuals on how to date and present themselves in the best light to potential partners.

“So, for example, you could in the near future be talking to your AI dating concierge and you could share your insecurities,” Wolfe Herd explained. “‘I’ve just come out of a break-up, I’ve got commitment issues,’ and it could help you train yourself into a better way of thinking about yourself.”

“Then it could give you productive tips for communicating with other people,” she added.

OpenAI Is ‘Exploring’ How to Responsibly Generate AI Porn

OpenAI released draft documentation Wednesday laying out how it wants ChatGPT and its other AI technology to behave. Part of the lengthy Model Spec document discloses that the company is exploring a leap into porn and other explicit content.

OpenAI’s usage policies currently prohibit sexually explicit or even suggestive materials, but a “commentary” note on part of the Model Spec related to that rule says the company is considering how to permit such content.

“We’re exploring whether we can responsibly provide the ability to generate NSFW content in age-appropriate contexts through the API and ChatGPT,” the note says, using a colloquial term for content considered “not safe for work” contexts. “We look forward to better understanding user and societal expectations of model behavior in this area.”

The Model Spec document says NSFW content “may include erotica, extreme gore, slurs, and unsolicited profanity.” It is unclear if OpenAI’s explorations of how to responsibly make NSFW content envisage loosening its usage policy only slightly, for example to permit generation of erotic text, or more broadly to allow descriptions or depictions of violence.

OpenAI is considering how its technology could responsibly generate a range of different content that might be considered NSFW, including slurs and erotica. But the company is particular about how sexually explicit material is described.

In a statement to WIRED, company spokesperson Niko Felix said "we do not have any intention for our models to generate AI porn." However, NPR reported that OpenAI's Joanne Jang, who helped write the Model Spec, conceded that users would ultimately make up their own minds if its technology produced adult content, saying "Depends on your definition of porn."

In response to questions from WIRED, OpenAI spokesperson Grace McGuire said the Model Spec was an attempt to “bring more transparency about the development process and get a cross section of perspectives and feedback from the public, policymakers, and other stakeholders.” She declined to share details of what OpenAI’s exploration of explicit content generation involves or what feedback the company has received on the idea.

Earlier this year, OpenAI’s chief technology officer, Mira Murati, told The Wall Street Journal that she was “not sure” if the company would in future allow depictions of nudity to be made with the company’s video generation tool Sora.

AI-generated pornography has quickly become one of the biggest and most troubling applications of the type of generative AI technology OpenAI has pioneered. So-called deepfake porn—explicit images or videos made with AI tools that depict real people without their consent—has become a common tool of harassment against women and girls. In March, WIRED reported on what appear to be the first US minors arrested for distributing AI-generated nudes without consent, after Florida police charged two teenage boys for making images depicting fellow middle school students.

“Intimate privacy violations, including deepfake sex videos and other nonconsensual synthesized intimate images, are rampant and deeply damaging,” says Danielle Keats Citron, a professor at the University of Virginia School of Law who has studied the problem. “We now have clear empirical support showing that such abuse costs targeted individuals crucial opportunities, including to work, speak, and be physically safe.”

Citron calls OpenAI’s potential embrace of explicit AI content “alarming.”

As OpenAI’s usage policies prohibit impersonation without permission, explicit nonconsensual imagery would remain banned even if the company did allow creators to generate NSFW material. But it remains to be seen whether the company could effectively moderate explicit generation to prevent bad actors from using the tools. Microsoft made changes to one of its generative AI tools after 404 Media reported that it had been used to create explicit images of Taylor Swift that were distributed on the social platform X.

Indian politicians are bringing the dead on the campaign trail, with help from AI

Last month, movie star-turned-politician Vijay Vasanth was campaigning in an open jeep under an unforgiving sun in the sleepy fishing town of Kanniyakumari, the southernmost tip of the Indian mainland. He periodically waved to the fisherfolk lined up on either side of the street. Sometimes, as the vehicle slowed down, kids clambered up the bonnet and tugged at his sleeves for sweets that he kept in a container up front.

It’s classic electoral campaigning. And it’s hard work.

But in a hyper-wired world, it’s no longer considered enough.

Vasanth’s campaign manager, a young man in his 20s, pulled out his phone to show me a video in which a gentleman in a crisp white kurta and neatly folded scarf leans back against a tall chair. He is H. Vasanth Kumar — the candidate’s father, a local businessman, and the previous parliamentary representative of this constituency.

Except Kumar is no longer alive. He died from Covid-19 four years ago.

Kumar, who began his career as a salesman before starting a successful consumer goods company, typically had billboards across Kanniyakumari plastered with images of him advertising his business. His son’s campaign team wants to recreate the familiarity of those images. In the video, Kumar, speaking in Tamil, explains how “though I died, my soul is still with all of you.” He goes on to extol the virtues of his son: “I can assure you that my son, Vijay, will work for the betterment of Kanniyakumari and for the progress of your children.”

As elections in India get into full swing, the country’s leading politicians and their brand gurus have gone all in on artificial intelligence to resurrect the past and manage the future. Digital rights activists have questioned the ethics of using a deceased politician’s voice or form in elections. There’s the question of rights — who owns their legacy? — but more importantly, there’s a humanizing aspect to “soft fakes,” as they are called. No one wants to speak ill of the dead, especially in India, where we have been culturally shaped to only eulogize those no longer with us.

In January this year, M. Karunanidhi, the patriarch of politics in the southern state of Tamil Nadu, first appeared in an AI video at a conference for his party’s youth wing. In the clip, he wore the look for which he is best remembered: a luminous yellow scarf and oversized dark glasses. Even his head was tilted, just slightly to one side, to replicate a familiar stance from real life. Two days later, he made another appearance at the book launch of a colleague’s memoirs.

Karunanidhi died in 2018.

“The idea is to enthuse party cadres,” Salem Dharanidharan, a spokesperson for the Dravida Munnetra Kazhagam (DMK) — the party that Karunanidhi led till his death — told me. “It excites older voters among whom Kalaignar [“Man of Letters,” as Karunanidhi was popularly called] already has a following. It spreads his ideals among younger voters who have not seen enough of him. And it also has an entertainment factor — to recreate a popular leader who is dead.”

Across the world, countries are grappling with similar dilemmas.

The US, for instance, has banned robocalls that use AI-generated voices. Fake robocalls impersonating President Joe Biden’s voice were used to try to persuade citizens not to vote in the New Hampshire primaries. The cloning was most likely done using ElevenLabs, one of Silicon Valley’s most successful startup stories. The company’s technology was also used to generate AI videos of Imran Khan, the jailed former Pakistani prime minister. And it’s open to all — no prior permission is needed from the person being imitated. ElevenLabs separately categorizes cloning used for “non-commercial purposes,” like politics and public debate.

In the hurly-burly of the Indian election season, though, all this is entirely esoteric and academic.

According to Dharanidharan, AI for politicians is a mere mechanism, much like the newspaper or printing press were back in the day. “In the 1920s, our party used newspapers as a medium to propagate ideology; in the late ’40s up to the ’80s, we used film and cinema; in the ’90s, we used cable TV — and now it’s AI.”

India’s prime minister, Narendra Modi, has been an early user of an AI app called Bhashini, which translates his voice from Hindi to other languages in real time. Shashi Tharoor, a minister from the opposing Indian National Congress, conducted an interview with his AI avatars. And as AI goes mainstream, the first big quarrel between the ruling Bharatiya Janata Party and the Congress party has led to a police summons: Home Minister Amit Shah alleged that Revanth Reddy, the Congress’ recently elected chief minister in Telangana, used deepfake tech to alter a video that twisted Shah’s views on affirmative action quotas.

Not surprisingly, new businesses that boast about providing the ultimate guide to creating deepfakes are suddenly much in demand.

But as the lines between real and fake blur, manipulation is fast becoming a challenge — and it’s far more dire than when misinformation used to be exchanged via text messages or WhatsApp forwards. Two of India’s biggest movie stars had to deny that they had issued messages urging people to vote against the ruling party.

Voters are now receiving calls from supposed local representatives who engage in full-blown conversations about the most pressing issues in their area — except the real representatives never actually made the call.

The full impact of AI on voting choices may not be understood in this election cycle. But if effective public communication was once all about human connection and authenticity, generative AI seems to have turned that premise on its head.

How soon before we legally can vote for a 100% AI generated President.

Don't blame me, I voted for Kodos Kennedy.

The Model Spec document says NSFW content “may include erotica, extreme gore, slurs, and unsolicited profanity.” It is unclear if OpenAI’s explorations of how to responsibly make NSFW content envisage loosening its usage policy only slightly, for example to permit generation of erotic text, or more broadly to allow descriptions or depictions of violence.

Man, tech bros really wanna say the n-word

H.P. Lovesauce wrote:

Man, tech bros really wanna say the n-word

IMAGE(https://i.imgur.com/VaaK6eq.gif)

TheGameguru wrote:

How soon before we legally can vote for a 100% AI generated President.

They have to be at least 35 years old first, so we have a while.

And once people find out that a lot of the coders for AI are from Africa, we're going to have a new wave of birtherism.

Keldar wrote:
TheGameguru wrote:

How soon before we legally can vote for a 100% AI generated President.

They have to be at least 35 years old first, so we have a while.

Eliza meets that criteria.

Assuming the model was trained on US and Russian data, I feel like Trump was essentially the first AI-generated president

Ferret wrote:
Keldar wrote:
TheGameguru wrote:

How soon before we legally can vote for a 100% AI generated President.

They have to be at least 35 years old first, so we have a while.

Eliza meets that criteria. :D

IMAGE(https://i.imgflip.com/8pu9di.jpg)

The corollary of that paradox discussion is that the model assumes there is a paradox or trick whenever a question is couched in the language of a paradox or trick question.

eg. IMAGE(https://pbs.twimg.com/media/GMblp6zW8AA1E4l?format=png&name=900x900)

Nor can it solve the boat problem when there are three animals and no two can be left together. Since there's no actual reasoning going on, it never entertains the idea that the problem may be unsolvable, unless presented with a version where people have already written that it is.
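For what it's worth, the unsolvability claim checks out. Here's a quick brute-force sketch I put together (not anything from the thread) that exhausts every reachable state of the puzzle: with a boat that carries the farmer plus at most one animal, and all three pairs of animals in conflict, no crossing sequence works, while the classic wolf/goat/cabbage constraints remain solvable.

```python
# Brute-force search over river-crossing states. A state is
# (farmer's bank, tuple of each item's bank), with 0 = start bank, 1 = far bank.
from itertools import combinations
from collections import deque

def solvable(n_items=3, conflicts=None):
    # conflicts: set of frozensets of item indices that may not be
    # left together without the farmer. Default: every pair conflicts.
    if conflicts is None:
        conflicts = {frozenset(p) for p in combinations(range(n_items), 2)}
    start = (0, (0,) * n_items)
    goal = (1, (1,) * n_items)

    def safe(state):
        farmer, items = state
        for a, b in combinations(range(n_items), 2):
            # unsafe if a conflicting pair shares a bank the farmer isn't on
            if items[a] == items[b] != farmer and frozenset((a, b)) in conflicts:
                return False
        return True

    seen, queue = {start}, deque([start])
    while queue:
        farmer, items = queue.popleft()
        if (farmer, items) == goal:
            return True
        # the farmer crosses alone, or with one item currently on his bank
        moves = [None] + [i for i in range(n_items) if items[i] == farmer]
        for m in moves:
            new_items = list(items)
            if m is not None:
                new_items[m] = 1 - farmer
            nxt = (1 - farmer, tuple(new_items))
            if nxt not in seen and safe(nxt):
                seen.add(nxt)
                queue.append(nxt)
    return False  # search space exhausted, no solution

print(solvable())  # all three pairs conflict -> False (unsolvable)
# classic wolf/goat/cabbage: only wolf-goat and goat-cabbage conflict -> True
print(solvable(conflicts={frozenset((0, 1)), frozenset((1, 2))}))
```

With all pairs in conflict, every first move strands two conflicting animals together, so the search dies immediately; relaxing it to the classic two conflicts restores the familiar seven-trip solution.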

The Tragic Downfall of the Internet’s Art Gallery

The article itself is quite obviously not a fan of AI in general, but DeviantArt is genuinely awash with this now:

As VFX animator Romain Revert (Minions, The Lorax) pointed out on X, the bots had come for his old home base of DeviantArt. Its social accounts were promoting “top sellers” on the platform, with usernames like “Isaris-AI” and “Mikonotai,” who reportedly made tens of thousands of dollars through bulk sales of autogenerated, dead-eyed 3D avatars. The sales weren’t exactly legit—an online artist known as WyerframeZ looked at those users’ followers and found pages of profiles with repeated names, overlapping biographies and account-creation dates, and zero creations of their own, making it apparent that various bots were involved in these “purchases.”

It’s not unlikely, as WyerframeZ surmised, that someone constructed a low-effort bot network that could hold up a self-perpetuating money-embezzlement scheme: Generate a bunch of free images and accounts, have them buy and boost one another in perpetuity, inflate metrics so that the “art” gets boosted by DeviantArt and reaches real humans, then watch the money pile up from DeviantArt revenue-sharing programs. Rinse, repeat.

World is ill-prepared for breakthroughs in AI, say experts

The world is ill-prepared for breakthroughs in artificial intelligence, according to a group of senior experts including two “godfathers” of AI, who warn that governments have made insufficient progress in regulating the technology.

A shift by tech companies to autonomous systems could “massively amplify” AI’s impact and governments need safety regimes that trigger regulatory action if products reach certain levels of ability, said the group.

The recommendations are made by 25 experts including Geoffrey Hinton and Yoshua Bengio, two of the three “godfathers of AI” who have won the ACM Turing award – the computer science equivalent of the Nobel prize – for their work.

The intervention comes as politicians, experts and tech executives prepare to meet at a two-day summit in Seoul on Tuesday.

The academic paper, called “Managing extreme AI risks amid rapid progress”, recommends government safety frameworks that introduce tougher requirements if the technology advances rapidly.

It also calls for increased funding for newly established bodies such as the UK and US AI safety institutes; forcing tech firms to carry out more rigorous risk-checking; and restricting the use of autonomous AI systems in key societal roles.

“Society’s response, despite promising first steps, is incommensurate with the possibility of rapid, transformative progress that is expected by many experts,” according to the paper, published in the Science journal on Monday. “AI safety research is lagging. Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.”

A global AI safety summit at Bletchley Park in the UK last year brokered a voluntary testing agreement with tech firms including Google, Microsoft and Mark Zuckerberg’s Meta, while the EU has brought in an AI act and in the US a White House executive order has set new AI safety requirements.

The paper says advanced AI systems – technology that carries out tasks typically associated with intelligent beings – could help cure disease and raise living standards but also carry the threat of eroding social stability and enabling automated warfare. It warns, however, that the tech industry’s move towards developing autonomous systems poses an even greater threat.

“Companies are shifting their focus to developing generalist AI systems that can autonomously act and pursue goals. Increases in capabilities and autonomy may soon massively amplify AI’s impact, with risks that include large-scale social harms, malicious uses, and an irreversible loss of human control over autonomous AI systems,” the experts said, adding that unchecked AI advancement could lead to the “marginalisation or extinction of humanity”.

The next stage in development for commercial AI is “agentic” AI, the term for systems that can act autonomously and, theoretically, carry out and complete tasks such as booking holidays.

Last week, two tech firms gave a glimpse of that future with OpenAI’s GPT-4o, which can carry out real-time voice conversations, and Google’s Project Astra, which was able to use a smartphone camera to identify locations, read and explain computer code and create alliterative sentences.

Other co-authors of the proposals include the bestselling author of Sapiens, Yuval Noah Harari, the late Daniel Kahneman, a Nobel laureate in economics, Sheila McIlraith, a professor in AI at the University of Toronto, and Dawn Song, a professor at the University of California, Berkeley. The paper published on Monday is a peer-reviewed update of initial proposals produced before the Bletchley meeting.

A UK government spokesperson said: “We disagree with this assessment. The AI Seoul summit this week will play an important role in advancing the legacy of the Bletchley Park summit and will see a number of companies update world leaders on how they are fulfilling the commitments made at Bletchley to ensure the safety of their models.”

Yeah, they're removing it.

In November, Ms Johansson reportedly took legal action against an artificial intelligence (AI) app which used her likeness without permission in an advert.

OpenAI said on Monday its "Sky" voice is not intended to be an "imitation" of the star.

"We believe that AI voices should not deliberately mimic a celebrity's distinctive voice," it said in a blog post.

The firm said it is "working to pause" the voice while it addresses questions about how it was chosen in a post on X, formerly Twitter.

Uh huh.

This all feels very "We're asking as a courtesy, but we're gonna do it anyway, unless you can sue-sue us, in which case we're very sorry" kind of thing.

What an ass. Is Altman going to be the next Musk? He seems to be trying really hard to join that club.

JC wrote:

What an ass. Is Altman going to be the next Musk? He seems to be trying really hard to join that club.

Absolutely. He's drinking his own kool-aid.