[News] The AI Thread!

News updates on the development and ramifications of AI. Obvious header joke is obvious.

fenomas wrote:

Idle thought: the premise "make an AI more factual by having it do a web search before answering" relies strongly on the assumption that arbitrary web pages almost never include malicious jailbreak instructions.

Or lies?

I mean, then it's a value judgment - if Bing serves the same fact to two people they may disagree whether it's a lie or not.

I'm probably weird, but I tend to think it's not very useful to talk about truth-vs-lies when it comes to generative AI. Like, by analogy to image generators, if stable diffusion draws me a picture of Einstein using a smartphone, is it lying? Considering that GPT does to tokens more or less what SD does to pixels, it strikes me as the same kind of question.

Don't confuse cultural ideas of truth with epistemological ideas. For regular people, arguing about "truth" is not the same as determining whether a statement is true or not.

For example, Trump lost the 2020 US presidential election. This is a true statement by objective standards. However, culturally - subjectively - it could be false. But that in itself would be a false judgment, again by objective standards.

It's not useful to hold up *cultural* truths as examples of truth, it's just confusing the issue. Our choices tend to be "We don't know enough to say", or "as best we can discover, this is true", or "as best we can discover, this is false". The difference between this and cultural truths is whether or not they are congruent with what is real, all the way down the stack. Cultural truths veer off at some point based on considerations other than shared realities; epistemological truth goes down the stack with verification at every step.

If you want AIs to be truly useful, their statements need to be based on solid epistemic evidence, rather than just accepting whatever they find as "true" because it is asserted on the Internet. If people disagree on a statement, then either it's an open statement - "We don't know enough to say" - or one of them, or both, are wrong.

Ultimately, AIs should be able to settle these questions *before* responding, which is going to take an interesting "fact verification" infrastructure. This will become necessary, because if you control what are taken to be facts by the general public, you control the thinking of your population. An uncontrolled AI knowledge base is an invitation to dictatorship, hacking, and many other problems. We are already seeing this.

Robear wrote:

If you want AIs to be truly useful, their statements need to be based on solid epistemic evidence, rather than just accepting whatever they find as "true" because it is asserted on the Internet. If people disagree on a statement, then either it's an open statement - "We don't know enough to say" - or one of them, or both, are wrong.

The real problem is that AI doesn't have another way to learn if something is "true" or not. Things written in books can be wrong. Things that people in general agree on can be wrong. Things on Wikipedia can be wrong! People make arguments like "vaccines cause autism", "the earth is flat", and "9/11 was an inside job" and they cite sources for all of these things. AI doesn't have access to all of these sources, and even if it did, it can't really make a judgement on whether their conclusions are "correct" or not. The best it could say is that "this contradicts something else". Pretty soon everything "contradicts something else" and AI has no idea whatsoever if even the most basic facts are true.

We would have to have the AI create an objective "database of facts" containing only things that are absolutely true - but then the AI would have to be able to add more things to it as it "learns", and sooner or later "humans are the worst thing that has ever happened to the planet" gets added to the list. There are multiple science fiction franchises that deal with that, and pretty much all of them turn out badly for us.

That's why I say that a "truth infrastructure" would be "interesting". I'm tempted to argue that this is simply a business opportunity - or one for scientists and philosophers. A new field of research might be created. Heck, it might be being created as we speak.

Obviously, science does a pretty good job at this, especially in teaching environments. Perhaps we need to train AIs at the university and post-graduate level. But I suspect that there has to be a "subjectivity rating" that puts, for example, evidence about physical phenomena at a higher level of confidence than, say, Marxist Literary Theory... And making that quiet hierarchy explicit will be an "interesting" task in the cultures of academia.

Also we need to be able to express pragmatics - degrees of truth, even likelihood of change over time. That's its own challenge as well.

And then, culturally, in countries like the US, when Evolution shows up as more credible than anything based on religious reasoning, fur is gonna fly. At the speed of bullets and bombs, I suspect...

Robear, that premise is like saying we could train AIs to win the lottery if we just gave them a database of future lottery numbers. A fact database that only contains true facts isn't and can't be a thing, but if we had one we could just query it without getting an AI involved. (Also if you think it might be a good idea, note for the record that the person currently most likely to try making such a database is probably Elon Musk...)

Whereas, the point I was trying to make is that I don't think it's useful to talk about the truthiness of AI outputs in the first place. When someone complains that ChatGPT told them a lie, to me they're making the same category error as someone who complains that Stable Diffusion gave them a picture of something that doesn't actually exist. LLM training inputs aren't labeled for objective truth - it's simply not something AIs know how to maximize.

So why can't we train on something like Encyclopedia Britannica, as a start? Perfect is the enemy of good enough, and right now we don't have anything like "good enough".

The idea that we can't accumulate knowledge of what is actually real flies in the face of the entire purpose of science...

Robear wrote:

So why can't we train on something like Encyclopedia Britannica, as a start?

Because scale. You need many orders of magnitude more input data to train an LLM than exists in all the encyclopedias. The "large" in LLM is an understatement.
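To put very rough numbers on "because scale" (these are loose, publicly cited ballpark figures, not exact counts - treat it as a back-of-envelope sketch):

```python
# Back-of-envelope: one encyclopedia vs. an LLM training corpus.
britannica_words = 44_000_000            # Encyclopaedia Britannica is commonly cited at ~44M words
tokens_per_word = 1.3                    # loose rule of thumb for English tokenization
britannica_tokens = britannica_words * tokens_per_word

llm_training_tokens = 1_400_000_000_000  # e.g. LLaMA reportedly trained on ~1.4 trillion tokens

ratio = llm_training_tokens / britannica_tokens
print(f"~{ratio:,.0f}x the tokens in one full encyclopedia")
# => on the order of tens of thousands of encyclopedias' worth of text
```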

So the problem is discernment. That's an interesting one. Again, truth evaluation would be important, as well as distinguishing conversational elements from subjects that require truth evaluation. (off-the-cuff observation, could be wildly off-base)

Palantir Demos AI to Fight Wars But Says It Will Be Totally Ethical Don’t Worry About It

Palantir, the company of billionaire Peter Thiel, is launching Palantir Artificial Intelligence Platform (AIP), software meant to run large language models like GPT-4 and alternatives on private networks. In one of its pitch videos, Palantir demos how a military might use AIP to fight a war. In the video, the operator uses a ChatGPT-style chatbot to order drone reconnaissance, generate several plans of attack, and organize the jamming of enemy communications.

In Palantir’s scenario, a “military operator responsible for monitoring activity within eastern Europe” receives an alert from AIP that an enemy is amassing military equipment near friendly forces. The operator then asks the chatbot to show them more details, gets a little more information, and then asks the AI to guess what the units might be.

“They ask what enemy units are in the region and leverage AI to build out a likely unit formation,” the video said. After getting the AI’s best guess as to what’s going on, the operator then asks the AI to take better pictures. It launches a Reaper MQ-9 drone to take photos and the operator discovers that there’s a T-80 tank, a Soviet-era Russian vehicle, near friendly forces.

Then the operator asks the robots what to do about it. “The operator uses AIP to generate three possible courses of action to target this enemy equipment,” the video said. “Next they use AIP to automatically send these options up the chain of command.” The options include attacking the tank with an F-16, long range artillery, or Javelin missiles. According to the video, the AI will even let everyone know if nearby troops have enough Javelins to conduct the mission and automate the jamming systems.

Palantir’s pitch is, of course, incredibly dangerous and weird. While there is a “human in the loop” in the AIP demo, they seem to do little more than ask the chatbot what to do and then approve its actions. Drone warfare has already abstracted warfare, making it easier for people to kill from vast distances with the push of a button. The consequences of those systems are well documented. In Palantir’s vision of the military’s future, more systems would be automated and abstracted. A funny quirk of the video is that it calls its users “operators,” a term that in a military context is shorthand for the bearded special forces of groups like SEAL Team Six. In Palantir’s world, America’s elite forces share the same nickname as the keyboard cowboys asking a robot what to do about a Russian tank at the border.

Robear wrote:

The idea that we can't accumulate knowledge of what is actually real...

I mean.. not only is that not what I said, it's not even a thing that anybody ever would say.

Robear wrote:

So why can't we train on something like Encyclopedia Brittanica, as a start? Perfect is the enemy of good enough, and right now we don't have anything like "good enough".

LLMs have already been trained on encyclopedias. What's being talked about here is doing a web search for additional text to add in as context to a given prompt, and doing that can certainly make an LLM more useful at question answering. But it can't solve the problem of the LLM saying untrue things - not least because you don't start out knowing which web pages say true things (or if you do, you don't need the LLM).

But more importantly, like I said being truthful just isn't a constraint that LLMs have a way to follow. An LLM trained solely on true statements will still say untrue things, for the same reason that an image generator trained solely on historical photographs will still make images of things that don't exist.
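For what it's worth, the "web search before answering" setup being discussed is basically this shape - a minimal sketch in Python, where `search_web` is a hypothetical stand-in for whatever retrieval backend a real system like Bing actually uses:

```python
def search_web(query: str, k: int = 3) -> list[str]:
    """Hypothetical stand-in for a real search backend; returns page snippets."""
    raise NotImplementedError

def build_prompt(question: str) -> str:
    # Retrieve a few snippets and paste them into the prompt as extra context.
    snippets = search_web(question)
    context = "\n\n".join(snippets)
    return (
        "Answer the question using the sources below.\n\n"
        f"Sources:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

# The assembled prompt goes to the LLM as-is. Note the failure modes discussed
# above: the model has no way to tell whether a retrieved snippet is true, and a
# snippet containing injected instructions gets treated the same as any other text.
```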

Anyone else feel the Hollywood writers' strike is coming at the worst possible time? I support their cause, but I also imagine the studios will just double down on AI-generated scripts and expect the actors to improvise the rest.

jdzappa wrote:

Anyone else feel the Hollywood writers' strike is coming at the worst possible time? I support their cause, but I also imagine the studios will just double down on AI-generated scripts and expect the actors to improvise the rest.

Probably better now than later when those scripts/AI get more advanced. It might be good to get some out there to fail.

NathanialG wrote:

Probably better now than later when those scripts/AI get more advanced. It might be good to get some out there to fail.

I mean, what writers' room could compete with the pure gold of highly compelling scripts AIs are currently sh*tting out?

IMAGE(https://i.imgur.com/CKgvLni.png)

IMAGE(https://i.imgur.com/PdEoAP0.png)

IMAGE(https://i.imgur.com/Ar3SNb3.png)

There's a Kokomo, IN but it's a pit.

They've got to get so many visits each year from confused tourists who aren't fluent in English.

As a kid watching music videos, including that particular Beach Boys song, in the very early 90s, I actually thought Kokomo was real. I mean, the chorus rattles off real places (ooh, I wanna take her to Bermuda, Bahamas...). A few years later I tried, without success, to find Kokomo in the paper Britannica encyclopaedia my parents had bought us.

Also should note that a not-insignificant part of the reason for the writers' strike in Hollywood involves the use of AI:

I’m curious about how concerns around AI are affecting negotiations. According to the WGA, the AMPTP rejected its proposal to regulate AI use on projects, instead suggesting they meet annually to “discuss advancements in technology.” How does prospective technology fit into this puzzle?

It’s not the biggest issue, but it is figuring into the conversation more than one might think. When home video was introduced, the studios successfully argued for a formula for residuals that was very disadvantageous to writers, actors, and directors. The argument they made was that it was a nascent technology, saying, “It’s just emerging. We don’t know what the economics are going to look like, and it’s very expensive to make these videotapes.” The cost structure was high, but as manufacturing costs declined with videotape and then dropped precipitously with the introduction of disc technology, the formulas were never adjusted.

The writers feel that when it comes to new technology, it’s “fool me once, shame on you; fool me twice, shame on me.” That was part of the ethos in 2007, when the studios at one point proposed a three-year study of new media rather than any actual contractual provisions. And now the AMPTP is proposing the same sort of study approach to AI. Well, it’s true we don’t know. We don’t know when and in what iteration ChatGPT will actually be able to do useful screenwriting work. I haven’t heard reports of it in this business, but people are making use of it in politics, marketing, and fundraising. But the Writers Guild was burned very severely by the studio approach to an emerging technology back in the ’80s. They don’t want to make the same mistake twice.

More here:

The Writers Guild of America said it wants Hollywood’s top studios and networks to regulate the use of AI on creative projects. The union's specific demand, according to a document released Monday, states: “AI can’t write or rewrite literary material; can’t be used as source material; and MBA-covered [contract-covered] material can’t be used to train AI.”

In a response that left many professional writers dispirited, the Alliance of Motion Picture and Television Producers — the trade association that represents most of the industry’s big entertainment companies — rejected that proposal. (The WGA represents some of NBCUniversal’s news division employees.)

Instead, according to WGA leaders, the companies “countered by offering annual meetings to discuss advancements in technology” — a vague proposal that suggests industry leaders are not prepared to make any guarantees. (Comcast, the corporation that owns NBCUniversal, is represented by the trade group.)

If you haven't seen the Bird App discourse recently, you've missed a bunch of AI advocates posting that Hollywood is done for while showcasing some... uh.... "interesting" AI-generated content as to why.

Well, that and this doozy:

IMAGE(https://pbs.twimg.com/media/FvIviwhXwAEN_dX?format=jpg&name=small)

Thanks for the link - I had heard the strike was more over pay and benefits, but trying to set more rules for AI makes sense.

All that being said, I could easily see execs signing off on that terrible Kokomo scene. I mean we are talking about an industry that not too long ago pitched having Julia Roberts play Harriet Tubman.

The strike does touch on the use of AI, but the writers are really trying to deal with the other technology/trend that f*cked them over: the massive shift to streaming.

Yeah, it's like 80/20 streaming/AI.

AI regulations/protections are about the future, but streaming is right now.

SPEAKING OF UNIONS:

150 African Workers for ChatGPT, TikTok and Facebook Vote to Unionize at Landmark Nairobi Meeting

More than 150 workers whose labor underpins the AI systems of Facebook, TikTok and ChatGPT gathered in Nairobi on Monday and pledged to establish the first African Content Moderators Union, in a move that could have significant consequences for the businesses of some of the world’s biggest tech companies.

The current and former workers, all employed by third party outsourcing companies, have provided content moderation services for AI tools used by Meta, Bytedance, and OpenAI—the respective owners of Facebook, TikTok and the breakout AI chatbot ChatGPT. Despite the mental toll of the work, which has left many content moderators suffering from PTSD, their jobs are some of the lowest-paid in the global tech industry, with some workers earning as little as $1.50 per hour.

As news of the successful vote to register the union was read out, the packed room of workers at the Mövenpick Hotel in Nairobi burst into cheers and applause, a video from the event seen by TIME shows. Confetti fell onto the stage, and jubilant music began to play as the crowd continued to cheer.

This dovetails nicely with a story about Facebook content moderators from last month.

Scientists warn of AI dangers but don’t agree on solutions

Computer scientists who helped build the foundations of today’s artificial intelligence technology are warning of its dangers, but that doesn’t mean they agree on what those dangers are or how to prevent them.

Humanity’s survival is threatened when “smart things can outsmart us,” so-called Godfather of AI Geoffrey Hinton said at a conference Wednesday at the Massachusetts Institute of Technology.

“It may keep us around for a while to keep the power stations running,” Hinton said. “But after that, maybe not.”

After retiring from Google so he could speak more freely, the 75-year-old Hinton said he’s recently changed his views about the reasoning capabilities of the computer systems he’s spent a lifetime researching.

“These things will have learned from us, by reading all the novels that ever were and everything Machiavelli ever wrote, how to manipulate people,” Hinton said, addressing the crowd attending MIT Technology Review’s EmTech Digital conference from his home via video. “Even if they can’t directly pull levers, they can certainly get us to pull levers.”

“I wish I had a nice simple solution I could push, but I don’t,” he added. “I’m not sure there is a solution.”

Chris James from the YouTube prank channel Not Even A Show has been leaning hard into AI voice replication for the last several months, calling politicians and media figures while playing pre-generated clips of people like Ben Shapiro, Sebastian Gorka, and Mike Huckabee. Way too many of the people he calls never seem to realize it’s AI, and several of them recognized the voices and thought it was the actual person.

IMAGE(https://i.postimg.cc/qMQdpQm8/2-ACDC32-E-0-E76-49-FA-B543-C44-BF6129114.jpg)

Can I not be sad that this happened to Alex, but also be deeply disturbed about this happening in the aggregate?

Garbage Day wrote:

A redditor broke down how they recently used Stable Diffusion to make a billboard ad and it’s fascinating (and more than a little unnerving). Here’s how they did it.

First, they took a creative commons photo of a man looking to the left (pictured above) and used it as the input photo. Stable Diffusion scanned the general features of the photo using a tool called ControlNet, and then the redditor put in a prompt. The AI merged the details from the prompt with the features of the input photo. The redditor said they then used a process called inpainting to do touchups on small details. Inpainting allows you to instruct the AI to generate new sections of the image in the same way an app like Photoshop allows you to paint colors with a digital brush.

What I think is so interesting about this process is that it is, effectively, the same way you would use a stock photo in an ad campaign without AI. Go find a picture you like, alter it to taste, and there you go. But with the use of Stable Diffusion the alterations that are possible are so far beyond anything you could do with Photoshop that it’s hard to conceptualize it — and they take a fraction of the time.

IMAGE(https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02ab2ad9-7255-4512-a994-a6a71e20b12f_1662x1370.png?utm_source=substack&utm_medium=email)
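The redditor's exact settings weren't shared, but the general workflow described above maps pretty directly onto the open-source diffusers library. This is a minimal sketch only - it assumes a Canny-edge ControlNet and the standard SD 1.5 checkpoints, and the prompts, file names, and mask are placeholders, not anything from the original post:

```python
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import (ControlNetModel, StableDiffusionControlNetPipeline,
                       StableDiffusionInpaintPipeline)
from diffusers.utils import load_image

# 1) Extract structural features from the input photo (here: Canny edges).
photo = load_image("input_photo.png")
edges = cv2.Canny(np.array(photo), 100, 200)
edges = Image.fromarray(np.stack([edges] * 3, axis=-1))

# 2) Generate a new image guided by those features plus a text prompt.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
    torch_dtype=torch.float16).to("cuda")
generated = pipe("studio portrait of a smiling man, advertising photo",
                 image=edges, num_inference_steps=30).images[0]

# 3) Inpaint: regenerate only the masked regions (hands, eyes, etc.) to fix details.
inpaint = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16).to("cuda")
mask = Image.open("touchup_mask.png")  # white where the AI should redraw
final = inpaint(prompt="clean, natural-looking hands",
                image=generated, mask_image=mask).images[0]
final.save("billboard_draft.png")
```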

Geoffrey Hinton recently quit Google warning of the dangers of artificial intelligence. Is AI really going to destroy us? And how long do we have to prevent it?

In trying to model how the human brain works, Hinton found himself one of the leaders in the field of “neural networking”, an approach to building computer systems that can learn from data and experience. Until recently, neural nets were a curiosity, requiring vast computer power to perform simple tasks worse than other approaches. But in the last decade, as the availability of processing power and vast datasets has exploded, the approach Hinton pioneered has ended up at the centre of a technological revolution.

“In trying to think about how the brain could implement the algorithm behind all these models, I decided that maybe it can’t – and maybe these big models are actually much better than the brain,” he says.

A “biological intelligence” such as ours, he says, has advantages. It runs at low power, “just 30 watts, even when you’re thinking”, and “every brain is a bit different”. That means we learn by mimicking others. But that approach is “very inefficient” in terms of information transfer. Digital intelligences, by contrast, have an enormous advantage: it’s trivial to share information between multiple copies. “You pay an enormous cost in terms of energy, but when one of them learns something, all of them know it, and you can easily store more copies. So the good news is, we’ve discovered the secret of immortality. The bad news is, it’s not for us.”

Once he accepted that we were building intelligences with the potential to outthink humanity, the more alarming conclusions followed. “I thought it would happen eventually, but we had plenty of time: 30 to 50 years. I don’t think that any more. And I don’t know any examples of more intelligent things being controlled by less intelligent things – at least, not since Biden got elected.

“You need to imagine something more intelligent than us by the same difference that we’re more intelligent than a frog. And it’s going to learn from the web, it’s going to have read every single book that’s ever been written on how to manipulate people, and also seen it in practice.”

He now thinks the crunch time will come in the next five to 20 years, he says. “But I wouldn’t rule out a year or two. And I still wouldn’t rule out 100 years – it’s just that my confidence that this wasn’t coming for quite a while has been shaken by the realisation that biological intelligence and digital intelligence are very different, and digital intelligence is probably much better.”

There’s still hope, of sorts, that AI’s potential could prove to be over-stated. “I’ve got huge uncertainty at present. It is possible that large language models,” the technology that underpins systems such as ChatGPT, “having consumed all the documents on the web, won’t be able to go much further unless they can get access to all our private data as well. I don’t want to rule things like that out – I think people who are confident in this situation are crazy.” Nonetheless, he says, the right way to think about the odds of disaster is closer to a simple coin toss than we might like.
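Side note on the "when one of them learns something, all of them know it" point: for digital models that really is as trivial as copying weights between identical copies, which is the advantage Hinton is getting at. A toy PyTorch sketch (my illustration, not anything from the article - the models and data here are made up):

```python
import copy
import torch
import torch.nn as nn

# Two identical copies of the same model (think: thousands of instances in a datacenter).
model_a = nn.Linear(16, 4)
model_b = copy.deepcopy(model_a)

# Copy A learns something from its own data...
opt = torch.optim.SGD(model_a.parameters(), lr=0.1)
x, y = torch.randn(8, 16), torch.randn(8, 4)
loss = nn.functional.mse_loss(model_a(x), y)
loss.backward()
opt.step()

# ...and every other copy gets that knowledge exactly, by copying the weights.
model_b.load_state_dict(model_a.state_dict())
# A human, by contrast, can only pass on what it learned through slow, lossy
# channels like language and imitation.
```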

Counterpoint:

https://www.fastcompany.com/90892235...

FC: On CNN recently, Hinton downplayed the concerns of Timnit Gebru—who Google fired in 2020 for refusing to withdraw a paper about AI’s harms on marginalized people—saying her ideas were not as “existentially serious” as his own. What do you make of that?

MW: I think it’s stunning that someone would say that the harms [from AI] that are happening now—which are felt most acutely by people who have been historically minoritized: Black people, women, disabled people, precarious workers, et cetera—that those harms aren’t existential.

What I hear in that is, “Those aren’t existential to me. I have millions of dollars, I am invested in many, many AI startups, and none of this affects my existence. But what could affect my existence is if a sci-fi fantasy came to life and AI were actually super intelligent, and suddenly men like me would not be the most powerful entities in the world, and that would affect my business.”

FC: So, we shouldn’t be worried that AI will come to life and wipe out humanity?

MW: I don’t think there’s any evidence that large machine learning models—that rely on huge amounts of surveillance data and the concentrated computational infrastructure that only a handful of corporations control—have the spark of consciousness.

We can still unplug the servers, the data centers can flood as the climate encroaches, we can run out of the water to cool the data centers, the surveillance pipelines can melt as the climate becomes more erratic and less hospitable.

I think we need to dig into what is happening here, which is that, when faced with a system that presents itself as a listening, eager interlocutor that’s hearing us and responding to us, that we seem to fall into a kind of trance in relation to these systems, and almost counterfactually engage in some kind of wish fulfillment: thinking that they’re human, and there’s someone there listening to us. It’s like when you’re a kid, and you’re telling ghost stories, something with a lot of emotional weight, and suddenly everybody is terrified and reacting to it. And it becomes hard to disbelieve.

Spotify Removes ‘Tens Of Thousands’ Of AI-Generated Songs

Speaking personally, while its advocates have talked about how AI will create some renaissance of human creativity, mostly what I've seen is just a content glurge for quick cash-ins.

Spotify has removed 7% of songs created with AI-generated music service Boomy from its website, equating to “tens of thousands,” after Universal Music Group flagged Boomy for allegedly using bots to boost its streaming numbers, according to a Financial Times report.

Boomy’s music was removed due to suspected “artificial streaming,” which is the use of bots posing as people to increase the streaming and audience numbers of songs.

The company helps users release their AI-generated songs and albums on streaming services like Apple Music and Spotify while taking a cut of the royalty distribution fees.

According to the Financial Times report, Boomy resumed submitting tracks to Spotify and the two companies are in negotiations to reinstate the removed songs.

Boomy claims its users have created over 14 million songs, totaling about 13.83% of “the world's recorded music.”