News updates on the development and ramifications of AI. Obvious header joke is obvious.
WIRED article: "TIRED: Generative AI"
ChatGPT: "yo what the f*ck"
I’ve been using Procreate for over a decade, it’s a great piece of software.
Has anyone tried that Procreate animation tool? Dreams?
ruhk wrote: I’ve been using Procreate for over a decade, it’s a great piece of software.
Wish it was available on desktop. 'Til then, I remain a CSP boy.
Clip Studio is superior in many ways; Procreate’s primary strength is that it was designed for tablets and feels more natural with that interface. I’ve tried using CSP on both my iPad and Surface and it’s just a pain to use that way.
I remember that I had bought Procreate at some point for my iPad and figured I'd boot it up for the hell of it, only to find that, despite feeling like I'd seen my Apple Pencil in a drawer somewhere last week, I have no idea where the damned thing is. So now I'm taking 10 million photos of my apartment to feed into ChatGPT 8 so it can help me find the most likely lottery numbers to win big so I can buy a new iPad and Pencil. Wish me luck!
Cut to a news report about all of Kansas City burning down from the heat generated by training a model in the middle of summer on a botnet made up of shitty apartment smart thermostats.
Hank Green published a video about AI and copyright. A lot of the content has already been brought up here, but it's a good breakdown of what's going on with AI and YouTube today.
That trailer was already totally off-putting; it was horribly self-indulgent. Like Coppola had removed a couple of ribs to fellate himself, so seeing this is truly funny.
And boosts the Dead Internet Theory.
If you’re asking a student a question the machine can answer, just let the machine answer it, and everyone can move on with their lives.
Not my burner account, I swear.
Urban Wolfe being an AI explains why its initial reply was gibberish.
I guess whatever studios are still left in California are going to relocate to the nearest AI-friendly state.
Came across this and felt it warranted a post:
Why AI Isn't Going to Make Art
The computer scientist François Chollet has proposed the following distinction: skill is how well you perform at a task, while intelligence is how efficiently you gain new skills. I think this reflects our intuitions about human beings pretty well. Most people can learn a new skill given sufficient practice, but the faster the person picks up the skill, the more intelligent we think the person is. What’s interesting about this definition is that—unlike I.Q. tests—it’s also applicable to nonhuman entities; when a dog learns a new trick quickly, we consider that a sign of intelligence.

In 2019, researchers conducted an experiment in which they taught rats how to drive. They put the rats in little plastic containers with three copper-wire bars; when the rats put their paws on one of these bars, the container would either go forward, or turn left or turn right. The rats could see a plate of food on the other side of the room and tried to get their vehicles to go toward it. The researchers trained the rats for five minutes at a time, and after twenty-four practice sessions, the rats had become proficient at driving. Twenty-four trials were enough to master a task that no rat had likely ever encountered before in the evolutionary history of the species. I think that’s a good demonstration of intelligence.
Now consider the current A.I. programs that are widely acclaimed for their performance. AlphaZero, a program developed by Google’s DeepMind, plays chess better than any human player, but during its training it played forty-four million games, far more than any human can play in a lifetime. For it to master a new game, it will have to undergo a similarly enormous amount of training. By Chollet’s definition, programs like AlphaZero are highly skilled, but they aren’t particularly intelligent, because they aren’t efficient at gaining new skills. It is currently impossible to write a computer program capable of learning even a simple task in only twenty-four trials, if the programmer is not given information about the task beforehand.
Self-driving cars trained on millions of miles of driving can still crash into an overturned trailer truck, because such things are not commonly found in their training data, whereas humans taking their first driving class will know to stop. More than our ability to solve algebraic equations, our ability to cope with unfamiliar situations is a fundamental part of why we consider humans intelligent. Computers will not be able to replace humans until they acquire that type of competence, and that is still a long way off; for the time being, we’re just looking for jobs that can be done with turbocharged auto-complete.
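For what it's worth, Chollet's skill-vs-intelligence distinction is easy to make concrete. Here's a toy back-of-the-envelope sketch; the skill-per-trial ratio is my own crude simplification, not Chollet's formal metric, and "1.0 skill" just means "proficient at the task":

```python
# Toy illustration of Chollet's framing: skill is how well you perform,
# intelligence is how efficiently you acquired that skill. The ratio below
# is my own simplification, not the formal metric from Chollet's paper.

def acquisition_efficiency(final_skill: float, training_trials: int) -> float:
    """Skill gained per unit of experience -- a crude proxy for intelligence.

    Two learners with equal final skill can differ enormously here,
    which is exactly the point of the rat-vs-AlphaZero comparison.
    """
    return final_skill / training_trials

# Both learners end up roughly proficient (1.0) at their respective tasks,
# but with wildly different amounts of experience (figures from the quote above):
rat = acquisition_efficiency(1.0, 24)                 # driving, 24 practice sessions
alphazero = acquisition_efficiency(1.0, 44_000_000)   # chess, 44M self-play games

print(f"Rat:       {rat:.2e} skill/trial")            # 4.17e-02
print(f"AlphaZero: {alphazero:.2e} skill/trial")      # 2.27e-08
print(f"Sample-efficiency gap: ~{rat / alphazero:,.0f}x")  # ~1,833,333x
```

Same "skill" on paper, a factor of nearly two million in how efficiently it was acquired.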
Call me when it offers personal doxing of Neo-Nazis marching through neighborhoods...
Oprah presents: Foxes Guarding the Henhouse.
In surprisingly useful AI news, researchers have discovered that their tailor-made chatbot can reduce people's belief in conspiracy theories by up to 20%. It does so by finding out exactly why they believed in a given conspiracy theory and then debunking those specific claims.
Participants first answered a series of open-ended questions about the conspiracy theories they strongly believed and the evidence they relied upon to support those beliefs. The AI then produced a single-sentence summary of each belief, for example, "9/11 was an inside job because X, Y, and Z." Participants would rate the accuracy of that statement in terms of their own beliefs and then filled out a questionnaire about other conspiracies, their attitude toward trusted experts, AI, other people in society, and so forth.

Then it was time for the one-on-one dialogues with the chatbot, which the team programmed to be as persuasive as possible. The chatbot had also been fed the open-ended responses of the participants, which made it better able to tailor its counter-arguments to each individual. For example, if someone thought 9/11 was an inside job and cited as evidence the fact that jet fuel doesn't burn hot enough to melt steel, the chatbot might counter with, say, the NIST report showing that steel loses its strength at much lower temperatures, sufficient to weaken the towers' structures so that they collapsed. Someone who thought 9/11 was an inside job and cited demolitions as evidence would get a different response tailored to that.
Participants then answered the same set of questions after their dialogues with the chatbot, which lasted about eight minutes on average. Costello et al. found that these targeted dialogues resulted in a 20 percent decrease in the participants' misinformed beliefs—a reduction that persisted even two months later when participants were evaluated again.
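If you're wondering what that setup looks like mechanically, here's a rough sketch of the dialogue loop. To be clear, the prompt wording, the round count, and the chat() stand-in are my guesses, not the study's published code; the key design choice is just that the participant's own stated evidence gets fed into the system prompt, so the model rebuts those specific claims rather than the conspiracy theory in general:

```python
# Hypothetical sketch of a personalized debunking dialogue in the style of
# the study described above. All names, prompts, and the chat() helper are
# my assumptions; the actual implementation isn't given in the article.

def chat(system_prompt: str, history: list[dict]) -> str:
    """Stand-in for a call to whatever LLM chat API you have access to."""
    return "[persuasive, fact-based counter-argument generated here]"

def debunking_dialogue(belief_summary: str, cited_evidence: str,
                       rounds: int = 3) -> list[dict]:
    """Run a short dialogue targeting the participant's own stated evidence.

    belief_summary: the one-sentence restatement the participant rated,
                    e.g. "9/11 was an inside job because X, Y, and Z."
    cited_evidence: the specific claims they offered in support, so the
                    rebuttals address those claims directly.
    """
    system_prompt = (
        "Persuade the user, as effectively as you can, that this belief is "
        f"mistaken: {belief_summary} They cite this evidence: {cited_evidence} "
        "Rebut their specific claims with accurate, verifiable counter-evidence."
    )
    history: list[dict] = []
    for _ in range(rounds):
        reply = chat(system_prompt, history)
        history.append({"role": "assistant", "content": reply})
        history.append({"role": "user", "content": input("> ")})  # participant replies
    return history
```

An eight-minute conversation works out to only a few rounds like this, which makes the persistence of the effect two months later all the more striking.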