[News] The AI Thread!

News updates on the development and ramifications of AI. Obvious header joke is obvious.

Humane warns AI Pin owners to ‘immediately’ stop using its charging case

Humane is telling AI Pin owners today that they should “immediately” stop using the charging case that came with its AI gadget. There are issues with a third-party battery cell that “may pose a fire safety risk,” the company wrote in an email to customers (including The Verge’s David Pierce, who reviewed it when it came out).

Humane says it has “disqualified” that vendor and is moving to find another supplier. It also specified that the AI Pin itself, the magnetic Battery Booster, and its charging pad are “not affected.” As recompense, the company is offering two free months of its subscription service, which is required for most of its functionality. The company didn’t say if it will offer a replacement charging case, only that it “will share additional information” after investigating the issue.

The company seemingly hasn’t communicated the battery issue otherwise, either on its website or its X account.

Charging $699 for a pin that does not work and potentially explodes is a pretty succinct summary of tech in the last year.
Prederick wrote:
Charging $699 for a pin that does not work and potentially explodes is a pretty succinct summary of tech in the last year.

Skynet is just taking baby steps.

Prederick wrote:
Charging $699 for a pin that does not work and potentially explodes is a pretty succinct summary of tech in the last year.

I mean, at least it finally does something.

A social app for creatives, Cara grew from 40k to 650k users in a week because artists are fed up with Meta’s AI policies

Artists have finally had enough of Meta’s predatory AI policies, but Meta’s loss is Cara’s gain. An artist-run, anti-AI social platform, Cara has grown from 40,000 to 650,000 users within the last week, catapulting it to the top of the App Store charts.

Instagram is a necessity for many artists, who use the platform to promote their work and solicit paying clients. But Meta is using public posts to train its generative AI systems, and only European users can opt out, since they’re protected by GDPR laws. Generative AI has become so front-and-center on Meta’s apps that artists reached their breaking point.

“When you put [AI] so much in their face, and then give them the option to opt out, but then increase the friction to opt out… I think that increases their anger level — like, okay now I’ve really had enough,” Jingna Zhang, a renowned photographer and founder of Cara, told TechCrunch.

Cara, which has both a web and mobile app, is like a combination of Instagram and X, but built specifically for artists. On your profile, you can host a portfolio of work, but you can also post updates to your feed as on any other microblogging site.

Zhang is perfectly positioned to helm an artist-centric social network, where artists can post without the risk of becoming part of a training dataset for AI. Zhang has fought on behalf of artists, recently winning an appeal in a Luxembourg court against a painter who copied one of her photographs, which she shot for Harper’s Bazaar Vietnam.

“Using a different medium was irrelevant. My work being ‘available online’ was irrelevant. Consent was necessary,” Zhang wrote on X.

Zhang and three other artists are also suing Google for allegedly using their copyrighted work to train Imagen, an AI image generator. She’s also a plaintiff in a similar lawsuit against Stability AI, Midjourney, DeviantArt and Runway AI.

“Words can’t describe how dehumanizing it is to see my name used 20,000+ times in MidJourney,” she wrote in an Instagram post. “My life’s work and who I am—reduced to meaningless fodder for a commercial image slot machine.”

Artists are so resistant to AI because the training data behind many of these image generators includes their work without their consent. These models amass such a large swath of artwork by scraping the internet for images, without regard for whether or not those images are copyrighted. It’s a slap in the face for artists – not only are their jobs endangered by AI, but that same AI is often powered by their work.

“When it comes to art, unfortunately, we just come from a fundamentally different perspective and point of view, because on the tech side, you have this strong history of open source, and people are just thinking like, well, you put it out there, so it’s for people to use,” Zhang said. “For artists, it’s a part of our selves and our identity. I would not want my best friend to make a manipulation of my work without asking me. There’s a nuance to how we see things, but I don’t think people understand that the art we do is not a product.”

This commitment to protecting artists from copyright infringement extends to Cara, which partners with the University of Chicago’s Glaze project. Artists who manually apply Glaze to their work on Cara get an added layer of protection against being scraped for AI training.

Other projects have also stepped up to defend artists. Spawning AI, an artist-led company, has created an API that allows artists to remove their work from popular datasets. But that opt-out only works if the companies that use those datasets honor artists’ requests. So far, HuggingFace and Stability have agreed to respect Spawning’s Do Not Train registry, but artists’ work cannot be retroactively removed from models that have already been trained.
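To make concrete what "honoring" an opt-out registry means in practice: nothing in a scraping pipeline enforces it, so the dataset builder has to check each work against the registry and drop any matches before training. Below is a minimal, purely illustrative sketch in Python; the registry is stubbed as a local set of URLs because Spawning's actual API isn't described in the article, so the names and data shapes here are assumptions for illustration only.

```python
# Illustrative sketch only: how a dataset builder might honor a "Do Not Train"
# style registry before assembling training data. The registry is stubbed as a
# local set; a real pipeline would query the registry service instead.

def filter_opted_out(image_urls: list[str], do_not_train: set[str]) -> list[str]:
    """Keep only URLs whose creators have not opted out of AI training."""
    return [url for url in image_urls if url not in do_not_train]

if __name__ == "__main__":
    # Hypothetical registry contents and crawl results.
    registry = {"https://example.com/artworks/portrait-001.jpg"}
    crawled = [
        "https://example.com/artworks/portrait-001.jpg",
        "https://example.com/artworks/portrait-002.jpg",
    ]
    print(filter_opted_out(crawled, registry))
    # -> ['https://example.com/artworks/portrait-002.jpg']
```

This also shows why the article's caveat matters: a scraper that never runs a check like this simply ingests everything, and opting out today does nothing to remove work from models that were already trained on it.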

“I think there is this clash between backgrounds and expectations on what we put on the internet,” Zhang said. “For artists, we want to share our work with the world. We put it online, and we don’t charge people to view this piece of work, but it doesn’t mean that we give up our copyright, or any ownership of our work.”

An avid Go player and fan, Zhang learned about the potential of AI eight years ago, when Google’s AlphaGo system defeated Lee Sedol, one of the best players in the world.

“We will never have the same experience as pre-AlphaGo,” Zhang said. “The beauty and the mystery of Go was that you wanted to see how far and how interesting a human’s play could be. Now, the highest achievement would be if you can defeat an AI.”

But what’s more depressing is that in a recent interview with Google, Sedol said that he might not have become a professional Go player if AlphaGo had existed in his youth.

In a blog post, Zhang explained, “Lee Sedol made so much of Go history and was an icon of our time, a role model for me. So to see him say that if he were to choose again, he wouldn’t become a pro—because of AI. Words can’t adequately describe how heartbroken I feel to hear this.”

But because of Zhang’s interest in Go, she had a head start in thinking about how AI would impact her career as an artist.

Cara isn’t Zhang’s first attempt at building an artist-friendly social network. But aside from the good timing, she thinks Cara stands the best chance at longevity because she herself has grown as a founder. From managing an esports team to attending Stanford’s Ignite program, she learned how to work in a group.

“I think it’s experience and maturity. You get to learn from all of your previous experiences,” she said. “For me, I was a national athlete for Singapore and then a photographer, and both times I have done really well in the specific fields I’ve chosen, but they’re very individually driven — you just have to be very, very good yourself. Let’s say, my teamwork was not the best.”

Now, Cara is having its breakthrough moment. But this explosion in popularity doesn’t come without conflict.

Founded in late 2022, Cara is fully bootstrapped, and much of its engineering support comes from volunteers. Any company would struggle with an unexpected 1525% increase in users, let alone one that’s operating with such a small team.

On Wednesday, Zhang opened her email to find a horrible shock: her bill from Vercel, the web hosting company Cara uses, came to $96,280 for the past week alone. After she posted about the bill on X, Vercel’s vice president of product Lee Robinson replied publicly, claiming that his team had tried to reach out ahead of time – but Zhang was so swamped by the platform’s rapid growth that she missed Vercel’s emails.

“The team and I are standing by, ready to work with you to ensure your app is running as efficiently as possible on our infra,” Robinson wrote to Zhang on X. But it’s unclear how this issue will pan out, and if it could put Cara on life support.

Zhang told TechCrunch that she hasn’t sought out venture funding because she doesn’t want to have to answer to outside investors – and it can’t be easy to find an angel investor who’s committed to supporting the interests of artists.

The next few weeks could be make-or-break for Cara, but at least Zhang has a community of like-minded artists on her side.

“Building a product is a bit like making art,” she said. “I think you just make something that you like as a person, and know not everyone will love it. But some people who have the same point of view, they would, and then you can grow your community from there.”

IMAGE(https://i.imgflip.com/8t08df.jpg)

Microsoft 'recalls' screenshot feature after outcry

Microsoft is making changes to a controversial feature announced for its new range of PCs powered by artificial intelligence after it was flagged as a potential "privacy nightmare".

The company billed the "Recall" feature for Copilot+ as a way to make users' lives easier by capturing and storing screenshots of their desktop activity.

But after people claimed hackers might be able to misuse the tool and its saved screenshots, Microsoft is making the feature opt-in instead.

The Information Commissioner's Office (ICO), the UK's data watchdog, had told the BBC it was "making enquiries" with Microsoft about the tool after concerns were raised.

"We have heard a clear signal that we can make it easier for people to choose to enable Recall on their Copilot+ PC and improve privacy and security safeguards," said Pavan Davuluri, corporate vice president of Windows and devices, on Friday.

Mr Davuluri shared the update in a blog post.

The "Recall" tool featured prominently during the unveiling of Microsoft's new PCs at its developer conference in May. The company is counting on artificial intelligence (AI) to drive demand for its devices.
Executive vice president Yusuf Medhi said during the event's keynote speech that the feature used AI "to make it possible to access virtually anything you have ever seen on your PC" and likened it to having photographic memory.

The feature can search through a user's past activity, including their files, photos, emails and browsing history.

While searching files, photos and browsing history is something lots of other devices already do, the tool would also take screenshots every few seconds and make those searchable too.

Microsoft said it "built privacy into Recall’s design" from the outset and users would have control over what was captured - such as by opting out of capturing certain websites or not capturing private browsing on Microsoft's browser, Edge.

It said changes to the feature would give people a "clearer choice" to opt in to saving screenshots during set-up of the PCs, and would otherwise be turned off by default.

Users will also be required to use Windows' "Hello" authentication process to enable the tool, and to provide "proof of presence" if they want to view or search their timeline of saved activity in Recall.

The updates will be implemented before Copilot+ PCs launch on 18 June.

That feature is utterly deranged. No-one asked for this. It doesn't solve any problems - unless you're an abuser that needs to make sure your victim does not reach out for help. Microsoft! We make sure the abused stay abused!

It feels like the tech industry has reached a plateau, and is incapable of dealing with it. Instead it reaches for whatever dumb f*cking bezzle that promises the growth will keep going, and makes everyone's lives so much worse in the process.

Alien Love Gardener wrote:

It feels like the tech industry has reached a plateau, and is incapable of dealing with it. Instead it reaches for whatever dumb f*cking bezzle that promises the growth will keep going, and makes everyone's lives so much worse in the process.

That’s essentially the primary argument behind Ed Zitron’s Rot Economy. Perverse financial incentives have caused most of the tech industry to get stuck in a spiral of pushing products no one wants, needs, or asked for because it causes the line to go up just long enough to skim money off the top.

Alien Love Gardener wrote:

It feels like the tech industry has reached a plateau, and is incapable of dealing with it. Instead it reaches for whatever dumb f*cking bezzle that promises the growth will keep going, and makes everyone's lives so much worse in the process.

The geniuses behind the Humane AI pin had a company policy against talking negatively about the product, and even used it to fire a software engineer who questioned whether it'd be ready for launch. Kind of hard to make a worthwhile product if you fire people for trying to anticipate criticisms it may face. Makes me wonder if the team behind Microsoft Recall had a similar policy, even if it was an unofficial one. Humane also repeatedly ignored their employees' requests to put someone in charge of marketing until after the product had already launched. Their main plan was, and still appears to be, to sell their company to HP for more than a billion dollars. They've only had about $7 million in revenue, and I can't imagine they'll get much more given the horrendous launch they've had, but I'm sure they've also got a policy against talking about that too.

Concern rises over AI in adult entertainment

Later this month, people in Berlin will be able to book an hour with an AI sex doll as the world’s first cyber brothel rolls out the service following a test phase.

Customers will be able to interact verbally with the AI dolls as well as physically.

“Many people feel more comfortable sharing private matters with a machine because it doesn’t judge,” says Philipp Fussenegger, founder and owner of Cybrothel.

“Previously, there was significant interest in a doll with a voice actress, where users could only hear the voice and interact with the doll. Now, there is an even greater demand for interacting with artificial intelligence.”

It's just one of many ways that generative AI is being used by the adult entertainment business.

Analysis by SplitMetrics revealed that AI companion apps reached 225 million downloads in the Google Play Store.

“I would expect more app developers to take note of this trend and look at ways this category can be further innovated and monetised,” said SplitMetrics general manager Thomas Kriebernegg.

AI companions can be lucrative, says Misha Rykov, privacy researcher with Mozilla’s Privacy Not Included guide.

“Given that most of the chatbots are charging fees, and the core technology has been developed elsewhere [such as OpenAI], it looks like a high-margin business. Also, these apps collect personal data and often share it with third parties like advertisers - a tried and true business model.”

But the merger of AI and the adult entertainment business has set off alarm bells.

One problem lies in the bias inherent in generative AI, which produces new content based on the data on which it has been trained.

There is a risk that retrograde gender stereotypes about sex and pleasure get encoded into sex chatbots, says Dr Kerry McInerney, senior research fellow at the Leverhulme Centre for the Future of Intelligence, at the University of Cambridge.

“It's crucial that we understand what kinds of data sets are used to train sex chatbots, otherwise we risk replicating ideas about sex that demean female pleasure and ignore sex that exists outside of heterosexual intercourse.”

IMAGE(https://media0.giphy.com/media/xlSSXjAeu6N7qR5Zpv/giphy.gif?cid=6c09b952jbb9xwjjaumlajavuinq7teu34x8e549wear95r0&ep=v1_gifs_search&rid=giphy.gif&ct=g)

Given how porn is very clearly consumed on the internet, I think it's entirely fair to say that engaging in "retrograde gender stereotypes about sex and pleasure" is explicitly why some people will use these services.

There is also the risk of addiction, says Mr Rykov, who notes that AI chatbots target lonely people, notably men.

“Most of the AI chatbots we reviewed have high addictive potential and several potential harms, especially to users with mental health challenges.”

Mozilla has added content warnings to several AI chatbots “as we found themes of abuse, violence, and underage relationships,” says Mr Rykov.

He also raised the issue of privacy. Partnership chatbots are designed to collect “an unprecedented amount of personal data”.

Mr Rykov adds that 90% of apps reviewed by Mozilla “may share or sell personal data”, while more than half of the apps won’t let users delete personal data.

Others warn about the possible danger such AI could have on real-world relationships.

Tamara Hoyton, senior practice consultant at the counselling service Relate, points out: “Some difficulties may come about if real encounters are profoundly disappointing because they don't match up to the strictly defined requirements that users experience in AI porn.”

Ms Hoyton adds that, in some cases, AI porn could take users into dangerous areas.

“There is nothing wrong with a bit of fantasy, and many people get aroused by thoughts that they have absolutely no intention of acting on; AI porn might be seen like this.

“If it's crossed over into an assumption of consent for example, a sense of entitlement, or that everyone will be what turns you on, based on the user’s experience of the compliance of the AI object, then it's an issue.”

Companies using AI within the adult entertainment industry acknowledge that there is a need for caution, but maintain that AI has an important role to play.

Philipp Hamburger, head of AI at Lovehoney, says the company is aiming “to enhance the sexual experience of its customers, rather than replace it, which is an important line to draw".

Others believe AI will have a positive effect on the sector. Ruben Cruz is the co-founder of Barcelona-based The Clueless Agency, which created one of the first AI influencers, Aitana Lopez.

He points out that the sex industry will always exist, and AI can help mitigate ethical concerns by ensuring that the content is not created using real people.

“This shift aims to ensure that no person, male or female, has to be explicitly sexualized in the future.”

It always confuses me how people can make product decisions that are obviously dumb, yet still make them. One company I worked for charged $8 for download insurance, basically you would download our software when you bought it, but if you didn't pay the $8, if you lost the installer, you would have to buy the product again at full price. I remember being in a meeting where we were talking about adding it to another product and I challenged us to stop charging for it. The room went silent. I was told that we make too much money from it so we couldn't get rid of it. Thankfully me speaking up got a fire going under some people and we got rid of it by the next release. It would have really sucked if I lost my job over doing the right thing.

kazar wrote:

One company I worked for charged $8 for download insurance, basically you would download our software when you bought it, but if you didn't pay the $8, if you lost the installer, you would have to buy the product again at full price.

TheGameguru wrote:

The very nature of capitalism means that you have to create scarcity even if it’s artificial.

*Legion* wrote:
kazar wrote:

One company I worked for charged $8 for download insurance, basically you would download our software when you bought it, but if you didn't pay the $8, if you lost the installer, you would have to buy the product again at full price.

TheGameguru wrote:

The very nature of capitalism means that you have to create scarcity even if it’s artificial.

It is the byproduct of everything human. Socialism is just as good at creating scarcity, even artificial scarcity, especially when people are involved. I don't disagree that scarcity and even artificial scarcity is a byproduct of capitalism, but I don't agree that it is in its nature. In fact, my story proved it as we got rid of the scarcity and found better ways to replace that revenue, all within "capitalism". The reason I shared the story is that the idea that you would be fired for speaking truth is wrong. Thankfully my company didn't go down that road, again not all companies are outright evil.

kazar wrote:

The reason I shared the story is that the idea that you would be fired for speaking truth is wrong. Thankfully my company didn't go down that road, again not all companies are outright evil.

I don’t see your story as evidence that non-evil companies exist. Getting them to be less evil than they could have been is certainly still a win, but it's a great example of an orphan crushing machine story, since it glosses over that they could only be convinced to stop charging for "download insurance" because they were running the "download insurance" scam in the first place.

Interestingly enough, I’ve seen the most chatter about Cara on Threads. Though I think that might be a side effect of the fact that, best as I can tell, the biggest community on Threads is aggrieved Instagram creators who finally have a non-image-based platform on which to complain about Meta.

There is nothing particularly special about Cara as a platform; it’s your standard art-hosting site. But it’s blowing up right now because it has adopted a very strong anti-AI policy. I’ve also seen some Instagram artists say they’re moving to Cara to get away from the platform mechanics that decide how art is shared elsewhere.

Though Cara’s new-found popularity has come with some downsides. First, their server bill is, reportedly, through the roof. Second, the site’s new users, who both hate AI and also don’t seem to fully understand how AI works and definitely don’t understand that AI services are not a monolith, started spreading rumors that Cara was lying about being anti-AI. This all stems from the fact that Cara uses a third-party service called Hive, which provides AI-based moderation for spam.

Yeah, that tracks. Not Cara, I mean the freakout.

Apple Will Add ChatGPT to Siri, iPhone and Other Platforms for Free

Apple is stuffing OpenAI’s ChatGPT into a wide range of its products and platforms, including the Siri smart assistant, as the tech company looks to stay competitive in Silicon Valley’s AI race.

The company said it is integrating ChatGPT, the artificial-intelligence chatbot, into experiences within iOS 18, iPadOS 18 and macOS Sequoia. Users of Apple’s products will be able to access ChatGPT’s text-based answers as well as its image- and document-processing capabilities without needing to jump between tools. The ChatGPT features are part of Apple Intelligence, a series of AI initiatives and product developments unveiled Monday at the company’s Worldwide Developers Conference.

Siri can tap into ChatGPT, too, although Apple said users will be prompted to confirm their requests before any questions, documents or photos are sent to ChatGPT.

Additionally, ChatGPT will be available in Apple’s systemwide Writing Tools. With Compose, users can also access ChatGPT image tools to generate images in “a wide variety of styles” to complement what they are writing.

“Apple Intelligence will transform what users can do with our products — and what our products can do for our users,” Apple chief Tim Cook said in launching the features.

Fake beauty queens charm judges at the Miss AI pageant

Beauty pageant contestants have always been judged by their looks, and, in recent decades, by their do-gooderly deeds and winning personalities.

Still, one thing that’s remained consistent throughout beauty pageant history is that you had to be a human to enter.

But now that’s changing.

Models created using generative artificial intelligence (AI) are competing in the inaugural “Miss AI” pageant this month.

The contestants have no physical, real-world presence. They exist only on social media, primarily Instagram, in the form of photorealistic images of extremely beautiful young women — all of it created using a combination of off-the-shelf and proprietary AI technology.

Some of the characters can also be seen talking and moving in videos. And they share their "thoughts" and news about their "lives" mostly through accompanying text on social media posts.

In one video, Kenza Layli, created by a team from Morocco, speaks in Arabic about how happy she is to have been selected as one of the finalists for Miss AI.

"I am proud to receive this nomination after only existing for five months, especially since this invention is Arab and Moroccan 100%," the AI model said.

In another, the Brazilian entry, Ailya Lou, lip-synchs and bops around to a song.

Even though these beauty queens are not real women, there is a real cash prize of $5,000 for the winner. The company behind the event, the U.K.-based online creator platform FanVue, is also offering public relations and mentorship perks to the top-placed entry as well as to two runners-up.

According to a statement from the organizer, a panel of four judges selected 10 finalists from 1,500 submissions. This is the first of a series of contests for AI content creators that FanVue is launching under "The FanVue World AI Creator Awards" umbrella. The results for Miss AI will be announced at the end of June.

"What the awards have done is uncover creators none of us were aware of," said FanVue co-founder Will Monange in the statement. "And that's the beauty of the AI creator space: It's enabling creative people to enter the creator economy with their AI-generated creations without having to be the face themselves."

I wonder how the dudes behind Lil Miquela feel about all of this.

EDIT:

Mohammad Talha Saray is a member of the team in Ankara, Turkey, that created one of the Miss AI finalists, the red-haired, green-eyed Seren Ay. He said they came up with the AI model five or six months ago as a brand ambassador for their jewelry e-commerce company because the human influencers they approached cost too much money and were too demanding. Saray said his AI avatar is cheaper, more flexible and doesn’t talk back.

"With the AI, there's no limit," Saray told NPR. "You can just do whatever you want. Like, if you want to just do something on the moon or on the sun, whatever you want, you can just do it — all with your imagination."

Saray said his jewelry business has grown tenfold since Seren Ay came on board. Her social media videos garner millions of views.

"Our goal for Seren Ay is to position her as a globally recognized and beloved digital influencer," said Saray. "Winning the Miss AI competition will be a significant step toward achieving these goals, allowing us to reach a wider audience and seize more collaboration opportunities."

He said AI influencers do not have the ability to move people as much as their human counterparts can.

"People are always going to know that it's an artificial intelligence," Saray said.

Yet he said he's constantly astonished by the number of people commenting on Seren Ay's posts on Instagram who seem to mistake the AI character for a real human being.

"People say they have feelings for Seren AI," said Saray. "They're congratulating her. They're saying they hope she wins the prize."

Our only truly renewable energy source is terminally horny men in Instagram comments.

Good to know I can rely on Captain Dipsh*t to say something stupid about.... everything.

Elon Musk says he will ban Apple devices if it integrates OS with OpenAI

Elon Musk said on Monday he would ban Apple devices at his companies if the iPhone maker integrates OpenAI at the OS level.

"That is an unacceptable security violation," the Tesla CEO said on a post on X.

"And visitors will have to check their Apple devices at the door, where they will be stored in a Faraday cage," he said on the social media platform.

Earlier in the day, Apple announced a slew of AI features across its apps and operating platforms and a partnership with OpenAI to bring the ChatGPT technology to its devices.

Apple said it had built AI with privacy "at the core" and it would use a combination of on-device processing and cloud computing to power those features.

Boy does Elon make me feel so much better about Bezos being the guy ultimately signing my paychecks.

Sure, I'm grist for a billionaire's nonsense space mill, but at least it's not the stupidest billionaire.

"And visitors will have to check their Apple devices at the door, where they will be stored in a Faraday cage," he said on the social media platform.
Jonman wrote:

Boy does Elon make me feel so much better about Bezos being the guy ultimately signing my paychecks.

Sure, I'm grist for a billionaire's nonsense space mill, but at least it's not the stupidest billionaire.

This is why I’m cheering for Project Kuiper. I want an LEO satellite Internet connection as my backup ISP, but I absolutely will not give the E-diot money for Starlink.

*Legion* wrote:

This is why I’m cheering for Project Kuiper. I want an LEO satellite Internet connection as my backup ISP, but I absolutely will not give the E-diot money for Starlink.

I got some bad news for you, buddy. They let this idiot here work on the upper stage of the launch vehicle that's supposed to be launching most of them, so it's clearly going to blow up.

*Legion* wrote:
Jonman wrote:

Boy does Elon make me feel so much better about Bezos being the guy ultimately signing my paychecks.

Sure, I'm grist for a billionaire's nonsense space mill, but at least it's not the stupidest billionaire.

This is why I’m cheering for Project Kuiper. I want an LEO satellite Internet connection as my backup ISP, but I absolutely will not give the E-diot money for Starlink.

There are a lot of telcos around the world cheering them on. An entire industry that wants to avoid dealing with his nonsense.

I swear to God, I am not making this tweet up:

@spectatorindex wrote:

BREAKING: Elon Musk says he might need to develop a Grok phone if 'Apple actually integrates woke nanny AI spyware into their OS'

Reminder that he was originally a board member of OpenAI, tried to take control of the company, was voted down, took his ball and went home.

Jonman wrote:
*Legion* wrote:

This is why I’m cheering for Project Kuiper. I want an LEO satellite Internet connection as my backup ISP, but I absolutely will not give the E-diot money for Starlink.

I got some bad news for you, buddy. They let this idiot here work on the upper stage of the launch vehicle that's supposed to be launching most of them, so it's clearly going to blow up.

No, my satellite Internet!

IMAGE(https://media.giphy.com/media/v1.Y2lkPTc5MGI3NjExazB3NG5qMGI5eHRpb3FlM2lpbDdnaWF4Y3p1bm9xamdoYzV4MnhldiZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/p3hJFHDLKhTd1NsqzP/giphy.gif)

Elon Musk abruptly withdraws lawsuit against Sam Altman and OpenAI

Elon Musk has moved to dismiss his lawsuit accusing ChatGPT maker OpenAI and its CEO Sam Altman of abandoning the startup’s original mission of developing artificial intelligence for the benefit of humanity.

Musk launched the suit against Altman in February, and the case had been slowly working its way through the California court system. There was no indication until Tuesday that Musk planned to drop the suit; only a month ago, his lawyers filed a challenge that forced the judge hearing the case to remove himself.

Musk’s request for a dismissal gave no reason for the decision. A San Francisco superior court judge was scheduled on Wednesday to hear Altman and OpenAI’s argument for throwing the case out.

The dismissal is an abrupt end to a legal battle between two of the tech world’s most powerful men. Musk and Altman co-founded OpenAI in 2015, but Musk left the board three years later during a struggle over control of the company and its direction. As Altman’s star has risen in recent years, the two have become increasingly hostile to each other.

Musk’s suit revolved around his claim that Altman and OpenAI breached what he referred to as the company’s “founding agreement” to work for the betterment of humanity. He alleged that OpenAI’s pivot to become a largely for-profit entity that partnered with Microsoft and did not share its technology with the public constituted a breach of that agreement.

OpenAI and Altman vehemently denied any wrongdoing, stating that there was no such “founding agreement” and releasing messages that appeared to show Musk supported becoming a for-profit company. OpenAI and Altman also posted a blog in March that essentially accused Musk of professional jealousy, saying “we’re sad that it’s come to this with someone whom we’ve deeply admired”.

Musk’s suit drew skepticism from legal experts who argued that certain claims in the filing – such as that OpenAI had created artificial intelligence on a level that could match human intelligence – did not hold up to scrutiny.

He's such a moron. He was suing for breach of contract, when THERE WAS NO CONTRACT!

IMAGE(https://i.ibb.co/nbDxpZy/MuskBurn.png)