[News] The AI Thread!

News updates on the development and ramifications of AI. Obvious header joke is obvious.

It sounds like they won't need to buy OpenAI. They're getting Altman, and something like 2/3rds of OpenAI's employees want to follow him. I don't know that OpenAI will have anything left that they'd need to buy at this point.

It is confusing, but OpenAI Inc. is the non-profit that owns OpenAI LLC, which is the capped-profit entity you are mentioning (at least that's my interpretation). I think it still makes things muddy for things like acquisitions.

AI-generated episode of legendary Black Jack manga series unveiled

A project to create a new episode of Japanese manga legend Tezuka Osamu's Black Jack series with the help of generative AI has been completed. It will debut in a magazine that goes on sale Wednesday.

The TEZUKA 2023 project kicked off in June to commemorate the 50th anniversary of the Black Jack series, which centers on a genius surgeon who practices medicine without a license.

It was led by Tezka Macoto, the late manga artist's son. He and Keio University professor Kurihara Satoshi — an artificial intelligence expert — held a news conference at Keio University in Tokyo on Monday.

There they unveiled the first five pages of the new 32-page medical drama.

The story begins with Black Jack and his assistant Pinoko visiting a company that makes cutting-edge prosthetics and artificial organs. There they meet a patient named Maria who has been given an artificial heart and is bedridden with multiple tubes attached to her. Black Jack realizes a serious problem exists with her transplant and gets to work solving it.

Tezka says, "It's a very contemporary theme, and a work that is very Black Jack in the way that it takes on problems that will happen in the future." He adds, "I think my father would have wanted to use AI. He always said he wished he had more hands so he could draw as much as he wanted to."

/shrug emoji

Maybe the first, but definitely not the last.

The fact that OpenAI is sort of a non-profit doesn't tell me anything other than that the non-profit designation is easily exploitable and kind of meaningless. Not to say there's no such thing as a good non-profit, but I don't think this is it.

It's more complicated than that. OpenAI Inc. is a non-profit, whose papers explicitly say its fiduciary duty is to humanity rather than investors or customers. But a few years ago it created a for-profit subsidiary called OpenAI LLC, basically in order to raise funding and hire more people. The latter is what operates ChatGPT, employs tons of people, partnered with Microsoft, etc., and that's what Altman was the CEO of. But the board of directors that removed and then fired him sits on the non-profit, not the LLC.

Given that odd structure, a lot of people assume that the whole coup thing happened because the board was idealistic and rejected Altman's chasing profits or ignoring safety. But there are some good reasons to doubt that, and considering the board still hasn't told anyone their reasons, it's all just speculation.

As A.I.-Controlled Killer Drones Become Reality, Nations Debate Limits

(NYT Paywall)

I genuinely was joking with the header image. Lo and behold!

It seems like something out of science fiction: swarms of killer robots that hunt down targets on their own and are capable of flying in for the kill without any human signing off.

But it is approaching reality as the United States, China and a handful of other nations make rapid progress in developing and deploying new technology that has the potential to reshape the nature of warfare by turning life and death decisions over to autonomous drones equipped with artificial intelligence programs.

That prospect is so worrying to many other governments that they are trying to focus attention on it with proposals at the United Nations to impose legally binding rules on the use of what militaries call lethal autonomous weapons.

“This is really one of the most significant inflection points for humanity,” Alexander Kmentt, Austria’s chief negotiator on the issue, said in an interview. “What’s the role of human beings in the use of force — it’s an absolutely fundamental security issue, a legal issue and an ethical issue.”

But while the U.N. is providing a platform for governments to express their concerns, the process seems unlikely to yield substantive new legally binding restrictions. The United States, Russia, Australia, Israel and others have all argued that no new international law is needed for now, while China wants to define any legal limit so narrowly that it would have little practical effect, arms control advocates say.

The result has been to tie the debate up in a procedural knot with little chance of progress on a legally binding mandate anytime soon.

“We do not see that it is really the right time,” Konstantin Vorontsov, the deputy head of the Russian delegation to the United Nations, told diplomats who were packed into a basement conference room recently at the U.N. headquarters in New York.

The debate over the risks of artificial intelligence has drawn new attention in recent days with the battle over control of OpenAI, perhaps the world’s leading A.I. company, whose leaders appeared split over whether the firm is taking sufficient account of the dangers of the technology. And last week, officials from China and the United States discussed a related issue: potential limits on the use of A.I. in decisions about deploying nuclear weapons.

Against that backdrop, the question of what limits should be placed on the use of lethal autonomous weapons has taken on new urgency, and for now has come down to whether it is enough for the U.N. simply to adopt nonbinding guidelines, the position supported by the United States.

“The word ‘must’ will be very difficult for our delegation to accept,” Joshua Dorosin, the chief international agreements officer at the State Department, told other negotiators during a debate in May over the language of proposed restrictions.

Mr. Dorosin and members of the U.S. delegation, which includes a representative from the Pentagon, have argued that instead of a new international law, the U.N. should clarify that existing international human rights laws already prohibit nations from using weapons that target civilians or cause a disproportionate amount of harm to them.

But the position being taken by the major powers has only increased the anxiety among smaller nations, who say they are worried that lethal autonomous weapons might become common on the battlefield before there is any agreement on rules for their use.

“Complacency does not seem to be an option anymore,” Ambassador Khalil Hashmi of Pakistan said during a meeting at U.N. headquarters. “The window of opportunity to act is rapidly diminishing as we prepare for a technological breakout.”

Rapid advances in artificial intelligence and the intense use of drones in conflicts in Ukraine and the Middle East have combined to make the issue that much more urgent. So far, drones generally rely on human operators to carry out lethal missions, but software is being developed that soon will allow them to find and select targets more on their own.

“This isn’t the plot of a dystopian novel, but a looming reality,” Gaston Browne, the prime minister of Antigua and Barbuda, told officials at a recent U.N. meeting.

Pentagon officials have made it clear that they are preparing to deploy autonomous weapons in a big way.

Deputy Defense Secretary Kathleen Hicks announced this summer that the United States military will “field attritable, autonomous systems at scale of multiple thousands” in the coming two years, saying that the push to compete with China’s own investment in advanced weapons necessitates that the United States “leverage platforms that are small, smart, cheap and many.”

The concept of an autonomous weapon is not entirely new. Land mines — which detonate automatically — have been used since the Civil War. The United States has missile systems that rely on radar sensors to autonomously lock on to and hit targets.

What is changing is the introduction of artificial intelligence that could give weapons systems the capability to make decisions themselves after taking in and processing information.

The United States has already adopted voluntary policies that set limits on how artificial intelligence and lethal autonomous weapons will be used, including a Pentagon policy revised this year called “Autonomy in Weapons Systems” and a related State Department “Political Declaration on Responsible Use of Artificial Intelligence and Autonomy,” which it has urged other nations to embrace.

The American policy statements “will enable nations to harness the potential benefits of A.I. systems in the military domain while encouraging steps that avoid irresponsible, destabilizing, and reckless behavior,” said Bonnie Denise Jenkins, a State Department under secretary.

The Pentagon policy prohibits the use of any new autonomous weapon or even the development of them unless they have been approved by top Defense Department officials. Such weapons must be operated in a defined geographic area for limited periods. And if the weapons are controlled by A.I., military personnel must retain “the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.”

At least initially, human approval will be needed before lethal action is taken, Air Force generals said in interviews.

But Frank Kendall, the Air Force secretary, said in a separate interview that these machines will eventually need to have the power to take lethal action on their own, while remaining under human oversight in how they are deployed.

“Individual decisions versus not doing individual decisions is the difference between winning and losing — and you’re not going to lose,” he said. He added, “I don’t think people we would be up against would do that, and it would give them a huge advantage if we put that limitation on ourselves.”

Thomas X. Hammes, a retired Marine officer who is now a research fellow at the Pentagon’s National Defense University, said in an interview and a recent essay published by the Atlantic Council that it is a “moral imperative that the United States and other democratic nations” build and use autonomous weapons.

He argued that “failing to do so in a major conventional conflict will result in many deaths, both military and civilian, and potentially the loss of the conflict.”

Some arms control advocates and diplomats disagree, arguing that A.I.-controlled lethal weapons that do not have humans authorizing individual strikes will transform the nature of warfighting by eliminating the direct moral role that humans play in decisions about taking a life.

These A.I. weapons will sometimes act in unpredictable ways, and they are likely to make mistakes in identifying targets, like driverless cars that have accidents, these critics say.

The new weapons may also make the use of lethal force more likely during wartime, since the military launching them would not be immediately putting its own soldiers at risk, or they could lead to faster escalation, the opponents have argued.

Arms control groups like the International Committee of the Red Cross and Stop Killer Robots, along with national delegations including Austria, Argentina, New Zealand, Switzerland and Costa Rica, have proposed a variety of limits.

Some would seek to globally ban lethal autonomous weapons that explicitly target humans. Others would require that these weapons remain under “meaningful human control,” and that they must be used in limited areas for specific amounts of time.

Mr. Kmentt, the Austrian diplomat, conceded in an interview that the U.N. has had trouble enforcing existing treaties that set limits on how wars can be waged. But there is still a need to create a new legally binding standard, he said.

“Just because someone will always commit murder, that doesn’t mean that you don’t need legislation to prohibit it,” he said. “What we have at the moment is this whole field is completely unregulated.”

But Mr. Dorosin has repeatedly objected to proposed requirements that the United States considers too ambiguous or is unwilling to accept, such as calling for weapons to be under “meaningful human control.”

The U.S. delegation’s preferred language is “within a responsible human chain of command.”

He said it is important to the United States that the negotiators “avoid vague, overarching terminology.”

Mr. Vorontsov, the Russian diplomat, took the floor after Mr. Dorosin during one of the debates and endorsed the position taken by the United States.

“We understand that for many delegations the priority is human control,” Mr. Vorontsov said. “For the Russian Federation, the priorities are somewhat different.”

The United States, China and Russia have also argued that artificial intelligence and autonomous weapons might bring benefits by reducing civilian casualties and unnecessary physical damage.

“Smart weapons that use computers and autonomous functions to deploy force more precisely and efficiently have been shown to reduce risks of harm to civilians and civilian objects,” the U.S. delegation has argued.

Mr. Kmentt in early November won broad support for a revised plan that asked the U.N. secretary general’s office to assemble a report on lethal autonomous weapons, but it made clear that in deference to the major powers the detailed deliberations on the matter would remain with a U.N. committee in Geneva, where any single nation can effectively block progress or force language to be watered down.

Last week, the Geneva-based committee agreed at the urging of Russia and other major powers to give itself until the end of 2025 to keep studying the topic, one diplomat who participated in the debate said.

“If we wait too long, we are really going to regret it,” Mr. Kmentt said. “Soon enough, it will be cheap, easily available, and it will be everywhere. And people are going to be asking: Why didn’t we act fast enough to try to put limits on it when we had a chance to?”

CNBC - OpenAI brings Sam Altman back as CEO less than a week after he was fired by board

Haven't seen anything that mentions any current board members getting the boot.

Might have been updated since you posted it, but currently it says three of them were removed.

Concurrent with Altman’s return, Helen Toner, Tasha McCauley and co-founder Ilya Sutskever were removed as board members. All had been involved in pushing out Altman, although Sutskever later walked back his support for the coup and remains an OpenAI employee as of Wednesday.

A nice broad overview of LLMs - at the functional level, with no math or programming:

Elon and his buddies continue to be truly, amazingly dumb snowflakes.

Yes, that is a man imagining the stupidest trolley problem in history about his inability to get ChatGPT to say n**ger.

I hope all of you will understand my lack of reaction when one billion white people are killed because someone couldn't drop an n-bomb.

Why do they always invent a scenario where saying the N word is heroic? We can only speculate.
Bluechecks are going to try as hard as they can to get Musk's AI to say the N word, and if they somehow cannot they will spend days complaining about it until Musk relents, because he always bows to them.
Prederick wrote:

I hope all of you will understand my lack of reaction when one billion white people are killed because someone couldn't drop an n-bomb.

The plot of Metal Gear Solid V makes so much more sense now

IMAGE(https://i.postimg.cc/Bbs56tRK/Screenshot-20231126-093513-Edge.jpg)

Futurism: Sports Illustrated Published Articles by Fake, AI-Generated Writers:

There was nothing in Drew Ortiz's author biography at Sports Illustrated to suggest that he was anything other than human.

"Drew has spent much of his life outdoors, and is excited to guide you through his never-ending list of the best products to keep you from falling to the perils of nature," it read. "Nowadays, there is rarely a weekend that goes by where Drew isn't out camping, hiking, or just back on his parents' farm."

The only problem? Outside of Sports Illustrated, Drew Ortiz doesn't seem to exist. He has no social media presence and no publishing history. And even more strangely, his profile photo on Sports Illustrated is for sale on a website that sells AI-generated headshots, where he's described as "neutral white young-adult male with short brown hair and blue eyes."

Ortiz isn't the only AI-generated author published by Sports Illustrated, according to a person involved with the creation of the content who asked to be kept anonymous to protect them from professional repercussions.

"There's a lot," they told us of the fake authors. "I was like, what are they? This is ridiculous. This person does not exist."

"At the bottom [of the page] there would be a photo of a person and some fake description of them like, 'oh, John lives in Houston, Texas. He loves yard games and hanging out with his dog, Sam.' Stuff like that," they continued. "It's just crazy."

The AI authors' writing often sounds like it was written by an alien; one Ortiz article, for instance, warns that volleyball "can be a little tricky to get into, especially without an actual ball to practice with."

According to a second person involved in the creation of the Sports Illustrated content who also asked to be kept anonymous, that's because it's not just the authors' headshots that are AI-generated. At least some of the articles themselves, they said, were churned out using AI as well.

"The content is absolutely AI-generated," the second source said, "no matter how much they say that it's not."

After we reached out with questions to the magazine's publisher, The Arena Group, all the AI-generated authors disappeared from Sports Illustrated's site without explanation. Our questions received no response.

Oh brother, if this is actually what's going on, every major news outlet's about to get a virtual colonoscopy from people hunting for AI generated content.

How Jensen Huang’s Nvidia Is Powering the A.I. Revolution

The revelation that ChatGPT, the astonishing artificial-intelligence chatbot, had been trained on an Nvidia supercomputer spurred one of the largest single-day gains in stock-market history. When the Nasdaq opened on May 25, 2023, Nvidia’s value increased by about two hundred billion dollars. A few months earlier, Jensen Huang, Nvidia’s C.E.O., had informed investors that Nvidia had sold similar supercomputers to fifty of America’s hundred largest companies. By the close of trading, Nvidia was the sixth most valuable corporation on earth, worth more than Walmart and ExxonMobil combined. Huang’s business position can be compared to that of Samuel Brannan, the celebrated vender of prospecting supplies in San Francisco in the late eighteen-forties. “There’s a war going on out there in A.I., and Nvidia is the only arms dealer,” one Wall Street analyst said.

Huang is a patient monopolist. He drafted the paperwork for Nvidia with two other people at a Denny’s restaurant in San Jose, California, in 1993, and has run it ever since. At sixty, he is sarcastic and self-deprecating, with a Teddy-bear face and wispy gray hair. Nvidia’s main product is its graphics-processing unit, a circuit board with a powerful microchip at its core. In the beginning, Nvidia sold these G.P.U.s to video gamers, but in 2006 Huang began marketing them to the supercomputing community as well. Then, in 2013, on the basis of promising research from the academic computer-science community, Huang bet Nvidia’s future on artificial intelligence. A.I. had disappointed investors for decades, and Bryan Catanzaro, Nvidia’s lead deep-learning researcher at the time, had doubts. “I didn’t want him to fall into the same trap that the A.I. industry has had in the past,” Catanzaro told me. “But, ten years plus down the road, he was right.”

In the near future, A.I. is projected to generate movies on demand, provide tutelage to children, and teach cars to drive themselves. All of these advances will occur on Nvidia G.P.U.s, and Huang’s stake in the company is now worth more than forty billion dollars.

In September, I met Huang for breakfast at the Denny’s where Nvidia was started. (The C.E.O. of Denny’s was giving him a plaque, and a TV crew was in attendance.) Huang keeps up a semi-comic deadpan patter at all times. Chatting with our waitress, he ordered seven items, including a Super Bird sandwich and a chicken-fried steak. “You know, I used to be a dishwasher here,” he told her. “But I worked hard! Like, really hard. So I got to be a busboy.”

Huang has a practical mind-set, dislikes speculation, and has never read a science-fiction novel. He reasons from first principles about what microchips can do today, then gambles with great conviction on what they will do tomorrow. “I do everything I can not to go out of business,” he said at breakfast. “I do everything I can not to fail.” Huang believes that the basic architecture of digital computing, little changed since it was introduced by I.B.M. in the early nineteen-sixties, is now being reconceptualized. “Deep learning is not an algorithm,” he said recently. “Deep learning is a method. It’s a new way of developing software.” The evening before our breakfast, I’d watched a video in which a robot, running this new kind of software, stared at its hands in seeming recognition, then sorted a collection of colored blocks. The video had given me chills; the obsolescence of my species seemed near. Huang, rolling a pancake around a sausage with his fingers, dismissed my concerns. “I know how it works, so there’s nothing there,” he said. “It’s no different than how microwaves work.” I pressed Huang—an autonomous robot surely presents risks that a microwave oven does not. He responded that he has never worried about the technology, not once. “All it’s doing is processing data,” he said. “There are so many other things to worry about.”

In May, hundreds of industry leaders endorsed a statement that equated the risk of runaway A.I. with that of nuclear war. Huang didn’t sign it. Some economists have observed that the Industrial Revolution led to a relative decline in the global population of horses, and have wondered if A.I. might do the same to humans. “Horses have limited career options,” Huang said. “For example, horses can’t type.” As he finished eating, I expressed my concerns that, someday soon, I would feed my notes from our conversation into an intelligence engine, then watch as it produced structured, superior prose. Huang didn’t dismiss this possibility, but he assured me that I had a few years before my John Henry moment. “It will come for the fiction writers first,” he said. Then he tipped the waitress a thousand dollars, and stood up to accept his award.

Huang was born in Taiwan in 1963, but when he was nine he and his older brother were sent as unaccompanied minors to the U.S. They landed in Tacoma, Washington, to live with an uncle, before being sent to the Oneida Baptist Institute, in Kentucky, which Huang’s uncle believed was a prestigious boarding school. In fact, it was a religious reform academy. Huang was placed with a seventeen-year-old roommate. On their first night together, the older boy lifted his shirt to show Huang the numerous places where he’d been stabbed in fights. “Every student smoked, and I think I was the only boy at the school without a pocketknife,” Huang told me. His roommate was illiterate; in exchange for teaching him to read, Huang said, “he taught me how to bench-press. I ended up doing a hundred pushups every night before bed.”

Although Huang lived at the academy, he was too young to attend its classes, so he went to a nearby public school. There, he befriended Ben Bays, who lived with his five siblings in an old house with no running water. “Most of the kids at the school were children of tobacco farmers,” Bays said, “or just poor kids living in the mouth of the holler.” Huang arrived with the school year already in session, and Bays remembers the principal introducing an undersized Asian immigrant with long hair and heavily accented English. “He was a perfect target,” Bays said.

Huang was relentlessly bullied. “The way you described Chinese people back then was ‘Chinks,’ ” Huang told me, with no apparent emotion. “We were called that every day.” To get to school, Huang had to cross a rickety pedestrian footbridge over a river. “These swinging bridges, they were very high,” Bays said. “It was old planks, and most of them were missing.” Sometimes, when Huang was crossing the bridge, the local boys would grab the ropes and try to dislodge him. “Somehow it never seemed to affect him,” Bays said. “He just shook it off.” By the end of the school year, Bays told me, Huang was leading those same kids on adventures into the woods. Bays recalled how carefully Huang stepped around the missing planks. “Actually, it looked like he was having fun,” he said.

Huang credits his time at Oneida with building resiliency. “Back then, there wasn’t a counsellor to talk to,” he told me. “Back then, you just had to toughen up and move on.” In 2019, he donated a building to the school, and talked fondly of the (now gone) footbridge, neglecting to mention the bullies who had tried to toss him off it.

After a couple of years, Huang’s parents secured entry to the United States, settling in Oregon, and the brothers reunited with them. Huang excelled in high school, and was a nationally ranked table-tennis player. He belonged to the school’s math, computer, and science clubs, skipped two grades, and graduated when he was sixteen. “I did not have a girlfriend,” he said.

Huang attended Oregon State University, where he majored in electrical engineering. His lab partner in his introductory classes was Lori Mills, an earnest, nerdy undergraduate with curly brown hair. “There were, like, two hundred and fifty kids in electrical engineering, and maybe three girls,” Huang told me. Competition broke out among the male undergraduates for Mills’s attention, and Huang felt that he was at a disadvantage. “I was the youngest kid in the class,” he said. “I looked like I was about twelve.”

Every weekend, Huang would call Mills and pester her to do homework with him. “I tried to impress her—not with my looks, of course, but with my strong capability to complete homework,” he said. Mills accepted, and, after six months of homework, Huang worked up the courage to ask her out on a date. She accepted that offer, too.

Following graduation, Huang and Mills found work in Silicon Valley as microchip designers. (“She actually made more than me,” Huang said.) The two got married, and within a few years Mills had left the workforce to bring up their children. By then, Huang was running his own division, and attending graduate school at Stanford by night. He founded Nvidia in 1993, with Chris Malachowsky and Curtis Priem, two veteran microchip designers. Although Huang, then thirty, was younger than Malachowsky and Priem, both felt that he was ready to be C.E.O. “He was a fast learner,” Malachowsky said.

Malachowsky and Priem were looking to design a graphics chip, which they hoped would make competitors, in Priem’s words, “green with envy.” They called their company NVision, until they learned that the name was taken by a manufacturer of toilet paper. Huang suggested Nvidia, riffing on the Latin word invidia, meaning “envy.” He selected the Denny’s as a venue to organize the business because it was quieter than home and had cheap coffee—and also because of his experience working for the restaurant chain in Oregon in the nineteen-eighties. “I find that I think best when I’m under adversity,” Huang said. “My heart rate actually goes down. Anyone who’s dealt with rush hour in a restaurant knows what I’m talking about.”

Huang liked video games and thought that there was a market for better graphics chips. Instead of drawing pixels by hand, artists were starting to assemble three-dimensional polygons out of shapes known as “primitives,” saving time and effort but requiring new chips. Nvidia’s competitors’ primitives used triangles, but Huang and his co-founders decided to use quadrilaterals instead. This was a mistake, and it nearly sank the company: soon after the release of Nvidia’s first product, Microsoft announced that its graphics software would support only triangles.

Short on money, Huang decided that his only hope was to use the conventional triangle approach and try to beat the competition to market. In 1996, he laid off more than half the hundred people working at Nvidia, then bet the company’s remaining funds on a production run of untested microchips that he wasn’t sure would work. “It was fifty-fifty,” Huang told me, “but we were going out of business anyway.”

When the product, known as RIVA 128, hit stores, Nvidia had enough money to meet only one month of payroll. But the gamble paid off, and Nvidia sold a million RIVAs in four months. Huang encouraged his employees to continue shipping products with a sense of desperation, and for years to come he opened staff presentations with the words “Our company is thirty days from going out of business.” The phrase remains the unofficial corporate motto.

At the center of Nvidia’s headquarters, in Santa Clara, are two enormous buildings, each in the shape of a triangle with its corners trimmed. This shape is replicated in miniature throughout the building interiors, from the couches and the carpets to the splash guards in the urinals. Nvidia’s “spaceships,” as employees call the two buildings, are cavernous and filled with light, but eerie, and mostly empty; post-covid, only about a third of the workforce shows up on any given day. Employee demographics are “diverse,” sort of—I would guess, based on a visual survey of the cafeteria at lunchtime, that about a third of the staff is South Asian, a third is East Asian, and a third is white. The workers are overwhelmingly male.

Even before the run-up in the stock price, employee surveys ranked Nvidia as one of America’s best places to work. Each building has a bar at the top, with regular happy hours, and workers are encouraged to treat their offices as flexible spaces in which to eat, code, and socialize. Nevertheless, the buildings’ interiors are immaculate—Nvidia tracks employees throughout the day with video cameras and A.I. If an employee eats a meal at a conference table, the A.I. can dispatch a janitor within an hour to clean up. At Denny’s, Huang told me to expect a world in which robots would fade into the background, like household appliances. “In the future, everything that moves will be autonomous,” he said.

The only people I saw at Nvidia who didn’t look happy were the quality-control technicians. In windowless laboratories underneath the north-campus bar, pallid young men wearing earplugs and T-shirts pushed Nvidia’s microchips to the brink of failure. The racket was unbearable, a constant whine of high-pitched fans trying to cool overheating silicon circuits. It is these chips which have made the A.I. revolution possible.

In standard computer architecture, a microchip known as a “central processing unit” does most of the work. Coders create programs, and those programs bring mathematical problems to the C.P.U., which produces one solution at a time. For decades, the major manufacturer of C.P.U.s was Intel, and Intel has tried to force Nvidia out of existence several times. “I don’t go anywhere near Intel,” Huang told me, describing their Tom and Jerry relationship. “Whenever they come near us, I pick up my chips and run.”

Nvidia has embraced an alternative approach. In 1999, the company, shortly after going public, introduced a graphics card called GeForce, which Dan Vivoli, the company’s head of marketing, called a “graphics-processing unit.” (“We invented the category so we could be the leader in it,” Vivoli said.) Unlike general-purpose C.P.U.s, the G.P.U. breaks complex mathematical tasks apart into small calculations, then processes them all at once, in a method known as parallel computing. A C.P.U. functions like a delivery truck, dropping off one package at a time; a G.P.U. is more like a fleet of motorcycles spreading across a city.
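To make the truck-versus-motorcycles analogy concrete, here's a minimal sketch of the same job done both ways, my own illustration rather than anything from Nvidia or the article, written in CUDA and compilable with nvcc. The plain C++ function adds a million numbers one at a time; the GPU kernel hands one element to each of thousands of threads and runs them all at once.

// Toy comparison of the two styles described above (illustrative only).
// The CPU function works like the "delivery truck": one result per loop pass.
// The CUDA kernel works like the "fleet of motorcycles": one thread per element.
#include <cstdio>
#include <vector>

__global__ void add_gpu(const float* a, const float* b, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's element
    if (i < n) out[i] = a[i] + b[i];
}

void add_cpu(const float* a, const float* b, float* out, int n) {
    for (int i = 0; i < n; ++i) out[i] = a[i] + b[i];  // one at a time
}

int main() {
    const int n = 1 << 20;  // about a million elements
    std::vector<float> a(n, 1.0f), b(n, 2.0f), out(n);

    add_cpu(a.data(), b.data(), out.data(), n);  // sequential baseline

    // Copy the inputs to GPU memory.
    float *da, *db, *dout;
    cudaMalloc(&da, n * sizeof(float));
    cudaMalloc(&db, n * sizeof(float));
    cudaMalloc(&dout, n * sizeof(float));
    cudaMemcpy(da, a.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(db, b.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover every element at once.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    add_gpu<<<blocks, threads>>>(da, db, dout, n);
    cudaDeviceSynchronize();

    cudaMemcpy(out.data(), dout, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("out[0] = %.1f\n", out[0]);  // prints 3.0

    cudaFree(da); cudaFree(db); cudaFree(dout);
    return 0;
}

Same arithmetic either way; the difference is that the GPU version does all million additions essentially simultaneously, which is the property neural-network training exploits.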

The GeForce line was a success. Its popularity was driven by the Quake video-game series, which used parallel computing to render monsters that players could shoot with a grenade launcher. (Quake II was released when I was a freshman in college, and cost me years of my life.) The Quake series also featured a “deathmatch” mode for multiplayer combat, and PC gamers, looking to gain an edge, bought new GeForce cards every time they were upgraded. In 2000, Ian Buck, a graduate student studying computer graphics at Stanford, chained thirty-two GeForce cards together to play Quake using eight projectors. “It was the first gaming rig in 8K resolution, and it took up an entire wall,” Buck told me. “It was beautiful.”

Buck wondered if the GeForce cards might be useful for tasks other than launching grenades at his friends. The cards came with a primitive programming tool called a shader. With a grant from DARPA, the Department of Defense’s research arm, Buck hacked the shaders to access the parallel-computing circuits below, repurposing the GeForce into a low-budget supercomputer. Soon, Buck was working for Huang.

Buck is intense and balding, and he radiates intelligence. He is a computer-science hot-rodder who has spent the past twenty years testing the limits of Nvidia chips. Human beings “think linearly. You give instructions to someone on how to get from here to Starbucks, and you give them individual steps,” he said. “You don’t give them instructions on how to get to any Starbucks location from anywhere. It’s just hard to think that way, in parallel.”

Since 2004, Buck has overseen the development of Nvidia’s supercomputing software package, known as CUDA. Huang’s vision was to enable CUDA to work on every GeForce card. “We were democratizing supercomputing,” Huang said.

As Buck developed the software, Nvidia’s hardware team began allocating space on the microchips for supercomputing operations. The chips contained billions of electronic transistors, which routed electricity through labyrinthine circuits to complete calculations at extraordinary speed. Arjun Prabhu, Nvidia’s lead chip engineer, compared microchip design to urban planning, with different zones of the chip dedicated to different tasks. As Tetris players do with falling blocks, Prabhu will sometimes see transistors in his sleep. “I’ve often had it where the best ideas happen on a Friday night, when I’m literally dreaming about it,” Prabhu said.

When CUDA was released, in late 2006, Wall Street reacted with dismay. Huang was bringing supercomputing to the masses, but the masses had shown no indication that they wanted such a thing. “They were spending a fortune on this new chip architecture,” Ben Gilbert, the co-host of “Acquired,” a popular Silicon Valley podcast, said. “They were spending many billions targeting an obscure corner of academic and scientific computing, which was not a large market at the time—certainly less than the billions they were pouring in.” Huang argued that the simple existence of CUDA would enlarge the supercomputing sector. This view was not widely held, and by the end of 2008 Nvidia’s stock price had declined by seventy per cent.

In speeches, Huang has cited a visit to the office of Ting-Wai Chiu, a professor of physics at National Taiwan University, as giving him confidence during this time. Chiu, seeking to simulate the evolution of matter following the Big Bang, had constructed a homemade supercomputer in a laboratory adjacent to his office. Huang arrived to find the lab littered with GeForce boxes and the computer cooled by oscillating desk fans. “Jensen is a visionary,” Chiu told me. “He made my life’s work possible.”

Chiu was the model customer, but there weren’t many like him. Downloads of CUDA hit a peak in 2009, then declined for three years. Board members worried that Nvidia’s depressed stock price would make it a target for corporate raiders. “We did everything we could to protect the company against an activist shareholder who might come in and try to break it up,” Jim Gaither, a longtime board member, told me. Dawn Hudson, a former N.F.L. marketing executive, joined the board in 2013. “It was a distinctly flat, stagnant company,” she said.

In marketing CUDA, Nvidia had sought a range of customers, including stock traders, oil prospectors, and molecular biologists. At one point, the company signed a deal with General Mills to simulate the thermal physics of cooking frozen pizza. One application that Nvidia spent little time thinking about was artificial intelligence. There didn’t seem to be much of a market.

At the beginning of the twenty-tens, A.I. was a neglected discipline. Basic tasks such as image recognition and speech recognition had seen only halting progress. Within this unpopular academic field, an even less popular subfield solved problems using “neural networks”—computing structures inspired by the human brain. Many computer scientists considered neural networks to be discredited. “I was discouraged by my advisers from working on neural nets,” Catanzaro, the deep-learning researcher, told me, “because, at the time, they were considered to be outdated, and they didn’t work.”

Catanzaro described the researchers who continued to work on neural nets as “prophets in the wilderness.” One of those prophets was Geoffrey Hinton, a professor at the University of Toronto. In 2009, Hinton’s research group used Nvidia’s CUDA platform to train a neural network to recognize human speech. He was surprised by the quality of the results, which he presented at a conference later that year. He then reached out to Nvidia. “I sent an e-mail saying, ‘Look, I just told a thousand machine-learning researchers they should go and buy Nvidia cards. Can you send me a free one?’ ” Hinton told me. “They said no.”

Despite the snub, Hinton encouraged his students to use CUDA, including a Ukrainian-born protégé of his named Alex Krizhevsky, who Hinton thought was perhaps the finest programmer he’d ever met. In 2012, Krizhevsky and his research partner, Ilya Sutskever, working on a tight budget, bought two GeForce cards from Amazon. Krizhevsky then began training a visual-recognition neural network on Nvidia’s parallel-computing platform, feeding it millions of images in a single week. “He had the two G.P.U. boards whirring in his bedroom,” Hinton said. “Actually, it was his parents who paid for the quite considerable electricity costs.”

Sutskever and Krizhevsky were astonished by the cards’ capabilities. Earlier that year, researchers at Google had trained a neural net that identified videos of cats, an effort that required some sixteen thousand C.P.U.s. Sutskever and Krizhevsky had produced world-class results with just two Nvidia circuit boards. “G.P.U.s showed up and it felt like a miracle,” Sutskever told me.

AlexNet, the neural network that Krizhevsky trained in his parents’ house, can now be mentioned alongside the Wright Flyer and the Edison bulb. In 2012, Krizhevsky entered AlexNet into the annual ImageNet visual-recognition contest; neural networks were unpopular enough at the time that he was the only contestant to use this technique. AlexNet scored so well in the competition that the organizers initially wondered if Krizhevsky had somehow cheated. “That was a kind of Big Bang moment,” Hinton said. “That was the paradigm shift.”

In the decade since Krizhevsky’s nine-page description of AlexNet’s architecture was published, it has been cited more than a hundred thousand times, making it one of the most important papers in the history of computer science. (AlexNet correctly identified photographs of a scooter, a leopard, and a container ship, among other things.) Krizhevsky pioneered a number of important programming techniques, but his key finding was that a specialized G.P.U. could train neural networks up to a hundred times faster than a general-purpose C.P.U. “To do machine learning without CUDA would have just been too much trouble,” Hinton said.

Within a couple of years, every entrant in the ImageNet competition was using a neural network. By the mid-twenty-tens, neural networks trained on G.P.U.s were identifying images with ninety-six-per-cent accuracy, surpassing humans. Huang’s ten-year crusade to democratize supercomputing had succeeded. “The fact that they can solve computer vision, which is completely unstructured, leads to the question ‘What else can you teach it?’ ” Huang said to me.

The answer seemed to be: everything. Huang concluded that neural networks would revolutionize society, and that he could use cuda to corner the market on the necessary hardware. He announced that he was once again betting the company. “He sent out an e-mail on Friday evening saying everything is going to deep learning, and that we were no longer a graphics company,” Greg Estes, a vice-president at Nvidia, told me. “By Monday morning, we were an A.I. company. Literally, it was that fast.”

Around the time Huang sent the e-mail, he approached Catanzaro, Nvidia’s leading A.I. researcher, with a thought experiment. “He told me to imagine he’d marched all eight thousand of Nvidia’s employees into the parking lot,” Catanzaro said. “Then he told me I was free to select anyone from the parking lot to join my team.”

Huang rarely gives interviews, and tends to deflect attention from himself. “I don’t really think I’ve done anything special here,” he told me. “It’s mostly my team.” (“He’s irreplaceable,” the board member Jim Gaither told me.) “I’m not sure why I was selected to be the C.E.O.,” Huang said. “I didn’t have any particular drive.” (“He was determined to run a business by the time he was thirty,” his co-founder Chris Malachowsky told me.) “I’m not a great speaker, really, because I’m quite introverted,” Huang said. (“He’s a great entertainer,” his friend Ben Bays told me.) “I only have one superpower—homework,” Huang said. (“He can master any subject over a weekend,” Dwight Diercks, Nvidia’s head of software, said.)

Huang prefers an agile corporate structure, with no fixed divisions or hierarchy. Instead, employees submit a weekly list of the five most important things they are working on. Brevity is encouraged, as Huang surveys these e-mails late into the night. Wandering through Nvidia’s giant campus, he often stops by the desks of junior employees and quizzes them on their work. A visit from Huang can turn a cubicle into an interrogation chamber. “Typically, in Silicon Valley, you can get away with fudging it,” the industry analyst Hans Mosesmann told me. “You can’t do that with Jensen. He will kind of lose his temper.”

Huang communicates to his staff by writing hundreds of e-mails per day, often only a few words long. One executive compared the e-mails to haiku, another to ransom notes. Huang has also developed a set of management aphorisms that he refers to regularly. When scheduling, Huang asks employees to consider “the speed of light.” This does not simply mean to move quickly; rather, employees are to consider the absolute fastest a task could conceivably be accomplished, then work backward toward an achievable goal. They are also encouraged to pursue the “zero-billion-dollar market.” This refers to exploratory products, such as cuda, which not only do not have competitors but don’t even have obvious customers. (Huang sometimes reminded me of Kevin Costner’s character in “Field of Dreams,” who builds a baseball diamond in the middle of an Iowa cornfield, then waits for players and fans to arrive.)

Perhaps Huang’s most radical belief is that “failure must be shared.” In the early two-thousands, Nvidia shipped a faulty graphics card with a loud, overactive fan. Instead of firing the card’s product managers, Huang arranged a meeting in which the managers presented, to a few hundred people, every decision they had made that led to the fiasco. (Nvidia also distributed to the press a satirical video, starring the product managers, in which the card was repurposed as a leaf blower.) Presenting one’s failures to an audience has become a beloved ritual at Nvidia, but such corporate struggle sessions are not for everyone. “You can kind of see right away who is going to last here and who is not,” Diercks said. “If someone starts getting defensive, I know they’re not going to make it.”

Huang’s employees sometimes complain of his mercurial personality. “It’s really about what’s going on in my brain versus what’s coming out of my mouth,” Huang told me. “When the mismatch is great, then it comes out as anger.” Even when he’s calm, Huang’s intensity can be overwhelming. “Interacting with him is kind of like sticking your finger in the electric socket,” one employee said. Still, Nvidia has high employee retention. Jeff Fisher, who runs the company’s consumer division, was one of the first employees. He’s now extremely wealthy, but he continues to work. “Many of us are financial volunteers at this point,” Fisher said, “but we believe in the mission.” Both of Huang’s children pursued jobs in the hospitality industry when they were in their twenties; following years of paternal browbeating, they now have careers at Nvidia. Catanzaro at one point left for another company. A few years later, he returned. “Jensen is not an easy person to get along with all of the time,” Catanzaro said. “I’ve been afraid of Jensen sometimes, but I also know that he loves me.”

After the success of AlexNet, venture capitalists began shovelling money at A.I. “We’ve been investing in a lot of startups applying deep learning to many areas, and every single one effectively comes in building on Nvidia’s platform,” Marc Andreessen, of the firm Andreessen Horowitz, said in 2016. Around that time, Nvidia delivered its first dedicated A.I. supercomputer, the DGX-1, to a research group at OpenAI. Huang himself took the computer to OpenAI’s offices; Elon Musk, then the chairman, opened the package with a box cutter.

In 2017, researchers at Google introduced a new architecture for neural-net training called a transformer. The following year, researchers at OpenAI used Google’s framework to build the first “generative pre-trained transformer,” or G.P.T. The G.P.T. models were trained on Nvidia supercomputers, absorbing an enormous corpus of text and learning how to make humanlike connections. In late 2022, after several versions, ChatGPT was released to the public.

Since then, Nvidia has been overwhelmed with customer requests. The company’s latest A.I.-training module, known as the DGX H100, is a three-hundred-and-seventy-pound metal box that can cost up to five hundred thousand dollars. It is currently on back order for months. The DGX H100 runs five times as fast as the hardware that trained ChatGPT, and could have trained AlexNet in less than a minute. Nvidia is projected to sell half a million of the devices by the end of the year.

The more processing power one applies to a neural net, the more sophisticated its output becomes. For the most advanced A.I. models, Nvidia sells a rack of dozens of DGX H100s. If that isn’t enough, Nvidia will arrange these computers like library stacks, filling a data center with tens of millions of dollars’ worth of supercomputing equipment. There is no obvious limit to the A.I.’s capabilities. “If you allow yourself to believe that an artificial neuron is like a biological neuron, then it’s like you’re training brains,” Sutskever told me. “They should do everything we can do.” I was initially skeptical of Sutskever’s claim—I hadn’t learned to identify cats by looking at ten million reference images, and I hadn’t learned to write by scanning the complete works of humanity. But the fossil record shows that the nervous system first developed several hundred million years ago, and has been growing more sophisticated ever since. “There have been a lot of living creatures on this earth for a long time that have learned a lot of things,” Catanzaro said, “and a lot of that is written down in physical structures in your brain.”

The latest A.I.s have powers that surprise even their creators, and no one quite knows what they are capable of. (GPT-4, ChatGPT’s successor, can transform a sketch on a napkin into a functioning Web site, and has scored in the eighty-eighth percentile on the LSAT.) In the next few years, Nvidia’s hardware, by accelerating evolution to the speed of a computer-clock cycle, will train all manner of similar A.I. models. Some will manage investment portfolios; some will fly drones. Some will steal your likeness and reproduce it; some will mimic the voices of the dead. Some will act as brains for autonomous robots; some will create genetically tailored drugs. Some will write music; some will write poetry. If we aren’t careful, someday soon, one will outsmart us.

The gross profit margin on Nvidia’s equipment approaches seventy per cent. This ratio attracts competition in the manner that chum attracts sharks. Google and Tesla are developing A.I.-training hardware, as are numerous startups. One of those startups is Cerebras, which makes a “mega-chip” the size of a dinner plate. “They’re just extorting their customers, and nobody will say it out loud,” Cerebras’s C.E.O., Andrew Feldman, said of Nvidia. (Huang countered that a well-trained A.I. model can reduce customers’ overhead in other business lines. “The more you buy, the more you save,” he said.)

Nvidia’s fiercest rival is Advanced Micro Devices. Since 2014, A.M.D. has been run by Lisa Su, another gifted engineer who immigrated to the United States from Taiwan at a young age. In the years since Su became the head of the company, A.M.D.’s stock price has risen thirtyfold, making her second only to Huang as the most successful semiconductor C.E.O. of this era. Su is also Huang’s first cousin once removed.

Huang told me that he didn’t know Su growing up; he met her only after she was named C.E.O. “She’s terrific,” he said. “We’re not very competitive.” (Nvidia employees can recite the relative market share of Nvidia’s and A.M.D.’s graphics cards from memory.) Their personalities are different: Su is reserved and stoic; Huang is temperamental and expressive. “She has a great poker face,” Mosesmann, the industry analyst, said. “Jensen does not, although he’d still find a way to beat you.”

Su likes to tail the incumbent, and wait for it to falter. Unlike Huang, she is not afraid to compete with Intel, and, in the past decade, A.M.D. has captured a large portion of Intel’s C.P.U. business, a feat that analysts once regarded as impossible. Recently, Su has turned her attention to the A.I. market. “Jensen does not want to lose. He’s a driven guy,” Forrest Norrod, the executive overseeing A.M.D.’s effort, said. “But we think we can compete with Nvidia.”

On a gloomy Friday afternoon in September, I drove to an upscale resort overlooking the Pacific to watch Huang be publicly interviewed by Hao Ko, the lead architect of Nvidia’s headquarters. I arrived early to find the two men facing the ocean, engaged in quiet conversation. They were dressed nearly identically, in black leather jackets, black jeans, and black shoes, although Ko was much taller. I was hoping to catch some candid statements about the future of computing; instead, I got a six-minute roast of Ko’s wardrobe. “Look at this guy!” Huang said. “He’s dressed just like me. He’s copying me—which is smart—only his pants have too many pockets.” Ko gave a nervous chuckle, and looked down at his designer jeans, which did have a few more zippered pockets than function would strictly demand. “Simplify, man!” Huang said, before turning to me. “That’s why he’s dressed like me. I taught this guy everything he knows.” (Huang’s wardrobe is widely imitated, and earlier this year he was featured in the Style section of the Times.)

The interview was sponsored by Gensler, one of the world’s leading corporate-design firms, and there were several hundred architects in attendance. As the event approached, Huang increased the intensity of his shtick, cracking a series of weak jokes and rocking back and forth on his feet. Huang does dozens of speaking gigs a year, and had given a talk to a different audience earlier that day, but I realized that he was nervous. “I hate public speaking,” he said.

Onstage, though, he seemed relaxed and confident. He explained that the skylights on the undulating roof of his headquarters were positioned to illuminate the building while blocking direct sunlight. To calculate the design, Huang had strapped Ko into a virtual-reality headset and then attached the headset to a rack of Nvidia G.P.U.s, so that Ko could track the flow of light. “This is the world’s first building that needed a supercomputer to be possible,” Huang said.

Following the interview, Huang took questions from the audience, including one about the potential risks of A.I. “There’s the doomsday A.I.s—the A.I. that somehow jumped out of the computer and consumes tons and tons of information and learns all by itself, reshaping its attitude and sensibility, and starts making decisions on its own, including pressing buttons of all kinds,” Huang said, pantomiming pressing the buttons in the air. The room grew very quiet. “No A.I. should be able to learn without a human in the loop,” he said. One architect asked when A.I. might start to figure things out on its own. “Reasoning capability is two to three years out,” Huang said. A low murmur went through the crowd.

Afterward, I caught up with Ko. Like a lot of Huang’s jokes, the crack about teaching Ko “everything he knows” contained a pointed truth. Ko hadn’t yet made partner at Gensler when Huang chose him for the Nvidia headquarters, bypassing Ko’s boss. I asked Ko why Huang had done so. “You probably have heard stories,” Ko said. “He can be very tough. He will undress you.” Huang had no architecture experience, but he would often tell Ko that he was wrong about the building’s design. “I would say ninety per cent of architects would battle back,” Ko said. “I’m more of a listener.”

Ko recalled Huang challenging Nvidia’s engineering staff on the speed of the V.R. headset. The headset originally took five hours to render design changes; at Huang’s urging, the engineers got the speed down to ten seconds. “He was tough on them, but there was a logic to it,” Ko said. “If the headset took five hours, I’d probably settle on whatever shade of green looked adequate. If it took ten seconds, I’d take the time to pick the best shade of green there was.”

The buildings’ design won several awards and made Ko’s career. Still, Ko recalled his time on the project with mixed emotions. “The place was finished, it looks amazing, we’re doing the tour, and he’s questioning me about the placement of the water fountains,” Ko said. “He was upset because they were next to the bathrooms! That’s required by code, and this is a billion-dollar building! But he just couldn’t let it go.”

“I’m never satisfied,” Huang told me. “No matter what it is, I only see imperfections.”

I asked Huang if he was taking any gambles today that resemble the one he took twenty years ago. He responded immediately with a single word: “Omniverse.” Inspired by the V.R.-architecture gambit, the Omniverse is Nvidia’s attempt to simulate the real world at an extraordinary level of fine-grained detail. Huang has described it as an “industrial metaverse.”

Since 2018, Nvidia’s graphics cards have featured “ray-tracing,” which simulates the way that light bounces off objects to create photorealistic effects. Inside a triangle of frosted glass in Nvidia’s executive meeting center, a product-demo specialist showed me a three-dimensional rendering of a gleaming Japanese ramen shop. As the demo cycled through different points of view, light reflected off the metal counter and steam rose from a bubbling pot of broth. There was nothing to indicate that it wasn’t real.

The specialist then showed me “Diane,” a hyper-realistic digital avatar that speaks five languages. A powerful generative A.I. had studied millions of videos of people to create a composite entity. It was the imperfections that were most affecting—Diane had blackheads on her nose and trace hairs on her upper lip. The only clue that Diane wasn’t truly human was an uncanny shimmer in the whites of her eyes. “We’re working on that,” the specialist said.

Huang’s vision is to unify Nvidia’s computer-graphics research with its generative-A.I. research. As he sees it, image-generation A.I.s will soon be so sophisticated that they will be able to render three-dimensional, inhabitable worlds and populate them with realistic-seeming people. At the same time, language-processing A.I.s will be able to interpret voice commands immediately. (“The programming language of the future will be ‘human,’ ” Huang has said.) Once the technologies are united with ray-tracing, users will be able to speak whole universes into existence. Huang hopes to use such “digital twins” of our own world to safely train robots and self-driving cars. Combined with V.R. technology, the Omniverse could also allow users to inhabit bespoke realities.

I felt dizzy leaving the product demo. I thought of science fiction; I thought of the Book of Genesis. I sat on a triangular couch with the corners trimmed, and struggled to imagine the future that my daughter will inhabit. Nvidia executives were building the Manhattan Project of computer science, but when I questioned them about the wisdom of creating superhuman intelligence they looked at me as if I were questioning the utility of the washing machine. I had wondered aloud if an A.I. might someday kill someone. “Eh, electricity kills people every year,” Catanzaro said. I wondered if it might eliminate art. “It will make art better!” Diercks said. “It will make you much better at your job.” I wondered if someday soon an A.I. might become self-aware. “In order for you to be a creature, you have to be conscious. You have to have some knowledge of self, right?” Huang said. “I don’t know where that could happen.”

IMAGE(https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6fd7ca61-cfbc-41bd-9c01-49d375a3a92b_1222x1170.png)

This is Anna Indiana, yet another AI pop star experiment. “She” looks and sounds terrible. And her “songs” are like three notes away from triggering a lawsuit from Taylor Swift. Also, isn’t it fascinating how all of these AI influencer projects are always meant to look like young women? Speaking of which…

IMAGE(https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F78a50dc3-79f3-4b0d-b7c0-693fb6addf79_1894x1266.png)

This “digital creator” named Aitana was created by a Spanish “AI modeling agency” (lol) called The Clueless (also lol). They recently did an interview with Euro News, where they said that Aitana makes around 1,000 euros per branded post. She also has a page where she posts lingerie photos on an OnlyFans competitor. Extremely grim stuff.

Though, the punchline here is that the team behind Aitana created her after they realized that they were losing business because of how flaky and annoying human influencers are to work with. Which, honestly, fair enough.

Two things that stuck out to me:

The two got married, and within a few years Mills had left the workforce to bring up their children. By then, Huang was running his own division, and attending graduate school at Stanford by night.
In the years since Su became the head of the company, A.M.D.’s stock price has risen thirtyfold, making her second only to Huang as the most successful semiconductor C.E.O. of this era. Su is also Huang’s first cousin once removed.

There's a lot in there. And even as someone who tries to pare away the hype whenever I read AI stuff, I couldn't help but come away even more convinced that anyone arguing "No AI ever" might as well get themselves a suit of armor and try to slay some windmills (though that's somewhat fitting, since the analysis I remember reading in college said that Quixote's recurring theme is the human need to withstand suffering).

They employ Grant Cohn, so fake AI writers is a big step up for them.

Two days late, but an addendum to the OpenAI thing:

Last week, it seemed that OpenAI—the secretive firm behind ChatGPT—had been broken open. The company’s board had suddenly fired CEO Sam Altman, hundreds of employees revolted in protest, Altman was reinstated, and the media dissected the story from every possible angle. Yet the reporting belied the fact that our view into the most crucial part of the company is still so fundamentally limited: We don’t really know how OpenAI develops its technology, nor do we understand exactly how Altman has directed work on future, more powerful generations.

This was made acutely apparent last Wednesday, when Reuters and The Information reported that, prior to Altman’s firing, several staff researchers had raised concerns about a supposedly dangerous breakthrough. At issue was an algorithm called Q* (pronounced “Q-star”), which has allegedly been shown to solve certain grade-school-level math problems that it hasn’t seen before. Although this may sound unimpressive, some researchers within the company reportedly believed that this could be an early sign of the algorithm improving its ability to reason—in other words, using logic to solve novel problems.

Math is often used as a benchmark for this skill; it’s easy for researchers to define a novel problem, and arriving at a solution should in theory require a grasp of abstract concepts as well as step-by-step planning. Reasoning in this way is considered one of the key missing ingredients for smarter, more general-purpose AI systems, or what OpenAI calls “artificial general intelligence.” In the company’s telling, such a theoretical system would be better than humans at most tasks and could lead to existential catastrophe if not properly controlled.

An OpenAI spokesperson didn’t comment on Q* but told me that the researchers’ concerns did not precipitate the board’s actions. Two people familiar with the project, who asked to remain anonymous for fear of repercussions, confirmed to me that OpenAI has indeed been working on the algorithm and has applied it to math problems. But contrary to the worries of some of their colleagues, they expressed skepticism that this could have been considered a breakthrough awesome enough to provoke existential dread. Their doubt highlights one thing that has long been true in AI research: AI advances tend to be highly subjective the moment they happen. It takes a long time for consensus to form about whether a particular algorithm or piece of research was in fact a breakthrough, as more researchers build upon and bear out how replicable, effective, and broadly applicable the idea is.

Take the transformer algorithm, which underpins large language models and ChatGPT. When Google researchers developed the algorithm, in 2017, it was viewed as an important development, but few people predicted that it would become so foundational and consequential to generative AI today. Only once OpenAI supercharged the algorithm with huge amounts of data and computational resources did the rest of the industry follow, using it to push the bounds of image, text, and now even video generation.

In AI research—and, really, in all of science—the rise and fall of ideas is not based on pure meritocracy. Usually, the scientists and companies with the most resources and the biggest loudspeakers exert the greatest influence. Consensus forms around these entities, which effectively means that they determine the direction of AI development. Within the AI industry, power is already consolidated in just a few companies—Meta, Google, OpenAI, Microsoft, and Anthropic. This imperfect process of consensus-building is the best we have, but it is becoming even more limited because the research, once largely performed in the open, now happens in secrecy.

Over the past decade, as Big Tech became aware of the massive commercialization potential of AI technologies, it offered fat compensation packages to poach academics away from universities. Many AI Ph.D. candidates no longer wait to receive their degree before joining a corporate lab; many researchers who do stay in academia receive funding, or even a dual appointment, from the same companies. A lot of AI research now happens within or connected to tech firms that are incentivized to hide away their best advancements, the better to compete with their business rivals.

OpenAI has argued that its secrecy is in part because anything that could accelerate the path to superintelligence should be carefully guarded; not doing so, it says, could pose a threat to humanity. But the company has also openly admitted that secrecy allows it to maintain its competitive advantage. “GPT-4 is not easy to develop,” OpenAI’s chief scientist, Ilya Sutskever, told The Verge in March. “It took pretty much all of OpenAI working together for a very long time to produce this thing. And there are many, many companies who want to do the same thing.”

Since the news of Q* broke, many researchers outside OpenAI have speculated about whether the name is a reference to other existing techniques within the field, such as Q-learning, a technique for training AI algorithms through trial and error, and A*, an algorithm for searching through a range of options to find the best one. The OpenAI spokesperson would only say that the company is always doing research and working on new ideas. Without additional knowledge and without an opportunity for other scientists to corroborate Q*’s robustness and relevance over time, all anyone can do, including the researchers who worked on the project, is hypothesize about how big of a deal it actually is—and recognize that the term breakthrough was not arrived at via scientific consensus, but assigned by a small group of employees as a matter of their own opinion.
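
For readers who haven't run into those two names, here is roughly what the first one looks like in practice: a minimal sketch of tabular Q-learning on a toy "corridor" environment, purely to illustrate the trial-and-error idea described above. The environment, rewards, and hyperparameters are invented for the example, and none of this reflects whatever Q* actually is.

```python
# Minimal tabular Q-learning sketch on a made-up 1-D corridor:
# states 0..4, start at 0, reward only for reaching state 4.
import random
from collections import defaultdict

N_STATES = 5
ACTIONS = [-1, +1]                 # step left or right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

Q = defaultdict(float)             # Q[(state, action)] -> estimated value

def step(state, action):
    """Apply an action, returning (next_state, reward, done)."""
    nxt = max(0, min(N_STATES - 1, state + action))
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

def choose(state):
    """Epsilon-greedy selection: mostly exploit, occasionally explore."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

for episode in range(500):
    state, done = 0, False
    while not done:
        action = choose(state)
        nxt, reward, done = step(state, action)
        # The Q-learning update: nudge the estimate toward the reward plus the
        # discounted value of the best action available in the next state.
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# After training, the greedy policy should point right toward the goal state.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})
```

A* is the other half of the speculated name: a classic graph-search algorithm that expands the most promising option first according to cost-so-far plus an estimate of the cost remaining. Whether Q* has anything to do with either technique is, per the article, pure guesswork.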

I'll openly admit to not being very sophisticated about business matters... but when I saw this Q* stuff pop up not long after all the kerfuffle with the board of directors, I couldn't help but think it was an attempt to put a sort-of-positive spin on an otherwise baffling and startlingly weird few days of behaviour (i.e.: we went temporarily wild and crazy because we've got super AI just around the corner!)

I've been assuming the opposite - the board's antics led to a frenzy of attention pointed towards not very much information, so people latched onto a rumor of a rumor and started speculating.

The other complication is that OpenAI's IP license deal with Microsoft apparently excludes tech judged to be "AGI", so regardless of what Q* actually is, there may have been business pressures related to whether it should be considered AGI (under the contract, as opposed to any technical definition). But whatever was going on with that, the terrain is probably different now that MS is cozier with the post-kerfuffle board.

Former SI fact-checker Pablo Torre had a take I liked about this today, which is that it's dystopian, yes, but it's super-f*cking sloppy. Like, it's just clearly such a lazy, cheapass attempt to churn out content while not paying people for their work.

AI is not the cause of the fall of Sports Illustrated, not at all, but good GOD, that was once one of the seminal magazines for journalism, of any form, in this nation. And now they're this.

But I do look forward to the inaugural AI-generated swimsuit edition

Gonna be a lotta surplus thumbs.

The most likely rumored cause I saw was that it was reprisal for Altman trying to get the board to remove one of the board members who has since resigned, Helen Toner. Apparently they'd long been at odds, and about a month before all this she wrote a paper that he felt could be harmful to the future of OpenAI the company. Between the paper being published and the firing, he had been talking to the rest of the board to convince them to remove her. Trying to influence the board to benefit the company ran counter to the whole reason they structured things the way they did, which was so the board could rein in the company if needed. That fits with the initial claim that they couldn't trust him anymore, but that he didn't do anything illegal.

@emollick wrote:

It isn't just AI generated text that is starting to bleed over into search results.

The main image if you do a Google search for Hawaiian singer Israel Kamakawiwoʻole (whose version of Somewhere Over the Rainbow you have probably heard) is a Midjourney creation right from Reddit.

One side effect from AI is that the corpus of human knowledge from mid-2023 on will have to be treated fundamentally differently than prior to 2023.

A huge amount of what you learned or think you know about how to evaluate images or text is no longer valid. Not an exaggeration.

IMAGE(https://pbs.twimg.com/media/F_5p-loWUAAKvbB?format=jpg&name=medium)

IMAGE(https://pbs.twimg.com/media/F_5qgjyXAAEPSkZ?format=jpg&name=medium)

Now, TO BE FAIR, I did a search on desktop just now, and here's the result:

IMAGE(https://cdn.discordapp.com/attachments/829908548594827295/1180788870799958089/israel.JPG?ex=657eb254&is=656c3d54&hm=a3c738a15d50bb3e22f7c2b33ad5540ae386bbddda77110d684892cc28f802b7&)

That said, I personally have noticed the growing AI creep in Google Image Search, and in Pinterest, which is being rendered increasingly worthless for art reference (it was once a GREAT source) due to AI glurge.

Things are going to get weird and sh*tty.

AI isn't evil, but if evil people program AI... SKYNET!