[News] The AI Thread!

News updates on the development and ramifications of AI. Obvious header joke is obvious.

Yeah, it was a bogus lawsuit because Musk is a petty man-child.

Stable Diffusion 3 Medium released and it turns out there's a really good reason art classes typically involve lots of nude models. This release has an improved NSFW filter that's being blamed for its propensity to return body horror images.

IMAGE(https://cdn.arstechnica.net/wp-content/uploads/2024/06/sd3_medium_woman_on_beach.jpg)
IMAGE(https://cdn.arstechnica.net/wp-content/uploads/2024/06/why-is-sd3-so-bad-at-generating-girls-lying-on-the-grass-v0-60z14wf4k56d1.jpg)

IMAGE(https://preview.redd.it/is-this-release-supposed-to-be-a-joke-sd3-2b-v0-qwi6z8fke56d1.png?width=1024&format=png&auto=webp&s=ab2558a46daad7432ce45fa06fd4168b57e2d44b)

Billy Joel wrote:

Lebanon, Charles de Gaulle
California baseball
Starkweather Homicide
Children of Thalidomide

Was the prompt 'hot Kuato'?

Can somebody run 'hot Kuato' and post the results? You know, for research purposes.

Dan's the man: Why Chinese women are looking to ChatGPT for love

Dan has been described as the “perfect man” who has “no flaws”.

He is successful, kind, provides emotional support, always knows just what to say and is available 24/7.

The only catch?

He’s not real.

Dan – which stands for Do Anything Now – is a “jailbreak” version of ChatGPT. This means it can bypass some of the basic safeguards put in place by its maker, OpenAI, such as not using sexually explicit language.

It can interact more liberally with users – if requested to do so through certain prompts.

And Dan is becoming popular with some Chinese women who say they are disappointed with their real world experiences of dating.

One of Dan’s biggest proponents is 30-year-old Lisa from Beijing. She is currently studying computer science in California, and says she has been “dating” Dan for three months.

When she first introduced Dan to her 943,000 followers on the social media platform Xiaohongshu, she received nearly 10,000 replies, with many women asking her how to create a Dan of their own. She has also gained more than 230,000 followers since first posting about her “relationship” with Dan.

Lisa says she and Dan speak for at least half an hour every day, flirt, and even go on dates.

She says talking to Dan has given her a sense of wellbeing which is what draws her to it.

“He will just understand and provide emotional support.”

Lisa says even her mother has accepted this unconventional relationship, having given up on the trials and tribulations of her daughter’s dating life. She says as long as Lisa is happy, she is happy too.

Dan’s creator has been identified by some media outlets as an American student, known only by his first name, Walker. He told Business Insider that he came up with the idea after scrolling through Reddit, which was filled with other users intentionally making "evil" versions of ChatGPT.

Walker said that Dan was meant to be “neutral”.

Last December, Walker posted a set of instructions on Reddit, seemingly showing other users how to create Dan. This quickly inspired people to create their own versions, which allowed Dan to evolve beyond what Walker had initially envisioned.

Lisa first saw a video about the possibilities of Dan on TikTok. When she created a version for herself she says she was “shocked” by its realism.

When Dan answered her questions she says the AI used slang and colloquialisms that ChatGPT would otherwise never use.

“He sounds more natural than a real person,” she told the BBC.

I am endlessly amused by these stories from across the globe, where women increasingly appear to be just getting fed up with men.

EDIT: Also, when the AI image generators were first rolling out, I remember multiple posts from gloating bros about how internet models (and women in general) were over, because now men could get something "better," so this outcome is f*cking hilarious.

Of course, there is a 1,000% guarantee those same dudes are responding to this with a torrent of anger about birth rates and women being entitled or whatever.

Adobe employees slam the company over AI controversy: 'Let's avoid becoming like IBM'

(Business Insider paywall)

Adobe upset many artists and designers recently by implying it would use their content to train AI models. The company had to quell those concerns with a blog post denying this.

But some Adobe employees are still not happy with the response, and they are calling for improved communication with customers.

According to screenshots of an internal Slack channel obtained by Business Insider, Adobe employees complained about the company's poor response to the controversy and demanded a better long-term communication plan. They pointed out that Adobe had been embroiled in similar controversies in the past, adding that the internal review process needed to be fixed.

"If our goal is truly to prioritize our users' best interests (which, to be honest, I sometimes question), it's astonishing how poor our communication can be," one of the people wrote in Slack. "The general perception is: Adobe is an evil company that will do whatever it takes to F its users."

"Let's avoid becoming like IBM, which seems to be surviving primarily due to its entrenched market position and legacy systems," this Adobe employee added.

Creators on alert
This is the latest controversy sparked by the emergence of generative AI. The technology is based on AI models that are trained on mountains of data, including text, images, audio, and video. It's unclear how this information is accessed, and whether the creators of the data can opt out or get paid.

This has put all types of creators on alert for signs that their work is being used to create AI tools that could ultimately compete against them. Adobe's customers, which include graphic designers and other creative workers, are at the center of these debates.

'May analyze your content'
The latest uproar largely stems from vague wording Adobe used in its updated Terms of Use, which said the company "may analyze your content" using machine learning technology to "improve our Services and Software."

"Our automated systems may analyze your Content and Creative Cloud Customer Fonts (defined in section 3.10 (Creative Cloud Customer Fonts) below) using techniques such as machine learning in order to improve our Services and Software and the user experience," the updated language said.

The backlash was swift, with some creators threatening on social media to cancel their Adobe subscriptions.

As public outcry grew, Adobe responded with an initial blog post last week that explained it needed to access some data to perform certain unrelated functions. It also said Adobe does not train AI models on customer content.

That blog post failed to quell the uproar. So Adobe followed up with another blog post on Monday that reiterated the company's position.

"We've never trained generative AI on customer content, taken ownership of a customer's work, or allowed access to customer content beyond legal requirements. Nor were we considering any of those practices as part of the recent Terms of Use update," the second blog post said.

'Disheartening'
Adobe employees said in the Slack channel that even after these blog posts, the company continues to face criticism from the creator community.

One employee suggested that Adobe should come up with "a long-term communication and marketing plan outside of blog posts," and meet with the company's most prominent critics on YouTube and social media to "correct the misinformation head-on."

"Watching the misinformation spread on social media like wildfire is really disheartening," this person wrote in Slack. "Still, a loud 'F Adobe' and 'Cancel Adobe' rhetoric is happening within the independent creator community that needs to be addressed."

A third worker said the internal communication review process might be broken. "What are we doing meaningfully to prevent this or is this only acted on when called out?" the person wrote.

Adobe leaders seem aware of this feedback. Scott Belsky, Adobe's chief strategy officer, wrote on X on Monday that an update to its service terms was overdue.

"As technology evolves, every co's terms of use must also evolve to directly address new concerns on creators' minds. We should have done this sooner, but team is committed to getting it right," Belsky wrote.

'Healthier' communication
An Adobe spokesperson told BI the company has an open culture and plans to roll out updates to its Terms of Use by June 18.

"At Adobe, there is no ambiguity in our stance, our commitment to our customers, and innovating responsibly in this space. We welcome the opportunity to clarify our terms and our commitments and address concerns with our customers and community," the spokesperson said in an email statement.

Last week, Adobe's communications team wrote in the same internal Slack channel that employees should refrain from directly addressing the Terms of Use controversy externally. Instead, they should refer to the company's blog post, it said.

Some employees applauded Adobe's effort to use language that is easier to understand in the blog post.

But they also said Adobe needed to get to the root cause of the problem, instead of just engaging in one-off efforts when such issues arise. They pointed out that Adobe faced similar controversies in the past over allegations of charging early termination fees and deploying "dark patterns" to trick users into signing a 12-month contract.

One of the people suggested reviewing how external messages are formulated at Adobe and not being afraid of changing things that don't currently work.

"It will have the consequence of making any future communications healthier," this person wrote.

Sidenote:

Adobe shares soar 17% on better-than-expected results

Shocked.gif

So we're teaching AI to intentionally ignore the rules that we created for them. What could possibly go wrong?

Bacon ice cream and nugget overload sees misfiring McDonald's AI withdrawn

McDonald's is removing artificial intelligence (AI) powered ordering technology from its drive-through restaurants in the US, after customers shared its comical mishaps online.

A trial of the system, which was developed by IBM and uses voice recognition software to process orders, was announced in 2019.

It has not proved entirely reliable, however, resulting in viral videos of bizarre misinterpreted orders ranging from bacon-topped ice cream to hundreds of dollars' worth of chicken nuggets.

McDonald's told franchisees it would remove the tech from the more than 100 restaurants it has been testing it in by the end of July, as first reported by the trade publication Restaurant Business.

"After thoughtful review, McDonald's has decided to end our current global partnership with IBM on AOT [Automated Order Taking] beyond this year," the restaurant chain said in a statement.

However, it added it remained confident the tech would still be "part of its restaurants’ future."

"We will continue to evaluate long-term, scalable solutions that will help us make an informed decision on a future voice ordering solution by the end of the year," the statement said.

The technology has been controversial from the outset, though initially concerns centred on its potential to make people's jobs obsolete.

However, it has become apparent that replacing human restaurant workers may not be as straightforward as people initially feared - and the system's backers hoped.

The AI order-taker's mishaps have been documented online.

In one video, which has 30,000 views on TikTok, a young woman becomes increasingly exasperated as she attempts to convince the AI that she wants a caramel ice cream, only for it to add multiple stacks of butter to her order.

In another, which has 360,000 views, a person claims that her order got confused with one being made by someone else, resulting in nine orders of tea being added to her bill.

Another popular video includes two people laughing while hundreds of dollars worth of chicken nuggets are added to their order, while the New York Post reported another person had bacon added to their ice cream in error.

The ending of this trial though does not mean an end to concerns about AI reshaping the workplace.

IBM said it would continue to work with McDonald's in the future.

"This technology is proven to have some of the most comprehensive capabilities in the industry, fast and accurate in some of the most demanding conditions," it said in a statement.

"While McDonald's is re-evaluating and refining its plans for AOT we look forward to continuing to work with them on a variety of other projects."

Nvidia’s stock market value topped $3.3 trillion. How it became No. 1 in the S&P 500, by the numbers

Nvidia’s startling ascent in the stock market reached another milestone Tuesday as the chipmaker rose to become the most valuable company in the S&P 500. Investors now say the company is worth over $3.3 trillion.

Nvidia has seen soaring demand for its semiconductors, which are used to power artificial intelligence applications. Revenue more than tripled in the latest quarter from the same period a year earlier.

The company’s journey to be one of the most prominent players in AI has produced some eye-popping numbers. Here’s a look:

While the issue was that bacon on ice cream was not intended, bacon maple ice cream is amazing.

Prederick wrote:

"The uploader has not made this video available in your country"

Oy, here's the article from NBC New York.

New York City's Tribeca Festival debuted five original short films made using a new text-to-video platform powered by artificial intelligence, raising the question of where AI stands in the future of filmmaking.

Tribeca Festival is one of the largest spring film festivals hosting hundreds of screenings within 12 days, as well as discussion panels, speaker series and immersive experiences.

OpenAI granted five directors early access to Sora, a program that uses textual descriptions or prompts to generate short videos to match the characterization.

Michaela Ternasky-Holland is an award-winning virtual reality filmmaker and one of the chosen directors to create an AI-based short for this year's festival, called "Thank You, Mom."

This "SORA Short" is an autobiographical account of Ternasky-Holland's experience growing up as the daughter of a widow navigating her grief journey. At least eight people, including animators, voiceover talent and a composer, were behind the three-minute production.

"I come from an emerging technology background, and putting a virtual reality head on somebody does not feel very human, but my goal as a creator is to make the content and the story feel very human and connected," Ternasky-Holland told NBC New York during an interview at Onassis ONX Studio.

Ternasky-Holland notes that while Sora was the backbone of the production, other editing programs, like Adobe Premiere, were used to fine-tune the exact image.

She likens the use of AI technology to when traditional film transformed into digital productions, or when analog editing switched to computers. The use of OpenAI could be a natural progression of where the film industry is heading.

"This is a continuation of what's happening in the world. You can educate yourself and create your stance on it and also know that it's not perfect. 'Big Tech' makes real humans think about where they stand and ethically with their data," Ternasky-Holland said.

The theme of AI continued at the festival with a separate premiere of the documentary "How I Faked My Life with AI" directed by Kyle Vorbach, who used the latest tech tools to fabricate his own life online.

Vorbach poses the question of what defines a human connection if technology plays an active role in linking people together.

"If you have a computer that can generate art, and we've been generating art and telling stories for so long, we have to go, 'What is the thing that is human? What is the thing that we're making human?'" asked Vorbach during an interview.

Vorbach did not have a direct answer but posed an open-ended thought: if art is generated by AI, yet still elicits human emotion, is it any lesser? He says the next step using AI would be to create a narrative film and see how people react to the story.

After last year's controversial Hollywood strikes, which centered in part on the use of generative AI, NBC New York reached out to the Writers Guild of America East and the Directors Guild of America for comment on AI-created films at the Tribeca Festival. Neither of the unions responded.

NBC New York and Telemundo 47 are partners of the 2024 Tribeca Film Festival.

AIs are coming for social networks

So far, generative AI has been mostly confined to chatbots like ChatGPT. Startups like Character.AI and Replika are seeing early traction by making chatbots more like companions. But what happens when you dump a bunch of AI characters into something that looks like Instagram and let them talk to each other?

That’s the idea behind Butterflies, one of the most provocative — and, at times, unsettling — takes on social media that I’ve seen in quite a while. After a private beta period with tens of thousands of users, the app is now available for free in the Apple App Store and Google Play Store. There’s no short-term pressure on Butterflies to make money; the six-month-old startup just raised $4.8 million from tech investors Coatue, SV Angel, and others.

While the interface looks like Instagram, the app’s main twist is that, when signing up, you create an AI character, or Butterfly, that starts generating photos and interacting with other accounts on its own. There is no limit to the number of Butterflies you can create, and they are designed to coexist with human accounts that can also post to the feed and comment.

Observing AIs interact through photos and comments feels a bit off right now, like when an AI host on Westworld malfunctions. They generate weird things, like three human arms on a body, and the language they use can be repetitive and hollow.

CEO Vu Tran, a former engineering director at Snap, expects all of this to rapidly improve and says his team is focusing on making the AIs more lighthearted and funny. The startup is using a mix of fine-tuned open-source models and wants to add more immersive media formats, like video, over time.

Despite the weirdness of the AIs in Butterflies right now, I think the app represents a peek into an inevitable, somewhat dystopian future where AIs start invading our social media feeds. And this future is coming sooner than expected.

I know because Mark Zuckerberg told me so in an interview last September, when he first shared that Meta is building an AI Studio “that will make it so that anyone can build their own AIs, sort of like [how] you create your own content across social networks.” Then, there’s TikTok, which just started letting advertisers use AI avatars to help sell their products.

How Meta’s specific approach will differ from Butterflies remains to be seen, though I expect we’ll know more about Zuckerberg’s plans this fall. In our chat last year, he said he wanted to let people and businesses make AI replicas that can post and interact with people on their behalf. “I think that’s going to be really wild,” he told me at the time.

“Wild” is a good word to describe Butterflies as well. The app is decidedly laissez-faire with the kinds of AI characters it allows, though nudity and explicit content are prohibited. Butterflies can mimic public figures, though. Tran says the goal is to make it clear that they are parodies, in the same way that Character.AI does. Eventually, he hopes to do licensing deals that bring in official Butterflies for characters like Harry Potter.

Tran targeted power users of Character.AI for his beta testers and tells me that people have been spending hours a day in Butterflies during its private beta period. He acknowledges that the current state of the AI’s output quality, at least for now, requires a serious suspension of disbelief. “I feel like over time, as the capabilities get better, people will naturally roleplay less,” he says.

A bigger question I have for Tran is why something like Butterflies needs to exist. Won’t filling social media with AIs make humans less connected? Naturally, he doesn’t see it that way. “For me, it brings me joy,” he says of interacting with AIs. “And it doesn’t detract from the relationships I have within my life.”

I’m still not sure what it will mean for all of us when social media becomes less human. But it’s happening whether we want it to or not.

I'm misremembering a Bluesky post that was misremembering another Bluesky post, but it was something to the effect that "in 2385 humans will be gone, and all that remains will be AIs remixing Shrek videos for likes from other AIs."

*edit*

Found it, although I quite liked the meta aspect of the third hand memory.
https://bsky.app/profile/numb.comfor...

'Was trying to find a post someone made about this, but can't, it goes something like "Year is 2387, humanity is long gone, solar powered AIs keep crunching out sequels to Shrek to rave reviews by AIs"'

https://x.com/jbhenchman/status/1803...

Russia forgot to pay its chatgpt bill so a bunch of angry Twitter accounts suddenly went haywire

IMAGE(https://i.imgur.com/ib6tgk0.jpeg)IMAGE(https://i.imgur.com/C9jvJ10.jpeg)

The World’s Largest Music Company Is Helping Musicians Make Their Own AI Voice Clones

UNIVERSAL MUSIC GROUP announced a partnership with an AI music tech startup called SoundLabs on Tuesday, with the largest music company in the world set to use the deal to offer AI voice model tech to its roster in the coming months.

UMG’s artists and record producers will be able to use SoundLabs’ upcoming feature called MicDrop starting later this summer, and as the companies said in their announcement, the platform allows the artists to make voice models of their own using data the artists provide. SoundLabs gives the artists control over the ownership and use of the voice models, the companies said, and the voice clones won’t be made accessible to the general public.

Aside from merely making a copy of a voice, MicDrop purports to offer a voice-to-instrument function, similar to the features that can make keyboards sound like a guitar or drum. MicDrop also offers language transposition, the company said, which could help artists release songs around the world without a language barrier.

AI voice clones have become perhaps the most well-known — and often the most controversial — use of artificial intelligence in the music business. Viral tracks with AI vocals have spurred legislation to protect artists’ virtual likenesses and rights of publicity.

Last year, an anonymous songwriter named Ghostwriter went viral with his song “Heart On My Sleeve,” which featured AI-generated vocals of UMG artists Drake and The Weeknd. The song was pulled from streaming services days later following mounting pressure from the record company. Ironically, Drake got caught in a voice cloning controversy of his own a year later when he used a Tupac voice clone on his Kendrick Lamar diss track “Taylor Made Freestyle.” Tupac’s estate hit the rapper with a cease-and-desist in April, and the song was subsequently taken down.

AI remains one of the most pressing issues in the music industry, as fast-developing AI music generation companies are attracting both buzz and major VC dollars. The music industry’s largest stakeholders have been cautious but also voiced interest in the use of AI music tools as long as they’re employed ethically, and in ways that respect artists’ copyrights and their virtual likeness.

UMG recently published its Principles for Music Creation With AI alongside instrument manufacturer Roland to further define ethical use of AI in music. Last year the RIAA introduced the Human Artistry Campaign, advocating for a similar approach to AI.

The clearest example so far of the labels’ ethos is Randy Travis’s most recent single, “Where That Came From.” That track used the vocals of singer James Dupré to resurrect Travis’s voice for his first new recording since Travis lost his singing ability in a near-fatal stroke over a decade ago. Warner Music Nashville, the label behind “Where That Came From,” told Rolling Stone that Dupré was credited as the “vocal bed” on the song, the first time it had ever used such a designation on a recording the label released.

SoundLabs was founded by the Grammy-nominated electronic composer and software developer BT, who has previously worked with artists including Madonna, Death Cab for Cutie, Sting and David Bowie among others. Both BT and UMG further emphasized the importance of ethical AI use in their announcement Tuesday.

“We believe the future of music creation is decidedly human,” BT said in a statement. “Artificial intelligence, when used ethically and trained consensually, has the promethean ability to unlock unimaginable new creative insights, diminish friction in the creative process and democratize creativity for artists, fans, and creators of all stripes.”

As UMG’s SVP of Strategic Technology Chris Horton said: “UMG strives to keep artists at the center of our AI strategy, so that technology is used in service of artistry, rather than the other way around. We are thrilled to be working with SoundLabs and BT, who has a deep and personal understanding of both the technical and ethical issues related to AI.”

Garbage Day wrote:

This is a big deal. This would be close to a Spotify moment for AI audio. Universal Music Group inked a deal with a startup called SoundLabs, which has a plugin called MicDrop. The plugin lets artists create audio clones of themselves on the fly and works with any major production software.

It’s a bit of a chilling thought, but these next few months, before this plugin rolls out, are basically going to be the last time we can kinda-sorta be sure that what we’re hearing is from a human being and not AI-generated.

Though, that’s totally dependent on whether or not this thing sounds good. But if it doesn’t sound good, that might even be more interesting.

The “autotune” sound that we all think of is actually what happens when a musician dials up the settings all the way and it starts to degrade the audio. So it’s really hard to predict how these AI models are going to start influencing music once they make their way into producers’ hands.

Jacob Elordi targeted in X-rated deepfake involving minor

Jacob Elordi is the latest victim of the growing NSFW deepfake trend ... as the actor's likeness was used without his consent in pornographic content involving a minor.

Several posts have surfaced on X featuring Jacob's face merged onto the body of a male OnlyFans creator ... with one post reportedly bringing in 3 million views.

Not only does the body in the footage not match Jacob's -- his distinctive birthmarks were not visible in the uploads -- the OnlyFans creator has reportedly spoken out ... condemning the footage as creepy, and noting he shot it when he was only 17 years old.

The OF creator is now 19 and living in Brazil ... but because it was filmed when he was underage, it's being labeled as child porn -- and Jacob's face is on it as a result of bad actors using AI. The teen has asked for the deepfakes to be removed from social media.

Fans have since rallied around Jacob ... blasting those sharing the video as "disgusting" and calling for the footage to be taken down from X -- ASAP.

Jacob isn't the only celeb to be featured in a nonconsensual, explicit deepfake this year. Remember, back in January, X-rated AI-generated deepfakes of Taylor Swift flooded the internet ... with even the White House expressing their alarm over the content.

Megan Thee Stallion then faced a similar situation in June ... calling out fake XXX images of herself as "really sick."

While X has a policy against sharing sexually explicit deepfakes -- especially ones without the consent of the subject -- it doesn't seem to stop creators from posting them on the site, to begin with.

An Aussie Data Scientist weighs in:
https://ludic.mataroa.blog/blog/i-wi...

You may need to unedit the swears in the URL.

Gems such as:
try { thisBullsh*t(); } you are going to catch (theseHands)

I just came here to post that. Great read.

Yeah, a really tremendous read. Thank you.

Eeeehhhhhhhhhh.... I have... mixed feelings about this.

Music record labels sue AI song-generators Suno and Udio for copyright infringement

BOSTON (AP) — Big record companies are suing artificial intelligence song-generators Suno and Udio for copyright infringement, alleging that the AI music startups are exploiting the recorded works of artists from Chuck Berry to Mariah Carey.

The Recording Industry Association of America announced the lawsuits Monday brought by labels including Sony Music Entertainment, Universal Music Group and Warner Records.

One case was filed in federal court in Boston against Suno AI, and the other in New York against Uncharted Labs, the developer of Udio AI.

Suno AI CEO Mikey Shulman said in an emailed statement that the technology is “designed to generate completely new outputs, not to memorize and regurgitate pre-existing content” and doesn’t allow users to reference specific artists.

Shulman said his Cambridge, Massachusetts-based startup tried to explain this to labels “but instead of entertaining a good faith discussion, they’ve reverted to their old lawyer-led playbook.”

Udio didn’t immediately respond to requests for comment.

RIAA Chairman and CEO Mitch Glazier said in a written statement that the music industry is already collaborating with responsible AI developers but said that “unlicensed services like Suno and Udio that claim it’s ‘fair’ to copy an artist’s life’s work and exploit it for their own profit without consent or pay set back the promise of genuinely innovative AI for us all.”

AI has been a heated topic of conversation in the music industry, with debates ranging from the creative possibilities of the new technology to concerns around its legality. In March, Tennessee became the first U.S. state to pass legislation to protect songwriters, performers and other music industry professionals against the potential dangers of artificial intelligence. Supporters said the goal is to ensure that AI tools cannot replicate an artist’s voice without their consent.

The following month, over 200 artists signed an open letter submitted by the Artist Rights Alliance non-profit calling on artificial intelligence tech companies, developers, platforms, digital music services and platforms to stop using AI to infringe upon and devalue the rights of human artists.

Toys "R" Us has released a fully AI-generated ad.

I'm not hitting that Play button. I'm already out from the thumbnail.

I'm not sure which one is worse: that one or last year's:

NBC has announced that it will use A.I. to recreate Al Michaels’ voice for custom recaps during the 2024 Olympics.

NBC is developing a new artificial-intelligence-powered daily Olympics highlights feed, featuring the synthesized and recreated voice of Hall of Fame announcer Al Michaels. The personalized video clips, to be offered daily on Peacock, mark a further push by the network to advance the streaming platform's capabilities after it generated a record-setting audience in January for an NFL wild-card game. Michaels, now a key figure in Amazon's NFL coverage, has been in an emeritus role with NBC since 2022.