[Discussion] The Inconceivable Power of Trolls in Social Media

This is a follow-on to the nearly two year old topic "Trouble at the Kool-Aid Point." The intention is to provide a place to discuss the unreasonable power social media trolls have over women and minorities, with a primary focus on video games (though other examples are certainly welcome).

Chairman_Mao wrote:

Good. If a channel is truly trying to be family friendly, comments should be disabled anyway.

The number of comments a video gets is something YouTube's algorithm uses to determine how visible a video is and how much it gets promoted or recommended. The algorithm treats turning off comments as a negative signal and responds by basically burying the video in search results and not recommending it.

Content creators, especially smaller creators, consider comments essential to interacting with their viewers and growing their channels.
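
YouTube doesn't publish how its ranking actually works, but the engagement-signal idea described above might look something like the toy sketch below; every field name and weight here is invented for illustration, not anything from YouTube's real system.

```python
# Purely illustrative toy ranking heuristic. Signal names and weights are
# invented for this sketch; YouTube's actual system is not public.
from dataclasses import dataclass


@dataclass
class Video:
    views: int
    likes: int
    comment_count: int
    comments_enabled: bool


def engagement_score(v: Video) -> float:
    """Rough 'engagement' score in which comments are a strong signal."""
    score = v.views * 0.001 + v.likes * 0.05 + v.comment_count * 0.5
    if not v.comments_enabled:
        # Disabling comments removes an entire engagement signal; in a
        # system like this, that quietly buries the video.
        score *= 0.3  # arbitrary penalty, for illustration only
    return score


videos = [
    Video(views=10_000, likes=800, comment_count=250, comments_enabled=True),
    Video(views=10_000, likes=800, comment_count=0, comments_enabled=False),
]
for v in sorted(videos, key=engagement_score, reverse=True):
    state = "comments on" if v.comments_enabled else "comments off"
    print(f"{engagement_score(v):.1f}  ({state})")
```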

OG_slinger wrote:
Chairman_Mao wrote:

Good. If a channel is truly trying to be family friendly, comments should be disabled anyway.

The number of comments a video gets is something YouTube's algorithm uses to determine how visible a video is and how much it gets promoted or recommended. The algorithm treats turning off comments as a negative signal and responds by basically burying the video in search results and not recommending it.

Content creators, especially smaller creators, consider comments essential to interacting with their viewers and growing their channels.

Mostly rhetorical, but when have YouTube comments EVER been of value?

I keep seeing "YouTube demonetized antivax videos" and reading it as "YouTube demonized antivax videos."

But they were already demonic!

BadKen wrote:

I keep seeing "YouTube demonetized antivax videos" and reading it as "YouTube demonized antivax videos."

But they were already demonic!

YouTube demonetized Anthrax videos?! F that!

OG_slinger wrote:
Chairman_Mao wrote:

Good. If a channel is truly trying to be family friendly, comments should be disabled anyway.

The number of comments a video gets is something YouTube's algorithm uses to determine how visible a video is and how much it gets promoted or recommended. The algorithm treats turning off comments as a negative signal and responds by basically burying the video in search results and not recommending it.

Content creators, especially smaller creators, consider comments essential to interacting with their viewers and growing their channels.

Yeah, I've seen a lot of perfectly normal YT creators decrying this move as the kind of short-sighted, ham-handed CYA attempt YT has become best known for. This will, almost certainly, cause some pretty unfortunate, unexpected knock-on effects.

Meanwhile, ALGORITHMS!

It’s troubling enough that British teenager Molly Russell sought out images of suicide and self-harm online before she took her own life in 2017. But it was later discovered that these images were also being delivered to her, recommended by her favorite social media platforms. Her Instagram feed was full of them. Even in the months after her death, Pinterest continued to send her automated emails, its algorithms automatically recommending graphic images of self-harm, including a slashed thigh and cartoon of a young girl hanging. Her father has accused Instagram and Pinterest of helping to kill his 14-year-old daughter by allowing these graphic images on their platforms and pushing them into Molly’s feed.

Molly’s father’s distressing discovery has fueled the argument that social media companies like Instagram and Pinterest are exacerbating a “mental health crisis” among young people. Social media may be a factor in the rise of a “suicide generation”: British teens who are committing suicide at twice the rate they were eight years ago. There have been calls for change in the wake of Molly Russell’s death. British health secretary Matt Hancock, for example, said social media companies need to “purge this content once and for all” and threatened to prosecute companies that fail to do so. In the face of this intense criticism, Instagram has banned “graphic self-harm images,” a step beyond their previous rule only against “glorifying” self-injury and suicide.

But simple bans do not in themselves deal with a more pernicious problem: Social media platforms not only host this troubling content, they end up recommending it to the people most vulnerable to it. And recommendation is a different animal than mere availability. A growing academic literature bears this out: Whether it's self-harm, misinformation, terrorist recruitment, or conspiracy, platforms do more than make this content easily found—in important ways they help amplify it.

Our research has explored how content that promotes eating disorders gets recommended to Instagram, Pinterest, and Tumblr users. Despite clear rules against any content that promotes self-harm, and despite blocking specific hashtags to make that content harder to find, social media platforms continue to serve this content up algorithmically. Social media users receive recommendations—or, as Pinterest affectionately calls them, “things you might love”—intended to give them a personalized, supposedly more enjoyable experience. Search for home inspiration and soon the platform will populate your feed with pictures of paint samples and recommend amateur interior designers for you to follow. This also means that, the more a user seeks out accounts promoting eating disorders or posting images of self-harm, the more the platform learns about their interests and sends them further down that rabbit hole too.

As we've noted, this can apply to a whole lot, like anti-vax stuff. Watch one video, here's 500 more like it.
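
The "watch one video, here's 500 more" loop is easy to reproduce in miniature. Here's a hypothetical sketch of the kind of interest-weighted recommender the article describes; the topics, items, and update rule are all made up for illustration.

```python
# Toy interest-tracking recommender, to illustrate the feedback loop.
# Topics, items, and the update rule are invented for this sketch.
import random
from collections import defaultdict

CATALOG = {
    "home_decor": ["paint samples", "amateur interior designers"],
    "harmful_topic": ["stand-in item A", "stand-in item B"],
}

interest = defaultdict(float)  # per-user topic weights


def record_engagement(topic: str) -> None:
    # Every click, like, or long view nudges the profile toward that topic.
    interest[topic] += 1.0


def recommend() -> str:
    # Sample the next item in proportion to accumulated interest, so
    # engagement with a topic produces ever more of that topic.
    topics = list(CATALOG)
    weights = [interest[t] + 0.1 for t in topics]  # small prior so new topics surface
    topic = random.choices(topics, weights=weights, k=1)[0]
    return f"{topic}: {random.choice(CATALOG[topic])}"


# Ten engagements with one topic and the feed tilts hard toward it.
for _ in range(10):
    record_engagement("harmful_topic")
print([recommend() for _ in range(5)])
```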

Honestly, the more I observe this stuff, the more it seems like the people who created it clearly couldn't imagine (or weren't interested in imagining) where it would go, and now they've built Frankenstein's monster, and they can't turn it off.

Prederick wrote:

Honestly, the more I observe this stuff, the more it seems like the people who created it clearly couldn't imagine (or weren't interested in imagining) where it would go, and now they've built Frankenstein's monster, and they can't turn it off.

What I really want to know is if they have realized that yet. Like, Zuckerberg clearly doesn't believe Facebook is causing harm (or is fine with the harm it causes because it gives him more power). The people at YouTube and Google? No idea what they actually believe.

Prederick wrote:

Honestly, the more I observe this stuff, the more it seems like the people who created it clearly couldn't imagine (or weren't interested in imagining) where it would go, and now they've built Frankenstein's monster, and they ~~can't~~ won't turn it off.

FTFY

So, here's the thing about policing content on these platforms: you can't leave it solely up to the algorithms, because they're not advanced enough yet; you always need actual, real-life human beings to make some of the calls. I wonder what that experience is like for....

....oh. Oh that doesn't sound good.

The panic attacks started after Chloe watched a man die.

She has spent the past three and a half weeks in training, trying to harden herself against the daily onslaught of disturbing posts: the hate speech, the violent attacks, the graphic pornography. In a few more days, she will become a full-time Facebook content moderator, or what the company she works for, a professional services vendor named Cognizant, opaquely calls a “process executive.”

For this portion of her education, Chloe will have to moderate a Facebook post in front of her fellow trainees. When it’s her turn, she walks to the front of the room, where a monitor displays a video that has been posted to the world’s largest social network. None of the trainees have seen it before, Chloe included. She presses play.

The video depicts a man being murdered. Someone is stabbing him, dozens of times, while he screams and begs for his life. Chloe’s job is to tell the room whether this post should be removed. She knows that section 13 of the Facebook community standards prohibits videos that depict the murder of one or more people. When Chloe explains this to the class, she hears her voice shaking.

Returning to her seat, Chloe feels an overpowering urge to sob. Another trainee has gone up to review the next post, but Chloe cannot concentrate. She leaves the room, and begins to cry so hard that she has trouble breathing.

No one tries to comfort her. This is the job she was hired to do. And for the 1,000 people like Chloe moderating content for Facebook at the Phoenix site, and for 15,000 content reviewers around the world, today is just another day at the office.

Collectively, the employees described a workplace that is perpetually teetering on the brink of chaos. It is an environment where workers cope by telling dark jokes about committing suicide, then smoke weed during breaks to numb their emotions. It’s a place where employees can be fired for making just a few errors a week — and where those who remain live in fear of the former colleagues who return seeking vengeance.

It’s a place where, in stark contrast to the perks lavished on Facebook employees, team leaders micromanage content moderators’ every bathroom and prayer break; where employees, desperate for a dopamine rush amid the misery, have been found having sex inside stairwells and a room reserved for lactating mothers; where people develop severe anxiety while still in training, and continue to struggle with trauma symptoms long after they leave; and where the counseling that Cognizant offers them ends the moment they quit — or are simply let go.

The moderators told me it’s a place where the conspiracy videos and memes that they see each day gradually lead them to embrace fringe views. One auditor walks the floor promoting the idea that the Earth is flat. A former employee told me he has begun to question certain aspects of the Holocaust. Another former employee, who told me he has mapped every escape route out of his house and sleeps with a gun at his side, said: “I no longer believe 9/11 was a terrorist attack.”

Miguel works the posts in his queue. They arrive in no particular order at all.

Here is a racist joke. Here is a man having sex with a farm animal. Here is a graphic video of murder recorded by a drug cartel. Some of the posts Miguel reviews are on Facebook, where he says bullying and hate speech are more common; others are on Instagram, where users can post under pseudonyms, and tend to share more violence, nudity, and sexual activity.

Each post presents Miguel with two separate but related tests. First, he must determine whether a post violates the community standards. Then, he must select the correct reason why it violates the standards. If he accurately recognizes that a post should be removed, but selects the “wrong” reason, this will count against his accuracy score.

Miguel is very good at his job. He will take the correct action on each of these posts, striving to purge Facebook of its worst content while protecting the maximum amount of legitimate (if uncomfortable) speech. He will spend less than 30 seconds on each item, and he will do this up to 400 times a day.
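
For what an "accuracy score" like Miguel's might mean mechanically, here's a hypothetical sketch. The two-part rule (the action and the cited reason must both match the auditor) comes from the article's description; every name and number in the code is invented.

```python
# Hypothetical sketch of the two-part check described in the article: a
# decision only counts as correct if the action AND the cited policy reason
# both match the auditor's answer. Names and sample data are invented.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Decision:
    remove: bool            # did the reviewer remove the post?
    reason: Optional[str]   # which community-standards section they cited


def is_correct(moderator: Decision, auditor: Decision) -> bool:
    if moderator.remove != auditor.remove:
        return False
    # The right action with the "wrong" reason still counts against accuracy.
    return moderator.reason == auditor.reason


def accuracy(pairs):
    return sum(is_correct(m, a) for m, a in pairs) / len(pairs)


reviews = [
    (Decision(True, "13"), Decision(True, "13")),    # correct
    (Decision(True, "9"), Decision(True, "13")),     # removed, but wrong reason
    (Decision(False, None), Decision(True, "13")),   # missed violation
]
print(f"accuracy: {accuracy(reviews):.0%}")  # 33%
```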

Well! Great.

Yep.

We had a team of a few dozen human moderators when Yahoo Personals started offering a premium service back in 2001. Turnover in those jobs was severe, and they paid well. I doubt they had to see a fraction of the stuff mentioned in that article, but paid subscribers submitted more than enough disturbing profile pictures to break reviewers' brains.

The horrible treatment of their own staff is so infuriating. They're treated like your average call center/helpdesk/support type of white-collar employee (who, by the way, should also be treated better than human cattle), but the thing is, this job is a thousand times worse than even those!

Yeah, as sh*tty as people are to regular call center workers, this is exponentially worse. This is literally being tasked with looking at the absolute worst of humanity, Every. Single. Day.

Like, you might come into this gig having never seen CP, but after six months on the job, there's a pretty good chance that, unfortunately, you'll be more familiar with it than you ever thought you'd be.

Bringing it back from social media horror shows for a moment, THQ Nordic apparently thought it was a good idea to do an AMA on 8chan. 8chan is the somehow-even-worse child of 4chan that has been, among other things, blacklisted from Google search results over child abuse content.

THQ Nordic is a garbage developer.

Indeed. This is the first bad news I've seen from them on the social media front. What have I missed? Do I even want to know?

BadKen wrote:

Indeed. This is the first bad news I've seen from them on the social media front. What have I missed? Do I even want to know?

Sally didn't like them because they didn't fix a broken achievement.

But yeah, I'm pretty comfortable calling them garbage after this sh*t. This is literally the worst gaming PR that I've ever seen.

They're just going after the much-coveted "arsehole" segment of the market.

Jonman wrote:

They're just going after the much-coveted "arsehole" segment of the market.

To be fair, that's being outed as an increasingly huge segment.

Tanglebones wrote:
Jonman wrote:

They're just going after the much-coveted "arsehole" segment of the market.

To be fair, that's being outed as an increasingly huge segment.

Right? I was only half-joking.

If I was a savvy money man at a big publisher, I'd be looking to get my games in front of their mouthbreathing faces. Like, I'm sadly half convinced that this was actually a good PR move.

Jonman wrote:
Tanglebones wrote:
Jonman wrote:

They're just going after the much-coveted "arsehole" segment of the market.

To be fair, that's being outed as an increasingly huge segment.

Right? I was only half-joking.

If I was a savvy money man at a big publisher, I'd be looking to get my games in front of their mouthbreathing faces. Like, I'm sadly half convinced that this was actually a good PR move.

Having to follow up your "good" PR move, while you hold licenses from Disney, no less, by apologizing for doing an AMA on a site hosting images of sexual abuse, including child abuse, plus antisemitism and far more... seems like LESS of a good move.

EDIT: Damn Twitter website updating my URL as I scroll through the replies... which are increasingly "Uhh, you sure about this, guys? Because the replies from you and the other THQN members participating in that AMA seem to suggest y'all were right at home."

Demosthenes wrote:

Having to follow up your "good" PR move, while you hold licenses from Disney, no less, by apologizing for doing an AMA on a site hosting images of sexual abuse, including child abuse, plus antisemitism and far more... seems like LESS of a good move.

I mean, I'm not disagreeing with you, but my inner skeptic looks at your statement and thinks "only if they lose more money from Disney recoiling than they do from the clammy hug they're now in with the internet arsehole brigade."

But my inner skeptic's inner skeptic* thinks that 8chan arseholes are going to pirate all their games anyway...

Spoiler:

* it's skeptics all the way down

ALLLLLLLLLLGORITHMS

In 2008, Vaccinate Your Family, the nation’s largest nonprofit dedicated to advocating for vaccinations, had to stop posting videos to YouTube.

Then known as Every Child By Two, the organization had used its channel on the massive video platform to post interviews with doctors, public service announcements and testimonials from parents of children who had died of vaccine-preventable diseases.

But those messages were quickly sabotaged. YouTube’s recommendation system, which appears alongside videos and suggests what users should watch next, would direct viewers to anti-vaccination videos, according to Amy Pisani, executive director of Vaccinate Your Family.

“When we would put things on YouTube, it was followed by an anti-vaccination video,” Pisani told NBC News. YouTube’s recommendation system, powered by an algorithm that the company does not make public, has been criticized in recent years for favoring controversial and conspiratorial content. The company has said that it changed the system to point to more “authoritative” sources.

“They were insane. Videos like ‘My child was harmed by the DTaP’ or ‘My child can’t walk anymore,’ every conspiracy that you can imagine would come after ours,” Pisani said. "They actually started running right after our video was over, so if you blinked for a minute, you wouldn’t know it was a new video.”

“We became so frustrated with the recommendations that we moved them to Vimeo,” a far smaller YouTube-like video platform owned by media conglomerate InterActiveCorp, Pisani said. YouTube has more than 10 times Vimeo’s active users and is the second-largest search engine in the world after Google, which also owns YouTube.

Pisani’s story offers a window into the struggle that public health officials and advocates face as they attempt to provide information on vaccinations on social media, where anti-vaccination proponents have spent more than a decade building audiences and developing strategies that ensure they appear high in search results and automated recommendations.

Again, replace "anti-vax" with any number of other topics, like "QAnon" or "Flat Earth," and you begin to notice the scale of the problem here.

LOL, YOOUUUUUUTUUUUUUUUUUBE

Free Hess, a pediatrician and mother, had learned about the chilling videos over the summer when another mom spotted one on YouTube Kids.

She said that minutes into the clip from a children’s video game, a man appeared on the screen — offering instructions on how to commit suicide.

“I was shocked,” Hess said, noting that since then, the scene has been spliced into several more videos from the popular Nintendo game Splatoon on YouTube and YouTube Kids, a video app for children. Hess, from Ocala, Fla., has been blogging about the altered videos and working to get them taken down amid an outcry from parents and child health experts, who say such visuals can be damaging to children.

One on YouTube shows a man pop into the frame. “Remember, kids,” he begins, holding what appears to be an imaginary blade to the inside of his arm. “Sideways for attention. Longways for results.”

“I think it’s extremely dangerous for our kids,” Hess said about the clips Sunday in a phone interview with The Washington Post. “I think our kids are facing a whole new world with social media and Internet access. It’s changing the way they’re growing, and it’s changing the way they’re developing. I think videos like this put them at risk.”

A recent YouTube video viewed by The Post appears to include a spliced-in scene showing Internet personality Filthy Frank. It’s unclear why he was edited into these clips, but his fans have been known to put him in memes and other videos. There is a similar video on his channel filmed in front of a green screen, but the origins and context of the clip in question are not clear.

Andrea Faville, a spokeswoman for YouTube, said in a written statement that the company works to ensure that it is “not used to encourage dangerous behavior and we have strict policies that prohibit videos which promote self-harm.”

“We rely on both user flagging and smart detection technology to flag this content for our reviewers,” Faville added. “Every quarter we remove millions of videos and channels that violate our policies and we remove the majority of these videos before they have any views. We are always working to improve our systems and to remove violative content more quickly, which is why we report our progress in a quarterly report [transparencyreport.google.com] and give users a dashboard showing the status of videos they’ve flagged to us.”

WTAF

At this point I'd be fairly OK with the government stepping in and shutting the whole damn thing down. Google are clearly not interested in doing anything beyond the bare minimum of housekeeping they can get away with.

Again, in their meager defense, part of the problem is the model. The appeal of YouTube is that there is no bar to clear. Anyone can upload whatever they want, whenever they want. The sheer amount of content YouTube processes on an hourly basis is essentially impossible to closely moderate.

That said, they are also doing next to the laziest job possible.
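
For a sense of the scale argument, here's a rough back-of-the-envelope sketch. The roughly 500 hours of video uploaded to YouTube per minute is a widely cited public figure, not a number from this thread, and the review-speed assumptions are mine.

```python
# Back-of-the-envelope moderation math. The ~500 hours/minute upload rate is
# a widely cited public figure and an assumption of this sketch; the review
# speed and shift length are also assumptions.
HOURS_UPLOADED_PER_MINUTE = 500
MINUTES_PER_DAY = 24 * 60

hours_uploaded_per_day = HOURS_UPLOADED_PER_MINUTE * MINUTES_PER_DAY

# Suppose a reviewer could somehow watch at 2x speed for a 6-hour shift.
hours_reviewed_per_person_per_day = 6 * 2

reviewers_needed = hours_uploaded_per_day / hours_reviewed_per_person_per_day
print(f"{hours_uploaded_per_day:,} hours of video uploaded per day")
print(f"~{reviewers_needed:,.0f} full-time reviewers just to watch it all once")
```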

The volume of content is partly a smokescreen. They brought the issue on themselves by letting an "anything goes" culture take root on YouTube. They refused to moderate it early on, and if they had, fewer people would even attempt to post this stuff. But they wanted clicks and views and engagement and prioritized that above all else (they still do).

It's like refusing to clean up mold in your basement and then suddenly being surprised that it's inside all the walls and has become a huge problem and isn't just in the basement anymore. (And this is a problem with internet culture as a whole. We collectively created a space where people were outright encouraged to do whatever, and now we're surprised that they're doing just that.)

ClockworkHouse wrote:

But they wanted clicks and views and engagement and prioritized that above all else (they still do).

Clicks, views, and engagement are exactly what you want from an advertising delivery platform, which is exactly what YouTube really is.

Every social media technology suffers from the same problem. The companies that developed the technologies or platforms don't care how people use their stuff or what content they upload. They just care that a lot of people use their stuff and consume user-generated content.

That's because, regardless of what utility or value people may get from a platform, it really just exists to serve up advertising. And companies are only going to try to manage all the content and interactions when things get so bad that it generates bad publicity that causes advertisers to pull their ad buys.

It's basically an online version of retail loss prevention. There's a level of theft--or in the case of social media companies abuse, threats, racism, disturbing ass sh*t, etc.--that's simply tolerated because it's not cost effective to actually do something about it.

Removing content that violates the terms of service isn't really moderation, or at least it's qualitatively different from moderation as practiced on a forum like this. The content screeners have no context and no awareness of the community.

And YouTube's "anything goes" was partially because they didn't want to deal with how much of their early content was straight-up stolen.

It doesn't help that the safe harbor laws keep them from being responsible for what people upload.

ClockworkHouse wrote:

The volume of content is partly a smokescreen.

As is the claim of "safe harbor." They are not just presenting user content. They are ranking it and presenting it based on their own idea of what has value.

The issue is not that there is more content than can possibly be reviewed, the issue is that YouTube itself is promoting inappropriate content. As I've mentioned elsewhere, they can fix that (relatively) easily. They tweak their ranking algorithms for advertisers all the time. It's much easier to identify viewing patterns of inappropriate content than it is to police all inappropriate content.

Unfortunately, YouTube and other sites with machine learning content promotion are not going to change anything until they are forced to do so.
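
The "tweak the ranking instead of policing every upload" idea from the post above could look something like this hypothetical sketch: demote anything whose audience overlaps heavily with content already confirmed as violating, without trying to classify every video directly. All thresholds, names, and data here are invented.

```python
# Hypothetical ranking tweak: instead of classifying every video, demote
# videos whose viewers overlap heavily with already-confirmed-bad content.
# Thresholds, names, and sample data are invented for illustration.

def audience_overlap(viewers_a: set, viewers_b: set) -> float:
    """Jaccard overlap between two videos' viewer sets."""
    if not viewers_a or not viewers_b:
        return 0.0
    return len(viewers_a & viewers_b) / len(viewers_a | viewers_b)


def demotion_factor(video_viewers: set, known_bad_viewers: list, threshold: float = 0.3) -> float:
    """Return a ranking multiplier; anything below 1.0 demotes the video."""
    worst = max(
        (audience_overlap(video_viewers, bad) for bad in known_bad_viewers),
        default=0.0,
    )
    return 0.1 if worst >= threshold else 1.0


known_bad = [{"u1", "u2", "u3", "u4"}]           # viewers of confirmed-violating videos
suspect = {"u2", "u3", "u4", "u5"}               # heavy co-viewing -> demoted
unrelated = {"u9", "u10"}                        # no overlap -> untouched
print(demotion_factor(suspect, known_bad))       # 0.1
print(demotion_factor(unrelated, known_bad))     # 1.0
```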

Seems like the government intervention should take the form of outlawing any recommendation algorithm for any web platform. It's relatively easy to define, we have clear examples of how it's harmful, and society isn't harmed when you take it away.

Oh no, my ad revenue! Tough sh*t, you made a monster. Maybe think about selling products and services that add value, rather than taxing attention as a business model.