[Discussion] The Inconceivable Power of Trolls in Social Media

This is a follow-on to the nearly two-year-old topic "Trouble at the Kool-Aid Point." The intention is to provide a place to discuss the unreasonable power social media trolls have over women and minorities, with a primary focus on video games (though other examples are certainly welcome).

Hey, Prederick!

Police issue warning to parents after "Momo challenge" resurfaces

Like most memes, the Momo challenge seemingly disappeared soon after it went viral. But this week, parents across the U.K. are finding the game on WhatsApp as well as hidden within animated videos for children across social media. [...]
Schools across the U.K. are also alerting parents to the potential dangers of the viral videos. "We are aware that some nasty challenges (Momo challenge) are hacking into children's programmes. Challenges appear midway through Kids YouTube, Fortnight, Peppa pig to avoid detection by adults,"
Chairman_Mao wrote:

Vanilla Ice impersonator Milo Yiannopoulos....

Genuine LOL.

And again, parents have finally freaked out about something on YouTube, and it's a hoax. Fantastic.

It's not a hoax! It's just something only children can see on YouTube. THAT'S WHY IT'S SO PERNICIOUS!

The credible takeaway that I read in a CNN report is that all the publicity around this is probably going to give trolls and/or hackers some ideas.

"We are aware that some nasty challenges (Momo challenge) are hacking into children's programmes."

Hacking into children's programs? This is why we're doomed, of course: even the language surrounding this hoax is so wrong that it's going to confuse people. As a larger society we can't even discuss it properly.

That Momo thing is freaky though, if anyone ever wanted to kill me Agent 47 style, they'd just have to print that out and stick it to the outside of my bathroom window facing in. Get up to pee in the middle of the night, switch on the light and boom: instant heart attack.

BadKen wrote:

The credible takeaway that I read in a CNN report is that all the publicity around this is probably going to give trolls and/or hackers some ideas.

Oh this has absolutely been Streisand-effected into the stratosphere now.

I'm fascinated by this Borgesian phenomenon of an urban legend willing itself into existence from basically nothing.

Gremlin wrote:

I'm fascinated by this Borgesian phenomenon of an urban legend willing itself into existence from basically nothing.

Like Libertarianism?

I'm listening to the Behind the Bastards podcast on George Lincoln Rockwell. He's basically the original racist troll who started the whole Holocaust-denial, neo-Nazi, free-speech-warrior thing right after WWII. Current trolls are really just using the same strategy he came up with way back then. It's fascinating and depressing.

Jonman wrote:
JeffreyLSmith wrote:

Sounds like an easy way to identify the white nationalist incels all in one convenient echo chamber that civilized human beings aren't forced to hear. What's the downside?

That the entirety of that echo chamber will be filled with people egging each other on to commit terrorism?

In light of recent events, how sadly, infuriatingly prescient.

So, YouTube tweeted:

YouTube wrote:

Our hearts are broken over today’s terrible tragedy in New Zealand. Please know we are working vigilantly to remove any violent footage.

There have been many good responses to this, but I thought this one was the best.

@meakoopa wrote:

You have a lot more than that to remove. You built, plank by plank, the stage from which a virulent racist ideology reached so many children. We live in the nightmare your irresponsibility and greed helped build.

And this one:

@sfrantzman wrote:

They seem to see the footage as the problem, not the hate and the nazis. It’s like seeing footage of the aftermath of pogroms as the problem, rather than the ideology that led to it.

Note that YouTube, Twitter, and other social media sites have an already-deployed way to instantly ban the majority of Nazi content from their sites. They're required by German law to block such content, and extending that globally would be relatively straightforward. (Not painless, because software dev is never really as easy as it looks, but they have the ability.)

A bunch of other stuff would probably still slip by (Lobster man, gamergate, etc.) but it'd be a start.
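
To make the "already-deployed" bit concrete, here's a minimal sketch of the idea in Python. Everything in it (the label names, the BLOCKED_IN table, the is_visible function) is hypothetical and not how any real platform structures this; the point is just that the regional rule and the global rule are the same check with one switch flipped.

# Hypothetical sketch only: labels, BLOCKED_IN, and is_visible() are invented
# for illustration, not how YouTube or Twitter actually implement this.
BLOCKED_IN = {"nazi_glorification": {"DE", "AT", "FR"}}  # existing per-country rules

def is_visible(content_labels: set, viewer_country: str, enforce_globally: bool = False) -> bool:
    """Return True if content with these labels may be shown in viewer_country."""
    for label in content_labels:
        countries = BLOCKED_IN.get(label)
        if countries is None:
            continue  # no rule for this label anywhere
        # Today: hide only where the law requires it. Proposed: hide everywhere.
        if enforce_globally or viewer_country in countries:
            return False
    return True

# Same classifier output, different policy switch:
print(is_visible({"nazi_glorification"}, "US"))                         # True today
print(is_visible({"nazi_glorification"}, "US", enforce_globally=True))  # False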

It might be relatively straightforward technically, but politically and culturally it would be a whole different ball game because those companies are based in a country that baked the idea of freedom of expression into its founding document.

People barely tolerate the idea of the government censoring things, even with safeguards like the courts in place.

They're not going to be happy with corporations censoring things because, well, you simply can't trust corporations. The people moderating content aren't going to be federal judges with years of legal training and a wealth of settled law to draw upon. They're going to be poor, overworked bastards making $15 or $20 an hour who got three days of training and have to somehow enforce an inadequate, vague, and ever-changing corporate content policy.

And the company itself isn't going to want to aggressively censor content because they know doing so will negatively affect views and clicks, and that will result in less advertising revenue. Heck, Tumblr's decision to ban porn cost them 100 million page views in January. That's about 20% of the site's traffic.

Corporations might be able to enforce bans on the most outrageous Nazi or white supremacist content, but there's going to be an absolute sh*tload of stuff that will leak through.

Should they ban any discussion of the Civil War, the Confederate Flag, or "Southern heritage" that doesn't make it plain that the Confederates were traitors to America who fought a war for the right to own other people? If you don't, then you still have a powerful white supremacist recruiting tool and an outlet for racists. Is that photo showing someone flashing a white supremacist hand sign or merely the "OK" hand sign? The moderator has to be quick about it because they have about 30 seconds to decide before moving on to the next item (and they know they'll be fired if their "accuracy" is deemed too low).

Alas, then we have tried nothing, and we're all out of ideas.

OG_slinger wrote:

They're not going to be happy with corporations censoring things because, well, you simply can't trust corporations. The people moderating content aren't going to be federal judges with years of legal training and a wealth of settled law to draw upon. They're going to be poor, overworked bastards making $15 or $20 an hour who got three days of training and have to somehow enforce an inadequate, vague, and ever-changing corporate content policy.

So...Trump's judges?

OG_slinger wrote:

And the company itself isn't going to want to aggressively censor content because they know doing so will negatively affect views and clicks, and that will result in less advertising revenue. Heck, Tumblr's decision to ban porn cost them 100 million page views in January. That's about 20% of the site's traffic.

Errrr, so you say that corporations aren't going to censor, then immediately provide a recent example of precisely that?

Saw this in an AV Club article about the trolls attacking Captain Marvel and Brie Larson and everything else. YouTube actually did something smart!

The Verge: YouTube fought Brie Larson trolls by changing its search algorithm

This week [Mar 8], YouTube recategorized “Brie Larson” as a news-worthy search term. That does one very important job: it makes the search algorithm surface videos from authoritative sources on a subject. Instead of videos from individual creators, YouTube responds with videos from Entertainment Tonight, ABC, CBS, CNN, and other news outlets first.

OMGCENSORZ!
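
For the curious, here's a rough sketch of what "recategorize a search term as news-worthy" could look like under the hood. The channel list, query list, and boost weight are all invented for illustration; YouTube's actual ranking isn't public and is obviously far more complicated.

# Invented names and weights throughout; not YouTube's real search code.
AUTHORITATIVE_CHANNELS = {"Entertainment Tonight", "ABC", "CBS", "CNN"}
NEWSWORTHY_QUERIES = {"brie larson"}

def rank_results(results: list, query: str) -> list:
    """results: list of dicts like {'title': ..., 'channel': ..., 'relevance': float}."""
    def score(video: dict) -> float:
        s = video["relevance"]
        if query.lower() in NEWSWORTHY_QUERIES and video["channel"] in AUTHORITATIVE_CHANNELS:
            s += 10.0  # large boost so news outlets surface above individual creators
        return s
    return sorted(results, key=score, reverse=True)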

ClockworkHouse wrote:

Alas, then we have tried nothing, and we're all out of ideas.

Social media companies have tried things. Hiring tens of thousands of content moderators is hardly "nothing." YouTube demonetizing whole swaths of videos is hardly "nothing."

But that doesn't change the fact that those social media companies are for-profit enterprises. They don't want anything to f*ck with their advertising revenue, they don't want their brand or the brands of the companies who advertise with them sullied by the filth that's on the internet, and they literally can't hire enough content moderators to clean up their services without severely impacting their profitability and creating long delays between when someone tweets or comments or uploads a video and when everyone else can see it. There's just too much content.

Social media companies are always going to do the bare minimum when it comes to keeping this sh*t off their services because it's simply not worth it to do more. And the things they will do will be geared towards proving to advertisers that they're making a good faith effort to ensure their ad isn't run next to some horrible content.

The users of the service will get lip service because they're the product, and because, despite getting doxxed, harassed, and threatened, they *still* use the service. That user stickiness is another reason why social media companies are slow to do anything. Why invest millions and millions of dollars to fix something when people are going to keep using the service regardless?

The only real solution would be to make a non-profit or government version of social media, one where the experiences and concerns of the end users were paramount. Even then they'd have to deal with the fundamental issue of how to pay for things because I seriously doubt enough people would pay hard money to replace Facebook or Twitter with a version that promised less sexism, racism, etc. And without enough users there's no "social" and the service would fail.

There have got to be some algorithmically easy things they can do just for individuals and groups. Just for example add the ability for a user to mark a video as "junk"; then, other videos 'liked' by people who 'liked' that video, or others from the same channel, would be filtered or downranked for that user. You could compute that on the fly or even client-side.

Then just add the ability to share craplists and you've got something useful. And if there were global craplists seeded from unequivocal Nazi crap, that would be OK with me. Handwave in some Bayes' Theorem and you've got a publishable CS paper.

Some clowns are still going to call that censorship, but all I'm doing is stating that if bad people like something then I probably won't.
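
Something like this, as a toy client-side sketch. The class name, the threshold, and the guilt-by-association score are all made up, and the Bayes handwave stays a handwave:

# Toy sketch: per-user "junk" marks plus a shared craplist, scored by simple
# guilt-by-association. Names, threshold, and scoring are invented.
from collections import defaultdict

class JunkFilter:
    def __init__(self, shared_craplist=frozenset()):
        self.junk = set(shared_craplist)   # video ids marked junk (mine + shared lists)
        self.likes = defaultdict(set)      # video id -> set of user ids who liked it

    def mark_junk(self, video_id):
        self.junk.add(video_id)

    def record_like(self, video_id, user_id):
        self.likes[video_id].add(user_id)

    def junk_score(self, video_id):
        """Fraction of this video's likers who also liked something on the craplist."""
        likers = self.likes[video_id]
        if not likers:
            return 1.0 if video_id in self.junk else 0.0
        junk_likers = {u for v in self.junk for u in self.likes[v]}
        return len(likers & junk_likers) / len(likers)

    def filter_feed(self, video_ids, threshold=0.5):
        """Drop anything marked junk or mostly liked by people who like junk."""
        return [v for v in video_ids
                if v not in self.junk and self.junk_score(v) < threshold]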

Jonman wrote:

Errrr, so you say that corporations aren't going to censor, then immediately provide a recent example of precisely that?

I said they aren't going to want to censor content because it will hit their bottom line and gave an example of just that.

And Tumblr didn't ban porn because they were being socially responsible. They censored content because Apple yanked them from the App Store over all the child porn on their service. Not being on the App Store would have cost the company massive numbers of new users and all their follow-on views. And that would have changed the formulas for how much they could charge for ads and how much Wall Street valued the company.

So they banned porn and it cost them 20% of their views. And that's going to slam their bottom line this year. That they still made the decision shows that Tumblr's management knew that not being listed on Apple's App Store would have cost the company a lot more.

OG, I realize all the market forces at work here and why those companies don't see a profit motive in doing better, but I'm just not going to agree with you that therefore we should all shrug our shoulders and be like, "welp, sh*t's a hard problem, guess we'll just let those companies keep doing their thing".

What you've done is just laid out a roadmap for how corporations can be compelled to self-censor.

The only problem is that it's just kicking the can up the chain to another self-interested corporation.

ClockworkHouse wrote:

OG, I realize all the market forces at work here and why those companies don't see a profit motive in doing better, but I'm just not going to agree with you that therefore we should all shrug our shoulders and be like, "welp, sh*t's a hard problem, guess we'll just let those companies keep doing their thing".

More power to you if you can figure out how to make it financially worthwhile for social media companies to spend heaps and heaps of money and be on the hook for censoring (or not censoring) content that their users can't agree should (or shouldn't) be censored in the first place.

Even if it magically cost nothing to moderate all content tomorrow, companies would still be loath to censor anything but the most blatantly terrible content because it's a very slippery slope out there. Where's the cutoff between vocally supporting Trump or the Confederate flag and promoting racism or radicalizing some dumb motherf*cker? And do we want a multi-billion-dollar international company deciding that?

The most effective things people could do to get these companies to change are to simply stop using them or to pay another entity cold, hard cash every month to build a non-sh*tty, heavily moderated version of Facebook, etc. But the vast majority of people don't want to give up Twitter or IG. And they sure as hell don't want to pay for something they can get for free. So here we are, wishing social media were a public utility that could be regulated and moderated for the betterment of society instead of a money-making platform where people can say mostly anything, knowing full well that 30 to 40% of them are just horrible f*ckers with terrible views and opinions.

Jonman wrote:

What you've done is just laid out a roadmap for how corporations can be compelled to self-censor.

The only problem is that it's just kicking the can up the chain to another self-interested corporation.

Ah, the libertarian paradise!

Christchurch: This Will Keep Happening

On Friday, a 28-year-old Australian man went live on Facebook as he drove toward a Christchurch, New Zealand, mosque where he would allegedly begin a shooting spree, killing 49 people and injuring dozens of others. "Remember lads, subscribe to PewDiePie,” the suspect remarked as he loaded guns into the back of his car.

PewDiePie, the infamous Swedish YouTuber, has been connected to a string of racist and anti-Semitic controversies over the years. The reference was among several the suspected killer made that were all seemingly intended to make his gruesome murder spree go viral. Like his reference to far-right influencer Candace Owens in a 74-page manifesto published just before the killings began, the PewDiePie quip immediately set off discussions on Twitter about whether or not the killer was trolling the media, whose coverage would inevitably follow.

No matter the killer’s intent, we should take what this disaffected individual allegedly said and wrote and did online seriously. Not only because he killed 49 people, but also because he tapped into a well-established digital feedback loop where white male violence is uploaded, distributed, consumed, and remixed by others. We often think of memes as images, funny words on funny pictures. But at their core, they are just ideas that spread. The Christchurch killer wasn’t only trying to make himself go viral. He — with extreme self-awareness — was hijacking the white male violence digital feedback loop to spread and amplify his ideas and actions.

Earlier in the video stream, the killer showed off his guns, all of them emblazoned with text. One reads, “For Rotherham Alexandre Bissonette [sic] Luca Traini.” “For Rotherham" is a reference to a child exploitation scandal that took place in the UK involving predominantly Muslim men. Alexandre Bissonnette killed six people and injured 19 more at a mosque in Quebec City in 2017. Luca Traini shot and wounded six African migrants during a shooting spree in Macerata, Italy, in 2018. Like the PewDiePie remark, these references tie him to a larger universe of far-right white nationalism and violence.

Murderers have always wanted virality. Jack the Ripper sent letters to the London papers. The Zodiac killer taunted police via the San Francisco Chronicle. The Unabomber also had a manifesto. What's new is this global, unfiltered, digital feedback loop that can instantly amplify everything. Before Friday’s carnage started, photos of the weapons were posted to Twitter. A 74-page manifesto titled “The Great Replacement" was published online. There appears to have been a post on the anonymous message board 8chan announcing the attack. “Well lads, it's time to stop sh*tposting and time to make a real life effort post,” the 8chan post read. “If I don't survive the attack, goodbye, god bless, and I will see you all in Valhalla!”

As NBC’s Ben Collins noted, “After the shooter posted links to the livestream and his manifesto, 8chan users cheered him on in the posts immediately following his threat.”

Like his guns, sharpied in silver and covered in references, the gunman's manifesto is a sprawling, garbled, self-aware mess of white nationalist symbology pulled from every dark corner of the internet. It's full of Wikipedia links. It reads like an unhinged blog post. Several sections devolve almost into nonsensical racist poetry. It opens with an FAQ. It has jokes in it.

In one section, titled “Were you taught violence and extremism by video games, music, literature, cinema?” the author replies, “Yes, Spyro The Dragon 3 taught me ethno-nationalism. Fortnite trained me to be a killer and to floss on the corpses of my enemies. No.”

Sarcastically self-aware, that remark seems intentionally designed for use in pieces exactly like this one. And there are others. In a section titled “Were/are you a fascist?” the author replies, “Yes. For once, the person that will be called a fascist is an actual fascist. I am sure the journalists will love that.”

To take the typo-filled, rambling manifesto at face value, Friday’s attack was inspired by the white nationalist concept of “white genocide." It’s a far-right conspiracy theory that’s common online, linking junk evolutionary science to fears of a looming race war and the decline of the white Aryan race. The suspected gunman describes himself as a partisan defending against an occupying force. The manifesto cites immigration and fertility rates across Europe. It links out to articles about recent European terror attacks.

But to say that the manifesto’s author was radicalized online would be like saying the tip of an iceberg rose directly from the surface of the water. And in the same way, to ignore the role the internet has played in Friday’s attack would be like saying you don’t need water to make ice. It’s all a piece of the other.

Until we pull apart the loop, we're trapped in it, and all of this will keep happening.

qaraq wrote:

There have got to be some algorithmically easy things they can do just for individuals and groups. Just for example add the ability for a user to mark a video as "junk"; then, other videos 'liked' by people who 'liked' that video, or others from the same channel, would be filtered or downranked for that user. You could compute that on the fly or even client-side.

Then just add the ability to share craplists and you've got something useful. And if there were global craplists seeded from unequivocal Nazi crap, that would be OK with me. Handwave in some Bayes' Theorem and you've got a publishable CS paper.

Some clowns are still going to call that censorship, but all I'm doing is stating that if bad people like something then I probably won't.

At this point I'd support removing the legal protection Section 230 of the CDA provides for any content they allow to be monetized, either through their partner program or through AdSense. They should remain protected for comments and non-monetized videos, but if they're making money off it, they should accept responsibility for it.

I don't know if I'm just more aware of it now or if it's just rose-colored glasses, but I don't remember YouTube being as damaging to humanity before they decided to let anyone and everyone monetize their videos. Sure, hateful videos were still on it, but at least they weren't providing a financial incentive for people to be as controversial as possible in an effort to get more views (and thus more money).

Again, to be fair to YouTube, they do demonetize many of those videos, but that doesn't stop the ideology from spreading (also, YouTube's Superchat feature allows for live donations, like Twitch, which is probably going to become its own issue very soon).

Moreover, many of those creators have found alternative methods for creating a revenue stream, so demonetizing doesn't really hurt them that much, beyond giving them more grist for the "FREE SPEECH!" mill.

qaraq wrote:

There have got to be some algorithmically easy things they can do just for individuals and groups. Just for example add the ability for a user to mark a video as "junk"; then, other videos 'liked' by people who 'liked' that video, or others from the same channel, would be filtered or downranked for that user. You could compute that on the fly or even client-side.

Then just add the ability to share craplists and you've got something useful. And if there were global craplists seeded from unequivocal Nazi crap, that would be OK with me. Handwave in some Bayes' Theorem and you've got a publishable CS paper.

Some clowns are still going to call that censorship, but all I'm doing is stating that if bad people like something then I probably won't.

Absolutely, this sort of stuff naturally comes baked into their tech, because their algorithms are using machine learning to identify new groupings of content, and the people that like them, on the fly. For example, the very recent children's video comment demonetization purge came about because YouTube discovered that the algorithm had created a grouping for pedophiles and material they find titillating. No YouTube employee created that grouping, but after enough pedophiles started passing around and linking to videos of preteen gymnastics meets and whatnot, YouTube itself recognized that there was a categorization of interest there and started collating.

Now I don't know how many of these new categorizations are created every day, but "categorizations that have been attempted and are actually working for users after several days" seems like something that would be far easier to moderate than, say, watching every video. The problem is that our culture can't acknowledge that when you identify a heaping pile of garbage people, or garbage content (in this case it was innocent content, but in most of these cases the content itself is toxic crap), those things should be cleaned out.
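
To put a rough shape on how small that triage job could be, here's a toy sketch: a human only ever looks at clusters that are new, have stuck around for a few days, and are actually being used. The fields, thresholds, and the idea of a review queue are all hypothetical; nothing here reflects how YouTube actually stores its groupings.

# Hypothetical fields and thresholds; the point is the size of the review queue
# compared to "watch every video."
from datetime import timedelta

def clusters_needing_review(clusters, now, min_age_days=3, min_active_users=500):
    """clusters: dicts like {'id': ..., 'created': datetime, 'active_users': int, 'reviewed': bool}."""
    cutoff = now - timedelta(days=min_age_days)
    return [c for c in clusters
            if not c["reviewed"]
            and c["created"] <= cutoff                   # the grouping stuck around
            and c["active_users"] >= min_active_users]   # and people are actually in it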

Prederick wrote:

Again, to be fair to YouTube, they do demonetize many of those videos, but that doesn't stop the ideology from spreading (also, YouTube's Superchat feature allows for live donations, like Twitch, which is probably going to become its own issue very soon).

Moreover, many of those creators have found alternative methods for creating a revenue stream, so demonetizing doesn't really hurt them that much, beyond giving them more grist for the "FREE SPEECH!" mill.

I'm well past wanting to be fair to YouTube. They ought to be removing those videos when they're brought to their attention, not just demonetizing them. They should be proactively looking for them as well, not just waiting for a public outcry to get loud enough. The child-exploiting/targeting videos were a known problem for years, and they were content to play whack-a-mole with offending videos until the bad press got loud enough that parents were being told the only way to be safe was to stop watching YouTube entirely. They can use their current algorithm to identify those videos, since it does such a great job of that already, and come up with a new (or at least tweaked) recommendation algorithm that won't constantly try to radicalize people. They should only allow videos to be monetized after reviewing them to make sure the content doesn't violate either their TOS or any laws. People will find alternative methods, but that doesn't mean tech companies can't at least try to make it harder for them.

Just to add to Stengah's post, I'll point this out, since people seem to be talking in the abstract: the EU is in an assessment period of the Code of Practice on Disinformation that all the major tech companies have signed up to. The Code of Practice is really aimed at the revenue from advertising and how the tech companies ensure they do their due diligence regarding that revenue.

Currently this is an honour system, but the EU is more than happy to switch to enforcement, as we've seen in the past. The assessment period occurs annually and can be called in emergency situations, which most suspect is to protect the upcoming European Parliament elections, as they fall within the initial 12-month period. The EU is just giving these companies a chance to implement proper oversight of their advertising systems, which I'm sure they are delighted to do.

I accidentally left YouTube playing last night after watching a video review of an air conditioner. From there it went from a few videos of calming bedtime sounds, which I do watch, to freaky-ass Russian toy videos, which I definitely do not watch. These videos woke my wife up in the middle of the night and freaked her out quite a bit. Great algorithm, YouTube.

I always try to keep the "autoplay" *clears throat* feature...turned off on YouTube since it comes up with a lot of stupid stuff to put into a "playlist" and I typically want to just watch the videos I want to watch and not all this other stuff.

However, it is super, super annoying that autoplay always reverts to its default "on" whenever I upgrade my browser. Personally, I'd delete the entire autoplay thing if it were up to me, and at the very least there should be a way to permanently toggle it off in our settings.

And don't even get me started on how YouTube just randomly changes resolutions at any given time, no matter what resolution I want it set to, because there's no turning off the "auto" *clears throat* feature...there either.

With that said, I do like that "normal" people can post and make videos and do all sorts of fun things. It's a real shame that pedophiles, abusers, and scummy people in general can abuse it and potentially ruin it for everyone else.