[Discussion] The Inconceivable Power of Trolls in Social Media

This is a follow-on to the nearly two-year-old topic "Trouble at the Kool-Aid Point." The intention is to provide a place to discuss the unreasonable power social media trolls have over women and minorities, with a primary focus on video games (though other examples are certainly welcome).

Ain't got no brains anyhow.

Landrum said one of the most popular Flat Earth videos, “200 proofs Earth is not a spinning ball,” appears to be effective because it offers arguments that appeal to so many mindsets, from biblical literalists and conspiracy theorists to those of a more scientific bent.

lolwut

Those viewers' mindsets are bent alright, but certainly not in any scientific direction.

I know I've plugged it here before, but I highly recommend Algorithms of Oppression by Safiya Umoja Noble. Its primary focus is how algorithmically promoted content reinforces racism and sexism, but it's directly relevant to what Prederick posted about conspiracy theories for its examination of how profit-driven advertising companies have effectively become arbiters of truth and gatekeepers for information. The truths surfaced by Google's algorithms don't necessarily reflect objective fact; instead, they reflect the biases of the people shaping the algorithms, the people using the service, and the companies paying to influence results in their favor.

Prederick wrote:

FUUUUUUUUUUUTTTTUUUUUUUUUUUUUUUUUUUUURRRRRRRREEEEEE

Facebook is aggressively being used by anti-vaccination advocates to target pregnant women with sponsored advertisements to spread false information and conspiracy theories as the US battles a climbing measles outbreak.

A sponsored ad found by Quartz journalist Jeremy Merrill shows the anti-vaccination organisation Stop Mandatory Vaccination targeting women ages 20 to 60 who have expressed interest in pregnancy and live in the state of Washington – where the governor recently declared a state of emergency over the measles outbreak.

Nearly 50 children and young adults in Clark County, Washington have been sickened by the disease since January.

According to the CDC, there have been over 100 cases of measles since January – more than in the entirety of 2016, when there were only 86. So far, nearly every child who has gotten ill is unvaccinated.

My wife got served this ad. So gross.

mudbunny wrote:

So, for those of you who follow the Zak S story: Wizards of the Coast put out a statement.

To all D&D fans,

We spent the last week listening and learning from the D&D community.

Zak Smith, along with many others, was engaged by Wizards to provide feedback on D&D Next, the playtest which evolved into D&D fifth edition. We have not contracted with him since, and regret our choice to do so in 2014. Because of that, we are removing Zak’s credit from future physical printings and digital versions of the Player’s Handbook.

We applaud how the D&D community supports one another and fully support the planned Dungeon Masters Guild bundle raising funds to donate to RAINN (Rape, Abuse & Incest National Network). The bundle is live now and we will be amplifying it going forward!

We are grateful to be a part of this wonderful community, and we thank you for your passion. We remain committed to working with and learning from you, the D&D community. You may always share your comments and thoughts with us on our social media platforms and we are setting up an email address to receive feedback more directly.

Sincerely,
The D&D Team

Still no comment from WotC or Mike Mearls about the allegations that Mearls was asking people for feedback on Zak, and then feeding the responses to him.

Tanglebones wrote:

Still no comment from WotC or Mike Mearls about the allegations that Mearls was asking people for feedback on Zak, and then feeding the responses to him.

And there will be nothing further. For a subsidiary of a multinational corporation, WotC saying what they did is a huge statement. There is nothing more to be gained by digging deeper into this. The overwhelming majority of people who were angry at WotC over Zak's inclusion in the credits will be happy with their statement. If Mearls/WotC do say something more, they are in the position of trying to prove they never did anything, and they will end up in a he-said/he-said situation. And there is no winning that.

Now, everyone has their own threshold for whether or not the statement by WotC is adequate.

ClockworkHouse wrote:

I know I've plugged it here before, but I highly recommend Algorithms of Oppression by Safiya Umoja Noble. Its primary focus is how algorithmically promoted content reinforces racism and sexism, but it's directly relevant to what Prederick posted about conspiracy theories for its examination of how profit-driven advertising companies have effectively become arbiters of truth and gatekeepers for information. The truths surfaced by Google's algorithms don't necessarily reflect objective fact; instead, they reflect the biases of the people shaping the algorithms, the people using the service, and the companies paying to influence results in their favor.

To be at least a little fair to FB/Google et al (which I am loathe to do), Buzzfeed's Joe Bernstein tweeted out a few things about this digital morass that gave me pause.

This is a fundamental point about YouTube, which has incentivized for years precisely the kind of content people don't find through gatekeepers. There's no ignored *good* anti-conspiracy YouTube because there was never a demand.

To put it another way, no one goes to YouTube to find out that the "truth" about 9/11 is that it wasn't an inside job or that the "truth" about the shape of the earth is that it is round.

Frankly, that's why I've always been pessimistic that algorithmic changes will solve anything long term. IMVHO YouTube (and other content platforms) fill a deep psycho-cultural need for bias confirmation and oversimplification.

@kevinroose gets at just this in his story. People aren't just passive receptacles of information that gets recommended to them; I strongly suspect we go to social platforms looking for information like this.

And the platforms will never in a million years put themselves in the position of determining what's true and what isn't, and making the moderation decisions that follow.

The key issue here remains that we are all given to stupidity.

ClockworkHouse wrote:

I know I've plugged it here before, but I highly recommend Algorithms of Oppression by Safiya Umoja Noble

I'll have to read that. In another forum I had a similar-sounding book recommended to me: Cathy O'Neil's Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. I don't work with those kinds of algorithms now, but I could at some point, and it's probably good practice in general to know how the unintended consequences sneak in.

YouTube - Still YouTubin'

YouTube is facing a new wave of criticism over the alarming number of predatory comments and videos targeting young children.

The latest concerns started with a Reddit post, submitted to r/Drama, and a YouTube video, exposing a “wormhole into a soft-core pedophilia ring on YouTube,” according to Matt Watson. Watson, a former YouTube creator who returned with a single video and live stream about the topic, demonstrated how a search for something like “bikini haul,” a subgenre of video where women show various bikinis they’ve purchased, can lead to disturbing and exploitative videos of children. The videos aren’t pornographic in nature, but the comment sections are full of people time stamping specific scenes that sexualize the child or children in the video. Comments about how beautiful young girls are also litter the comment section.

Although Watson’s video is gaining mainstream attention, this isn’t the first time that YouTube has dealt with this issue. In 2017, YouTube updated its policies to address an event known as “ElsaGate,” in which disturbing, sexualized kids’ content was being recommended to children. That same year, YouTube decided to close some comment sections on videos with children in an attempt to block predatory behavior from pedophiles. As early as 2013, Google changed its search algorithm to prevent exploitative content from appearing in searches on both Google and YouTube. But despite years of public outcry, YouTube still hasn’t found a way to effectively deal with apparent predators on its platform.

The heart of the problem is YouTube’s recommendation algorithm, a system that has been widely criticized in the past. It only took two clicks for Watson to venture away from a video of a woman showcasing bikinis she’s purchased to a video of a young girl playing. Although the video is innocent, the comments below — which include timestamps calling out certain angles in the video and predatory responses to the images — certainly aren’t.

“YouTube’s recommended algorithm is facilitating pedophiles’ ability to connect with each other, trade contact info, and link to actual child pornography in the comments,” Watson wrote on Reddit. “I can consistently get access to it from vanilla, never-before-used YouTube accounts via innocuous videos in less than ten minutes, in sometimes less than five clicks.”

Prederick wrote:

Not that I think YT gives a sh*t about this in any material way, mind you. I don't think this is a moral decision; it's an entirely financial one. They know the platform they've built has basically become Frankenstein's monster: a borderline un-policeable, incredibly powerful disinformation machine that's designed to keep you watching with an algorithm that, in pursuit of your continued viewership, quickly creates filter bubbles of content.

And CP as well!

No, honestly, if I may say something in YT's very, very, very meager defense, I think the YT model as it stands is functionally impossible to police well. 450 hours of content gets uploaded every minute, and automated moderation just isn't capable of doing the job. They'd need to hire the entire population of Indonesia to have anything close to a shot at actually tackling the problem.
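For a rough sense of the scale, here's a back-of-envelope sketch. The 450 hours/minute figure is the one cited above; the shift length and review speed are my own assumptions:

```python
# Back-of-envelope: how many humans would it take just to *watch*
# everything uploaded to YouTube, once, at normal speed?
# Assumptions (mine, not YouTube's): 8-hour shifts, 1x playback,
# no breaks, no second opinions, no appeals.

UPLOAD_HOURS_PER_MINUTE = 450  # figure cited in the post above
MINUTES_PER_DAY = 60 * 24
SHIFT_HOURS = 8

hours_uploaded_per_day = UPLOAD_HOURS_PER_MINUTE * MINUTES_PER_DAY  # 648,000
moderators_needed = hours_uploaded_per_day / SHIFT_HOURS

print(f"{hours_uploaded_per_day:,} hours/day -> {moderators_needed:,.0f} moderators")
# 648,000 hours/day -> 81,000 moderators
```

So not quite all of Indonesia, but a workforce the size of a small city, and that's before languages, judgment calls, appeals, and the fact that nobody can watch disturbing content for eight hours straight.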

Prederick wrote:

No, honestly, if I may say something in YT's very, very, very meager defense...

*ten minutes later*

Nah, they suck, f*ck 'em

Facebook, under pressure from lawmakers and in the midst of a national measles outbreak, says it’s working to prevent anti-vaccination content from being algorithmically recommended to users. But on YouTube, an increasingly popular source of health information, vaccine-related searches such as “should i vaccinate my kids” frequently return search results and recommendations for videos that describe vaccines as dangerous and harmful.

For example, last week, a YouTube search for “immunization” in a session unconnected to any personalized data or watch history produced an initial top search result for a video from Rehealthify that says vaccines help protect children from certain diseases. But YouTube’s first Up Next recommendation following that video was an anti-vaccination video called “Mom Researches Vaccines, Discovers Vaccination Horrors and Goes Vaccine Free” from Larry Cook’s channel. He is the owner of the popular anti-vaccination website StopMandatoryVaccination.com.

In the video, a woman named Shanna Cartmell talks about her decision to stop vaccinating her children after she read a book that said leukemia is a side effect of vaccination.

(While some studies have found lower risk of leukemia in kids exposed to infections, that doesn’t mean vaccination increases risk of leukemia; in fact, a 2017 study found that vaccinations were associated with reduced leukemia risk. Despite the growing popularity of the anti-vaccination movement, the scientific consensus is that vaccines are safe, and do not cause autism.)

“I wasn't always that person who was going to not vaccinate, but it has to start somewhere. If you go down a road, follow the road, and see where it leads,” Cartmell says in the video. “Unless you know for sure that your child will be 100% safe, do you want to play that game? If you can’t say ‘yes’ right now, pause.”

YouTube’s promotion of misleading testimonials like Cartmell’s is concerning precisely because people do turn to YouTube for health information — and Google knows that. During a Google earnings call earlier this month, CEO Sundar Pichai said, “YouTube is a place where we see users not only come for entertainment. They come to find information. They’re coming to learn about things. They’re coming to discover, to research.” A recent Pew Research survey, which found that more than half of YouTube users turn to the site for news and information, confirms Pichai’s statement.

Last week, California Rep. Adam Schiff sent a letter to both Facebook and Google asking that each company address the anti-vaccination issue. In his letter to Pichai, Schiff expressed his “concern that YouTube is surfacing and recommending messages that discourage parents from vaccinating their children, a direct threat to public health, and reversing progress made in tackling vaccine-preventable diseases.”

But while Facebook responded last week by saying it would take “steps to reduce the distribution of health-related misinformation on Facebook,” so far, YouTube hasn’t responded publicly.

The US House of Representatives Committee on Energy and Commerce plans to hold a hearing next week addressing concerns about the reemergence of diseases that can be prevented by vaccine. “It’s unconscionable that YouTube’s algorithms continue to push conspiracy videos to users that spread disinformation and, ultimately, harm the public health,” Rep. Frank Pallone, the committee’s chairman, told BuzzFeed News. “We have a hearing next week on the measles outbreak concentrated in the Pacific Northwest and will be sure to discuss this with the public health experts who are testifying.”

YouTube, which did publish a blog post earlier this month announcing that it would be tweaking its recommendation algorithm to better handle conspiracies following a BuzzFeed News report, said it’s working on reducing the spread of harmful misinformation.

Prederick wrote:

They'd need to hire the entire population of Indonesia to have anything close to a shot at actually tackling the problem.

Which they likely could afford to do.

I think Aetius has been getting unfairly crapped on lately across a few threads, but Prederick's post is a perfect demonstration of private companies being unable to handle having nice things without government regulation.

I won't be surprised if in 5 years at least one of the Big Tech co's gets AT&T'd, and the YouTube Law passes making it so that if your platform enables access to CP (with anti-vaxx tacked on only if there are Democrats in power) in any way, you get fined to the tune of 5% of the US National Debt. It'll be one of those laws where you'd have to be Roy Moore to vote against it.

What Mao said (LOL) reminds me of a law or statute or something that came up when the FOSTA/SESTA Act got passed last year. I forget its name (and probably some details), so bear with me.

As I understand it (and please correct me if I'm wrong), websites are, for the most part, not held legally responsible for the content users put on them. So (and again, I'm not 100% sure I'm right here, so please fact-check me if I'm not) if someone were to be sex-trafficked on Craigslist, Craigslist itself could not be held legally accountable.

FOSTA-SESTA didn't explicitly change that, I think, but it certainly threatened it in such a way that Craigslist scrubbed basically every single category that could have been used for sex-trafficking from the site, and several others did something similar, especially after Backpage.com got nuked for similar reasons.

Now, from what I remember reading, a lot of Internet civil liberty organizations like the EFF have made an entirely reasonable case as to why that protection is a good thing that allows for freedom online, among other benefits (again, I am grossly simplifying here). But even as sex workers were protesting FOSTA/SESTA, I remember thinking it was a dead cert to pass, because no politician in the country was going to allow themselves to get easily smeared as "the guy who voted against stopping the sex-trafficking of children, and for prostitution".

Anyway, Mao (again, LOL), I think you're right in that something similar is coming down the pike for Facebook/Google/et al, and that we will begin to see a shift in holding more of these companies and websites legally responsible for the content that gets put on their site.

Regulation (along with trust-busting) is definitely coming for them, the only question at this point is whose regulation (Left-wing populist or Right-wing populist) it is.

Ahem, that's ROFLMAO to you, sir!

edit

ClockworkHouse wrote:
Prederick wrote:

They'd need to hire the entire population of Indonesia to have anything close to a shot at actually tackling the problem.

Which they likely could afford to do.

Indonesia has a population of 264 million and a GDP of $932 billion, so probably not quite. That's a pretty incredible population density compared to the US, but I digress.

I think there's a lot better case to be built that big social media companies like Facebook, YouTube, and Twitter can and should be held responsible for user content. The notion of "safe harbor" shouldn't apply to them because they explicitly determine what "related" content is shown to viewers.

In the case of YouTube, of course it is not feasible for them to review every second of every video uploaded. But it is eminently feasible for them to review their recommendation software for specific issues. In fact, they do this all the time for paying customers! (Viewers are not the customer, they are the product.) I'm not saying it would be easy, but they already have manual input into the scoring that produces recommendations. It doesn't make sense to me that it would be infeasible for them to use the tweaks already in place to reduce or eliminate recommendations of videos of young girls when a bikini haul video is displayed.
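To make that concrete, here's a toy sketch of the kind of manual override layer I mean. To be clear, the names, flags, and weights are all hypothetical; this is just the shape of the thing, not YouTube's actual system:

```python
# Toy re-ranker: a hand-maintained penalty table applied on top of a
# trained model's relevance scores. All flags and weights are invented.

from dataclasses import dataclass, field

@dataclass
class Video:
    title: str
    model_score: float                            # relevance score from the trained model
    flags: set[str] = field(default_factory=set)  # e.g. {"features_minor"}

# Policy staff, not the model, maintain this table.
PENALTIES = {
    # (watch-context tag, video flag): score adjustment
    ("swimwear", "features_minor"): float("-inf"),  # never pair these
    ("health", "conspiracy"): -5.0,                 # strongly demote
}

def adjusted_score(video: Video, watch_context: set[str]) -> float:
    score = video.model_score
    for (context_tag, video_flag), penalty in PENALTIES.items():
        if context_tag in watch_context and video_flag in video.flags:
            score += penalty
    return score

def recommend(candidates: list[Video], watch_context: set[str], k: int = 5) -> list[Video]:
    ranked = sorted(candidates, key=lambda v: adjusted_score(v, watch_context), reverse=True)
    return ranked[:k]

# In a "swimwear" watch context, the flagged video sinks to the bottom
# no matter how high the model scored it.
videos = [Video("bikini haul #2", 0.9),
          Video("kids at the pool", 0.95, {"features_minor"})]
print([v.title for v in recommend(videos, {"swimwear"})])
```

The point isn't that this is how YouTube actually does it, just that a hard override on top of the learned ranking is cheap, ordinary engineering of the sort they already do when an advertiser complains.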

Facebook decided which users are interested in Nazis — and let advertisers target them directly

Facebook makes money by charging advertisers to reach just the right audience for their message — even when that audience is made up of people interested in the perpetrators of the Holocaust or explicitly neo-Nazi music.

Despite promises of greater oversight following past advertising scandals, a Times review shows that Facebook has continued to allow advertisers to target hundreds of thousands of users the social media firm believes are curious about topics such as “Joseph Goebbels,” “Josef Mengele,” “Heinrich Himmler,” the neo-Nazi punk band Skrewdriver and Benito Mussolini’s long-defunct National Fascist Party.

Algorithms!

Anti-vaxx propaganda has gone viral on Facebook. Pinterest has a cure

On Wednesday morning, Adam Schiff, the powerful chair of the House intelligence committee, joined journalists around the world in a nascent Twitter meme: he searched “vaccine” on Facebook and posted a screenshot of the results.

Schiff’s search results were indeed alarming: autofill suggestions for phrases such as “vaccination re-education discussion forum”, a group called “Parents Against Vaccination”, and the page for the National Vaccine Information Center, an official-sounding organization that promotes anti-vaccine propaganda. And while search results on Facebook are personalized to each user, a recent Guardian report found similarly biased results for a brand new account.

If the congressman had tried to search “vaccines” on the rival social media site Pinterest, however, he would have had little more to screenshot than a blank white screen. Recognizing that search results for a number of terms related to vaccines were broken, Pinterest responded by “breaking” its own search tool.

As pressure mounts on Facebook to explain its role in promoting anti-vaccine misinformation, Pinterest offers an example of a dramatically different approach to managing health misinformation on social media.

“We’re a place where people come to find inspiration, and there is nothing inspiring about harmful content,” said Ifeoma Ozoma, a public policy and social impact manager at Pinterest. “Our view on this is we’re not the platform for that.”

I heard about this from @xuhulk on Twitter, who raised an interesting point about it, however:

@xuhulk wrote:

1) This is DEFINITELY the right move on Pinterest's part because kids are dying of measles, AND
2) This is exactly how censorship is often enacted on the Chinese internet, SO
3) What systems of oversight do we need for these critical, complex choices?

In case my choice of conjunction above didn't make it clear, it's NOT 1 vs. 2. It's—who do we trust to determine how our information pathways are managed, and what oversight / pushback do we have? I don't think the answer should be either just "government" or "corporations."

I think, to the information management question, the salient point is that our information pathways are already being managed, to the n-th degree. It's not as centralized as in China, but it absolutely happens. And has huge splash damage: That's why Tumblr banned so many blogs, why YouTube has been banning Pokemon Go videos. They were caught up in the over-zealous culls, while the conspiracy videos flourish.

I will never stop being entertained by the fact that I could set up a blog on Tumblr tomorrow advocating the immediate establishment of an all-white ethnostate and the removal of any non-whites from that state as long as I'm not explicitly advocating violence (this, of course, ignoring the fact that the creation of such a state would necessitate violence), but if I put a "female-presenting nipple" up that's what'd get taken down.

Time for a BlacKkTumblrina?

BadKen wrote:

Time for a BlacKkTumblrina?

Nah, satire is dead.

I've seen it way too many times over the past decade: someone sets up something they view as obvious satire and is horrified to find out that people took it seriously.

I saw a Q-Anon thread this morning, in the raw, unfiltered talk-among-ourselves mode. They were giving each other pep talks because they were freaking out that things were moving too slowly and the pedophile child-sacrificers wouldn't get punished fast enough. They didn't trust anyone except a tight circle around Trump: Pence was mentioned as having ritually sacrificed 30 babies, for example. Lots of Evangelical-style cries to God--some of them were specifically talking about asking for their "heart to be hardened" but it wasn't happening--i.e. they were in distress thinking about the babies and they tried to suppress the feelings to be less distracted but couldn't.

The accounts are locked and private now, so I can't view the thread anymore, but the ten minutes I browsed it were chilling. Particularly when I remembered the attempt to burn down Comet Ping-Pong last week, or the constant stream of white supremacist terrorists that are continually being caught.

Gremlin wrote:
BadKen wrote:

Time for a BlacKkTumblrina?

Nah, satire is dead.

I've seen it way too many times over the past decade: someone sets up something they view as obvious satire and is horrified to find out that people took it seriously.

Taking it seriously would be the point. Ron Stallworth's book Black Klansman isn't a satire. It's the story of a real cop trying to expose racist violence in his city. The whole situation brings out some super dark humor (dare I say... "black comedy"...), but that isn't the whole story.

A Tumblr or Facebook page like Prederick describes would be a great honeypot for identifying people spreading hate on social media. And the best part is the awful irony that the page wouldn't be taken down.

I no longer view The Onion as satire in many cases.

bekkilyn wrote:

I no longer view The Onion as satire in many cases. :)

I wonder what impact Trump has had on their viewership. I certainly don't read it as much as I used to, but I'm not sure if I'm just tired of satire in general, or if I feel like you can't satirize the absurd, which is what they seem to be trying to do.

It might kill their readership, but I'd love to see them take on Trump by writing stories that only show him doing rational, intelligent things that we expect from a president. Write them totally straight, as dull as possible.

And then, to really take it to the next level, write op-ed pieces on this fantasy Trump presidency they've created. This is where they could have the hard-core bloviators go on about him, but all based on the fiction of his being a rational, level-headed, and thoughtful president.

I fully recognize I may be in the minority on this desire.

This Wall Street Journal article is paywalled, so I'll post the highlights cribbed from Twitter. Edit: Maybe it's not paywalled, actually. A rare generous day from WSJ?

IMAGE(https://pbs.twimg.com/media/D0B3vtiXcAAub5X.jpg)

IMAGE(https://pbs.twimg.com/media/D0B3vtjWkAMlpxf.jpg)

IMAGE(https://pbs.twimg.com/media/D0B3vtnXgAMzfke.jpg)

IMAGE(https://pbs.twimg.com/media/D0B3vtlXcAAZtMJ.jpg)

Bah, beat me to it!

But yeah, that article's nuts, and it tracks with several other pieces I've seen around recently, all basically saying that, yes, insurance companies are already using your social media profiles to determine whether or not you should get coverage, whether you're a risk, and so on.

It's not great, Bob!

OH GOD yeah, that story. There aren't enough "NOPE" gifs on the planet.

YouTube just demonetized anti-vax channels, saying they fall under its policy prohibiting the monetization of videos with “dangerous and harmful” content.

Additionally, they're adding a new information panel on anti-vax videos that highlights "vaccine hesitancy" as a major global health crisis and provides a link to a Wikipedia article on the same.

And for the creepy pedophiles, YouTube has come up with the fantastic solution to demonetize videos based on the comments. Even if your video is family friendly, YouTube will demonetize it if they deem the comments to be not advertiser friendly.

YouTube says this policy will only apply to videos featuring kids, but a lot of content creators are concerned YouTube will lazily apply this solution to any controversial content and that the policy could be easily weaponized by assholes to starve channels featuring content that racists, bigots, etc. don't like.
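Here's a toy sketch of how a comment-driven rule like that might work, to make the mechanism (and the problem with it) concrete. The classifier, threshold, and names are entirely invented, not YouTube's actual system:

```python
# Toy version of a "demonetize based on the comments" policy.
# Everything here is a made-up illustration, not YouTube's implementation.

def looks_predatory(comment: str) -> bool:
    """Stand-in for a real ML classifier; a crude keyword check only."""
    return any(term in comment.lower() for term in ("timestamp", "dm me"))

def should_demonetize(features_minors: bool, comments: list[str],
                      threshold: float = 0.05) -> bool:
    """Demonetize if the video features kids and enough comments get flagged."""
    if not features_minors or not comments:
        return False
    flagged = sum(looks_predatory(c) for c in comments)
    return flagged / len(comments) >= threshold
```

Which also makes the weaponization worry concrete: the creator controls none of the inputs except disabling comments entirely, so anyone who can post comments can push the flagged fraction over the threshold.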

OG_slinger wrote:

YouTube just demonetized anti-vax channels, saying they fall under its policy prohibiting the monetization of videos with “dangerous and harmful” content.

Additionally, they're adding a new information panel on anti-vax videos that highlights "vaccine hesitancy" as a major global health crisis and provides a link to a Wikipedia article on the same.

And for the creepy pedophiles, YouTube has come up with the fantastic solution to demonetize videos based on the comments. Even if your video is family friendly, YouTube will demonetize it if they deem the comments to be not advertiser friendly.

YouTube says this policy will only apply to videos featuring kids, but a lot of content creators are concerned YouTube will lazily apply this solution to any controversial content and that the policy could be easily weaponized by assholes to starve channels featuring content that racists, bigots, etc. don't like.

Good. If a channel is truly trying to be family friendly, comments should be disabled anyway.

Yes, it makes sense to me, as long as the creator gets an opportunity to either self-moderate or disable comments before their content is permanently demonetised.