[Discussion] The Inconceivable Power of Trolls in Social Media

This is a follow-on to the nearly two-year-old topic "Trouble at the Kool-Aid Point." The intention is to provide a place to discuss the unreasonable power social media trolls have over women and minorities, with a primary focus on video games (though other examples are certainly welcome).

Conspiracy theorists arrested for alleged threats at site of Texas church shooting

From YouTube to the streets.

Conspiracy theorists were arrested at the site of the Sutherland Springs church mass shooting after harassing families and survivors with death threats and taunts about their deceased loved ones, residents said.

Robert Ussery, 54, was charged with making a “terroristic threat” on Monday after he showed up to the First Baptist church in Texas and allegedly threatened to “hang” the pastor, who lost his 14-year-old daughter when a gunman killed 26 people in November. Ussery shouted profanities at the pastor, Frank Pomeroy, and demanded proof that his daughter had died and that the shooting was real, according to witnesses. Jodie Mann, a woman who showed up with Ussery, was also arrested for trespassing.

The case appears to be the latest example of viral online conspiracy theories leading to real-world harassment and abuse of gun violence victims and grieving families in the US. In recent years, conspiracy theorists have repeatedly spread false claims that mass shootings were staged and have attacked survivors as “actors” – a form of harassment that has intensified in the wake of the recent Florida high school shooting.

“He taunts people on the internet and in person,” Sherri Pomeroy, the pastor’s wife, told the Guardian on Tuesday, referencing Ussery. “He says, ‘Produce me a death certificate,’ like we have to prove something to him … He was spouting all this hatefulness.”

The small town of Sutherland Springs, which has a population of just a few hundred, was thrust into the national spotlight last year when a gunman killed 26 worshippers in one of the deadliest mass shootings in modern US history. In recent months, Ussery – who is from Lockhart, Texas, one hour north of the church – has repeatedly targeted Sutherland Springs residents with harassing comments, both in town and online, according to Sherri and other locals.

New Foils for the Right: Google and Facebook

Conservatives are zeroing in on a new enemy in the political culture wars: Big Tech.

Arguing that Silicon Valley is stifling their speech and suppressing right-wing content, publishers and provocateurs on the right are eyeing a public-relations battle against online giants like Google and Facebook, the same platforms they once relied on to build a national movement.

In a sign of escalation, Peter Schweizer, a right-wing journalist known for his investigations into Hillary Clinton, plans to release a new film focusing on technology companies and their role in filtering the news.

Tentatively titled “The Creepy Line,” Mr. Schweizer’s documentary is expected to have its first screening in May in Cannes, France — during the Cannes Film Festival, but not as part of the official competition. He used the same rollout two years ago for his previous film, an adaptation of his book “Clinton Cash” that he produced with Stephen K. Bannon, the former head of Breitbart News.

“The Creepy Line” alludes to an infamous 2010 speech by Eric Schmidt, the chief executive of Google at the time, who dismissed concerns about privacy by declaring that his company’s policy was “to get right up to the creepy line and not cross it.”

The documentary, which has not been previously reported, dovetails with concerns raised in recent weeks by right-wing groups about censorship on digital media — a new front in a rapidly evolving culture war.

The Death of Civility in the Digital Age

Last October, the morning that the Harvey Weinstein story broke in The New York Times, I published a short, stupid piece in Tablet titled “The Specifically Jewy Perviness of Harvey Weinstein.” I compared Weinstein to the sexually obsessed Alexander Portnoy, the narrator of Philip Roth’s 1969 novel Portnoy’s Complaint, “a grown man whose emotional and sexual life is still all one big performance piece.” I suggested that having grown up a schlubby Jewish kid in Queens, feeling like an outsider, might have stunted and distorted Weinstein’s sexuality—basically, given him something to prove, particularly in the presence of stereotypically hot Gentile women.

There was a lot wrong with the piece, which I wrote in about twenty minutes in the hour after I read the Weinstein story. It was analytically inadequate, making an analogy between Portnoy, a fictional fetishist and pervert, and Weinstein, a real-life sociopath, a comparison that had the effect of underplaying Weinstein’s crimes and diminishing real women’s suffering. I was wrong on the facts, too, for the rolling revelations of the ensuing days showed that Weinstein was an equal-opportunity predator, happy to degrade and devour Jewish women, Gentile women, African Americans, etc., whoever and whenever.

In the week to come, I received one of those public Twitter and Facebook shamings that writers now expect as an occupational hazard. Hundreds or possibly thousands of people, including close friends and professional colleagues, wrote or shared critiques of my piece; wondered in public what had become of me; lamented my decline (which had the strangely complimentary effect of suggesting that I had some status to lose, which few writers ever really feel they do). “This is a sick, disgusting and rapist viewpoint on Weinstein’s behavior,” said one person on Twitter. “Oppenheimer’s analysis is equally as vile as Weinstein’s behavior,” said another. “Fire him.” I got offline almost immediately, but I gathered from friends that as my old cohorts were upbraiding me, enemies were embracing me. I was praised by white nationalist Richard Spencer and David Duke, whose website ran a piece titled, “Major Jewish Mag Admits Weinstein is a Jewish Racist Who Wants to Defile White People and White Women.”

The day after the piece ran, I published a short apology. “The analysis I offered was hasty and ill-considered,” I wrote. “I take this as a lesson in the importance of knowing as much as one can about a given story, and in taking the time to think and feel things completely through before opining.” I’ve written a lot of pieces that have offended people but that I’ve stood by; but I wished I hadn’t written this one. So in one respect, I was grateful for all the feedback. When I do bad work, I want to be called on it, and to have a chance to own my mistakes. But I did wonder whether there was a better, more constructive way to have the same conversation.

I began writing professionally in 1996. In those early years, if somebody disagreed with something that I wrote, he or she wrote a letter to the editor, which they placed in the mail. In a daily newspaper, that letter ran perhaps three days later; in a magazine, it ran two weeks or even two months later. I usually got a heads-up before the letter ran, but I almost never got a chance to respond in kind. And that was generally the end of it. Email was just then becoming ubiquitous, so some readers found me that way, and sent a note. The notes were usually positive, occasionally negative, but never mean. In fact, in my first decade as a writer, nobody was ever snide or insulting to me about my writing—a fact that’s surely unimaginable to young writers today.

I used to love readers, but today I am wary of them because I generally get to know them via social media or the comments sections below online articles. There is a vein of misanthropy that runs through a lot of criticism of social media, as if people aren’t as nice as they used to be. I think that’s wrong. People continue to be overwhelmingly decent when communicating in the old ways. But that is not true of newer media. The web is thus doing something even more dispiriting than turning us into bad people: It’s giving us amnesia about how fundamentally good we are.

The web is thus doing something even more dispiriting than turning us into bad people: It’s giving us amnesia about how fundamentally good we are.

It's such a bizarre phenomenon. I bet there are pages and pages of sociology and even psychology studies but I have yet to see any concrete conclusions about causes or mitigation.

Mass shooting hoaxers are a lovely mix of the delusionally unhinged and the terrifyingly overarmed.

Quintin_Stone wrote:

Mass shooting hoaxers are a lovely mix of the delusionally unhinged and the terrifyingly overarmed.

You're not allowed to post your shower thoughts anymore.

The Grim Conclusions of the Largest-Ever Study of Fake News

Basically, we're f*cked.

The massive new study analyzes every major contested news story in English across the span of Twitter’s existence—some 126,000 stories, tweeted by 3 million users, over more than 10 years—and finds that the truth simply cannot compete with hoax and rumor. By every common metric, falsehood consistently dominates the truth on Twitter, the study finds: Fake news and false rumors reach more people, penetrate deeper into the social network, and spread much faster than accurate stories.

“It seems to be pretty clear [from our study] that false information outperforms true information,” said Soroush Vosoughi, a data scientist at MIT who has studied fake news since 2013 and who led this study. “And that is not just because of bots. It might have something to do with human nature.”

The study has already prompted alarm from social scientists. “We must redesign our information ecosystem in the 21st century,” write a group of 16 political scientists and legal scholars in an essay also published Thursday in Science. They call for a new drive of interdisciplinary research “to reduce the spread of fake news and to address the underlying pathologies it has revealed.”

“How can we create a news ecosystem ... that values and promotes truth?” they ask.

The new study suggests that it will not be easy. Though Vosoughi and his colleagues only focus on Twitter—the study was conducted using exclusive data that the company made available to MIT—their work has implications for Facebook, YouTube, and every major social network. Any platform that regularly amplifies engaging or provocative content runs the risk of amplifying fake news along with it.

Though the study is written in the clinical language of statistics, it offers a methodical indictment of the accuracy of information that spreads on these platforms. A false story is much more likely to go viral than a real story, the authors find. A false story reaches 1,500 people six times quicker, on average, than a true story does. And while false stories outperform the truth on every subject—including business, terrorism and war, science and technology, and entertainment—fake news about politics regularly does best.

Twitter users seem almost to prefer sharing falsehoods. Even when the researchers controlled for every difference between the accounts originating rumors—like whether that person had more followers or was verified—falsehoods were still 70 percent more likely to get retweeted than accurate news.

And blame for this problem cannot be laid with our robotic brethren. From 2006 to 2016, Twitter bots amplified true stories as much as they amplified false ones, the study found. Fake news prospers, the authors write, “because humans, not robots, are more likely to spread it.”
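
For anyone wondering what "controlled for every difference between the accounts" actually means in that 70 percent figure: roughly, you fit a model that includes the account characteristics (follower count, verified status, and so on) alongside the true/false label, so the label's effect is estimated with those other factors held constant. Here's a toy sketch on made-up numbers, not the study's actual model or data, just to show how an odds ratio like "falsehoods are ~70% more likely to be retweeted" falls out of that kind of regression:

```python
# Toy illustration of "controlling for" account differences -- synthetic data,
# NOT the MIT study's actual model or dataset.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 20_000

# Made-up account covariates: follower count (log scale) and verified status.
log_followers = rng.normal(6.0, 2.0, n)
verified = rng.binomial(1, 0.05, n)
is_false = rng.binomial(1, 0.5, n)          # 1 = the story is a falsehood

# Simulate retweets so the "true" effect of falsehood is known in advance:
# exp(0.53) is roughly 1.7, i.e. ~70% higher odds of being retweeted.
logit_p = -3.0 + 0.30 * log_followers + 0.80 * verified + 0.53 * is_false
retweeted = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

# Fit a logistic regression that includes the covariates; the coefficient on
# is_false is then the falsehood effect *holding followers and verification
# constant*, which is what "controlled for" means here.
X = sm.add_constant(np.column_stack([log_followers, verified, is_false]))
fit = sm.Logit(retweeted, X).fit(disp=0)

print(f"estimated odds ratio for falsehood: {np.exp(fit.params[3]):.2f}")  # ~1.7
```

The paper's actual modelling is more involved than this, but that's the flavor of the claim: even after the account-level differences are accounted for, the falsehood label still carries that extra lift.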

NYT's Zeynep Tufekci covered this ground as well today.

YouTube, the Great Radicalizer

At one point during the 2016 presidential election campaign, I watched a bunch of videos of Donald Trump rallies on YouTube. I was writing an article about his appeal to his voter base and wanted to confirm a few quotations.

Soon I noticed something peculiar. YouTube started to recommend and “autoplay” videos for me that featured white supremacist rants, Holocaust denials and other disturbing content.

Since I was not in the habit of watching extreme right-wing fare on YouTube, I was curious whether this was an exclusively right-wing phenomenon. So I created another YouTube account and started watching videos of Hillary Clinton and Bernie Sanders, letting YouTube’s recommender algorithm take me wherever it would.

Before long, I was being directed to videos of a leftish conspiratorial cast, including arguments about the existence of secret government agencies and allegations that the United States government was behind the attacks of Sept. 11. As with the Trump videos, YouTube was recommending content that was more and more extreme than the mainstream political fare I had started with.

Intrigued, I experimented with nonpolitical topics. The same basic pattern emerged. Videos about vegetarianism led to videos about veganism. Videos about jogging led to videos about running ultramarathons.

It seems as if you are never “hard core” enough for YouTube’s recommendation algorithm. It promotes, recommends and disseminates videos in a manner that appears to constantly up the stakes. Given its billion or so users, YouTube may be one of the most powerful radicalizing instruments of the 21st century.

This is not because a cabal of YouTube engineers is plotting to drive the world off a cliff. A more likely explanation has to do with the nexus of artificial intelligence and Google’s business model. (YouTube is owned by Google.) For all its lofty rhetoric, Google is an advertising broker, selling our attention to companies that will pay for it. The longer people stay on YouTube, the more money Google makes.

What keeps people glued to YouTube? Its algorithm seems to have concluded that people are drawn to content that is more extreme than what they started with — or to incendiary content in general.

Is this suspicion correct? Good data is hard to come by; Google is loath to share information with independent researchers. But we now have the first inklings of confirmation, thanks in part to a former Google engineer named Guillaume Chaslot.

Mr. Chaslot worked on the recommender algorithm while at YouTube. He grew alarmed at the tactics used to increase the time people spent on the site. Google fired him in 2013, citing his job performance. He maintains the real reason was that he pushed too hard for changes in how the company handles such issues.

The Wall Street Journal conducted an investigation of YouTube content with the help of Mr. Chaslot. It found that YouTube often “fed far-right or far-left videos to users who watched relatively mainstream news sources,” and that such extremist tendencies were evident with a wide variety of material. If you searched for information on the flu vaccine, you were recommended anti-vaccination conspiracy videos.

It is also possible that YouTube’s recommender algorithm has a bias toward inflammatory content. In the run-up to the 2016 election, Mr. Chaslot created a program to keep track of YouTube’s most recommended videos as well as its patterns of recommendations. He discovered that whether you started with a pro-Clinton or pro-Trump video on YouTube, you were many times more likely to end up with a pro-Trump video recommended.

Combine this finding with other research showing that during the 2016 campaign, fake news, which tends toward the outrageous, included much more pro-Trump than pro-Clinton content, and YouTube’s tendency toward the incendiary seems evident.
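
For the curious, the "program to keep track of YouTube's most recommended videos" is conceptually simple: start from some seed videos, follow the top recommendations a few hops out, and tally which videos keep getting pushed. Below is a rough sketch of that idea running on a hard-coded toy graph; get_recommendations() is just a stand-in for however you actually pull the "Up next" list (Chaslot's real tool scrapes watch pages), so treat this as an illustration of the approach rather than working scraper code:

```python
# Sketch of a recommendation-tracking crawl: walk the top recommendations out
# from a few seed videos and count how often each video gets recommended.
# The "recommendations" here come from a hard-coded toy graph, not YouTube.
from collections import Counter, deque

TOY_GRAPH = {
    "seed_news_a": ["mainstream_1", "outrage_1"],
    "seed_news_b": ["mainstream_2", "outrage_1"],
    "mainstream_1": ["outrage_1", "outrage_2"],
    "mainstream_2": ["outrage_2"],
    "outrage_1": ["outrage_2", "outrage_3"],
    "outrage_2": ["outrage_3"],
    "outrage_3": ["outrage_3"],
}

def get_recommendations(video_id: str) -> list[str]:
    """Stand-in for fetching the "Up next" list for a video."""
    return TOY_GRAPH.get(video_id, [])

def crawl(seeds: list[str], depth: int = 3, top_n: int = 2) -> Counter:
    """Breadth-first walk of the top-N recommendations from each seed,
    counting every time a video shows up as a recommendation."""
    counts: Counter = Counter()
    queue = deque((seed, 0) for seed in seeds)
    seen = set(seeds)
    while queue:
        video, hops = queue.popleft()
        if hops >= depth:
            continue
        for rec in get_recommendations(video)[:top_n]:
            counts[rec] += 1
            if rec not in seen:
                seen.add(rec)
                queue.append((rec, hops + 1))
    return counts

for video, hits in crawl(["seed_news_a", "seed_news_b"]).most_common():
    print(f"{video}: recommended {hits} time(s)")
```

The real headache, of course, is that the live site personalizes recommendations per account and watch history, which is exactly the measurement problem that comes up a bit further down the thread.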

In the run-up to the 2016 election, Mr. Chaslot created a program to keep track of YouTube’s most recommended videos as well as its patterns of recommendations. He discovered that whether you started with a pro-Clinton or pro-Trump video on YouTube, you were many times more likely to end up with a pro-Trump video recommended.

Well, that's disturbing. Unintentional, I'm sure, but YouTube has zero motivation to rein in that sort of AI recommendation system behavior if it is making them more money.

YouTube just leads me to more Let's Play and cooking and food videos (and 80s and 90s music videos when I've been drinking).

I also have to wonder what YouTube's thumbs up/thumbs down does to its video recommendations.

I've noticed this as well. Mine is currently mostly soccer content, but that's in part down to me getting infuriated with the algorithm's recommendations and pruning my recommended videos. I watched a Ben Shapiro video a year or so ago, and basically spent the next six months having various Gamergate-et-al material recommended to me.

Oddly, no matter how much music I listen to, I get little to no recommendations on THAT.

OG_slinger wrote:

I also have to wonder what YouTube's thumbs up/thumbs down does to its video recommendations.

It also doesn't help that YouTube keeps its algorithms and methodologies here purposefully vague and opaque as hell. As I've posted examples of several times before in this thread, most of the conspiracy/hateful content that has gotten purged from YT hasn't been the result of YT taking action independently, but of outside input from citizens and journalists pointing things out.

Prederick wrote:

It also doesn't help that YouTube keeps its algorithms and methodologies here purposefully vague and opaque as hell. As I've posted examples of several times before in this thread, most of the conspiracy/hateful content that has gotten purged from YT hasn't been the result of YT taking action independently, but of outside input from citizens and journalists pointing things out.

Meh. YouTube's algorithms are pretty clear: absent any other input, it's going to assume that if you continue to watch some sh*t, you like said sh*t. Similar algorithms on Pandora have taught me that I secretly like 70s funk (which, as I found out, I really do).

I've watched plenty of YouTube videos about stuff and views I vehemently disagree with. The key is watching it and then thumbs downing/downvoting it so Google knows not to include that video in its recommendation algorithms. If you don't do that it's going to assume that you really like Nazi videos because you've been watching them for hours.

OG_slinger wrote:
Prederick wrote:

It also doesn't help that YouTube keeps its algorithms and methodologies here purposefully vague and opaque as hell. As I've posted examples of several times before in this thread, most of the conspiracy/hateful content that has gotten purged from YT hasn't been the result of YT taking action independently, but of outside input from citizens and journalists pointing things out.

Meh. YouTube's algorithms are pretty clear: absent any other input, it's going to assume that if you continue to watch some sh*t, you like said sh*t. Similar algorithms on Pandora have taught me that I secretly like 70s funk (which, as I found out, I really do).

I've watched plenty of YouTube videos about stuff and views I vehemently disagree with. The key is watching it and then thumbs downing/downvoting it so Google knows not to include that video in its recommendation algorithms. If you don't do that it's going to assume that you really like Nazi videos because you've been watching them for hours.

Isn’t the real story here that YouTube’s algorithm has figured out that Trump’s most ardent supporters are most likely to be white supremacists?

I wonder how much of what is being reported is due to unrealistic edge-case behavior. To research YouTube recommendations and Google search results, you necessarily have to start with a clean slate to avoid biasing the result. But the recommendation engine can't do much with a clean slate. I'm pretty sure Google uses everything it knows about you to recommend videos and search results. From the outside, how do you measure something like that?

It seems to me that the only thing that could produce meaningful data on how it really works would be a carefully conducted social science experiment. Even attempting to do that kind of research would be fraught with difficulty. Google concerns, privacy concerns, political concerns, bias from research sponsors... it'd be a nightmare.

Also it is very important to remember that Google is not optimizing for what people want to see. Google is optimizing for what brings in the most ad revenue. That is a very important distinction. Imagine if they pair a vehemently anti-bigotry ad with a bigoted video that YouTube presents to someone. The performance of that ad might be much higher than if it was placed with a neutral video. The engine may not even know anything about the content of the video or the ad, only that the ad performs well when paired with the video.

It's a very tricky thing to measure.
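
To make that ad-pairing point concrete, here's a toy sketch (invented numbers, obviously nothing like Google's real system) of an optimizer that picks ads purely from observed click-through rates per video. It never looks at what the ad or the video is about; it just learns that certain pairings "perform," which is how an anti-bigotry ad could end up reliably attached to a bigoted video:

```python
# Toy content-blind ad optimizer: for each video it learns which ad gets
# clicks, knowing nothing about what the video or the ad actually says.
# Invented numbers for illustration only.
import random
from collections import defaultdict

ADS = ["anti_bigotry_psa", "sneaker_ad", "streaming_service_ad"]

# Hidden "true" click-through rates per (video, ad) pair. The PSA happens to
# perform best on the extreme video; the optimizer will find and exploit that
# pairing without ever knowing why.
TRUE_CTR = {
    ("cooking_video", "anti_bigotry_psa"): 0.01,
    ("cooking_video", "sneaker_ad"): 0.04,
    ("cooking_video", "streaming_service_ad"): 0.03,
    ("extremist_rant", "anti_bigotry_psa"): 0.08,
    ("extremist_rant", "sneaker_ad"): 0.02,
    ("extremist_rant", "streaming_service_ad"): 0.02,
}

clicks = defaultdict(int)       # (video, ad) -> clicks observed so far
impressions = defaultdict(int)  # (video, ad) -> times that pairing was shown

def observed_ctr(video: str, ad: str) -> float:
    return clicks[(video, ad)] / max(impressions[(video, ad)], 1)

def pick_ad(video: str, epsilon: float = 0.1) -> str:
    """Epsilon-greedy: mostly show whichever ad has the best observed CTR."""
    if random.random() < epsilon:
        return random.choice(ADS)
    return max(ADS, key=lambda ad: observed_ctr(video, ad))

random.seed(1)
for _ in range(50_000):
    video = random.choice(["cooking_video", "extremist_rant"])
    ad = pick_ad(video)
    impressions[(video, ad)] += 1
    clicks[(video, ad)] += random.random() < TRUE_CTR[(video, ad)]

for video in ("cooking_video", "extremist_rant"):
    print(video, "->", max(ADS, key=lambda ad: observed_ctr(video, ad)))
```

Nothing in that loop ever asks what "extremist_rant" or "anti_bigotry_psa" contain, which is the point: the system can pair them confidently and profitably while staying completely ignorant of the content.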

DSGamer wrote:

Isn’t the real story here that YouTube’s algorithm has figured out that Trump’s most ardent supporters are most likely to be white supremacists?

No. That's not in any way what it's saying.

YouTube suggesting I might want to watch The Amazing Atheist doesn't mean I'm a misogynistic, bigoted asshole. Continuing to watch (and presumably enjoy) those videos, of course, would. And I hate that anyone vaguely interested in atheism is promptly shunted to a bunch of racist pieces of sh*t, which almost certainly makes more sh*tty people as young men have their insecurities validated with big words and angry rants. But that doesn't mean atheists are more likely to be racist, sexist bigots.

Youtube doesn’t seem great about distinguishing between videos FOR and AGAINST something, either. I watched ONE video debunking MRA claims about soy diets a couple weeks ago and I started getting red-piller videos in my recommendations.

ruhk wrote:

Youtube doesn’t seem great about distinguishing between videos FOR and AGAINST something, either. I watched ONE video debunking MRA claims about soy diets a couple weeks ago and I started getting red-piller videos in my recommendations.

They probably use most of the same keywords. I don't know how YouTube determines similarity (tagging, categorization, tracking what else someone watches, analyzing the transcript for similar words) but most of the ways I can think of would be prone to that failure.

YouTube was really designed for cat videos and stolen music videos, not angry diatribes. It just turns out that angry diatribes (and creepy kids videos) game the recommendation algorithm and get even higher results...
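
If it really is something keyword-ish, the failure mode is easy to see with a toy bag-of-words comparison: a debunking video and the video it debunks share most of their vocabulary, so any tag- or transcript-based similarity score puts them right next to each other. Pure speculation about what YouTube actually does, but here's the idea:

```python
# Toy illustration of why keyword similarity can't tell a debunking apart from
# the thing it debunks. Invented "transcripts"; no claim that this is how
# YouTube actually computes similarity.

def tokens(text: str) -> set[str]:
    return set(text.lower().split())

def jaccard(a: str, b: str) -> float:
    """Vocabulary overlap between two texts (1.0 = identical word sets)."""
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb)

mra_video = "soy estrogen testosterone masculinity diet men health claims"
debunk_video = "debunking soy estrogen testosterone masculinity diet claims myth"
cooking_video = "easy weeknight pasta recipe garlic olive oil parmesan"

print(f"MRA rant vs debunk video:  {jaccard(mra_video, debunk_video):.2f}")   # high overlap
print(f"MRA rant vs cooking video: {jaccard(mra_video, cooking_video):.2f}")  # ~0
```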

ruhk wrote:

Youtube doesn’t seem great about distinguishing between videos FOR and AGAINST something, either. I watched ONE video debunking MRA claims about soy diets a couple weeks ago and I started getting red-piller videos in my recommendations.

I've seen the same thing in blog advertising and I assume it's all about vague keywords. It's been common for me to see ads for gun accessories on lefty blogs, and "how to expand your ministry" programs on atheist blogs, so I bet they're just keying off of "religion" or "politics".

A couple of years ago I watched the Innuendo Studios "Why Are You So Angry?" series about G*merg*te, and Youtube started recommending some seriously toxic garbage. I ended up adding the Video Blocker extension to Chrome and it helped: you can right-click on any Youtube link to block that channel from any searches or recommendations.

In India, 'fake news' and hoaxes catch fire as millions see YouTube for the first time

YouTube is tackling hoaxes and other problems that plague it in much of the developed world, but it's falling short in a booming new market.

In India, fake news, hoaxes and other misleading videos have thrived on Google's video platform with little friction for years. Despite outcries, users and YouTubers say, the Google-owned service has yet to introduce strict discipline on its website.

In late 2016, when the Indian government invalidated much of the cash in circulation in the nation, rumors of new bills containing GPS-tracking microchips began making the rounds on YouTube. In a second example months later, a viral video on the platform falsely claimed that French President Emmanuel Macron had, in what would have been a sign of respect in some Indian cultures, touched the feet of India's Prime Minister Narendra Modi.

The issue regained prominence in India late last month following the death of famous actress Sridevi. The vast majority of videos listed on YouTube's Trending feed in India following her death were found to be peddling false information. The news-focused videos, some of which were created by unverified channels, dominated the feed in the country for days.

A Google spokesperson said the company is working on fixing the issues, and pointed CNBC to blog posts in which it outlines the steps it is taking to curb mischievous acts on its platform.

People outside India probably have not come across those videos. Most countries share the same Trending feed, but Google has created a separate Trending feed for India. Google says it created a special feed for India to better serve growing non-English speaking Internet audiences.

Reddit and the Struggle to Detoxify the Internet

Which Web sites get the most traffic? According to the ranking service Alexa, the top three sites in the United States, as of this writing, are Google, YouTube, and Facebook. (Porn, somewhat hearteningly, doesn’t crack the top ten.) The rankings don’t reflect everything—the dark Web, the nouveau-riche recluses harvesting bitcoin—but, for the most part, people online go where you’d expect them to go. The only truly surprising entry, in fourth place, is Reddit, whose astronomical popularity seems at odds with the fact that many Americans have only vaguely heard of the site and have no real understanding of what it is. A link aggregator? A microblogging platform? A social network?

To its devotees, Reddit feels proudly untamed, one of the last Internet giants to resist homogeneity. Most Reddit pages have a throwback aesthetic, with a few crudely designed graphics and a tangle of text: an original post, comments on the post, responses to the comments, responses to the responses. That’s pretty much it. Reddit is made up of more than a million individual communities, or subreddits, some of which have three subscribers, some twenty million. Every subreddit is devoted to a specific kind of content, ranging from vital to trivial: r/News, r/Politics, r/Trees (for marijuana enthusiasts), r/MarijuanaEnthusiasts (for tree enthusiasts), r/MildlyInteresting (“for photos that are, you know, mildly interesting”). Some people end up on Reddit by accident, find it baffling, and never visit again. But people who do use it—redditors, as they’re called—often use it all day long, to the near-exclusion of anything else. “For a while, we called ourselves the front page of the Internet,” Steve Huffman, Reddit’s C.E.O., said recently. “These days, I tend to say that we’re a place for open and honest conversations—‘open and honest’ meaning authentic, meaning messy, meaning the best and worst and realest and weirdest parts of humanity.”

On November 23, 2016, shortly after President Trump’s election, Huffman was at his desk, in San Francisco, perusing the site. It was the day before Thanksgiving. Reddit’s administrators had just deleted a subreddit called r/Pizzagate, a forum for people who believed that high-ranking staffers of Hillary Clinton’s Presidential campaign, and possibly Clinton herself, were trafficking child sex slaves. The evidence, as extensive as it was unpersuasive, included satanic rituals, a map printed on a handkerchief, and an elaborate code involving the words “cheese” and “pizza.” In only fifteen days of existence, the Pizzagate subreddit had attracted twenty thousand subscribers. Now, in its place, was a scrubbed white page with the message “This community has been banned.”

The reason for the ban, according to Reddit’s administrators, was not the beliefs of people on the subreddit, but the way they’d behaved—specifically, their insistence on publishing their enemies’ private phone numbers and addresses, a clear violation of Reddit’s rules. The conspiracy theorists, in turn, claimed that they’d been banned because Reddit administrators were part of the conspiracy. (Less than two weeks after Pizzagate was banned, a man fired a semiautomatic rifle inside a D.C. pizzeria called Comet Ping Pong, in an attempt to “self-investigate” claims that the restaurant’s basement was a dungeon full of kidnapped children. Comet Ping Pong does not have a basement.)

Some of the conspiracy theorists left Reddit and reunited on Voat, a site made by and for the users that Reddit sloughs off. (Many social networks have such Bizarro networks, which brand themselves as strongholds of free speech and in practice are often used for hate speech. People banned from Twitter end up on Gab; people banned from Patreon end up on Hatreon.) Other Pizzagaters stayed and regrouped on r/The_Donald, a popular pro-Trump subreddit. Throughout the Presidential campaign, The_Donald was a hive of Trump boosterism. By this time, it had become a hermetic subculture, full of inside jokes and ugly rhetoric. The community’s most frequent commenters, like the man they’d helped propel to the Presidency, were experts at testing boundaries. Within minutes, they started to express their outrage that Pizzagate had been deleted.

Redditors are pseudonymous, and their pseudonyms are sometimes prefaced by “u,” for “username.” Huffman’s is Spez. As he scanned The_Donald, he noticed that hundreds of the most popular comments were about him:

“f*ck u/spez”

“u/spez is complicit in the coverup”

“u/spez supports child rape”

One commenter simply wrote “u/SPEZ IS A CUCK,” in bold type, a hundred and ten times in a row.

Huffman, alone at his computer, wondered whether to respond. “I consider myself a troll at heart,” he said later. “Making people bristle, being a little outrageous in order to add some spice to life—I get that. I’ve done that.” Privately, Huffman imagined The_Donald as a misguided teen-ager who wouldn’t stop misbehaving. “If your little brother flicks your ear, maybe you ignore it,” he said. “If he flicks your ear a hundred times, or punches you, then maybe you give him a little smack to show you’re paying attention.”

Although redditors didn’t yet know it, Huffman could edit any part of the site. He wrote a script that would automatically replace his username with those of The_Donald’s most prominent members, directing the insults back at the insulters in real time: in one comment, “f*ck u/Spez” became “f*ck u/Trumpshaker”; in another, “f*ck u/Spez” became “f*ck u/MAGAdocious.”

The_Donald’s users saw what was happening, and they reacted by spinning a conspiracy theory that, in this case, turned out to be true.

“Manipulating the words of your users is f*cked,” a commenter wrote.

“Even Facebook and Twitter haven’t stooped this low.”

“Trust nothing.”

The incident became known as Spezgiving, and it’s still invoked, internally and externally, as a paradigmatic example of tech-executive overreach. Social-media platforms must do something to rein in their users, the consensus goes, but not that.

Huffman can no longer edit the site indiscriminately, but his actions laid bare a fact that most social-media companies go to great lengths to conceal—that, no matter how neutral a platform may seem, there’s always a person behind the curtain. “I f*cked up,” Huffman wrote in an apology the following week. “More than anything, I want Reddit to heal, and I want our country to heal.” Implicit in his apology was a set of questions, perhaps the central questions facing anyone who worries about the current state of civic discourse. Is it possible to facilitate a space for open dialogue without also facilitating hoaxes, harassment, and threats of violence? Where is the line between authenticity and toxicity? What if, after technology allows us to reveal our inner voices, what we learn is that many of us are authentically toxic?

EDIT:

BOY HOWDY WHAT A SECTION -

“Uh-oh, looks like we missed a bestiality sub,” the woman in the captain’s cap said. “Apparently, SexWithDogs was on our list, but DogSex was not.”

“Did you go to DogSex?” Ashooh said.

“Yep.”

“And what’s on it?”

“I mean . . .”

“Are there people having sex with dogs?”

“Oh, yes, very much.”

“Yeah, ban it.”

SWEET METEOR OF DEATH WHERE ARE YOU

From December. Originally put this in "Around the World" but it's more focused on social media:

How Syria's White Helmets became victims of an online propaganda machine

The Syrian volunteer rescue workers known as the White Helmets have become the target of an extraordinary disinformation campaign that positions them as an al-Qaida-linked terrorist organisation.

The Guardian has uncovered how this counter-narrative is propagated online by a network of anti-imperialist activists, conspiracy theorists and trolls with the support of the Russian government (which provides military support to the Syrian regime).

The White Helmets, officially known as the Syria Civil Defence, is a humanitarian organisation made up of 3,400 volunteers – former teachers, engineers, tailors and firefighters – who rush to pull people from the rubble when bombs rain down on Syrian civilians. They’ve been credited with saving thousands of civilians during the country’s continuing civil war.

Despite this positive international recognition, there’s a counter-narrative pushed by a vocal network of individuals who write for alternative news sites countering the “MSM agenda”. Their views align with the positions of Syria and Russia and attract an enormous online audience, amplified by high-profile alt-right personalities, appearances on Russian state TV and an army of Twitter bots.

The way the Russian propaganda machine has targeted the White Helmets is a neat case study in the prevailing information wars. It exposes just how rumours, conspiracy theories and half-truths bubble to the top of YouTube, Google and Twitter search algorithms.

“This is the heart of Russian propaganda. In the old days they would try and portray the Soviet Union as a model society. Now it’s about confusing every issue with so many narratives that people can’t recognise the truth when they see it,” said David Patrikarakos, author of War in 140 Characters: How Social Media is Reshaping Conflict in the 21st Century.

Alright, one last one, I promise:

Myanmar: UN blames Facebook for spreading hatred of Rohingya

Facebook has been blamed by UN investigators for playing a leading role in possible genocide in Myanmar by spreading hate speech.

Facebook had no immediate comment on the criticism on Monday, although in the past the company has said that it was working to remove hate speech in Myanmar and ban the people spreading it.

More than 650,000 Rohingya Muslims have fled Myanmar’s Rakhine state into Bangladesh since insurgent attacks sparked a security crackdown last August. Many have provided harrowing testimonies of murders and rapes by Myanmar security forces.

The UN human rights chief said last week he strongly suspected acts of genocide had taken place. Myanmar’s national security adviser demanded “clear evidence”.

Marzuki Darusman, chairman of the UN Independent International Fact-Finding Mission on Myanmar, told reporters that social media had played a “determining role” in Myanmar.

“It has … substantively contributed to the level of acrimony and dissension and conflict, if you will, within the public. Hate speech is certainly of course a part of that. As far as the Myanmar situation is concerned, social media is Facebook, and Facebook is social media,” he said.

The UN Myanmar investigator Yanghee Lee said Facebook was a huge part of public, civil and private life, and the government used it to disseminate information to the public.

“Everything is done through Facebook in Myanmar,” she told reporters, adding that Facebook had helped the impoverished country but had also been used to spread hate speech.

“It was used to convey public messages but we know that the ultra-nationalist Buddhists have their own Facebooks and are really inciting a lot of violence and a lot of hatred against the Rohingya or other ethnic minorities,” she said.

“I’m afraid that Facebook has now turned into a beast, and not what it originally intended.”

The most prominent of Myanmar’s hardline nationalist monks, Wirathu, emerged from a one-year preaching ban on Saturday and said his anti-Muslim rhetoric had nothing to do with violence in Rakhine state.

Facebook suspends and sometimes removes anyone that “consistently shares content promoting hate”, the company said last month in response to a question about Wirathu’s account.

“If a person consistently shares content promoting hate, we may take a range of actions such as temporarily suspending their ability to post and ultimately, removal of their account.”

Muslim Cyber Army: a 'fake news' operation designed to derail Indonesia's leader

Police in Indonesia believe they have uncovered a clandestine fake news operation designed to corrupt the political process and destabilise the government.

In a string of arrests across the archipelago in recent weeks, authorities have revealed the inner workings of a self-proclaimed cyber-jihadist network known as the Muslim Cyber Army (MCA).

The network is accused of spreading fake news and hate speech to inflame religious and ethnic schisms; fan paranoia around gay men and lesbians, alleged communists and Chinese people; and spread defamatory content to undermine the president.

Police say the network was orchestrated through a central Whatsapp group called the Family MCA.

One wing was tasked with stockpiling divisive content to disseminate, while a separate “sniper” team was employed to hack accounts and spread computer viruses on the electronic devices of their opponents.

The arrest of 14 individuals is the second such syndicate police have busted in the last year – deepening fears around Indonesia’s vulnerability to the pernicious spread of fake news.

False accounts and lies

In the world’s largest Muslim-majority nation, among the top five biggest users of Facebook and Twitter globally, some say it is unsurprising that rising religiosity and racial division are playing out viciously online.

It is in this environment that the Muslim Cyber Army was born and has since thrived, in a digital ecosystem flush with bots, fake accounts and lies.

A Guardian investigation conducted over several months uncovered one coordinated cluster of the Muslim Cyber Army on Twitter.

The investigation identified:

- A matryoshka doll-like system of more than 100 bots or semi-automated accounts.
- Links between the cyber army and opposition parties, as well as the military.
- Details of 103 cases of brutal “bounty hunting” incited by the “cyber-jihadists”.

The network identified by the Guardian was created for the sole purpose of tweeting inflammatory content and messages designed to amplify social and religious division, and push a hardline Islamist and anti-government line.

How Conservative Activists Catfished Twitter

Mo Norai has worked in Silicon Valley for a decade. He’s done stints at Google, Twitter, Facebook, and Apple, but only as a contract worker, meaning he has missed out on the tech giants’ storied perks, benefits and job security. So when he was approached last April by a recruiter from a company called Tech Jobs Box about a full-time job, he was intrigued.

A woman named Kelly Dale contacted him via LinkedIn promising that “salary and benefits would be competitive.”

“It really hooked me,” Norai told me last month.

After a brief phone conversation, Dale said he seemed like a great candidate and set up in-person interviews with her colleague and an investor in the company. Those went well and for four months last year, Norai thought he had a new job. He was in regular communication with his new colleagues, meeting up with them for dinner, drinks, and a baseball game, but they kept pushing his start date back, saying they were securing office space and finalizing funding.

But in fact, there was no job. Tech Jobs Box wasn’t a real company. Kelly Dale and the rest of his new “colleagues” were actually operatives for Project Veritas, a conservative investigative group founded by James O’Keefe that specializes in secretly recording people. It’s perhaps best known for catalyzing the downfall of ACORN, a low-income advocacy group that lost its federal funding after Project Veritas released undercover videos of the group’s employees counseling a sex worker and her “pimp” (a disguised O’Keefe).

Project Veritas traditionally targets politicians, government agencies, and media organizations, but decided to go after Silicon Valley last year because of its perceived biases against conservatives. “Big tech companies like Google, Facebook and Twitter have become media monopolies and they are censoring people,” said O’Keefe by phone.

In January, Project Veritas released three videos about Twitter’s content-moderation practices that feature hidden camera footage of nine current and former Twitter employees—one woman and eight men, including Norai. They even secretly filmed Twitter CEO Jack Dorsey by having an operative pose as a homeless person and confront him at a Blue Bottle coffee shop.

The videos don’t contain blockbuster information. The employees reveal that there aren’t a lot of conservatives at Twitter; that Twitter tries to make spammy content less visible on the platform; that many of the sock puppet Twitter accounts banned in the last year posed as Trump supporters; and that Twitter would cooperate, as required by law, in any investigation of President Trump by handing over his private Twitter messages. The most surprising part was a former engineer’s claim that Twitter historically “shadow banned” users. (A “shadow ban” means that a user’s content on a platform can’t be seen and the user doesn’t realize it.)

Project Veritas points out that Senator Ted Cruz cited their videos while questioning tech companies during a hearing about content moderation, terrorism, and Russia in January.

“The individuals depicted in these videos were speaking in a personal capacity and do not represent or speak for Twitter,” said a Twitter spokesperson by email, pointing me to a page that explains how and why Twitter accounts are censored or made less visible. “Twitter does not shadowban accounts. We do take actions to downrank accounts that are abusive, and mark them accordingly so people can still click through and see this information if they so choose.”

While Project Veritas’s findings weren’t particularly shocking, how they were obtained was. Project Veritas didn’t just fake-recruit its targets, it fake-seduced them. Many of the male employees were secretly recorded while on dates at dimly-lit restaurants, sipping wine. Based on the number of times he appears in the videos in different locations and dress, one security engineer, Clay Haynes, appears to have been enamored enough with the operative pumping him for information to go out with her at least three times. All of the Veritas operatives’ faces are blurred, but you can see his date’s jangly bracelets and long blond hair. It’s unclear just how far the seduction of Haynes went, but they became serious enough to go on a double date to Morton’s Steakhouse with her friend, a disguised James O’Keefe.

“NO ONE should have to experience this,” said Haynes via Facebook message. Haynes, who is still employed by Twitter, ultimately opted not to talk to me at the company’s request.

Beyond the questionable journalistic ethics of exploiting people’s desires for work and love, Project Veritas’s tactics broke the law, says John Nockleby, a professor who specializes in privacy at Loyola Law School-Los Angeles. While consent laws for recording conversations vary from state to state, California is a two-party consent state, meaning you have to tell someone if you’re recording them, or face up to a year of jail time and a $2,500 fine. “You’re allowed to do video in a public place without getting consent, but not take audio, unless it’s someone like a politician giving a speech to a crowd,” Nockleby told me by phone. “In California, even in a public place, if you’re audio recording without consent, that’s not legal.”

O’Keefe, who paid a $100,000 settlement in 2012 to an ACORN employee who sued him over California’s law against surreptitious recording, expressed the belief that his operatives are allowed to record people in public places, like bars, restaurants or a conference room where a door is open.

“We have a number of lawyers who handle compliance for us. California is a two-party state but we can operate in areas where there are no expectations of privacy,” said O’Keefe by phone. “With the Twitter story, we did not break the law. Period.”

In a follow-up email, a Project Veritas spokesperson pointed to an exception in the law for circumstances in “which the parties to the communication may reasonably expect that the communication may be overheard or recorded.” (Norai says the door to the conference room where his interviews took place was closed.)

The story Silicon Valley likes to tell about itself is that it conquered the world by making it more open and connected, and by getting strangers to trust each other. Project Veritas exploited that ecosystem of connection and trust to wage its year-long investigation, turning the tools that Silicon Valley created against it. In a phone interview, O’Keefe declined to reveal how many undercover journalists were involved or how much it spent on the operation, saying only that it “was very expensive” because travel and lodging in San Francisco “was outrageous.” (Project Veritas doesn’t seem to be having money problems; its budget has nearly doubled every year, according to financial filings. O’Keefe says it raised more than $7 million in 2017.) He said he couldn’t talk about his group’s methods because the investigation of tech companies is ongoing. Google and Facebook employees should beware.

Reddit just banned two large racist subs: /r/uncensorednews and /r/european.

https://np.reddit.com/r/AgainstHateS...

At least one redditor believes it was because the mods of /r/uncensorednews revealed a secret that reddit admins didn't want getting out:

https://np.reddit.com/r/AgainstHateS...

I'm 90% sure r/uncensorednews was banned because they accidentally exposed that Steve Huffman has been lying to everyone about why Reddit keeps their collection of hatesubs online.

u/Spez has been out there trying to make the case that all these alt-right subs should get to stay, because they have such important voices and:

1. At least they were following the rules, and
2. They were willingly cooperating when asked to correct something.

Not long ago, the mods of uncensored posted a screenshot of mod actions the trust and safety team were making happen on their sub, crying about how they were being censored. They claimed to be unaware it was happening.

By doing that, the UncensoredNews mods accidentally made it obvious Spez has been lying to reddit's advertisers, their investors, and us, the community:

1. Reddit's alt-right subs aren't following the rules, Reddit's trust and safety team has been following the rules for them (note one of the posts they had to remove contained people's addresses and phone numbers), and at the very least the uncensorednews mods were surprised to find out about it.
2. They aren't willingly cooperating. Reddit has been working on the downlow to sanitize the content of alt-right communities that will not moderate themselves. And they've been doing it behind the scenes for reasons they have not explained to anyone (how many other communities are they having to shadow-moderate to make it seem like they're playing by the rules, when they're really doing all they can to break them?)

For the record, until Reddit adopts and enforces rules prohibiting hate speech and intentionally misleading propaganda, and requiring subreddits to self-moderate, none of this matters. Someone else can come along and pull the same crap.

The hate speech problem on Reddit is significant, but drama and backseat driving don't help anyone. As far as I'm concerned, anyone with complaints can run the 4th largest site on the internet for a while and see how that goes. There are undoubtedly things going on behind the scenes and compromises that have to be made to keep that giant house of cards standing.

Taking a stronger stand against hate speech and against promotion of falsehoods might be a good thing, but iirc, Reddit has done just that several times. Trying to dig into the details of how they make it work is a great way to lose a lot of context. It's easy to make them look bad from the outside.

Reddit is far from perfect. It is also not the wretched hive of scum and villainy some make it out to be. Some subreddit mods are certainly flouting the rules and even actively supporting vile content, but Reddit itself certainly isn't.

BadKen wrote:

As far as I'm concerned, anyone with complaints can run the 4th largest site on the internet for a while and see how that goes.

I feel the need to point out that this is a terrible argument. The same can be said for, say, being the President of the United States of America. Or running a major corporation. Or literally any other topic we discuss on these and any other forums.

Nothing, anywhere, is going to get better if people who see problems decide to quietly sit on their hands and hope that those in charge have a plan.

bnpederson wrote:
BadKen wrote:

As far as I'm concerned, anyone with complaints can run the 4th largest site on the internet for a while and see how that goes.

I feel the need to point out that this is a terrible argument. The same can be said for, say, being the President of the United States of America. Or running a major corporation. Or literally any other topic we discuss on these and any other forums.

Nothing, anywhere, is going to get better if people who see problems decide to quietly sit on their hands and hope that those in charge have a plan.

Yeah, taken out of context, in isolation, it's a terrible argument, but if you read the sentence before and after, it's not.

I dunno, I read that whole post as "Yes, there are problems but cut them some slack. They're working on it, I'm sure. And not all of it is that bad." Maybe you got something different out of it, though.