[Discussion] The Inconceivable Power of Trolls in Social Media

This is a follow-on to the nearly two-year-old topic "Trouble at the Kool-Aid Point." The intention is to provide a place to discuss the unreasonable power social media trolls have over women and minorities, with a primary focus on video games (though other examples are certainly welcome).

Social Media Sites Can’t Decide How to Handle ‘Non-Offending’ Pedophiles

When Ender Wiggin was banned from Twitter last December, it wasn’t because he was a far-right troll or Nazi sympathizer. In fact, Wiggin had an army of pizzagaters harassing him at all hours of the day, insisting he kill himself right up until the moment his account was disabled on December 14.

That’s because Ender—aka @enderphile—is the pseudonym of a “non-offending” or “anti-contact” pedophile: someone who is attracted to children, but claims to be against adult-child sex and child pornography. Inside that community, he’s known as the unofficial leader, and claims he’s been using social media to reduce the stigma associated with pedophilia, showing other pedophiles they can live lives without offending.

Accusations swirling around a certain failed Alabama Senate candidate aside, pedophiles are a natural boogeyman for the far right, and a social media war between the two groups has been going on for a long time. It’s created an insular online community of two extreme camps, each desperately trying to get the other permanently banned.

“Twitter has permitted these pedophiles to exist and operate on their platform, advocating for the normalization of pedophilia,” says Grant J. Kidney, a self-described nationalist and pizzagate conspiracist “running for congress in 2020 on a pro-MAGA platform.” He campaigned for months to get Ender removed since, he says, pedophilia isn’t something to be championed—it’s an illness.

“These people, they’re not left, they’re not right. They’re sick people.”

Despite the ban, Ender was able to make a new account after weeks of being automatically removed—something he announced to the world in a January 31 thread, writing “It’s me, The Real Ender™, and I’m back on Twitter,” before launching into an attack on his main enemies and closing with the declaration that he’s “ready to take some names and kick some asses.” His bravado was short-lived, though: his account was suspended again Wednesday. Ender and others who support him believe these back-and-forth bans happen because Twitter hasn’t decided how to deal with pedophiles, and instead simply removes them when enough users send in reports—something that’s never in short supply.

This Is How A Popular Minecraft YouTube Star Lured An Underage Fan Into A Sexual Relationship

After going dark for more than a year amid accusations on social media of soliciting and distributing nude images of his adolescent fans, a Minecraft YouTuber named Marcus Wilton, better known online as LionMaker, has resurfaced.

His quiet reemergence is happening at a pivotal moment for how YouTube moderates its platform and for how society at large deals with allegations of sexual abuse.

Before Wilton’s disappearance, his LionMakerStudios YouTube channel had around half a million subscribers, with more than 30 Minecraft videos boasting view counts in the millions. But rumors circulated in chat rooms for Minecraft — considered one of the most popular video games of all time — about Wilton’s behavior toward his young fans.

Things came to a head in December 2015, when Wilton confessed in a series of now-deleted tweets to having a sexual relationship with a UK-based YouTuber named Paige, known as Paige the Panda online, when she was underage — and to posting nude images of her on the platform. Wilton, who has yet to be formally charged with a crime, met Paige when she was 14. After the tweets were deleted several hours later, Wilton claimed he had been hacked.

And two months ago, a British YouTuber named Colossal tweeted a screenshot of an email Wilton wrote in which he confessed to having had a sexual relationship with the underage girl. Paige, now 18, requested that her last name be withheld for privacy reasons and declined to comment.

After Wilton’s 2015 Twitter meltdown, the online backlash mounted. Allegations from two more minors accusing Wilton of sexual harassment and abuse surfaced in a Vice article; Wilton denies both. But what played out online would lead police in both Belgium and the UK to investigate him for distributing child pornography and statutory rape.

Wilton has spent the last several months in a psychiatric hospital outside of Antwerp, Belgium, as part of the police investigation that has been open for the better part of the last year. He told BuzzFeed News that, before that, he spent 10 months in detention as part of a Belgian legal process called arrest huis, where law enforcement can hold you for a period of time out of fear that you may be a danger to society or tamper with evidence pertaining to their investigation. BuzzFeed News has reached out to Belgian police for comment.

Initially, Wilton tried to continue making videos, which became darker and darker, littered with threats of legal action against any YouTuber who accused him of using his status in the Minecraft community to prey on his underage fans. Then, around June 2016, Wilton’s online presence vanished. He scrubbed his YouTube account of all the troubling videos he had made. He stopped tweeting. The LionMaker Instagram stopped posting. And it seemed as though he had finally left the community — that is, until now.

He Predicted The 2016 Fake News Crisis. Now He's Worried About An Information Apocalypse.

In mid-2016, Aviv Ovadya realized there was something fundamentally wrong with the internet — so wrong that he abandoned his work and sounded an alarm. A few weeks before the 2016 election, he presented his concerns to technologists in San Francisco’s Bay Area and warned of an impending crisis of misinformation in a presentation he titled “Infocalypse.”

The web and the information ecosystem that had developed around it were wildly unhealthy, Ovadya argued. The incentives that governed its biggest platforms were calibrated to reward information that was often misleading, polarizing, or both. Platforms like Facebook, Twitter, and Google prioritized clicks, shares, ads, and money over quality of information, and Ovadya couldn’t shake the feeling that it was all building toward something bad — a kind of critical threshold of addictive and toxic misinformation. The presentation was largely ignored by employees from the Big Tech platforms — including a few from Facebook who would later go on to drive the company’s News Feed integrity effort.

“At the time, it felt like we were in a car careening out of control and it wasn’t just that everyone was saying, ‘we’ll be fine’ — it’s that they didn't even see the car,” he said.

Ovadya saw early what many — including lawmakers, journalists, and Big Tech CEOs — wouldn’t grasp until months later: Our platformed and algorithmically optimized world is vulnerable — to propaganda, to misinformation, to dark targeted advertising from foreign governments — so much so that it threatens to undermine a cornerstone of human discourse: the credibility of fact.

But it’s what he sees coming next that will really scare the sh*t out of you.

“Alarmism can be good — you should be alarmist about this stuff,” Ovadya said one January afternoon before calmly outlining a deeply unsettling projection about the next two decades of fake news, artificial intelligence–assisted misinformation campaigns, and propaganda. “We are so screwed it's beyond what most of us can imagine,” he said. “We were utterly screwed a year and a half ago and we're even more screwed now. And depending how far you look into the future it just gets worse.”

That future, according to Ovadya, will arrive with a slew of slick, easy-to-use, and eventually seamless technological tools for manipulating perception and falsifying reality, for which terms have already been coined — “reality apathy,” “automated laser phishing,” and "human puppets."

Which is why Ovadya, an MIT grad with engineering stints at tech companies like Quora, dropped everything in early 2016 to try to prevent what he saw as a Big Tech–enabled information crisis. “One day something just clicked,” he said of his awakening. It became clear to him that, if somebody were to exploit our attention economy and use the platforms that undergird it to distort the truth, there were no real checks and balances to stop it. “I realized if these systems were going to go out of control, there’d be nothing to rein them in and it was going to get bad, and quick,” he said.

Today Ovadya and a cohort of loosely affiliated researchers and academics are anxiously looking ahead — toward a future that is alarmingly dystopian. They’re running war game–style disaster scenarios based on technologies that have begun to pop up, and the outcomes are typically disheartening.

Prederick wrote:

Wilton has spent the last several months in a psychiatric hospital outside of Antwerp, Belgium, as part of the police investigation that has been open for the better part of the last year. He told BuzzFeed News that, before that, he spent 10 months in detention as part of a Belgian legal process called arrest huis, where law enforcement can hold you for a period of time out of fear that you may be a danger to society or tamper with evidence pertaining to their investigation.

it is called "huisarrest", which is Dutch for "being grounded".

SPLC: The Alt-Right is Killing People

On December 7, 2017, a 21-year-old white male posing as a student entered Aztec High School in rural New Mexico and began firing a handgun, killing two students before taking his own life.

At the time, the news of the shooting went largely ignored, but the online activity of the alleged killer, William Edward Atchison, bore all the hallmarks of the “alt-right”—the now infamous subculture and political movement consisting of vicious trolls, racist activists, and bitter misogynists.

But Atchison wasn’t the first to fit the profile of alt-right killer—that morbid milestone belongs to Elliot Rodger, the 22-year-old who in 2014 killed six in Isla Vista, California, after uploading a sprawling manifesto filled with hatred of young women and interracial couples. (Atchison went by “Elliot Rodger” in one of his many online personas and lauded the “supreme gentleman,” a title Rodger gave himself that has since become a meme on the alt-right.)

Including Rodger’s murderous rampage, there have been at least 13 alt-right related fatal episodes, leaving 43 dead and more than 60 injured. Nine of the 12 incidents counted here occurred in 2017 alone, making last year the most violent yet for the movement.

Milo has dropped his lawsuit.

I shall enjoy a quiet moment of schadenfreude.

So, post-Parkland, it is emphatically Not Good online, with social media basically flooded with conspiracy theories about "crisis actors" and so on.

How bad? A conspiracy video was trending at #1 on YouTube until the media pointed it out to the company.

Yep, and I think you mean every recent tragedy as well, going back to Sandy Hook and the Boston Marathon bombing. It’s just going supersonic now, with these Infowars-like outlets doing anything and everything to spin an event to their narrative.

Sickens me.

Why are our technological overlords so absolutely bad at stopping misinformation?

In the first hours after last October's mass shooting in Las Vegas, my colleague Ryan Broderick noticed something peculiar: Google search queries for a man initially (and falsely) identified as a victim of the shooting were returning Google News links to hoaxes created on 4chan, a notorious message board whose members were working openly to politicize the tragedy. Two hours later, he found posts going viral on Facebook falsely claiming the shooter was a member of Antifa. An hour or so after that, a cursory YouTube search returned a handful of similarly minded conspiracy videos — all of them claiming crisis actors were posing as shooting victims to gain political points. Each time, Broderick tweeted his findings.

Over the next two days, journalists and misinformation researchers uncovered and tweeted still more examples of fake news and conspiracy theories propagating in the aftermath of the tragedy. The New York Times' John Herrman found pages of conspiratorial YouTube videos with hundreds of thousands of views, many of them highly ranked in search returns. Cale Weissman at Fast Company noticed that Facebook's Crisis Response page was surfacing news stories from alt-right blogs and sites like "End Time Headlines" rife with false information. I tracked how YouTube’s recommendation engine allows users to stumble down an algorithm-powered conspiracy video rabbit hole. In each instance, the journalists reported their findings to the platforms. And in each instance, the platforms apologized, claimed they were unaware of the content, promised to improve, and removed it.

This cycle — journalists, researchers, and others spotting hoaxes and fake news with the simplest of search queries, long before the platforms themselves — repeats itself after every major mass shooting and tragedy. Just a few hours after news broke of the mass shooting in Sutherland Springs, Texas, Justin Hendrix, a researcher and executive director of NYC Media Lab, spotted search results inside Google's 'Popular On Twitter' widget rife with misinformation. Shortly after an Amtrak train crash involving GOP lawmakers in January, The Daily Beast's Ben Collins quickly checked Facebook and discovered a trove of conspiracy theories inside Facebook's Trending News section, which is prominently positioned to be seen by millions of users.

By the time the Parkland school shooting occurred, the platforms had apologized for missteps during a national breaking news event three times in four months, in each instance promising to do better. But in their next opportunity to do better, again they failed. In the aftermath of the Parkland school shooting, journalists and researchers on Twitter were the first to spot dozens of hoaxes, trolls impersonating journalists, and viral Facebook posts and top "Trending" YouTube posts smearing the victims and claiming they were crisis actors. In each instance, these individuals surfaced this content — most of which is a clear violation of the platforms' rules — well before YouTube, Facebook, and Twitter. The New York Times' Kevin Roose summed up the dynamic recently on Twitter, noting, "Half the job of being a tech reporter in 2018 is doing pro bono content moderation for giant companies."

Among those who pay close attention to big technology platforms and misinformation, the frustration over the platforms’ repeated failures to do something that any remotely savvy news consumer can do with minimal effort is palpable: despite countless articles, emails with links to violating content, and viral tweets, nothing changes. The tactics of YouTube shock jocks and Facebook conspiracy theorists hardly differ from those of their analogue predecessors; crisis actor posts and videos, for example, have been a staple of peddled misinformation for years.

This isn't some new phenomenon. Still, the platforms are proving themselves incompetent when it comes to addressing these tactics — over and over and over again. In many cases, they appear to be surprised that such content sits on their platforms at all. And even their public relations responses suggest they've been caught off guard, with no plan in place for messaging when they slip up.

All of this raises a mind-bendingly simple question that YouTube, Google, Twitter, and Facebook have not yet answered: How is it that the average, untrained human can do something that multibillion-dollar technology companies that pride themselves on innovation cannot? And beyond that, how is it that — after multiple national tragedies politicized by malicious hoaxes and misinformation — such a question even needs to be put to them?

Maybe instead of deleting the account, they lock it so no one can post from it or to it. Then send out a tweet to everyone that ever retweeted or posted to the account, telling them it was fake. Also put up a banner on the account marking it as fake, so anyone going to it will be notified.
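Something like this sketch, maybe (a toy illustration only; the Account type, quarantine() helper, and notify callback are all hypothetical, since no real platform exposes an API like this):

```python
from dataclasses import dataclass, field

@dataclass
class Account:
    handle: str
    locked: bool = False
    banner: str = ""
    engagers: set = field(default_factory=set)  # everyone who ever retweeted or replied

def quarantine(account: Account, notify) -> None:
    """Lock a fake account in place rather than deleting it, so its
    history stays visible but it can no longer spread anything."""
    account.locked = True   # no new posts from it, no replies to it
    account.banner = "This account has been identified as a fake account."
    for handle in account.engagers:  # warn everyone who ever amplified it
        notify(handle, f"@{account.handle}, which you interacted with, was fake.")

# Toy usage: print() stands in for the platform's real notification channel.
troll = Account("definitely_real_news", engagers={"alice", "bob"})
quarantine(troll, notify=lambda user, msg: print(f"to {user}: {msg}"))
```

The appeal of locking over deleting is that the record and the warning stay visible to anyone who stumbles onto the account later, instead of quietly vanishing.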

Inside Atomwaffen As It Celebrates a Member for Allegedly Killing a Gay Jewish College Student: ProPublica obtained the chat logs of Atomwaffen, a notorious white supremacist group. When Samuel Woodward was charged with killing 19-year-old Blaze Bernstein last month in California, other Atomwaffen members cheered the death, concerned only that the group’s cover might have been blown.

Late last month, ProPublica reported that the California man accused of killing a gay and Jewish University of Pennsylvania student was an avowed neo-Nazi and a member of Atomwaffen Division, one of the country’s most notorious extremist groups.

The news about the murder suspect, Samuel Woodward, spread quickly throughout the U.S. and abroad. Woodward was accused of fatally stabbing 19-year-old Blaze Bernstein and burying his body in an Orange County park.

The report, it turns out, was also taken up in the secretive online chats conducted by members of Atomwaffen Division, a white supremacist group that celebrates both Hitler and Charles Manson.

“I love this,” one member wrote of the killing, according to copies of the online chats obtained by ProPublica. Another called Woodward a “one man gay Jew wrecking crew.”

More soon joined in.

“What I really want to know is who leaked that sh*t about Sam to the media,” a third member wrote.

At least one member wanted to punish the person who had revealed Woodward’s affiliation with Atomwaffen.

“Rats and traitors get the rope first.”

Encrypted chat logs obtained by ProPublica — some 250,000 messages spanning more than six months — offer a rare window into Atomwaffen Division that goes well beyond what has surfaced elsewhere about a group whose members have been implicated in a string of violent crimes. Like many white supremacist organizations, Atomwaffen Division uses Discord, an online chat service designed for video gamers, to engage in its confidential online discussions.

In a matter of months, people associated with the group, including Woodward, have been charged in five murders; another group member pleaded guilty to possession of explosives after authorities uncovered a possible plot to blow up a nuclear facility near Miami.

My feelings on each and every abhorrent Atomwaffen Division member, particularly the cold-blooded murderers:

IMAGE(https://i.imgur.com/P3zX9mv.png)

I concur. As good as it feels to rail against the crazies, sourcing the crazy spreads it.

Like commenting on how nuts it is to train teachers to use guns to prevent school shootings. Even dignifying that f*cking ridiculous drivel with the response of referring to it AS ridiculous drivel validates and spreads the idea to an extent. Especially with how polarized culture is right now: it's too satisfying to put someone on blast, and the blastees are looking to do the same and turn the tables.

Everyone has an audience now. Everyone has a peanut gallery, someone to perform for. That leads to radicalization since we’re only engaging on a social media level. It’s a problem without a solution at present.

The tricky thing is that the craziest of them all seems to be POTUS, making claims that armed teachers would have instantly stopped the latest school shooting. Not sure how you avoid highlighting him.

Twitter thread:

NOBODY is talking about how the online depression community has been infiltrated by alt-right recruiters deliberately preying on the vulnerable.

There NEED to be public warnings about this. 'Online pals' have attempted to groom me multiple times when at my absolute lowest.

— Mister Happy Die Happy (@MrHappyDieHappy) February 23, 2018

Online dashboard tracking Russian troll bot armies and influence operations.

https://dashboard.securingdemocracy....

Gremlin wrote:

Twitter thread:

NOBODY is talking about how the online depression community has been infiltrated by alt-right recruiters deliberately preying on the vulnerable.

There NEED to be public warnings about this. 'Online pals' have attempted to groom me multiple times when at my absolute lowest.

— Mister Happy Die Happy (@MrHappyDieHappy) February 23, 2018

I wanted to mention this in the Politics Post thread, where I placed that WaPo article about a young man who had turned into a neo-Nazi, but I can't help seeing the resemblance between the stories about who these groups are targeting and a huge chunk of the stories I've read about young men in the Middle East who signed up with extremist groups. There are so many similarities.

Now, if you really wanna get into the nitty-gritty on this, you need to ask: if the two groups are similar, is the better tactic shaming and ostracism, or de-radicalization? I'm not sure, since the two are similar but not like-for-like copies. But MAN, they have SO much in common, not just who they try to recruit; even the aforementioned Atomwaffen Division puts together recruiting videos that look preeeeeeety similar to the ones a recently-famous ME extremist group makes.

Timothy Snyder on Russian internet manipulation and what it means. He tries to put it in a historical context. I'm not entirely happy with the term "colony" here but I don't think there is a good word for a country whose internal politics is manipulated indirectly by another.

P.S. If you haven't read his Black Earth: The Holocaust as History and Warning, I can't recommend it enough.

DoveBrown wrote:

Timothy Snyder on Russian internet manipulation and what it means. He tries to put it in a historical context. I'm not entirely happy with the term "colony" here but I don't think there is a good word for a country whose internal politics is manipulated indirectly by another.

P.S. If you haven't read his Black Earth: The Holocaust as History and Warning, I can't recommend it enough.

I just want everyone to know that this is definitely not an alt-account I created just to post more Timothy Snyder videos!

Also, Black Earth is amazing.

'Taking them down fuels it more': why conspiracy theories are unstoppable

The lies multiplied so rapidly Cori Langdon could hardly keep up. The taxi driver’s cellphone video of the Las Vegas mass shooting was being widely republished by conspiracy theorists, who were using it as “proof” that the massacre was staged by the government and that Langdon was an “actor”.

Langdon thought that if YouTube and Facebook removed some of the content, the online attacks and bogus stories might stop spreading. But it wasn’t so simple.

When some footage was taken down, conspiracy theorists saw it as further evidence that Langdon was involved in a cover-up. Some told her they were worried the FBI had gotten to her.

“It just fuels them even more,” said Langdon, who was harassed online even while being hailed by some as a hero for picking up passengers at the Mandalay Bay shooting that killed 58 people in October. “These conspiracy theorists love their guns. They are freaked out and paranoid.”

Concerns about conspiracy theorists bullying victims of mass shootings have escalated this month as student survivors of a Florida high school massacre have become vocal proponents for gun reforms, making them prime targets for online abuse. Google and Facebook have faced particularly intense scrutiny for their role in spreading false stories claiming teenage survivors are so-called crisis actors hired to promote gun control.

But recent efforts to restrict offensive posts have shone a harsh light on a seemingly intractable problem of the modern conspiracy theory epidemic: that censoring the content can reinforce and enhance false beliefs and that there is no easy way to change the mind of a conspiracy theorist. Some wind up on alternative platforms.

Social media companies taking down content and mainstream news coverage of hoax claims can also drive traffic to those sites creating the questionable content. That appeared to be the case this week with Infowars, a rightwing website that has fueled “crisis actor” claims and is facing consequences from YouTube as a result. Radio host Alex Jones, a leading conspiracy theorist and far-right pundit, has used YouTube’s clampdown to drum up interest in his attacks on the Florida students, presenting it as a “free speech” issue.

“If you believe your institutions are conspiring and then you expose it and then they ban your speech, how could you not think that that’s part of it?” said Joseph Uscinski, a University of Miami professor and conspiracy theory expert. If Jones and Infowars continue to face YouTube censorship, he added, “it will convince his fans that he’s on to something.”

There’s no easy solution, though many agree that YouTube and Facebook could do a better job removing content that constitutes harassment and preventing its algorithms from actively promoting fake news.

Infowars is reportedly one strike away from a YouTube ban, though Jones has a long history of offensive material, most notoriously spreading claims that the 2012 Sandy Hook massacre that killed 20 children was a hoax – a theory that led grieving parents to face death threats. If he is permanently banned, it’s unclear whether Jones would host his own videos or use a platform like PewTube, which was created for “alt-right” users kicked off mainstream sites.

In some cases, fringe commentators with massive online followings have tried to push boundaries by avoiding explicit hoax claims and instead casting doubt on shooting survivors by simply “raising questions”.

Americans strongly opposed to gun control are susceptible to believing this kind of content, especially if the creators are facing threats from the “establishment” media or Silicon Valley.

WP: We studied thousands of anonymous posts about the Parkland attack — and found a conspiracy in the making

Forty-seven minutes after news broke of a high school shooting in Parkland, Fla., the posters on the anonymous chat board 8chan had devised a plan to bend the public narrative to their own designs: “Start looking for [Jewish] numerology and crisis actors.”

The voices from this dark corner of the Internet quickly coalesced around a plan of attack: Use details gleaned from news reports and other sources to push false information about one of America’s deadliest school shootings.

Over the next few hours, the posters on anonymous forums, a cauldron of far-right extremist politics, speculated about the shooter’s ethnicity (“Hope the kid isn’t white”) and cracked off-color jokes. They began crafting false explanations for the massacre, including that actors were posing as students, in hopes of blunting what they correctly guessed would be a revived interest in gun control.

The success of this effort would soon illustrate how lies that thrive on raucous online platforms increasingly shape public understanding of major events. As much of the nation mourned, the story concocted on anonymous chat rooms soon burst onto YouTube, Twitter and Facebook, where the theories surged in popularity.

Amid corporate efforts to beat back the falsehoods, the episode became the latest cautionary tale about how the Internet itself had become a potent tool of deception wielded by political extremists, disinformation warriors and conspiracy theorists.

Discord is purging alt-right, white nationalist and hateful servers

Atomwaffen Division, The Right Server, Nordic Resistance Movement, Iron March and European Domas are just some of the servers that were shut down recently as part of Discord’s attempt to purge its platform of hateful content.

“Discord has a Terms of Service (ToS) and Community Guidelines that we ask all of our communities and users to adhere to,” a Discord representative told Polygon. “These specifically prohibit harassment, threatening messages, or calls to violence. Though we do not read people’s private messages, we do investigate and take immediate appropriate action against any reported ToS violation by a server or user. There were a handful of servers that violated these ToS recently and were swiftly removed from the platform.”

I don't think taking things down is about changing the minds of conspiracy theorists, it's more about limiting exposure to potential theorists.

Chumpy_McChump wrote:

I don't think taking things down is about changing the minds of conspiracy theorists, it's more about limiting exposure to potential theorists.

I have a theory about the nature of good, evil, and ideas: these things are given form through volume. For example, telling a billion people that aliens are eating babies will cause some number of them to believe it. They will not only believe it, they will act on it and spread it like a virus. It doesn't matter whether it's true or not; a nonzero number of people will believe it, and that nonzero number could grow to any size.

Now what if the message, instead of "aliens are eating babies," was "antifa is killing people," or "Obama wasn't born in America," or "love thy neighbor," or "try to listen to people"? It's like the saying: repeat a lie enough times and it becomes truth. I think this is true of all messages, not just lies. So closing down hate groups puts a dent in the evil being spread by reducing the volume at which the message is heard.

This is why the president is the direct cause of so much hate. His negativity goes out to millions of people. Or, if you'd rather: his opinions take form because they reach millions.

Will people still go to other places to hear the message? Sure they will, but there is a difference between a person searching to reaffirm their own ideas and a person being presented with a new idea. Also, the stage they go to will be much smaller.
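To put toy numbers on the volume idea (the belief rate below is made up; the point is only that a tiny rate applied to a huge audience is still a crowd, and that shrinking the audience shrinks the crowd):

```python
# Toy model with an invented belief rate: whether the claim is true doesn't
# matter, only how many people hear it and the small fraction who believe it.
belief_rate = 0.001  # assume 0.1% of hearers end up believing the claim
for audience in (1_000_000_000, 10_000_000, 100_000):
    believers = int(audience * belief_rate)
    print(f"reach {audience:>13,} -> {believers:>9,} believers")
```

A billion hearers at 0.1% is a million believers; cut the stage down to a hundred thousand and you're left with a hundred.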

A 'political hit job'? Why the alt-right is accusing big tech of censorship

In January, Charles C “Chuck” Johnson filed a suit contesting his ban from Twitter back in May 2015.

Johnson, an American rightwing provocateur, has a long history of smearing and hunting political opponents. He runs a scurrilous news site, GotNews, and another that crowdsources bounties for damaging information on his self-selected foes. He was eighty-sixed from Twitter following outrage from other users after a tweet appealing for crowdsourcing to “take out” Black Lives Matter activist DeRay McKesson.

It was an early example of Twitter appearing to accede to user pressure in scrubbing rightwing accounts.

In recent months, social media companies have been more ready to sideline certain views and the users who promote them. Changes to Twitter’s rules in late 2017 saw numerous far-right accounts scrubbed. Medium recently banned “alt light” users like Mike Cernovich, Jack Posobiec and Laura Loomer. And this week, Alex Jones’s Infowars YouTube channel, with more than 2 million subscribers, has inched closer to a total ban after making allegations that CNN’s post-Parkland town hall was staged.

And as of Thursday, rightwing media were lamenting what they billed as a “purge” of prominent conservative, alt-right, and “classical liberal” accounts.

The lawyer acting for Johnson in California, Robert Barnes, says it was a clear example of Twitter “misusing their monopoly power to punish disfavored speech”, and that Johnson’s tweet was not a threat. Barnes points to reporting on internal Twitter memos about Johnson, claiming that it shows his ban was arbitrary and discriminatory.

Barnes’s website promotes him as a tribune for the underdog, and he says that this is part of what drew him to the case – in his view, Johnson is a David to Twitter’s Goliath. But he also says that he is committed to free speech, and he wanted to offer a model for other lawyers to take on social media firms on behalf of banned clients.

“The idea was to create a template that others could copycat,” he says. “Fortunately a lot of lawyers have looked at it and decided that it is the best way to proceed. I predict that by the end of the year there will be a dozen such suits against Twitter, Google and Facebook.”

Barnes’s suit is not in fact the first of its kind, but he may be right in saying that it won’t be the last.

Beginning in late 2017, a number of legal actions have been filed that draw on similar legal principles. They argue, among other things, that companies like Twitter and YouTube are de facto public forums, with a responsibility to guarantee free speech. They are thus challenging the commonsense view that such companies can ban whomever they like. Their lawyers think they can force tech giants to abandon what they describe as arbitrary censorship and liberal bias.

I just can't fathom how these people get their followers to line up behind them calling the actions of social media companies liberal bias, when what the companies are "censoring" are threats, incitement to violence, and elaborate fabrications of pure fiction crafted to generate as much hate speech as possible.

This is completely apart from the ludicrous assertion that a private company is a public forum. Twitter, Facebook, and YouTube don't have the power to exclude people from the internet. All they can do is take back the megaphone they freely provide to anyone willing to abide by standards they, as businesses, put in place to maintain the value of their services as advertising platforms.

A week or two after Reddit was mentioned in the media as a target of Russian propaganda, they put out an announcement trying to paper over their complete and total lack of action in dealing with the problem.

I found it a little comforting that the top-voted respondents were having none of the excuses and really took them to task for their failures.

That's a lot of work from spez to avoid mentioning or even acknowledging The_Donald