[Discussion] The Inconceivable Power of Trolls in Social Media

This is a follow-on to the nearly two-year-old topic "Trouble at the Kool-Aid Point." The intention is to provide a place to discuss the unreasonable power social media trolls hold over women and minorities, with a primary focus on video games (though other examples are certainly welcome).

Great book. Highly recommend.

How YouTube Drives People to the Internet’s Darkest Corners

The focus on Twitter and Facebook has people completely overlooking YouTube, where it gets wiiiild.

YouTube is the new television, with more than 1.5 billion users, and videos the site recommends have the power to influence viewpoints around the world.

Those recommendations often present divisive, misleading or false content despite changes the site has recently made to highlight more-neutral fare, a Wall Street Journal investigation found.

People cumulatively watch more than a billion YouTube hours daily world-wide, a 10-fold increase from 2012, the site says. Behind that growth is an algorithm that creates personalized playlists. YouTube says these recommendations drive more than 70% of its viewing time, making the algorithm one of the biggest deciders of what people watch.

The Journal investigation found YouTube’s recommendations often lead users to channels that feature conspiracy theories, partisan viewpoints and misleading videos, even when those users haven’t shown interest in such content. When users show a political bias in what they choose to view, YouTube typically recommends videos that echo those biases, often with more-extreme viewpoints.

Such recommendations play into concerns about how social-media sites can amplify extremist voices, sow misinformation and isolate users in “filter bubbles” where they hear largely like-minded perspectives. Unlike the sites of Facebook Inc. and Twitter Inc., where users see content from accounts they choose to follow, YouTube takes an active role in pushing to users information they likely wouldn’t otherwise have seen.

“The editorial policy of these new platforms is to essentially not have one,” said Northeastern University computer-science professor Christo Wilson, who studies the impact of algorithms. “That sounded great when it was all about free speech and ‘in the marketplace of ideas, only the best ones win.’ But we’re seeing again and again that that’s not what happens. What’s happening instead is the systems are being gamed and people are being gamed.”

YouTube says it recommends more than 200 million different videos in 80 languages each day, typically alongside clips users are currently watching or in personalized playlists on YouTube’s home page.

Long a place for entertainment, YouTube has recently begun trying to become a more reliable site for news, said YouTube Chief Product Officer Neal Mohan.

YouTube has been tweaking its algorithm since last autumn to surface what its executives call “more authoritative” news sources to people searching for information about breaking-news events. YouTube last week said it is considering a design change to promote relevant information from credible news sources alongside videos that push conspiracy theories.

After the Journal this week provided examples of how the site still promotes deceptive and divisive videos, YouTube executives said the recommendations were a problem. “We recognize that this is our responsibility,” said YouTube’s product-management chief for recommendations, Johanna Wright, “and we have more to do.”

YouTube engineered its algorithm several years ago to make the site “sticky”—to recommend videos that keep users staying to watch still more, said current and former YouTube engineers who helped build it. The site earns money selling ads that run before and during videos.

The algorithm doesn’t seek out extreme videos, they said, but looks for clips that data show are already drawing high traffic and keeping people on the site. Those videos often tend to be sensationalist and on the extreme fringe, the engineers said.
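As a toy illustration of what those engineers describe — ranking purely on engagement signals, never on content — here's a minimal Python sketch. Every name, signal, and weight in it is hypothetical and invented for illustration; YouTube's actual model isn't public:

# Hypothetical sketch of engagement-driven ("sticky") ranking.
# The ranker never inspects what a video says; it only rewards
# whatever the data suggests will keep people watching.

from dataclasses import dataclass

@dataclass
class Video:
    title: str
    click_through_rate: float    # fraction of impressions that get clicked
    avg_watch_fraction: float    # share of the video viewers actually watch
    session_continuation: float  # odds the viewer keeps watching afterward

def stickiness_score(v: Video) -> float:
    # Pure engagement math; content is never examined.
    return v.click_through_rate * v.avg_watch_fraction * (1.0 + v.session_continuation)

def recommend(candidates: list[Video], k: int = 3) -> list[Video]:
    # Surface whichever clips hold attention best.
    return sorted(candidates, key=stickiness_score, reverse=True)[:k]

pool = [
    Video("Local news recap", 0.02, 0.40, 0.30),
    Video("SHOCKING truth THEY hid from you", 0.09, 0.70, 0.60),
    Video("Cooking tutorial", 0.04, 0.55, 0.35),
]
for v in recommend(pool):
    print(f"{stickiness_score(v):.3f}  {v.title}")

Even with made-up numbers, the sensationalist clip wins, because shock reliably outperforms on exactly the signals being optimized — which is the dynamic the engineers are describing.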

Social Media Sites Can’t Decide How to Handle ‘Non-Offending’ Pedophiles

When Ender Wiggin was banned from Twitter last December, it wasn’t because he was a far-right troll or Nazi sympathizer. In fact, Wiggin had an army of pizzagaters harassing him all hours of the day, insisting he kill himself right up until the moment his account was disabled on December 14.

That’s because Ender—aka @enderphile—is the pseudonym of a “non-offending” or “anti-contact” pedophile: someone who is attracted to children, but claims to be against adult-child sex and child pornography. Inside that community, he’s known as the unofficial leader, and claims he’s been using social media to reduce the stigma associated with pedophilia, showing other pedophiles they can live lives without offending.

Accusations flying around a certain failed Alabama Senate candidate aside, pedophiles are a natural boogeyman for the far right, and a social media war between the two groups has been going on for a long time. It has created an insular online community of two extreme viewpoints, each desperately trying to get the other permanently banned.

“Twitter has permitted these pedophiles to exist and operate on their platform, advocating for the normalization of pedophilia,” says Grant J. Kidney, a self-described nationalist and pizzagate conspiracist “running for congress in 2020 on a pro-MAGA platform.” He campaigned for months to get Ender removed since, he says, pedophilia isn’t something to be championed—it’s an illness.

“These people, they’re not left, they’re not right. They’re sick people.”

Despite the ban, Ender was able to make a new account after weeks of being automatically removed — something he announced to the world in a January 31 thread, writing “It’s me, The Real Ender™, and I’m back on Twitter,” before launching into an attack on his main enemies and closing with the fact that he’s “ready to take some names and kick some asses.” His bravado was short-lived, though, as his account was suspended again Wednesday. Ender and others who support him believe these back-and-forth bans happen because Twitter hasn’t decided how to deal with pedophiles, and instead simply removes them when enough users send in reports — something that’s never in short supply.

This Is How A Popular Minecraft YouTube Star Lured An Underage Fan Into A Sexual Relationship

After going dark for more than a year amid accusations on social media of soliciting and distributing nude images of his adolescent fans, a Minecraft YouTuber named Marcus Wilton, better known online as LionMaker, has resurfaced.

His quiet reemergence is happening at a pivotal moment for how YouTube moderates its platform and for how society at large deals with allegations of sexual abuse.

Before Wilton’s disappearance, his LionMakerStudios YouTube channel had around half a million subscribers, with more than 30 Minecraft videos whose view counts ran into the millions. But rumors circulated in chat rooms for Minecraft — considered one of the most popular video games of all time — about Wilton’s behavior toward his young fans.

Things came to a head in December 2015, when Wilton confessed in a series of now-deleted tweets to having a sexual relationship with a UK-based YouTuber named Paige, known as Paige the Panda online, when she was underage — and to posting nude images of her on the platform. Wilton, who has yet to be formally charged with a crime, met Paige when she was 14. After the tweets were deleted several hours later, Wilton claimed he had been hacked.

And two months ago, a British YouTuber named Colossal tweeted a screenshot of an email Wilton wrote where he confessed to having had a sexual relationship with the underage girl. Paige, now 18, requested to have her last name withheld for privacy reasons and declined to comment.

After Wilton’s 2015 Twitter meltdown, the online backlash mounted. Allegations from two more minors accusing Wilton of sexual harassment and abuse would surface in a Vice article. Wilton denies both accusations. But what played out online would lead police in both Belgium and the UK to investigate him for distributing child pornography and statutory rape.

Wilton has spent the last several months in a psychiatric hospital outside of Antwerp, Belgium, as part of the police investigation that has been open for the better part of the last year. He told BuzzFeed News that, before that, he spent 10 months in detention as part of a Belgian legal process called arrest huis, where law enforcement can hold you for a period of time out of fear that you may be a danger to society or tamper with evidence pertaining to their investigation. BuzzFeed News has reached out to Belgian police for comment.

Initially, Wilton tried to continue making videos, which became darker and darker, littered with threats of legal action against any YouTuber who accused him of using his status in the Minecraft community to prey on his underage fans. Then, around June 2016, Wilton’s online presence vanished. He scrubbed his YouTube account of all the troubling videos he had made. He stopped tweeting. The LionMaker Instagram stopped posting. And it seemed as though he had finally left the community — that is, until now.

He Predicted The 2016 Fake News Crisis. Now He's Worried About An Information Apocalypse.

In mid-2016, Aviv Ovadya realized there was something fundamentally wrong with the internet — so wrong that he abandoned his work and sounded an alarm. A few weeks before the 2016 election, he presented his concerns to technologists in San Francisco’s Bay Area and warned of an impending crisis of misinformation in a presentation he titled “Infocalypse.”

The web and the information ecosystem that had developed around it were wildly unhealthy, Ovadya argued. The incentives that governed its biggest platforms were calibrated to reward information that was often misleading, polarizing, or both. Platforms like Facebook, Twitter, and Google prioritized clicks, shares, ads, and money over quality of information, and Ovadya couldn’t shake the feeling that it was all building toward something bad — a kind of critical threshold of addictive and toxic misinformation. The presentation was largely ignored by employees from the Big Tech platforms — including a few from Facebook who would later go on to drive the company’s News Feed integrity effort.

“At the time, it felt like we were in a car careening out of control and it wasn’t just that everyone was saying, ‘we’ll be fine’ — it’s that they didn't even see the car,” he said.

Ovadya saw early what many — including lawmakers, journalists, and Big Tech CEOs — wouldn’t grasp until months later: Our platformed and algorithmically optimized world is vulnerable — to propaganda, to misinformation, to dark targeted advertising from foreign governments — so much so that it threatens to undermine a cornerstone of human discourse: the credibility of fact.

But it’s what he sees coming next that will really scare the sh*t out of you.

“Alarmism can be good — you should be alarmist about this stuff,” Ovadya said one January afternoon before calmly outlining a deeply unsettling projection about the next two decades of fake news, artificial intelligence–assisted misinformation campaigns, and propaganda. “We are so screwed it's beyond what most of us can imagine,” he said. “We were utterly screwed a year and a half ago and we're even more screwed now. And depending how far you look into the future it just gets worse.”

That future, according to Ovadya, will arrive with a slew of slick, easy-to-use, and eventually seamless technological tools for manipulating perception and falsifying reality, for which terms have already been coined — “reality apathy,” “automated laser phishing,” and "human puppets."

Which is why Ovadya, an MIT grad with engineering stints at tech companies like Quora, dropped everything in early 2016 to try to prevent what he saw as a Big Tech–enabled information crisis. “One day something just clicked,” he said of his awakening. It became clear to him that, if somebody were to exploit our attention economy and use the platforms that undergird it to distort the truth, there were no real checks and balances to stop it. “I realized if these systems were going to go out of control, there’d be nothing to rein them in and it was going to get bad, and quick,” he said.

Today Ovadya and a cohort of loosely affiliated researchers and academics are anxiously looking ahead — toward a future that is alarmingly dystopian. They’re running war game–style disaster scenarios based on technologies that have begun to pop up, and the outcomes are typically disheartening.

Prederick wrote:

Wilton has spent the last several months in a psychiatric hospital outside of Antwerp, Belgium, as part of the police investigation that has been open for the better part of the last year. He told BuzzFeed News that, before that, he spent 10 months in detention as part of a Belgian legal process called arrest huis, where law enforcement can hold you for a period of time out of fear that you may be a danger to society or tamper with evidence pertaining to their investigation.

It's called "huisarrest," which is Dutch for "being grounded."

SPLC: The Alt-Right is Killing People

On December 7, 2017, a 21-year-old white male posing as a student entered Aztec High School in rural New Mexico and began firing a handgun, killing two students before taking his own life.

At the time, the news of the shooting went largely ignored, but the online activity of the alleged killer, William Edward Atchison, bore all the hallmarks of the “alt-right”—the now infamous subculture and political movement consisting of vicious trolls, racist activists, and bitter misogynists.

But Atchison wasn’t the first to fit the profile of the alt-right killer — that morbid milestone belongs to Elliot Rodger, the 22-year-old who in 2014 killed seven in Isla Vista, California, after uploading a sprawling manifesto filled with hatred of young women and interracial couples. (Atchison went by “Elliot Rodger” in one of his many online personas and lauded the “supreme gentleman,” a title Rodger gave himself that has since become a meme on the alt-right.)

Including Rodger’s murderous rampage, there have been at least 13 alt-right-related fatal episodes, leaving 43 dead and more than 60 injured. Nine of the 13 incidents counted here occurred in 2017 alone, making last year the most violent yet for the movement.

Milo has dropped his lawsuit.

I shall enjoy a quiet moment of schadenfreude.

So, post-Parkland, it is emphatically Not Good online, with social media basically flooded with conspiracy theories about "crisis actors" and so on.

How bad? A conspiracy video was trending at #1 on YouTube until the media pointed it out to YouTube.

Yep, I think you mean every recent tragedy as well, going back to Sandy Hook and the Boston Marathon bombing. It's just going supersonic now, with these Infowars-like outlets doing anything and everything to spin an event to their narrative.

Sickens me.

Why are our technological overlords so absolutely bad at stopping misinformation?

In the first hours after last October's mass shooting in Las Vegas, my colleague Ryan Broderick noticed something peculiar: Google search queries for a man initially (and falsely) identified as the gunman were returning Google News links to hoaxes created on 4chan, a notorious message board whose members were working openly to politicize the tragedy. Two hours later, he found posts going viral on Facebook falsely claiming the shooter was a member of Antifa. An hour or so after that, a cursory YouTube search returned a handful of similarly minded conspiracy videos — all of them claiming crisis actors were posing as shooting victims to gain political points. Each time, Broderick tweeted his findings.

Over the next two days, journalists and misinformation researchers uncovered and tweeted still more examples of fake news and conspiracy theories propagating in the aftermath of the tragedy. The New York Times' John Herrman found pages of conspiratorial YouTube videos with hundreds of thousands of views, many of them highly ranked in search results. Cale Weissman at Fast Company noticed that Facebook's Crisis Response page was surfacing news stories from alt-right blogs and sites like "End Time Headlines," rife with false information. I tracked how YouTube’s recommendation engine allows users to stumble down an algorithm-powered conspiracy video rabbit hole. In each instance, the journalists reported their findings to the platforms. And in each instance, the platforms apologized, claimed they were unaware of the content, promised to improve, and removed it.

This cycle — journalists, researchers, and others spotting hoaxes and fake news with the simplest of search queries, long before the platforms themselves — repeats itself after every major mass shooting and tragedy. Just a few hours after news broke of the mass shooting in Sutherland Springs, Texas, Justin Hendrix, a researcher and executive director of NYC Media Lab, spotted search results inside Google's 'Popular On Twitter' widget rife with misinformation. Shortly after an Amtrak train crash involving GOP lawmakers in January, The Daily Beast's Ben Collins quickly checked Facebook and discovered a trove of conspiracy theories inside Facebook's Trending News section, which is prominently positioned to be seen by millions of users.

By the time the Parkland school shooting occurred, the platforms had apologized for missteps during a national breaking news event three times in four months, in each instance promising to do better. But at their next opportunity to do better, they failed again. In the aftermath of the Parkland school shooting, journalists and researchers on Twitter were the first to spot dozens of hoaxes, trolls impersonating journalists, and viral Facebook posts and top "Trending" YouTube posts smearing the victims and claiming they were crisis actors. In each instance, these individuals surfaced this content — most of which is a clear violation of the platforms' rules — well before YouTube, Facebook, and Twitter did. The New York Times' Kevin Roose summed up the dynamic recently on Twitter, noting, "Half the job of being a tech reporter in 2018 is doing pro bono content moderation for giant companies."

Among those who pay close attention to big technology platforms and misinformation, the frustration over the platforms’ repeated failures to do something that any remotely savvy news consumer can do with minimal effort is palpable: despite countless articles, emails with links to violating content, and viral tweets, nothing changes. The tactics of YouTube shock jocks and Facebook conspiracy theorists hardly differ from those of their analogue predecessors; crisis actor posts and videos, for example, have been a staple of misinformation peddlers for years.

This isn't some new phenomenon. Still, the platforms are proving themselves incompetent at addressing it — over and over and over again. In many cases, they appear surprised that such content sits on their websites at all. And even their public relations responses suggest they've been caught off guard, with no plan in place for messaging when they slip up.

All of this raises a mind-bendingly simple question that YouTube, Google, Twitter, and Facebook have not yet answered: How is it that the average, untrained human can do something that multibillion-dollar technology companies that pride themselves on innovation cannot? And beyond that, how is it that — after multiple national tragedies politicized by malicious hoaxes and misinformation — such a question even needs to be put to them?

Maybe instead of deleting the account, they lock it so no one can post from it or to it. Then send a notice to everyone that ever retweeted or replied to the account that it was fake. Also put up a banner on the account saying it's a fake account, so anyone going to it will be notified.
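Something like this toy sketch, with an in-memory Platform class standing in for moderation hooks a network would have internally — every name here is invented for illustration, none of it is Twitter's actual API:

# Hypothetical lock-and-label moderation flow for a confirmed-fake account:
# freeze it, banner it, and warn everyone who amplified it, instead of
# deleting it and erasing the evidence.

from dataclasses import dataclass, field

@dataclass
class Platform:
    locked: set = field(default_factory=set)
    banners: dict = field(default_factory=dict)
    # account_id -> users who retweeted or replied to it (stub data)
    interactions: dict = field(default_factory=dict)
    outbox: list = field(default_factory=list)

    def quarantine_fake_account(self, account_id: str) -> None:
        # Freeze instead of delete: the record stays visible but inert.
        self.locked.add(account_id)
        self.banners[account_id] = "This account has been confirmed as fake."
        # Notify everyone who amplified or engaged with the account.
        for user in self.interactions.get(account_id, set()):
            self.outbox.append(
                (user, f"An account you interacted with ({account_id}) was confirmed fake.")
            )

p = Platform(interactions={"@fake_news_bot": {"@alice", "@bob"}})
p.quarantine_fake_account("@fake_news_bot")
print(p.locked, p.banners, p.outbox, sep="\n")

The design point is the poster's: locking preserves the evidence and warns the audience that amplified it, where outright deletion just memory-holes the whole episode.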

Inside Atomwaffen As It Celebrates a Member for Allegedly Killing a Gay Jewish College Student: ProPublica obtained the chat logs of Atomwaffen, a notorious white supremacist group. When Samuel Woodward was charged with killing 19-year-old Blaze Bernstein last month in California, other Atomwaffen members cheered the death, concerned only that the group’s cover might have been blown.

Late last month, ProPublica reported that the California man accused of killing a gay and Jewish University of Pennsylvania student was an avowed neo-Nazi and a member of Atomwaffen Division, one of the country’s most notorious extremist groups.

The news about the murder suspect, Samuel Woodward, spread quickly throughout the U.S. and abroad. Woodward was accused of fatally stabbing 19-year-old Blaze Bernstein and burying his body in an Orange County park.

The report, it turns out, was also taken up in the secretive online chats conducted by members of Atomwaffen Division, a white supremacist group that celebrates both Hitler and Charles Manson.

“I love this,” one member wrote of the killing, according to copies of the online chats obtained by ProPublica. Another called Woodward a “one man gay Jew wrecking crew.”

More soon joined in.

“What I really want to know is who leaked that sh*t about Sam to the media,” a third member wrote.

At least one member wanted to punish the person who had revealed Woodward’s affiliation with Atomwaffen.

“Rats and traitors get the rope first.”

Encrypted chat logs obtained by ProPublica — some 250,000 messages spanning more than six months — offer a rare window into Atomwaffen Division that goes well beyond what has surfaced elsewhere about a group whose members have been implicated in a string of violent crimes. Like many white supremacist organizations, Atomwaffen Division uses Discord, an online chat service designed for video gamers, to engage in its confidential online discussions.

In a matter of months, people associated with the group, including Woodward, have been charged in five murders; another group member pleaded guilty to possession of explosives after authorities uncovered a possible plot to blow up a nuclear facility near Miami.

My feelings on each and every abhorrent Atomwaffen Division member, particularly the cold-blooded murderers: