[Discussion] The Inconceivable Power of Trolls in Social Media

This is a follow-on to the nearly two year old topic "Trouble at the Kool-Aid Point." The intention is to provide a place to discuss the unreasonable power social media trolls have over women and minorities, with a primary focus on video games (though other examples are certainly welcome).

Yeah, Facebook is great for birthday reminders and for finding out when people get married or are having a baby.

And the Messenger thing is useful for communicating over the internet in lieu of SMS. Good for things like friends/family visiting other countries.

Something is breaking American politics, but it's not social media

Here’s something everyone knows: Social media is driving American politics into a ditch of partisanship. Political junkies log on and cocoon themselves in a bubble of friendly punditry, appealing fake news, and outrageous acts from the other side. Every retweet and every like is another moment of identity confirmation, another high five to our friends, another reminder that we’re right and they’re wrong.

The result is, well, this ugly mess — President Donald Trump, red and blue Americas, polls showing we fear and hate the other party more than ever before, conspiracy theories growing like weeds, a polity where agreement is impossible and everyone is angry. Damn you, Facebook! Curse you, Twitter! (Instagram, you’re cool.)

But what if this obvious analysis is wrong? What if social media isn’t driving rising polarization in American politics?

That’s the conclusion of a new paper by Levi Boxell, Matthew Gentzkow, and Jesse Shapiro. Their study, released recently through the National Bureau of Economic Research, tests the conventional wisdom about polarization on social media nine ways from Sunday and finds that it’s wrong, or at least badly incomplete.

Their approach is simple. Using data from the American National Election Studies, they compare the most web-savvy voters (the young, where 80 percent used social media in 2012) and the least web-savvy voters (the old, where fewer than 20 percent used social media in 2012) on nine different tests of political polarization. The measures cover everything from feelings about political parties to ideological consistency to straight-ticket voting, and the data shows how polarization changed among these groups between 1996 and 2012.

The results? On fully eight of the nine measures, “polarization increases more for the old than the young.” If Facebook is the problem, then how come the problem is worst among those who don’t use Facebook?
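The comparison the paper describes can be sketched as a simple difference-in-trends check. The numbers below are made up for illustration (the paper's actual data comes from the ANES, not from here); the point is only the shape of the argument:

```python
# Toy illustration (not the study's actual data) of the Boxell/Gentzkow/Shapiro
# comparison: did polarization rise faster among heavy or light social-media users?

def trend(measure_1996, measure_2012):
    """Change in a polarization measure between the two survey years."""
    return measure_2012 - measure_1996

# Hypothetical values for one polarization measure on a 0-1 scale.
young = {"1996": 0.30, "2012": 0.38}   # heavy social-media users
old   = {"1996": 0.32, "2012": 0.49}   # light social-media users

young_change = trend(young["1996"], young["2012"])
old_change = trend(old["1996"], old["2012"])

# If the old polarize faster despite barely using social media,
# social media is unlikely to be the main driver.
print(old_change > young_change)  # True with these toy numbers
```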

If Facebook is the problem, then how come the problem is worst among those who don’t use Facebook?

Fox News.

Ok, I admit I have not read more into it and don't have the time at the moment, but these results set off some warnings. Yes, the young use social media more, but Facebook is where more older social users are.
Of course, I am not discounting Fox News and AM radio or whatever. Not to mention sh*tty email forwards are still out there.

Call-Out Culture Is a Toxic Garbage Dumpster Fire of Trash

Mixed feelings about this, although I too am increasingly critical of so-called "Call-Out Culture". It does speak to something I saw mentioned in a previous article I posted here though, about how the most vicious fights in online political spaces now are not between people who disagree on 95% of issues, but between people who agree on 95% of issues.

Ah, here it is!

Frankly, it’s unnerving to examine too closely the kind of people we become in the Vortex. For one thing, there’s the embarrassing tendency to end up angrier at people closer to you on the political spectrum than those much further away, perhaps because an in-group’s integrity depends on closely policing its boundaries.
lunchbox12682 wrote:

Ok, I admit I have not read more into it and don't have the time at the moment, but these results set off some warnings. Yes, the young use social media more, but Facebook is where more older social users are.
Of course, I am not discounting Fox News and AM radio or whatever. Not to mention sh*tty email forwards are still out there.

Yeah, my mother-in-law got dragged (via her friends) into all the bullsh*t. Obama wanting Israel destroyed, Pizzagate, Seth Rich, everything. Eventually her friends on Facebook showed her posts about how Snopes couldn't be trusted, so here's alternative Snopes.

Anecdata, but I've witnessed firsthand how poor media literacy and Facebook can combine to cause an older person to consume all the worst fake, partisan news.

Prederick wrote:

This Mom’s Full-Time Job Is Posting To Instagram And This Is What It’s Like

Weeeeeeeeeeeiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiird.

So an upmarket Honey Boo Boo that's on IG instead of TLC?

Meant to post that one, thanks Gremlin.

Wired: THE DIRTY WAR OVER DIVERSITY INSIDE GOOGLE

FIRED GOOGLE ENGINEER James Damore says he was vilified and harassed for questioning what he calls the company’s liberal political orthodoxy, particularly around the merits of diversity.

Now, outspoken diversity advocates at Google say that they are being targeted by a small group of their coworkers, in an effort to silence discussions about racial and gender diversity.

In interviews with WIRED, 15 current Google employees accuse coworkers of inciting outsiders to harass rank-and-file employees who are minority advocates, including queer and transgender employees. Since August, screenshots from Google’s internal discussion forums, including personal information, have been displayed on sites including Breitbart and Vox Popoli, a blog run by alt-right author Theodore Beale, who goes by the name Vox Day. Other screenshots were included in a 161-page lawsuit that Damore filed in January, alleging that Google discriminates against whites, males, and conservatives.

What followed, the employees say, was a wave of harassment. On forums like 4chan, members linked advocates’ names with their social-media accounts. At least three employees had their phone numbers, addresses, and deadnames (a transgender person’s name prior to transitioning) exposed. Google site reliability engineer Liz Fong-Jones, a trans woman, says she was the target of harassment, including violent threats and degrading slurs based on gender identity, race, and sexual orientation. More than a dozen pages of personal information about another employee were posted to Kiwi Farms, which New York has called “the web’s biggest community of stalkers.”

Maybe it's high time Google cleaned house big time.

I don't know if it's because I'm overwhelmed by all these social media horror stories every day or if the internet is actually the worst thing ever, but I wonder if we'd actually be better off without it altogether.

All Followers Are Fake Followers - A New York Times exposé of a “black market” for online fame diagnoses the symptom of social-media despair, but misses its cause

In the summer of 2015, the game designer Bennett Foddy and I were sloshing down cocktails while waiting for prime dry-aged rib-eye steaks in Midtown Manhattan. We weren’t living large, exactly, but we did pause to assess our rising professional fortunes. Among them, both of us seemed to be blowing up on Twitter. “Where did all these followers come from?” I asked. We’d both added tens of thousands of apparent fans over the previous year or so.

Foddy, an unpresuming Australian with a doctorate in moral philosophy who now makes video games that purposely abuse their players, encouraged me not to get too chuffed about my entourage. We’d both been added to a list of accounts that are recommended to new Twitter users during the sign-up process, he explained. Many of our new followers were fake, created for the purposes of spam or resale. They had followed us automatically.

Even so, their effect was real: Foddy and I looked far more popular and important than our cronies. A colleague of Foddy’s at New York University, passed over by Twitter for such largesse, had become so agitated that he’d cornered Foddy to ask if he was buying followers. The whole matter was a Pandora’s box that I kept carefully hidden and firmly closed.

Last weekend, The New York Times opened it up. The paper ran an extensive investigative report about the companies that sell Twitter followers and retweets, and the people who buy them. Many of those people are famous already: The Times story opens with geometrically fractured portraits of the model Kathy Ireland, the athlete Ray Lewis, and the actor John Leguizamo. They are presented like modern mug shots of fraudsters caught in the act.

The report exposes Twitter as willfully duplicitous to users, advertisers, and investors—revelations that could (and should) harm the company’s value and reputation. But it also takes for granted that “real” followers are valid and valuable. The problem with Twitter—and with social media in general—isn’t that influence can be faked. It’s that it is seen to have so much significance in the first place.

Social media is giving us trypophobia

Something is rotten in the state of technology.

But amid all the hand-wringing over fake news, the cries of election-deforming Kremlin disinformation plots, the calls from political podia for tech giants to locate a social conscience, a knottier realization is taking shape.

Fake news and disinformation are just a few of the symptoms of what’s wrong and what’s rotten. The problem with platform giants is something far more fundamental.

The problem is these vastly powerful algorithmic engines are black boxes. And, at the business end of the operation, each individual user only sees what each individual user sees.

The great lie of social media has been to claim it shows us the world. And their follow-on deception: That their technology products bring us closer together.

In truth, social media is not a telescopic lens — as the telephone actually was — but an opinion-fracturing prism that shatters social cohesion by replacing a shared public sphere and its dynamically overlapping discourse with a wall of increasingly concentrated filter bubbles.

Social media is not connective tissue but engineered segmentation that treats each pair of human eyeballs as a discrete unit to be plucked out and separated off from its fellows.

Think about it, it’s a trypophobic’s nightmare.

Or the panopticon in reverse — each user bricked into an individual cell that’s surveilled from the platform controller’s tinted glass tower.

Little wonder lies spread and inflate so quickly via products that are not only hyper-accelerating the rate at which information can travel but deliberately pickling people inside a stew of their own prejudices.

First it panders, then it polarizes, then it pushes us apart.

We aren’t so much seeing through a lens darkly when we log onto Facebook or peer at personalized search results on Google, we’re being individually strapped into a custom-moulded headset that’s continuously screening a bespoke movie — in the dark, in a single-seater theatre, without any windows or doors.

Are you feeling claustrophobic yet?

It’s a movie that the algorithmic engine believes you’ll like. Because it’s figured out your favorite actors. It knows what genre you skew to. The nightmares that keep you up at night. The first thing you think about in the morning.

It knows your politics, who your friends are, where you go. It watches you ceaselessly and packages this intelligence into a bespoke, tailor-made, ever-iterating, emotion-tugging product just for you.

Its secret recipe is an infinite blend of your personal likes and dislikes, scraped off the Internet where you unwittingly scatter them. (Your offline habits aren’t safe from its harvest either — it pays data brokers to snitch on those too.)

No one else will ever get to see this movie. Or even know it exists. There are no adverts announcing it’s screening. Why bother putting up billboards for a movie made just for you? Anyway, the personalized content is all but guaranteed to strap you in your seat.

If social media platforms were sausage factories we could at least intercept the delivery lorry on its way out of the gate to probe the chemistry of the flesh-colored substance inside each packet — and find out if it’s really as palatable as they claim.

Of course we’d still have to do that thousands of times to get meaningful data on what was being piped inside each custom sachet. But it could be done.

Alas, platforms involve no such physical product, and leave no such physical trace for us to investigate.

JeremyK wrote:

I don't know if it's because I'm overwhelmed by all these social media horror stories every day or if the internet is actually the worst thing ever, but I wonder if we'd actually be better off without it altogether.

We wouldn't be; this is just a really bad time, as we're all now hooked into it on a massive level and haven't figured out how to live with it.

Prederick wrote:

Call-Out Culture Is a Toxic Garbage Dumpster Fire of Trash

Mixed feelings about this, although I too am increasingly critical of so-called "Call-Out Culture". It does speak to something I saw mentioned in a previous article I posted here though, about how the most vicious fights in online political spaces now are not between people who disagree on 95% of issues, but between people who agree on 95% of issues.

Gawd, finally people on the left are talking about this.

Facebook Pushes 'False Flag' Amtrak Conspiracies in Trending Section

The “People Are Saying” section of Facebook’s curated Trending News feature prominently surfaced several conspiracy theories about Wednesday’s Amtrak crash pushed by personal accounts, alleging a “false flag” attack by “commie-lib resisters” or Hillary Clinton herself.

For users logged out of Facebook, “People Are Saying” is the only section that surfaces on the topic page for the Amtrak crash in Charlottesville that left one dead and two others injured.

The train was carrying Republican members of Congress heading to a retreat. The one death and two serious injuries were reportedly sustained by the people in a truck hit by the oncoming train.

The top posts in “People Are Saying” are mostly written by people who are not public figures but who have shared posts by reputable news organizations.

“I want to know if Rep. Devin Nunes was on this train and how Hillary still has the power to order these kinds of strikes,” reads one post, affixed to an ABC News story.

Another post, linking to a Daily Caller article, claims it knows the “real reason Capitol Police… would not let any (legislators) get off of the train,” referring to a deeper conspiracy theory not listed in the linked article.

“This was a set up for a small arms and explosives attack upon Republican Congressmen and their families by either rabid, Commie-Lib-Dim ‘resisters,’ or Muslim jihadis, in, unfortunately, that order of probability,” wrote that user, whose post has 12 shares and 55 comments.

Facebook did not respond to a request for comment at press time.

The social network has continually struggled to keep disinformation and conspiracy theories off of its platform, often in emergency situations.

In October, Facebook’s “Safety Check” feature, which was created to notify friends and family of a user’s safety after an emergency situation, pushed websites like “TheAntiMedia” and “RedNewsHere” instead of traditional news sources.

Some of those websites solicited Bitcoin handouts on the Safety Check page, others sold bumper stickers, and one linked to “Funny Videos” in the hours after the Las Vegas massacre. Conspiracy theories proliferated about the Vegas shooter’s political motives on the Safety Check page.

After the debacle, a Facebook spokesperson promised that the service was “working to fix the issue,” and the page leaned more heavily on reputable news outlets hours later.

The same procedure does not appear to be in place for Facebook’s Trending News section.

Prederick wrote:

Facebook Pushes 'False Flag' Amtrak Conspiracies in Trending Section

“I want to know if Rep. Devin Nunes was on this train and how Hillary still has the power to order these kinds of strikes,” reads one post, affixed to an ABC News story.

Bodies would be stacked like cordwood if the Clintons were actually the deep state mass murderers they're made out to be.

Actually, brb. Gotta pitch Netflix for surefire replacement to House of Cards.

As people have pointed out on Twitter, these conspiracy theories require Democrats/Antifa/BLM/George Soros being both all-powerful geniuses and complete, utter idiots simultaneously.

@oneunderscore__ wrote:

You think Democrats tried to stage a terror attack in a Reichstag Fire incident, but did it by placing a truck with three of their own people in it, killing one of them and injuring two others, and doing no damage to the Congresspeople? You think this is how they'd do it?

Also, who is signing up for this? "Yes, Mr. Soros, put me in the Death Truck. This plan seems 100% sound. Just gonna wait in a truck and die to inconvenience but not maim rival congressmen. I will accept my payment in Liberal Heaven."

And yet, the response, verbatim, is "Can’t know for sure but, accident is not likely. Out of all the trains, of all the days, just happens to be a train with hundreds of Republicans on it.. yeah. It was more of a warning imo. But, really, who knows. One has to consider it possible that it was not a accident."

I work in news in the NYC Metro area, and we literally report on trains hitting people and cars almost bi-weekly. And yet, homeboy is all-in on this. Motivated reasoning, y'all.

I... kind of want to talk about Liberal Heaven, specifically the right's interpretation thereof.

oilypenguin wrote:

I... kind of want to talk about Liberal Heaven, specifically the right's interpretation thereof.

IMAGE(https://i.imgur.com/ywOpPB7.gif)

I can't believe I haven't posted the Deepfake stuff yet, which is yet another example of reality looking at fiction, scoffing, rolling up its sleeves and going "Hold my beer."

Why Reddit’s face-swapping celebrity porn craze is a harbinger of dystopia

“REALITY CHECK,” wrote a deeply troubled user on Reddit’s r/deepfakes subreddit, after news broke that the forum had become a repository for AI technology with far-reaching implications. “ALL OF THIS IS F**KING INSANE.”

The technology in question? A new tool, driven by machine learning, that lets users easily swap the faces of their favorite celebrities onto preexisting video images.

In other words, endless videos in which the faces of porn stars have been replaced by celebrity faces — or rather, algorithmic approximations of celebrity faces that reside deep within the Uncanny Valley.

On r/deepfakes, eerie approximations of Emma Watson, Emilia Clarke, Sophie Turner, Natalie Portman, Kristen Bell, Daisy Ridley, Ariana Grande, and many others borrow the expressions, moves, sultry-eyed camera stares, and orgiastic glee of the porn stars upon whose faces they’ve been transplanted. You can find an example here. (Warning: this is full-on porn, and quite jarring and eerie at that. Do not click if you don’t want to see jarring, eerie, full-on porn.)

The Redditor alarmed by all of this, poshpotdllr, came to the subreddit to express the fear that this new face-swapping craze would result in a planet full of celibate men using digital enhancements to feed their unattainable fantasies, while women would be forced to “[choose] between solitude and polygamy.”

That argument is a tad dire — we’re still a long way off from robot-based porn driving us all to gender-based separatism. The sudden rise of the face-swapping tool and the plausibility of its output does, however, raise a host of serious questions about where all this technology is headed.

Most important: What happens to issues of consent when videos like this proliferate across the internet? And what implications does it hold for the integrity of any video in a digital age?

Note: You can find the video mentioned above at the link. I did not link it for obvious reasons of it being INSANELY NSFW, but I did see it before I came in to work today, and it's.... a thing. I feel like Sophie Turner (whose face is used in the video) should sue.... a lot of people. I don't know what for, but I still feel like she should.

Meanwhile, Gfycat's apparently taking action against some of this stuff.

Pornographic videos that used new software to replace the original face of an actress with that of a celebrity are being deleted by a service that hosted much of the content.

San Francisco-based Gfycat has said it thinks the clips are "objectionable".

The creation of such videos has become more common after the release of a free tool earlier this month that made the process relatively simple.

The developer says FakeApp has been downloaded more than 100,000 times.

It works by using a machine-learning algorithm to create a computer-generated version of the subject's face.

To do this it requires several hundred photos of the celebrity in question for analysis and a video clip of the person whose features are to be replaced.

The results - known as deepfakes - differ in quality.

But in some cases, where the two people involved are of a similar build, they can be quite convincing.
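The "shared latent face" idea the excerpt gestures at can be sketched in a few lines. This is a toy with untrained linear layers and random data (real tools like FakeApp train deep convolutional autoencoders on hundreds of photos); it only shows the data flow of the swap, not a working result:

```python
import numpy as np

# Toy sketch of the shared-encoder / per-person-decoder architecture behind
# deepfake face swapping. Untrained weights: this illustrates the pipeline only.

rng = np.random.default_rng(0)

DIM, LATENT = 64 * 64, 128          # flattened 64x64 grayscale "face", latent size

encoder = rng.normal(0, 0.01, (LATENT, DIM))      # shared across both people
decoder_a = rng.normal(0, 0.01, (DIM, LATENT))    # trained only on person A's photos
decoder_b = rng.normal(0, 0.01, (DIM, LATENT))    # trained only on person B's photos

def encode(face):
    # Project a face into the shared latent space (expression, pose, lighting).
    return np.tanh(encoder @ face)

def decode(latent, decoder):
    # Reconstruct a face from the latent code using one person's decoder.
    return decoder @ latent

face_b = rng.random(DIM)            # a frame of person B from the source video

# The swap: encode B's expression/pose into the shared latent space, then
# reconstruct through A's decoder, yielding A's face wearing B's expression.
swapped = decode(encode(face_b), decoder_a)
print(swapped.shape)  # (4096,)
```

After training, the shared encoder learns pose and expression while each decoder learns one person's appearance, which is why the swap transfers expressions rather than just pasting a static face.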

I worry most about the 'revenge porn' possibilities against any women, not just celebrities.
Not to mention what a group like Gamergate would (sorry...WILL) do with it, and to whom.

oilypenguin wrote:

I... kind of want to talk about Liberal Heaven, specifically the right's interpretation thereof.

We need to start training this AI to always use Nic Cage's face.

'Fiction is outperforming reality': how YouTube's algorithm distorts truth

It was one of January’s most viral videos. Logan Paul, a YouTube celebrity, stumbles across a dead man hanging from a tree. The 22-year-old, who is in a Japanese forest famous as a suicide spot, is visibly shocked, then amused. “Dude, his hands are purple,” he says, before turning to his friends and giggling. “You never stand next to a dead guy?”

Paul, who has 16 million mostly teen subscribers to his YouTube channel, removed the video from YouTube 24 hours later amid a furious backlash. It was still long enough for the footage to receive 6m views and a spot on YouTube’s coveted list of trending videos.

The next day, I watched a copy of the video on YouTube. Then I clicked on the “Up next” thumbnails of recommended videos that YouTube showcases on the right-hand side of the video player. This conveyor belt of clips, which auto-play by default, is designed to seduce us to spend more time on Google’s video broadcasting platform. I was curious where they might lead.

The answer was a slew of videos of men mocking distraught teenage fans of Logan Paul, followed by CCTV footage of children stealing things and, a few clicks later, a video of children having their teeth pulled out with bizarre, homemade contraptions.

I had cleared my history, deleted my cookies, and opened a private browser to be sure YouTube was not personalising recommendations. This was the algorithm taking me on a journey of its own volition, and it culminated with a video of two boys, aged about five or six, punching and kicking one another.

“I’m going to post it on YouTube,” said a teenage girl, who sounded like she might be an older sibling. “Turn around and punch the heck out of that little boy.” They scuffled for several minutes until one had knocked the other’s tooth out.

The article's focus is primarily on "Did YouTube cost Hillary the election?" which... meh, but surprisingly missed focusing on how YT could be driving radicalization, similar to a blurb that Yonder pointed out from an article I posted to a different thread. Especially considering the far-right has set up shop on YouTube already.

garion333 wrote:
Prederick wrote:

Call-Out Culture Is a Toxic Garbage Dumpster Fire of Trash

Mixed feelings about this, although I too am increasingly critical of so-called "Call-Out Culture". It does speak to something I saw mentioned in a previous article I posted here though, about how the most vicious fights in online political spaces now are not between people who disagree on 95% of issues, but between people who agree on 95% of issues.

Gawd, finally people on the left are talking about this.

Published almost 3 years ago.

Great book. Highly recommend.

How YouTube Drives People to the Internet’s Darkest Corners

The focus on Twitter and Facebook has people completely overlooking YouTube, where it gets wiiiild.

YouTube is the new television, with more than 1.5 billion users, and videos the site recommends have the power to influence viewpoints around the world.

Those recommendations often present divisive, misleading or false content despite changes the site has recently made to highlight more-neutral fare, a Wall Street Journal investigation found.

People cumulatively watch more than a billion YouTube hours daily world-wide, a 10-fold increase from 2012, the site says. Behind that growth is an algorithm that creates personalized playlists. YouTube says these recommendations drive more than 70% of its viewing time, making the algorithm among the single biggest deciders of what people watch.

The Journal investigation found YouTube’s recommendations often lead users to channels that feature conspiracy theories, partisan viewpoints and misleading videos, even when those users haven’t shown interest in such content. When users show a political bias in what they choose to view, YouTube typically recommends videos that echo those biases, often with more-extreme viewpoints.

Such recommendations play into concerns about how social-media sites can amplify extremist voices, sow misinformation and isolate users in “filter bubbles” where they hear largely like-minded perspectives. Unlike Facebook Inc. and Twitter Inc. sites, where users see content from accounts they choose to follow, YouTube takes an active role in pushing information to users they likely wouldn’t have otherwise seen.

“The editorial policy of these new platforms is to essentially not have one,” said Northeastern University computer-science professor Christo Wilson, who studies the impact of algorithms. “That sounded great when it was all about free speech and ‘in the marketplace of ideas, only the best ones win.’ But we’re seeing again and again that that’s not what happens. What’s happening instead is the systems are being gamed and people are being gamed.”

YouTube says it recommends more than 200 million different videos in 80 languages each day, typically alongside clips users are currently watching or in personalized playlists on YouTube’s home page.

Long a place for entertainment, YouTube has recently begun trying to make itself a more reliable site for news, said YouTube Chief Product Officer Neal Mohan.

YouTube has been tweaking its algorithm since last autumn to surface what its executives call “more authoritative” news sources to people searching about breaking-news events. YouTube last week said it is considering a design change to promote relevant information from credible news sources alongside videos that push conspiracy theories.

After the Journal this week provided examples of how the site still promotes deceptive and divisive videos, YouTube executives said the recommendations were a problem. “We recognize that this is our responsibility,” said YouTube’s product-management chief for recommendations, Johanna Wright, “and we have more to do.”

YouTube engineered its algorithm several years ago to make the site “sticky”—to recommend videos that keep users staying to watch still more, said current and former YouTube engineers who helped build it. The site earns money selling ads that run before and during videos.

The algorithm doesn’t seek out extreme videos, they said, but looks for clips that data show are already drawing high traffic and keeping people on the site. Those videos often tend to be sensationalist and on the extreme fringe, the engineers said.
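The engineers' description boils down to a simple objective: rank candidates by expected watch time. A minimal sketch of that objective (my assumption of the shape, with hypothetical numbers; YouTube's real system is a large neural network) shows why sensational clips float to the top without the algorithm ever "seeking" them:

```python
# Minimal sketch of watch-time-driven ranking: no content judgment at all,
# just "what keeps people watching longest." Titles and numbers are hypothetical.

candidates = [
    # (title, predicted watch-minutes per impression)
    ("local news recap",            1.2),
    ("calm explainer",              2.0),
    ("shock conspiracy deep-dive",  6.5),
    ("celebrity drama compilation", 5.1),
]

def recommend(videos, k=2):
    """Return the k videos expected to keep the viewer watching longest."""
    return [title for title, minutes in
            sorted(videos, key=lambda v: v[1], reverse=True)[:k]]

print(recommend(candidates))  # ['shock conspiracy deep-dive', 'celebrity drama compilation']
```

Because the score is pure predicted engagement, any content that reliably holds attention, including the sensational and the fringe, wins the ranking by construction.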