Social Media’s Violence and Misinformation Problem


Social media platforms have been designed both to keep users engaged and to modify user behavior through advertising. This is no big secret; it is social media strategy 101. Anyone can engage social media advertising services.

New research suggests that social media has contributed to an increase in political violence through the uninhibited spread of misinformation. For example, the unfolding human rights catastrophe in Myanmar and the persecution of the Rohingya coincided with the arrival of Facebook.

Yesterday, Mark Zuckerberg announced changes to Facebook intended to help curb suspicious content, including content that uses misinformation to provoke violence:

“One of the most important responsibilities we have as a company is to keep people safe and stop anyone from abusing our service

In 2016, in addition to identifying these … threats, we also faced coordinated information operations with networks of fake accounts spreading divisive content and misinformation.”

That Facebook is taking steps to improve detection of ‘bad content’ is positive, but it really is too little, too late. Facebook isn’t the only social media platform that contains hate speech; it has also proliferated on Twitter.

Social media virality often sees the most polarising, nasty messaging gain the most attention: messaging developed with the intention of increasing post engagement by encouraging emotions to spin out of control. The post below, which appeared in my Twitter feed this morning, is a clear example of this (at the time of writing, 27k people were talking about the post).

This kind of post is a textbook example of social media interaction designed to incite emotion in order to increase engagement. Read the comments:

Social media companies don’t know the true identities of their advertisers

It is well established that social media companies know more about us than we know about ourselves, and that companies use this knowledge to market products and ideas to us, to ‘manipulate’ us into making a purchase, for example. However, social media companies don’t always know the true identities of their advertisers:

New research shows a correlation between Facebook use and violence

Karsten Müller and Carlo Schwarz, researchers at the University of Warwick, studied anti-refugee attacks in Germany: 3,335 over a two-year span. For each attack, they analyzed the local community across a range of variables. One thing stood out: towns where Facebook use was higher than average, like Altena, experienced more attacks on refugees.

“That held true in virtually any sort of community — big city or small town; affluent or struggling; liberal haven or far-right stronghold — suggesting that the link applies universally.”

Fake News: Virality in Developing Countries

In developing countries, access to information technology has dramatically increased the speed and reach of communication. When I was living in Indonesia, I received multiple messages aimed at creating hysteria, paranoia, and moral panic around religious and social minorities. The origin of these messages, which spread rapidly through communities, was always unknown.

A high-profile recent case was a false claim circulating on social media in Indonesia that Beijing was seeking to wage biological warfare against the country. The viral hoax prompted the Chinese embassy to issue a statement saying that the reports were “misleading.”

The unfolding human rights catastrophe in Myanmar

Since the end of August 2017, more than 700,000 Rohingya refugees have fled escalating violence in Myanmar for Bangladesh, where they are staying in refugee camps.

In an August 15 report, Reuters documented more than 1,000 examples of posts, comments and pornographic images attacking the Rohingya and other Muslims on Facebook. Social media companies continue to fail to address this problem.

In March, a United Nations investigator said Facebook was used to incite violence and hatred against the Muslim minority group:

The platform, she said, had “turned into a beast.”

Four months after Zuckerberg’s pledge to act, here is a sampling of posts from Myanmar that were viewable this month on Facebook:

One user posted a restaurant advertisement featuring Rohingya-style food. “We must fight them the way Hitler did the Jews, damn kalars!” the person wrote, using a pejorative for the Rohingya. That post went up in December 2013.

The Problem

The problem here isn’t what Mark Zuckerberg says it is.

He says Facebook has “been investing heavily to improve safety, security and privacy – and to defend against these coordinated inauthentic campaigns…”

Because “authenticity matters.”

The problem is, in part, that:

“We are all carrying around devices that are suitable for mass behavior modification.”

Social media platforms have been designed to engage users and encourage them to act, and anyone can engage these services. We are seeing the impact of this in political cultures and in acts of political violence across the world.

Facebook’s Claims

An analysis of 1,400 successful ad campaign case studies found that campaigns with purely emotional content performed about twice as well (31% vs. 16%) as those with only rational content. The New York Times viral content study showed that some negative emotions contribute more to virality than others, most notably anger.

The problem is inherent to the social media business model, which involves finding customers (i.e. advertisers) who intend to modify the behavior of users within a targeted demographic (i.e. us laypeople browsing our timelines), and using the platform to manipulate them into some form of action or response. Yet Facebook claims that it doesn’t have the resources to curb the problem:

“As a company we don’t have all the investigative tools and intelligence that governments have, which makes it hard to always attribute particular abuse to particular countries or groups.”

I call bullshit. Facebook has more investigative tools and intelligence than governments will ever have. It has engaged in mass behavior modification without the consent of its users. Facebook has the resources and the technology, but it does not have the will, and it is likely frightened of the implications of admitting fault.
