Social media posts may signal whether a protest will turn violent

The USC researchers also found that people are more likely to endorse violence when they moralize the issue that they are protesting — that is, when they see it as an issue of right and wrong. That holds true when they believe that others in their social network moralize the issue, too.

“Extreme movements can emerge through social networks,” said the study’s corresponding author, Morteza Dehghani, a researcher at the Brain and Creativity Institute at USC. “We have seen several examples in recent years, such as the protests in Baltimore and Charlottesville, where people’s perceptions are influenced by the activity in their social networks. People identify others who share their beliefs and interpret this as consensus. In these studies, we show that this can have potentially dangerous consequences.”

The scientists analyzed 18 million tweets posted during the 2015 Baltimore protests over the death of 25-year-old Freddie Gray, who died as police took him to jail. Researchers used a deep neural network — an advanced machine learning technique — to detect moralized language on Twitter.

They investigated the association between moral tweets and arrest rates, a proxy for violence. This analysis showed that the number of hourly arrests made during the protests was associated with the number of moralized tweets posted in previous hours.
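The association described above can be illustrated with a simple lagged correlation between hourly counts. The sketch below uses synthetic data and a plain Pearson correlation as a stand-in for the study's actual time-series analysis; the numbers and the two-hour lag are assumptions for illustration only.

```python
import numpy as np

# Illustrative sketch only: synthetic hourly counts, not the study's data.
rng = np.random.default_rng(0)

hours = 48
moral_tweets = rng.poisson(lam=50, size=hours).astype(float)

# Fabricate arrest counts that partly track tweet volume from 2 hours earlier.
lag = 2
arrests = 0.5 * np.roll(moral_tweets, lag) + rng.poisson(lam=3, size=hours)

def lagged_corr(x, y, k):
    """Pearson correlation between x[t - k] and y[t]."""
    if k == 0:
        return np.corrcoef(x, y)[0, 1]
    return np.corrcoef(x[:-k], y[k:])[0, 1]

# Scan a few lags; the built-in 2-hour dependence should stand out.
for k in range(5):
    print(f"lag {k}h: r = {lagged_corr(moral_tweets, arrests, k):+.2f}")
```

In real data, of course, establishing that tweets in earlier hours predict later arrests (and not the reverse) requires far more careful modeling than a raw correlation.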

Recent examples of movements tied to social media include the #marchforourlives campaign for gun control, the #metoo movement against sexual assault and harassment, and #blacklivesmatter, a campaign against systemic racism that began in 2014 after the police-involved shooting death of Michael Brown, 19, in Ferguson, Mo.

An example involving more violence is the Arab Spring revolution, which began in Tunisia in late 2010 and set off protests in Egypt, Libya and other nations, forcing changes in their leadership. In Syria, clashes escalated into a war that has killed hundreds of thousands of people and displaced millions more.

Detecting moralization online

The scientists developed a model for detecting moralized language based on a prior deep learning framework that can reliably identify text evoking moral concerns associated with different types of moral values and their opposites. "Moral Foundations Theory" defines these dueling values: care versus harm, fairness versus cheating, loyalty versus betrayal, authority versus subversion, and sanctity versus degradation.
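To make the detection task concrete, here is a deliberately crude sketch that counts hits against small hand-picked keyword lists for each moral foundation. This is a toy stand-in for the study's deep neural network, and the word lists are illustrative assumptions, not the researchers' lexicon.

```python
# Toy moral-language scorer: keyword matching per moral foundation.
# The keyword sets below are illustrative assumptions only.
FOUNDATION_KEYWORDS = {
    "care/harm": {"harm", "hurt", "protect", "suffer", "cruel"},
    "fairness/cheating": {"fair", "unfair", "justice", "cheat", "rights"},
    "loyalty/betrayal": {"loyal", "betray", "solidarity", "traitor"},
    "authority/subversion": {"authority", "obey", "law", "defy", "riot"},
    "sanctity/degradation": {"pure", "sacred", "disgust", "degrade"},
}

def moral_score(text: str) -> dict:
    """Return per-foundation keyword-hit counts for a piece of text."""
    tokens = set(text.lower().split())
    return {name: len(tokens & words)
            for name, words in FOUNDATION_KEYWORDS.items()}

print(moral_score("This cruel and unfair law must not stand"))
```

A deep neural network replaces these brittle word lists with learned representations, which is what lets the researchers' model generalize across the varied, informal language of tweets.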

Moralization and political polarization are exacerbated by online "echo chambers," researchers say. These are social networks where people connect with other like-minded people while distancing themselves from those who don't share their beliefs.

Protests, social media and violence

Social media data help researchers illuminate real-world social dynamics and test hypotheses, explained Joe Hoover, a lead author of the paper and doctoral candidate in psychology at the USC Dornsife College of Letters, Arts and Sciences. “However, as with all observational data, it can be difficult to establish the statistical and experimental control that is necessary for drawing reliable conclusions.”

To address this limitation, the scientists conducted a series of controlled behavioral studies, each with more than 200 people. Researchers first asked participants to read a paragraph about the 2017 clashes over the removal of Confederate monuments in Charlottesville, Va. Then the researchers asked how much participants agreed or disagreed with statements about the use of violence against far-right protesters.

The work was supported by a grant from the U.S. Department of Defense. Other study co-authors were Marlon Mooijman of Northwestern University, Hoover from USC Dornsife and the Brain and Creativity Institute at USC, and Ying Lin and Heng Ji of the Rensselaer Polytechnic Institute.