The American Political Science Association held its annual conference in Boston last month, drawing thousands of academics and graduate students to the city for days of panels and presentations. While there’s only so much one can take in at such a large conference, I was able to attend several presentations that offered observations relevant to contemporary discussions of free speech, both in America and globally. Two papers in particular present interesting findings on social media and free speech, in the United States and abroad.
The first paper, “Selective Avoidance on Social Media: A Comparative Study of Western Democracies,” was presented by a team of researchers led by Marko Soric and Qinfeng Zhu of the City University of Hong Kong. The paper examined how social media users in the United States, France, and the United Kingdom deal with people in their social media networks who espouse political views with which they disagree. The researchers found that selective avoidance behaviors are much more common in the United States than in France or the U.K.
The researchers noted that there are two broad strategies that individuals can use if they wish to avoid opinions that conflict with their own: first, by removing dissonant content (hiding or muting); and second, by dissolving ties with the holders of those opinions (such as by unfriending or unfollowing).
They also found two interesting ways in which American social media users differ from their counterparts in France and the U.K. The level of “crosscutting discussion” (i.e., discussion of politics with persons with political views different from those of the respondent) was significantly higher in the United States sample than in either the French or U.K. samples. Americans were also significantly more likely to employ a “confrontational discussion style” (i.e., arguing with someone holding different views or intentionally starting a discussion using hostile or inflammatory words) than were respondents in France or the U.K.

Those differences corresponded with different forms of selective avoidance. Perhaps unsurprisingly given their greater use of confrontational style, Americans were much more likely than French or U.K. respondents to unfriend or unfollow (29.8 percent) and to hide content (24.6 percent). Also unsurprisingly, confrontational discussion style was found to be a significant predictor of both hiding content and unfriending or unfollowing, as was ideological extremity. Finally, there were differences in strategies used by those who engaged in cross-cutting discussions, regardless of whether they used confrontational style: In the United States, cross-cutting discussions were much more likely to result in unfriending or unfollowing, while in the other countries the result was more likely to be hiding content.
The second paper, “#No2Sectarianism: Experimental Approaches to Reducing Sectarian Hate Speech Online,” by Alexandra A. Siegel and Vivienne Badaan of New York University, reported on an experiment testing an effort to reduce the expression of online bias.
The researchers originally identified 1,000 Arabic-language Twitter users who had issued five or more anti-Shiite tweets over the previous six months, had fewer than 10,000 followers, had tweeted at least once in the previous six hours, and whose accounts were at least two months old. (The researchers defined anti-Shiite tweets as those including certain common epithets, such as “Party of the Devil” and “Salafist.”) Of those 1,000 users, 202 were banned by Twitter for pro-ISIS or other extremist tweets before the experiment concluded, leaving a final sample of 798. The researchers sought to determine whether responding to those users in a manner that increased the salience of superordinate identities (e.g., as Muslims or Arabs rather than as Sunnis) would decrease the users’ propensity to engage in anti-Shiite speech.
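The paper’s stated inclusion criteria can be summarized as a simple filter. The sketch below is purely illustrative: the field names are hypothetical stand-ins, the two-month account-age threshold is approximated as 60 days, and the actual study of course drew this information from the Twitter API rather than from dictionaries like these.

```python
from datetime import datetime, timedelta, timezone

def qualifies(user, now=None):
    """Apply the paper's stated inclusion criteria to one candidate account.

    `user` is a hypothetical record with precomputed fields; the real
    pipeline would derive these from Twitter API data.
    """
    now = now or datetime.now(timezone.utc)
    return (
        user["anti_shiite_tweets_6mo"] >= 5              # 5+ anti-Shiite tweets in past 6 months
        and user["followers"] < 10_000                    # fewer than 10,000 followers
        and now - user["last_tweet"] <= timedelta(hours=6)  # tweeted within the past 6 hours
        and now - user["created"] >= timedelta(days=60)     # account at least ~2 months old
    )
```

Applying such a filter to a large pool of candidate accounts would yield the kind of sample (1,000 users) the researchers describe.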
In order to implement the study, the researchers developed a bot which responded to anti-Shiite tweets with one of several automated responses:
- A “placebo” (“That language sows (sectarian) discord [or] strife”)
- A tweet which evoked common national identity (“That language sows (sectarian) discord [or] strife. We are all Arab.”)
- A tweet which evoked common religious identity (“That language sows (sectarian) discord [or] strife. We are all Muslim.”)
- A tweet which evoked common national identity while invoking elite authority (“Many political leaders say that language sows (sectarian) discord [or] strife. We are all Arab.”)
- A tweet which evoked common religious identity while invoking elite authority (“Many religious leaders say that language sows (sectarian) discord [or] strife. We are all Muslim.”)
Additionally, a control group received no reply from the bot.
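The design above — six arms in total, counting the no-reply control — amounts to randomly assigning each sampled user to one reply template or to no reply at all. A minimal sketch follows; the arm labels and the simple uniform randomization are assumptions for illustration (the message texts are the English glosses given above, and the actual study may have used a different assignment scheme):

```python
import random

# Six experimental arms; values are English glosses of the reply templates
# described above. `None` marks the no-reply control group.
TREATMENTS = {
    "placebo": "That language sows (sectarian) discord [or] strife.",
    "national": "That language sows (sectarian) discord [or] strife. We are all Arab.",
    "religious": "That language sows (sectarian) discord [or] strife. We are all Muslim.",
    "national_elite": ("Many political leaders say that language sows (sectarian) "
                       "discord [or] strife. We are all Arab."),
    "religious_elite": ("Many religious leaders say that language sows (sectarian) "
                        "discord [or] strife. We are all Muslim."),
    "control": None,
}

def assign_treatments(user_ids, seed=0):
    """Uniformly assign each user to one of the six arms (illustrative only)."""
    rng = random.Random(seed)
    arms = list(TREATMENTS)
    return {uid: rng.choice(arms) for uid in user_ids}
```

A user assigned to any arm other than `"control"` would then receive the corresponding automated reply from the bot after an anti-Shiite tweet.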
Although all of the treatments were associated with reductions in anti-Shiite hate speech both two weeks and one month later, the only statistically significant treatment was the tweet that evoked common religious identity while invoking elite authority; it was associated with a 29 percent reduction in the use of anti-Shiite speech after two weeks and a 24 percent reduction after one month. Interestingly, the placebo message, which merely stated that a tweet might cause strife, had almost as strong a negative association as most of the other message types. The authors suggest that merely receiving a critical message may be enough to reduce the subsequent use of hate speech.
Conferences like APSA’s frequently showcase in-progress research on its way to publication in peer-reviewed journals; where the researchers take their work from here, and whether it will meet the standards of peer review, remain to be seen. Hopefully, however, their findings will be read and discussed more widely, so that their implications for today’s debates over free speech can be considered.