Online hate speech is on the rise. All available academic and government sources indicate a year-on-year increase in the number of people exposed to hateful content on social media, in comments on online news items, and on websites. Over half (53%) of UK adult Internet users reported seeing hateful content online in 2018, up from 47% in 2017. In 2016, 34% of 12- to 15-year-olds recalled seeing hateful content online; by 2018 this figure had risen to 45%. Of those who witnessed online hate, fewer than half took action in relation to the most recent incident.
Survey data captures only a snapshot of the online hate phenomenon. Data science methods can provide a real-time view of hate speech perpetration in action, generating a more complete picture.
In 2016 and 2017 the Brexit vote, and a string of terror attacks in the UK, were followed by significant and unprecedented increases in online hate speech and offline hate crime. Although the production of online hate speech increased dramatically in the wake of all these events, it was less likely to be retweeted in volume or to survive for long periods of time. Where hate speech was retweeted, it emanated from a core group of like-minded people who sought out each other’s messages. Hate speech produced around the Brexit vote in particular was largely driven by a small number of Twitter accounts: around 50% of anti-Muslim hate speech was produced by only 6% of users, many of whom were self-declared anti-Islam.
Many governments now recognise the pernicious problem of online hate speech, and how social media platforms are being manipulated by far-right groups and nefarious states to increase political polarisation to their advantage. If we understand our online activities as extensions of our offline lives, rather than as some isolated virtual experience with no consequence beyond the Internet, then phenomena like online hate speech are likely reaching into physical space. Yet until recently there has been a lack of evidence on the effect of online hate speech and increasing online polarisation on community tensions on the streets.
To examine the link between online activity and offline consequences, researchers statistically modelled the effect of online hate speech on the incidence rate of hate crimes on the streets. An increase in anti-Muslim and anti-Black speech on Twitter was associated with an increase in racially and religiously aggravated violence, criminal damage, and harassment. This association held when controlling for factors known to predict hate crime, including educational attainment, age, employment, and race. Predictions showed that in an area with a 70% black and minority ethnic population and 300 hate tweets posted per month, the incidence rate of racially and religiously aggravated violence increased by up to 100%.
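To make that final figure concrete: in count models of the kind typically used for incidence rates (such as Poisson or negative binomial regression), each coefficient acts multiplicatively on the expected rate through an exponential. The short sketch below is illustrative only; the coefficient value is invented for the example, not taken from the study. It simply shows how an "up to 100% increase" corresponds to an incidence rate ratio of roughly 2.

```python
import math

def incidence_rate_ratio(coef, delta):
    """Multiplicative change in the expected incidence rate for a
    `delta`-unit increase in a predictor with coefficient `coef`,
    as in a Poisson or negative binomial regression."""
    return math.exp(coef * delta)

# Hypothetical coefficient (invented for illustration): effect of each
# additional 100 hate tweets per month on the expected offence count.
beta_per_100_tweets = 0.231

# 300 hate tweets per month = 3 units of 100 tweets.
irr = incidence_rate_ratio(beta_per_100_tweets, 3)
print(round(irr, 2))  # close to 2.0, i.e. roughly a 100% increase
```

The point of the exponential link is that predictor effects compound multiplicatively rather than additively, which is why a modest-looking coefficient can translate into a doubling of the incidence rate at high tweet volumes.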
It’s not possible to conclude that online hate speech directly causes offline hate crime. What we do know is that social media is now part of the formula of hate crime. A hate crime is a process, not a discrete act, with victimisation ranging from hate speech through to violent attacks. This process is set in geographical, social, historical, and political context. Technological context must now form part of this conceptualisation.
The big three social media companies introduced hate speech policies following pressure from national and supranational governments. But despite these efforts, it remains evident that social media, and in particular new platforms with strong free speech principles (such as Voat, Gab and 8chan), have been widely infected with a casual low-level intolerance for the racial Other. With interdisciplinary work researchers are now closer to showing when and where this online intolerance spills onto the streets in violent ways.
Featured Image Credit: “social media” by Pixelkult. CC0 public domain via Pixabay.