On March 15th, 2019, a white nationalist opened fire during Friday prayers at two mosques in Christchurch, New Zealand, killing fifty Muslims and injuring at least fifty others. The attack was the largest mass shooting in New Zealand’s history and came as a shock to a small and remote island nation that generally sees itself as free from the extreme violence and terrorism seen elsewhere in the world. The victims included New Zealand citizens, migrants, and, in an extremely cruel twist of fate, Muslim refugees who had fled the war in Syria.
To the horror of many, the attacker used Facebook to live-stream the massacre at Al Noor Mosque. Facebook and YouTube were slow to take the video down, and internet users were still uploading and watching it several hours after the suspect had been arrested. The video was amplified via Reddit, and users of the far-right message board 8chan applauded the attack in real time. One anonymous poster wrote: “I have never been this happy.”
Facebook, Twitter, and other social media sites have been used repeatedly by white nationalists to attack minorities. The perpetrator was most likely radicalised online and connected with other white supremacists in Europe. Mainstream social media platforms are frequently used to attack refugees: on Twitter, for example, you can find hashtags such as #fuckrefugees, #RapeRefugees, #MuslimInvasion, and #NotWelcomeRefugees. Some research has even identified a close, causal link between online hate speech and offline violence targeting refugees.
While social media is often used to attack minorities and refugees, it has also provided a platform to support them. Refugees frequently use social media to navigate their way to Europe and to communicate back home. Advocacy organisations have also used social media in support of refugee rights. In September 2015, the image of Alan Kurdi, a Syrian toddler whose body washed up on a Turkish beach, made front pages across the world. The photograph led thousands to tweet #welcomerefugees and take to the streets. In the UK, 38 Degrees initiated an online petition demanding local councils accept more refugees, which was signed by 137,000 people. Similar social media campaigns were also active in Germany, Ireland, Poland, Austria, Canada, the United States, and New Zealand. They mobilised people to take part in vigils and to offer a bed, a meal, or an English lesson to a refugee. In Australia, for instance, GetUp co-organised vigils across the country under the hashtag #lightthedark; over 10,000 people attended and urged then Prime Minister Tony Abbott to welcome more refugees.
Nevertheless, there is a need to regulate social media companies to ensure they do not disseminate hate speech or violent attacks. As New Zealand’s prime minister, Jacinda Ardern, explained immediately after the attacks: “We cannot simply sit back and accept that these platforms just exist and that what is said on them is not the responsibility of the place where they are published. They are the publisher. Not just the postman.” The Australian government moved quickly to criminalise the sharing of “abhorrent violent material” online. New laws, passed in early April, stipulate that social media companies must remove this content “expeditiously” or face fines and/or imprisonment. Germany passed a similar law in 2017, which requires social media companies to remove objectionable material within 24 hours or face fines of up to €50 million.
Critics of these new laws suggest they will limit free speech, be misused by autocrats, and potentially even play into the hands of the far right. Others have argued politicians must address a deeper and more difficult structural issue: monopoly control of the internet. Yet even Facebook CEO Mark Zuckerberg has acknowledged the need for global regulation of big tech companies. In a Washington Post op-ed he argued that “Internet companies should be accountable for enforcing standards on harmful content.” He called for an independent body to set standards for harmful content and to monitor compliance.
New Zealand and France are already taking a leadership role in regulating online hate speech internationally. On May 15th they co-hosted a summit in Paris to encourage tech companies and concerned countries to sign up to the “Christchurch Call,” a pledge to eliminate violent extremist content online. Since the Christchurch attacks, Ardern has been consistently strong on the need to act internationally, while France has already sought innovative ways to regulate. In January 2019, Emmanuel Macron’s government embedded a team of senior civil servants in Facebook to help regulate hate speech online. France currently holds the G7 presidency and will also host a meeting of G7 digital ministers in May. This global initiative still faces many challenges: its language will need to be precise and binding to have force, and states must not only sign the pledge but also commit to regulation within their national jurisdictions to enforce it.
Featured Image Credit: “social media” by Pixelkult. CC0 public domain via Pixabay.