EDITOR’S NOTE: This article was originally published in the Spring 2019 magazine.
On October 27, 2018, Robert Bowers walked into a Pittsburgh synagogue and opened fire, killing 11 worshippers and injuring two others and four police officers. Earlier that morning, Bowers, a frequent user of the social media platform Gab, had posted a message to his account reading, “HIAS likes to bring invaders in that kill our people. I can’t sit by and watch my people get slaughtered. Screw your optics, I’m going in.” Bowers’ consistently violent posts, coupled with his radicalization in Neo-Nazi online communities, reignited a national debate over social media platforms’ strict commitment to protecting free speech, even when that speech is hateful or threatening.
It’s become a mainstay of modern Neo-Nazi recruitment to focus on disseminating messages online; on platforms like Twitter, accounts can reach huge potential audiences with minimal effort and expertise. Atomwaffen Division, the most violent modern sect of Neo-Nazism, which has attracted followers like Robert Bowers and draws inspiration from Dylann Roof, organizes exclusively online, focusing on pulling new members from the followers of far-right Twitter personalities and politicians. In doing so, it has built an online infrastructure that recruits and engages without any real leadership, making it difficult to quash. Atomwaffen trades in hateful, often profane images that feature Neo-Nazi symbols and links to its guiding text, the ultra-violent manifesto “Siege.” Although smaller accounts are quickly banned, larger accounts from Atomwaffen and other prominent groups often linger for months, reaching new users and gaining thousands of followers.
Following Charlottesville, the Tree of Life shooting, and other acts of Neo-Nazi violence, platforms have repeatedly recommitted to removing Neo-Nazis. Many call these empty promises, as prominent Neo-Nazis, including Richard Spencer, have not had their accounts banned despite organizing events like Charlottesville and being cited by suspects as the inspiration for their violent crimes.
Facebook has had its own share of growing pains in regulating hate speech, particularly after a study found that its policies often resulted in banning the victims of hate speech rather than the perpetrators. In an effort to reform, it banned prominent alt-right figures, including Alex Jones, and deleted many of the larger Neo-Nazi groups. However, groups that masquerade under the veneer of “European heritage” still flourish, as do many more blatantly racist, anti-Semitic, and homophobic Neo-Nazi groups, pages, and profiles. The Proud Boys, the National Alliance, and the National Socialist Movement, in particular, maintain a large Facebook presence.
Even as Facebook and Twitter have faced public backlash for failing to censor Neo-Nazi propaganda and messaging, platforms like Gab have sprung up in reaction, promising entirely unregulated and uncensored speech no matter the content. A haven for those banned from more traditional platforms, Gab has become a home almost exclusively for Neo-Nazis and the white supremacist “alt-right” movement by virtue of its tolerance for threats, slurs, and outright hate speech and racism. Recently, the platform faced a hosting crisis after GoDaddy, its domain registrar, dropped the site in response to Robert Bowers’ frequent use of it in the lead-up to his 2018 mass shooting; despite the public backlash, its founders have remained committed to unregulated posting.
Silicon Valley still considers free speech and the unrestrained spread of ideas paramount. That commitment puts social media companies in a difficult position: they are unwilling to regulate the content on their platforms, yet those platforms have almost unparalleled power to drive political discussion and, equally, to radicalize younger and more malleable users. The companies have the tools and infrastructure to regulate content, as they do successfully with nudity through AI filters and human reviewers, but until recently they have preferred to remove only the most egregious abuses in order to avoid controversy. As hearings over alleged anti-conservative bias proceed on Capitol Hill and banned Neo-Nazis sue over the loss of their platforms, it is clear that regulation is no easy task; the scrutiny has made both Facebook and Twitter particularly easy targets for political attacks and allegations of bias, from fringe figures and mainstream politicians alike. Yet as openly violent accounts and pages continue to flourish, two questions persist: how will tech companies contend with their extremist users, and how many lives hang in the balance of their choices?