This post is by Adi Robertson from The Verge.
In the wake of a hate-fueled mass shooting in Christchurch, New Zealand, major web platforms have scrambled to take down a 17-minute video of the attack. Sites like YouTube have applied imperfect technical solutions, trying to draw a line between newsworthy and unacceptable uses of the footage.
But Facebook, Google, and Twitter aren’t the only places weighing how to handle violent extremism. And traditional moderation doesn’t affect the smaller sites where people are still either promoting the video or praising the shooter. In some ways, these sites pose a tougher problem — and their fate cuts much closer to fundamental questions about how to police the web. After all, for years, people have lauded the internet’s ability to connect…
<a href="https://www.theverge.com/2019/3/15/18267638/new-zealand-christchurch-mass-shooting-online-hate-facebook-youtube">Continue reading…</a>