This post is by Julia Alexander from The Verge - All Posts
After a man used Facebook to live stream his attack on two New Zealand mosques last night, the video quickly spread to YouTube. Moderators fought back, trying to take the horrific footage down, but new uploads of the video kept appearing. It led many observers to wonder: since YouTube has a tool for automatically identifying copyrighted content, why can’t it automatically identify this video and wipe it out?
YouTube automatically blocks exact re-uploads of the video, but videos that contain only clips of the footage are sent to human moderators for review, The Verge has learned. That step exists in part to ensure that news segments using a portion of the footage aren't removed in the process.
YouTube’s safety team thinks of it as…
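The two-tier process described above can be sketched in a few lines. This is purely illustrative: YouTube's actual matching system (Content ID and its hash-matching tools) is proprietary, and the `fingerprint`, `triage`, and `similarity_to_known` names here are hypothetical stand-ins for whatever exact-match and perceptual-match signals a real pipeline would compute.

```python
# Illustrative sketch only, not YouTube's implementation: exact copies of
# known-violating videos are removed automatically, while near-matches
# (e.g. news clips quoting the footage) are escalated to human review.
import hashlib

BLOCKED_HASHES = set()  # fingerprints of videos already ruled violating

def fingerprint(data: bytes) -> str:
    """Exact-match fingerprint of the raw file bytes."""
    return hashlib.sha256(data).hexdigest()

def triage(upload: bytes, similarity_to_known: float) -> str:
    """Route an upload: auto-block exact copies, escalate near-matches.

    `similarity_to_known` stands in for a perceptual-similarity score a
    real system would compute from the video content itself; a partial
    clip scores high but is not byte-identical, so it goes to a human.
    """
    if fingerprint(upload) in BLOCKED_HASHES:
        return "auto-removed"
    if similarity_to_known > 0.8:
        return "human review"
    return "published"

# Register the original violating video.
original = b"\x00violating-livestream-bytes"
BLOCKED_HASHES.add(fingerprint(original))

print(triage(original, 1.0))                         # exact re-upload
print(triage(b"news segment" + original[:8], 0.9))   # partial clip
print(triage(b"unrelated video", 0.1))               # no match
```

The key design point the article hints at is that only the first branch can be fully automated; anything short of an exact match risks taking down legitimate reporting, which is why partial matches are routed to people.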
<a href="https://www.theverge.com/2019/3/15/18267424/new-zealand-shooting-youtube-video-reupload-content-id-livestream">Continue reading…</a>