Days after the Christchurch mosque massacre, social media platforms are still struggling to delete videos of the attack, as users upload new versions of the gunman's original 17-minute video - raising fresh questions about the responsibility of tech companies like Facebook, YouTube and Twitter.
The carnage was recorded on a mobile app called LIVE4, which allows video from personal body cameras - in this case, strapped to the gunman’s helmet - to be streamed live on Facebook.
The social media giant’s community standards explicitly ban “individuals engaged in mass murder” from its network - nothing like having high standards, right? - and it quickly deleted the suspect’s account. But users downloaded the original livestream and reposted it with astonishing speed. Many news outlets also hosted the video.
"The killing of 49 people at two mosques in Christchurch, New Zealand, was engineered to be viewed and shared on the world’s largest technology platforms, taking full advantage of Silicon Valley’s laissez-faire approach to content moderation."
- Los Angeles Times
Facebook said it removed 1.5 million videos of the attack in the first 24 hours after it happened - with 1.2 million of those blocked by software at the moment of upload. But eight hours after the attack, videos were still live on the platform, accompanied by the warning that they may “show violent or graphic content.”
YouTube maintains that its staff review reported videos all day, every day. In the case of the Christchurch attacks, however, the sheer volume of uploads outstripped the capacity of human reviewers to keep pace.
YouTube’s incident command team took unprecedented steps - including temporarily disabling selected search functions and eliminating human review - in an effort to hasten the removal of videos flagged by its automated systems. But many of the new clips were altered in ways that outwitted the company's detection systems.
What's a parent to do?
Between the inability of the tech companies to respond effectively - and the apparently insatiable appetite of many users for violent, hate-filled content - it’s hard not to despair. And that goes double if you happen to be a parent.
But it’s important to remember there are ways to help keep your children safe online, even in a crisis situation. Harnessing the power of Family Zone’s parental controls, you can disable platforms like Facebook, YouTube and Twitter with a single click - whether for 24 or 48 hours, until the situation settles down, or indefinitely.
Reporting offensive content
On a broader scale, there are also ways to fight back that can help make the online world a better, safer place. Reporting offensive content is easy and effective. And it really does work … eventually.
It’s so important not to assume that someone else has already raised the issue. Remember: the more reports, the faster the response. Here’s how:
On Facebook
If a friend, or a page you follow, posts something violent or hate-fuelled, click on the three little dots at the top right of the post. Choose "give feedback" to send information to Facebook about the post.
You can also block or unfollow from this menu.
In the case of an offensive ad, click on the three dots and choose “report ad.”
On YouTube
If you're watching on a mobile device, tap the three little dots and then choose "report." You'll need to select a reason for reporting.
If you're watching on desktop, sign in and click "more" below the video. In the drop-down menu, choose "report."
You can also choose to block the channel that posted the video. Right click on the video and choose "block videos from this channel."
On Instagram
You can block someone by tapping their username and going to the three dots icon. Choose "block."
This same menu can also be used to report the content – tap on "report inappropriate."
On Twitter
Click the arrow icon at the top of a tweet. This gives you the option to block the person who posted it, unfollow them or report the tweet.