The rising tide of child abuse on social media

In 2020 alone, Facebook removed 35.9 million pieces of content flagged under "child nudity and sexual exploitation," according to the social network's latest transparency report. And Facebook isn't alone; Instagram, YouTube, Twitter, TikTok, Reddit, and Snapchat combined remove millions of posts and images that fall foul of community guidelines regarding child abuse.

Discussions of child pornography often center on the shadier corners of the internet: Omegle, the dark web, Usenet, end-to-end encrypted chat apps, and so on. But the problem isn't limited to private groups or anonymous platforms. The biggest names in social media remove thousands of images and posts containing child abuse, exploitation, and nudity every day.

Comparitech researchers took a dive into the transparency reports of seven of the biggest social networks to find out how prevalent child abuse is on their platforms. Transparency reports typically include content removals, which are broken down into various categories. We looked at content removals specifically related to child nudity, abuse, and sexual exploitation.

TikTok's content removals for "minor safety" almost doubled between 2019 and 2020.

YouTube, Reddit, and Snapchat saw the number of content removals relating to child sexual exploitation increase from 2018 to 2020, whilst Facebook and Twitter both saw the number of such removals decrease (albeit only slightly) over the same time period.

Whilst TikTok had the highest rate of content removals dealing with child abuse, it broadly categorizes such cases as "minor safety" rather than "child sexual exploitation" or the like. Of all the content TikTok removed, 23.1% fell under minor safety.

Facebook had the lowest proportion of content removals related to child abuse, despite removing the most content overall. A mere 0.34% of removals dealt with child sexual exploitation.

Methodology and limitations

We used content removal figures from each of the following social networks’ latest available transparency reports:

  • Facebook – from Q3 2018 to present
  • Instagram – from Q2 2019 to present
  • YouTube – from Q3 2018 to present
  • Twitter – from July 2018 to June 2020
  • TikTok – from July 2019 to present
  • Reddit – from Q1 2018 to present
  • Snapchat – from July 2019 to June 2020

Note that some reports do not cover an entire year, as reporting may have started or ended mid-year. To keep the percentages fair, each year's overall figures only include the time frame during which child abuse was reported as a distinct category.
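To illustrate, here is a minimal sketch of that normalization using invented quarterly figures (the numbers and field names below are hypothetical, not drawn from any platform's report):

```python
# Hypothetical quarterly removal figures for one platform.
# Suppose the child-abuse category was only reported from Q3 onward.
quarterly = [
    {"quarter": "Q1", "total_removals": 1_200_000, "child_abuse": None},  # category not yet reported
    {"quarter": "Q2", "total_removals": 1_350_000, "child_abuse": None},  # category not yet reported
    {"quarter": "Q3", "total_removals": 1_400_000, "child_abuse": 4_900},
    {"quarter": "Q4", "total_removals": 1_500_000, "child_abuse": 5_600},
]

# Only count quarters where the category was actually reported,
# so the percentage isn't diluted by quarters with no data.
covered = [q for q in quarterly if q["child_abuse"] is not None]

child_abuse_total = sum(q["child_abuse"] for q in covered)
all_removals_total = sum(q["total_removals"] for q in covered)

share = child_abuse_total / all_removals_total * 100
print(f"Child abuse share of removals (covered quarters only): {share:.2f}%")
```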

Snapchat only covers accounts removed, not individual pieces of content removed, in its report.

Twitter covers both banned accounts and specific content removals.

Some social networks, including Tumblr and Pinterest, did not have sufficient data for us to analyze.

Transparency reports really only became a trend among social media companies in 2018, so we don’t have a ton of historical data to go by. Furthermore, changes in content moderation policies might skew the numbers.

What tech companies are doing about it

Content removals are typically the result of either an automated filter or users of a service flagging content they think is inappropriate. In some cases, human moderators might be used to judge whether something should be removed, but it would be all but impossible for moderators to examine every single thing posted on a social network.
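As a rough sketch of how that division of labor can work in practice, the snippet below triages posts between automatic removal, human review, and no action. The thresholds, scores, and function names are invented for illustration and don't reflect any particular platform's system.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: int
    flagged_by_users: int      # how many users reported the post
    classifier_score: float    # 0.0-1.0 score from an automated filter (hypothetical)

# Invented thresholds, for illustration only.
AUTO_REMOVE_THRESHOLD = 0.95   # confident enough to remove without human review
HUMAN_REVIEW_THRESHOLD = 0.60  # uncertain: queue for a human moderator
USER_FLAG_THRESHOLD = 3        # enough user reports also triggers review

def triage(post: Post) -> str:
    """Decide what happens to a post: remove it, queue it for review, or leave it up."""
    if post.classifier_score >= AUTO_REMOVE_THRESHOLD:
        return "auto-remove"
    if (post.classifier_score >= HUMAN_REVIEW_THRESHOLD
            or post.flagged_by_users >= USER_FLAG_THRESHOLD):
        return "human-review"
    return "no-action"

# Example: most posts never reach a human; only borderline or heavily flagged ones do.
queue = [
    Post(1, flagged_by_users=0, classifier_score=0.99),
    Post(2, flagged_by_users=5, classifier_score=0.40),
    Post(3, flagged_by_users=0, classifier_score=0.05),
]
for post in queue:
    print(post.post_id, triage(post))
```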

Child pornography is illegal in the United States (and in most of the world) and is not protected under the First Amendment. However, under Section 230 of the Communications Decency Act, social media companies are generally shielded from civil liability when their users post something illegal. So they can't be sued simply because a user uploads child abuse content.

Still, social media companies certainly don’t want to be associated with child abuse, so they do what they can to remove such content as quickly as possible. So far, their tactics are largely reactive, not proactive. Pre-screening content would likely be too burdensome and come with serious privacy concerns.

Recently, Apple has started hashing image files on users’ iCloud storage to see if they match those in a law enforcement database of child abuse images. This allows Apple to scan users’ storage for child porn without actually viewing any of the users’ files. Some privacy advocates still take issue with the tactic, and it’s not perfect, but it might be a compromise that other tech companies decide to adopt.
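For a general sense of how hash matching works, here is a simplified sketch in Python. It is not Apple's actual system, which reportedly uses a perceptual "NeuralHash" and private set intersection rather than plain file hashes; the hash database and paths below are hypothetical.

```python
import hashlib
from pathlib import Path

# Hypothetical set of hashes of known illegal images, supplied by an authority.
# A real system would use perceptual hashes rather than plain SHA-256 digests,
# so that re-encoded or resized copies of an image still match.
KNOWN_HASHES = {
    "0" * 64,  # placeholder digest for illustration
}

def sha256_of_file(path: Path) -> str:
    """Hash a file in chunks so large images don't need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan_directory(directory: Path) -> list[Path]:
    """Return files whose hashes appear in the known-hash database."""
    matches = []
    for path in directory.rglob("*"):
        if path.is_file() and sha256_of_file(path) in KNOWN_HASHES:
            matches.append(path)
    return matches

# Example usage (hypothetical path):
# flagged = scan_directory(Path("/tmp/photo_uploads"))
# print(f"{len(flagged)} file(s) matched known hashes")
```

The key property is that the scanner only compares digests against a list of already-identified material, so it never has to interpret the content of files that don't match.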

Researcher: George Moody