The rising tide of child abuse on social media

Facebook flagged a staggering 73.3 million pieces of content under “child nudity and sexual exploitation” from Q1 to Q3 of 2022–just over 4 million short of 2021’s overall total of 77.5 million. According to the social network’s latest transparency reports, 44 percent of this content (32.4 million pieces) was flagged in Q3 of 2022 and 30 percent (22.3 million) in Q2 of 2022. This suggests 2022 is going to be a record-breaking year for child sexual abuse material (CSAM) on the platform.
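As a quick sanity check, those shares can be reproduced from the figures quoted above. Below is a minimal Python sketch; the quarterly numbers are rounded in the transparency reports, so the results are approximate.

```python
# Facebook's Q1-Q3 2022 "child nudity and sexual exploitation" removals,
# in millions of pieces of content (rounded figures quoted above).
total_q1_q3_2022 = 73.3
q3_2022 = 32.4
q2_2022 = 22.3
total_2021 = 77.5

print(f"Q3 share: {q3_2022 / total_q1_q3_2022:.0%}")                # ~44%
print(f"Q2 share: {q2_2022 / total_q1_q3_2022:.0%}")                # ~30%
print(f"Gap to 2021 total: {total_2021 - total_q1_q3_2022:.1f}m")   # ~4.2m
```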

As our previous findings reported, Facebook isn’t alone; Instagram, YouTube, Twitter, TikTok, Reddit, Snapchat, Discord, and LinkedIn combined remove millions of posts and images that fall foul of community guidelines regarding child abuse.

When talking about child sexual abuse images, the discussion often pertains to those shady corners of the internet: Omegle, the dark web, Usenet, end-to-end encrypted chat apps, etc. But the problem isn’t limited to private groups or anonymous platforms. Thousands of images and posts containing child abuse, exploitation, and nudity are removed by the biggest names in social media every day.

Below, we take a look at how 2022 compared to previous years for these types of reports and try to find out what happens to all of these pieces of flagged content. What are big tech companies proactively doing to combat the problem? How many of these cases are referred to law enforcement? And how successful are individual countries in investigating these cases?

Tech giant content removals for child exploitation

As we’ve already seen, in just nine months of 2022, Facebook had almost equaled 2021’s content removals for child exploitation. It was a similar story on Instagram (6.08 million pieces of content flagged from Q1-Q3 of 2022 compared to 8.38 million in 2021) and TikTok (140 million pieces of content flagged from Q1-Q3 of 2022 compared to 141.7 million in 2021).

Snapchat looked set to surpass 2021’s total with 201,527 accounts flagged for child sexual exploitation and abuse in the first half of 2022, compared to 317,243 flagged across all of 2021. Meanwhile, Discord’s Q1-Q3 figures for 2022 had already exceeded 2021’s total (1.52 million accounts, servers, and pieces of content were flagged from Q1-Q3 of 2022, compared to 1.42 million in 2021).

LinkedIn’s figures saw a 636 percent increase: 226 pieces of child exploitation content were recorded by LinkedIn in 2021, compared to 1,663 in the first half of 2022 alone.

YouTube, however, was a mixed story. The number of comments and pieces of content flagged for child safety by the platform was significantly lower from Q1 to Q3 of 2022 than over the same period of 2021. For example, from Q1 to Q3 of 2021, nearly 9 million pieces of content were flagged for “child safety” reasons by YouTube, compared to 4.4 million across the same period in 2022. This was largely due to a huge spike in Q1 of 2021 (over 5 million pieces of content). Nevertheless, the number of comment removals did spike again in Q3 of 2022 (over 2 million). And the number of channels reported for child safety from Q1 to Q3 of 2022 (more than 266,000) also exceeded 2021’s total of 168,000.

What does this mean?

While we don’t have the full year’s data for 2022 yet, the fact that many of the platforms have displayed similar (if not higher) levels of content removals regarding child exploitation (up to Q2 or Q3 of 2022) means 2022 looks set to be another astronomical year for CSAM content. While this could be due to increased vigilance and better detection of such material, it also highlights the large scale of child exploitation on social media.

What happens to the content that’s removed and the people who’ve posted it?

US companies are required by law to report CSAM to the National Center for Missing & Exploited Children (NCMEC) or risk a fine of up to $300,000. However, how thoroughly a company refers cases to the NCMEC depends on its systems and protocols for detecting such material.

For example, a recent report found that Facebook has a new rule whereby ‘images of girls with bare breasts will not be reported to NCMEC. Nor will images of young children in sexually suggestive clothing.’ Interviewees (Facebook content moderators) were also quoted as describing a new “bumping up” policy–something none of them agreed with. The policy comes into play when a content moderator cannot determine whether the subject of a flagged CSAM photo is a minor (category B) or an adult (category C). When this occurs, the moderators have been told to assume that the subject is an adult. This, therefore, means fewer images will be reported to the NCMEC.

The report also critiques Facebook’s use of the Tanner scale. It was developed in the mid-1900s when researchers (led by James Tanner) studied a large number of children through adolescence, photographing them at their various stages of puberty to create five definitive stages. Facebook’s content moderators then use this chart to categorize the subjects according to which stage of puberty they are at. However, as Krishna points out, ‘CSAM has to do with a child’s age, not pubertal stage. Two children of the same age but in different stages of puberty should not be treated unequally in the CSAM context, but relying on the Tanner scale means that more physically developed children are less likely to be identified as victims of sexual abuse… Even worse, the scale likely has a racial and gender bias. Because the subjects of the study were mostly white children, the Tanner scale does not account for differences in bodily development across race—nor does it attempt to.’

This suggests that the vast amount of content flagged by Facebook may only scratch the surface of the problem. But how much of this content is being referred to the NCMEC, and what happens after that?

In 2019 and 2020, Facebook referred around 50 percent of its flagged content to the NCMEC. In 2021, Facebook referred 22,118,952 pieces of content to the NCMEC, which is just over 28.5 percent of the total flagged by the platform. For the first time, separate figures were available for Instagram, where just over 3.39 million pieces were referred to the NCMEC–just over 40 percent of its 2021 total (8.38 million).

Where are all the other reports going?

Facebook suggests all of the child exploitation posts it flags are referred to the NCMEC. If so, the shortfall between its removal figures and its NCMEC reports could largely consist of duplicated pieces of content or accounts. Facebook did release some statistics in 2021 suggesting 90 percent of the content it reported to the NCMEC was “the same as or visually similar to previously reported content.”

However, the gap between Facebook’s content removals and its NCMEC reports isn’t as large as it is for some of the other top-reporting electronic service providers (ESPs).

In 2021, YouTube saw 10.2 million pieces of content flagged as CSAM. Google submitted a total of 268,558 reports to the NCMEC (solely from YouTube) in the same reporting period–meaning just 2.6 percent of the flagged content was referred.

Likewise, TikTok flagged 141.7 million pieces of CSAM content in 2021 but referred just 154,618 reports to the NCMEC–0.1 percent of its flagged content.
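For reference, those referral rates follow directly from the figures quoted above. Below is a minimal sketch of the calculation; note that a single NCMEC report can bundle multiple pieces of content, so flagged-content counts and report counts aren’t strictly like-for-like.

```python
# 2021 NCMEC referrals as a share of content flagged by each platform,
# using the figures quoted in this article (indicative only, since one
# NCMEC report can cover several pieces of content).
flagged_2021 = {"Facebook": 77_500_000, "YouTube": 10_200_000, "TikTok": 141_700_000}
referred_2021 = {"Facebook": 22_118_952, "YouTube": 268_558, "TikTok": 154_618}

for platform, flagged in flagged_2021.items():
    rate = referred_2021[platform] / flagged * 100
    print(f"{platform}: {rate:.1f}% of flagged content referred")
# Facebook: 28.5%, YouTube: 2.6%, TikTok: 0.1%
```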

Even though ESPs are mandated to report CSAM to the NCMEC, this raises the question of how they determine which content they should and should not refer to the organization. Equally, ESPs don’t have to notify or send data to prosecutors or police in the country where the material or offender originates. This places an enormous amount of pressure on the non-profit NCMEC–which, by the way, had just 41 analysts in 2020.

Which countries are hotspots for child exploitation material?

Once the NCMEC receives reports from ESPs, it uses the geolocation data provided by the ESPs to refer them to the relevant law enforcement agency in each country. NCMEC’s referral to law enforcement is described as “voluntary”. Once NCMEC refers a case to an agency, it is up to that agency whether to investigate; there is no specific requirement to do so. And while NCMEC may know whether and when a law enforcement agency has looked into the data it sent, it learns little beyond that.

In 2020, the NCMEC referred 21.8 million reports to law enforcement agencies around the world. In 2021, the number of reports referred to these agencies increased by 35 percent, climbing to 29.4 million.

The top ten countries for reports referred by the NCMEC in 2021 were:

  1. India: 4,699,515 reports–a 73 percent increase on 2020’s 2.7m referrals
  2. The Philippines: 3,188,793 reports–a 138 percent increase on 2020’s 1.3m referrals
  3. Pakistan: 2,030,801 reports–a 58 percent increase on 2020’s 1.3m referrals
  4. Indonesia: 1,861,135 reports–an 89 percent increase on 2020’s 987,000 referrals
  5. Bangladesh: 1,743,240 reports–a 113 percent increase on 2020’s 818,000 referrals
  6. Iraq: 1,220,470 reports–a 33 percent increase on 2020’s 920,000 referrals
  7. Algeria: 1,171,653 reports–a 6 percent increase on 2020’s 1.1m referrals
  8. Mexico: 786,215 reports–a 1 percent decrease on 2020’s 794,000 referrals
  9. United States: 716,474–a 45 percent increase on 2020’s 494,000 referrals
  10. Vietnam: 716,065 reports–a 15 percent decrease on 2020’s 844,000 referrals

At least three of these countries (Iraq, Vietnam, and Bangladesh) have inadequate legislation and/or procedures to combat online child pornography. For example, in Bangladesh, the Digital Security Act is described as failing to contain any provisions on the online abuse of children. And when cybercrimes of this nature are investigated, those involved suggest law enforcement agencies lack the infrastructure and competence to adequately investigate them.

Furthermore, in the countries where investigations are initiated and recorded by police authorities, the figures are a fraction of those referred by the NCMEC. And that is before considering that the police won’t be dealing solely with cases from the NCMEC, but also with cases originating within the country and/or referred by other organizations.

How much content is successfully dealt with by local law enforcement agencies?

Because there is little information on how many NCMEC-referred cases are actually looked into, it is difficult to determine exactly how many are successfully investigated and lead to convictions. However, our researchers looked through the top 45 countries (by number of referred cases in 2021) to see how many cases of online child exploitation were investigated by the relevant authorities in each country.

While we can’t say precisely what percentage of NCMEC-referred cases were investigated, the figures give us some insight into how many cases were investigated per 100,000 cases referred by the NCMEC.
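The metric itself is simple: investigations divided by NCMEC referrals, scaled to 100,000. Here is a minimal sketch with made-up numbers, purely to show how the per-100,000 figures in the table below are derived.

```python
def investigated_per_100k(cases_investigated: int, ncmec_referrals: int) -> float:
    """Cases investigated per 100,000 reports referred by the NCMEC."""
    return cases_investigated / ncmec_referrals * 100_000

# Hypothetical country: 5,000 investigations against 700,000 NCMEC referrals
print(round(investigated_per_100k(5_000, 700_000)))  # 714
```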

The table below shows the figures we were able to find. In some cases, only figures for 2019 or 2020 were available; where that is the case, the comparison uses NCMEC referrals from the same year, as noted in the sheet.

What the above highlights is the dramatically low number of online child abuse cases investigated each year, particularly in Asian countries. Across all of the countries studied, 3,467 cases of online child exploitation were investigated per 100,000 reports submitted by the NCMEC.

In this update, Germany appeared to have the most proactive rate for investigating online child exploitation, with more cases opened by the police than were submitted by the NCMEC. France and the UK also recorded comparatively proactive investigation rates in this study: France previously saw 13,559 cases investigated per 100,000 reported by the NCMEC, and the UK saw 12,890 per 100,000.

In contrast, the US’s proactivity appeared to decline this year (dropping from 22,695 cases investigated per 100,000 NCMEC reports). This is largely due to a huge increase in the number of cases submitted by the NCMEC (up 45 percent). Equally, the IC3 only investigated 2,167 reports of online child exploitation this year compared to 3,202 last year, although the Internet Crimes Against Children task force did investigate 137,000 cases in 2021 compared to 109,000 in 2020.

Are ESPs “passing the buck” when it comes to online child exploitation?

What the above demonstrates is that a significant number of reports from the NCMEC appear to be disappearing into an unregulated, unlegislated, and uninvestigated black hole in many areas. While the likes of Germany may be actively looking into the majority of cases, in some of the worst-hit countries for this content, the systems are unable to cope with the sheer volume of reports they receive.

ESPs may do their bit by referring cases to the NCMEC, and the NCMEC may play its role in passing these cases to the relevant law enforcement agencies. But after that, many of these cases simply slip through the net, with no authority having any responsibility to ensure the reports are adequately investigated. Plus, as we saw with Facebook’s content moderation policy, there are serious question marks over how well this material is spotted and reported in the first place.

And from what the figures available for 2022 so far tell us, the problem continues to grow rapidly. If the NCMEC sees the same increase it did from 2020 to 2021, it could be facing as many as 39.7 million reports in 2022.
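That projection simply reapplies the 2020-2021 growth rate to 2021’s total. The back-of-the-envelope sketch below assumes that growth rate holds, which is far from guaranteed.

```python
# Project 2022 NCMEC referrals by reapplying 2021's ~35% growth rate.
reports_2020 = 21_800_000
reports_2021 = 29_400_000

growth = round(reports_2021 / reports_2020 - 1, 2)   # 0.35, i.e. ~35%
projected_2022 = reports_2021 * (1 + growth)
print(f"~{projected_2022 / 1e6:.1f}m reports in 2022")  # ~39.7m
```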

Whatever the figures may be for 2022, one thing’s for sure–many of the specialist organizations and law enforcement agencies around the world are simply overwhelmed by the number of online child exploitation reports they’re receiving. And that does leave us with one significant question–shouldn’t ESPs be doing more to cut off this content at the source?

Methodology and limitations

We used content removal figures from each of the following social networks’ latest available transparency reports:

  • Facebook – from Q3 2018 to Q3 2022
  • Instagram – from Q2 2019 to Q3 2022
  • YouTube – from Q3 2018 to Q3 2022
  • Twitter – from July 2018 to December 2021
  • TikTok – from July 2019 to Q3 2022
  • Reddit – from Q1 2018 to December 2021
  • Snapchat – from July 2019 to June 2022
  • Discord – from July 2020 to Q3 2022
  • LinkedIn – from January 2019 to June 2022

Some reports do not cover an entire year, as reporting might have started or ended mid-year. Annual totals are based only on the time frame in which child abuse was recognized as a reporting category, so as to provide a fair comparison.

Facebook recently changed its reports: it previously included “fake accounts” in content removal figures, but these are no longer included, which is why total content figures are lower than in our previous reports (the change applies across all years, so it still offers a fair comparison).

Snapchat only covers accounts removed, not individual pieces of content removed, in its report.

Twitter and Discord cover both banned accounts and specific content removals.

Some social networks, including Tumblr and Pinterest, did not have sufficient data for us to analyze.

Transparency reports really only became a trend among social media companies in 2018, so we don’t have a ton of historical data to go by. Furthermore, changes in content moderation policies might skew the numbers.

What tech companies are doing about it

Content removals are typically the result of either an automated filter or users of a service flagging content they think is inappropriate. In some cases, human moderators might be used to judge whether something should be removed, but it would be all but impossible for moderators to examine every single thing posted on a social network.

Child sexual abuse imagery is illegal in the United States (and pretty much everywhere else in the world) and is not protected under the First Amendment. However, under Section 230 of the Communications Decency Act, social media companies are protected from liability when their users post something illegal, so they can’t be sued when their users post child abuse content.

Still, social media companies certainly don’t want to be associated with child abuse, so they do what they can to remove such content as quickly as possible. So far, their tactics are largely reactive, not proactive. Pre-screening content would likely be too burdensome and come with serious privacy concerns.

Recently, Apple announced plans to hash image files uploaded to users’ iCloud storage to see if they match known child abuse images in a database maintained by child safety organizations. This would allow Apple to scan users’ storage for CSAM without actually viewing any of the users’ files. Some privacy advocates still take issue with the tactic, and it’s not perfect, but it might be a compromise that other tech companies decide to adopt.
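For illustration only, the basic idea of hash matching looks something like the sketch below. It uses a plain SHA-256 digest and a hypothetical hash list; Apple’s actual proposal used a perceptual hash (“NeuralHash”) plus cryptographic matching protocols, so treat this as a simplified analogy rather than a description of their system.

```python
import hashlib
from pathlib import Path

# Hypothetical set of hashes of known abuse images, as supplied by a
# child-safety organization. Real systems use perceptual hashes that
# survive resizing and re-encoding, not plain SHA-256 digests.
KNOWN_HASHES: set[str] = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",  # placeholder
}

def file_hash(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def find_matches(upload_dir: Path) -> list[Path]:
    """Return files in upload_dir whose hashes appear in the known list."""
    return [p for p in upload_dir.rglob("*")
            if p.is_file() and file_hash(p) in KNOWN_HASHES]
```

The appeal of this design is that only matches against already-known material are surfaced, so the service never has to look at the contents of non-matching files.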

Data researcher: Rebecca Moody