The rising tide of child abuse on social media

In the first three quarters of 2021, Facebook flagged a staggering 55.6 million pieces of content under “child nudity and sexual exploitation”–20 million more than 2020’s overall total of 35.6 million. According to the social network’s latest transparency reports, 50 percent of this content (28 million pieces) was flagged in Q2 2021 and 41 percent (22.7 million) in Q3 2021.

As our previous research found, Facebook isn’t alone; Instagram, YouTube, Twitter, TikTok, Reddit, and Snapchat combined remove millions of posts and images that fall foul of community guidelines regarding child abuse.

Discussions of child pornography often focus on the shadier corners of the internet: Omegle, the dark web, Usenet, end-to-end encrypted chat apps, and so on. But the problem isn’t limited to private groups or anonymous platforms. Thousands of images and posts containing child abuse, exploitation, and nudity are removed by the biggest names in social media every day.

In this update, we not only wanted to look at how 2021 compared to previous years for these types of reports, but we also wanted to find out what happens to all of these pieces of flagged content. What are big tech companies proactively doing to combat the problem? How many of these cases are referred to law enforcement? And how successful are individual countries in investigating these cases?

Tech giant content removals for child exploitation

As we’ve already seen, in just nine months of 2021, Facebook had already exceeded 2020’s content removals for child exploitation by a whopping 20 million. Instagram, TikTok, and Snapchat also exceeded their 2020 figures, with YouTube looking set to do the same (these platforms have released at least partial 2021 reports; the remaining networks have not).

In the first three quarters of 2021, Instagram saw 4,797,100 content removals for child nudity and sexual exploitation. This is 1.5 million more than in the whole of 2020. Instagram, like Facebook, also saw a sharp rise in Q2 2021, with 1.86 million of the year’s content removals coming from April to June. This continued into Q3 with a further 2.1 million removals. TikTok also saw a significant increase in Q2, with 33.7 million removals compared to 22.8 million in Q1. Its total for the first half of 2021 (56.5 million) already outpaces the total figure for 2020 (55.4 million).

In contrast, YouTube saw more content removals in Q1 of 2021 than Q2 and Q3 (5.1 million compared to around 1.9 million in each of Q2 and Q3), but it could still overtake 2020’s total of 11.6 million.

By June 2021, Snapchat had already removed 119,134 accounts for child sexual exploitation and abuse–almost 21,000 more than 2020’s total of 98,166.

What does this mean?

While we don’t have the full year’s data for 2021 yet, the fact that many of the platforms have displayed significant increases in content removals regarding child exploitation (up to Q2 or Q3 of 2021) means 2021 looks set to be an astronomical year for this type of content. While some of this could be due to increased vigilance and better detection of such material, it also highlights the large scale of child exploitation on social media.

Previously, Reddit saw the number of content removals relating to child sexual exploitation increase from 2018 to 2020. And Discord, which we’ve just started tracking in this update, removed more than double 2020’s number of accounts for child sexual abuse material (CSAM) in the first half of 2021 alone (rising from 23,000 to nearly 52,000). Discord only introduced CSAM as a reporting category in mid-2020.

What happens to the content that’s removed and the people who’ve posted it?

US companies are required by law to report child sexual abuse material (CSAM) to the National Center for Missing & Exploited Children (NCMEC) or risk a fine of up to $300,000. However, how thoroughly a company refers cases to the NCMEC depends on its systems and protocols for detecting such material.

For example, a recent report found that Facebook has a new rule in which ‘images of girls with bare breasts will not be reported to NCMEC. Nor will images of young children in sexually suggestive clothing.’ Interviewees were also quoted as describing a new “bumping up” policy–something none of them agreed with. The policy comes into play when a content moderator cannot determine whether or not the subject in a flagged CSAM photo is in fact a minor (category B) or an adult (category C). When this occurs, the moderators have been told to assume that the subject is an adult. This, therefore, means fewer images will be reported to the NCMEC.

The report also critiques Facebook’s use of the Tanner scale. It was developed in the mid-1900s when researchers (led by James Tanner) studied a large number of children through adolescence, photographing them at their various stages of puberty to create five definitive stages. Facebook’s content moderators then use this chart to categorize the subjects according to which stage of puberty they are at. However, as Krishna points out, ‘CSAM has to do with a child’s age, not pubertal stage. Two children of the same age but in different stages of puberty should not be treated unequally in the CSAM context, but relying on the Tanner scale means that more physically developed children are less likely to be identified as victims of sexual abuse… Even worse, the scale likely has a racial and gender bias. Because the subjects of the study were mostly white children, the Tanner scale does not account for differences in bodily development across race—nor does it attempt to.’

This suggests that the vast amount of content flagged by Facebook may only scratch the surface of the problem. But how many of these pieces of content are being referred to the NCMEC, and what happens after that?

In 2019, Facebook (and Instagram) had 39.4 million pieces of content flagged as child exploitation. It referred 15.9 million of these (around 40 percent) to the NCMEC. In 2020, 20.3 million pieces of content were referred to the NCMEC by Facebook–52 percent of its overall total of 38.9 million.

Where are all the other reports going?

Facebook suggests all of the child exploitation posts it flags are referred to the NCMEC, so the shortfall could consist of duplicated pieces of content or accounts. Facebook did release some statistics in 2021 suggesting 90 percent of the content it reported to the NCMEC was “the same as or visually similar to previously reported content.”

The gap between Facebook’s content removals and its NCMEC reports isn’t as large as that of some other top-reporting electronic service providers (ESPs), however.

In 2020, YouTube saw 11.6 million pieces of content flagged as CSAM. Google submitted a total of 188,955 reports to the NCMEC (solely from YouTube) in the same reporting period. That means just 1.6 percent of the flagged content was referred.

Likewise, TikTok flagged 55.4 million pieces of CSAM content in 2020. It referred 22,692 to NCMEC–0.04 percent of its flagged content.
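
For context, the referral rates above are simple ratios: NCMEC reports divided by flagged or removed content for the same period, expressed as a percentage. Below is a minimal sketch of that calculation using the 2020 figures cited above (the function name is ours, and, as noted later for Facebook, duplicate content means removals and reports aren’t perfectly comparable):

```python
def referral_rate(ncmec_reports: int, flagged_items: int) -> float:
    """Share of flagged/removed content that was reported to the NCMEC, as a percentage."""
    return ncmec_reports / flagged_items * 100

# 2020 figures cited above
print(f"Facebook: {referral_rate(20_300_000, 38_900_000):.0f}%")  # ~52%
print(f"YouTube:  {referral_rate(188_955, 11_600_000):.1f}%")     # ~1.6%
print(f"TikTok:   {referral_rate(22_692, 55_400_000):.2f}%")      # ~0.04%
```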

Even though ESPs are mandated to report CSAM to the NCMEC, this raises the question of how they determine which content they should and should not refer to the organization. Equally, because ESPs don’t have to notify or send data to prosecutors or police in the country the material/offender originates from, an enormous amount of pressure falls on the non-profit NCMEC–which had just 41 analysts in 2020.

Which countries are hotspots for child exploitation material?

Once the NCMEC receives reports from ESPs, it uses the geolocation data provided by the ESPs to refer each report to the relevant law enforcement agency in that country. NCMEC’s referrals to law enforcement are described as “voluntary,” too: once the NCMEC refers a case to an agency, it is up to that agency whether to investigate–there is no specific requirement to do so. And while the NCMEC may be aware of if and when a law enforcement agency looks into the data sent, it doesn’t know much beyond that.

The top ten countries for referred reports from NCMEC are:

  1. India – 2,725,518 reports
  2. The Philippines – 1,339,597 reports
  3. Pakistan – 1,288,513 reports
  4. Algeria – 1,102,939 reports
  5. Indonesia – 986,648 reports
  6. Iraq – 916,966 reports
  7. Vietnam – 843,963 reports
  8. Bangladesh – 817,687 reports
  9. Mexico – 793,721 reports
  10. Colombia – 763,997 reports

At least three of these countries (Iraq, Vietnam, and Bangladesh) have inadequate legislation and/or procedures to combat online child pornography. For example, in Bangladesh, the Digital Security Act is described as failing to contain any provisions on the online abuse of children. And when cybercrimes of this nature are investigated, those involved suggest law enforcement agencies lack the infrastructure and competence to adequately investigate them.

Furthermore, even in the countries where police authorities do initiate and record investigations, the figures are a fraction of the number of reports referred from the NCMEC–and that’s before considering that police won’t be dealing solely with NCMEC cases but also with cases originating within the country and/or referred from other organizations.

How much content is successfully dealt with by local law enforcement agencies?

Because there is little information on how many NCMEC referrals are actually looked into, it is difficult to determine exactly how many of these cases are successfully investigated and lead to convictions. However, our researchers looked through the top 45 countries (by number of referred cases in 2020) to see how many cases of online child exploitation were investigated by the relevant authorities in each country.

While we can’t say precisely what percentage of NCMEC referrals were investigated, the figures give us some insight into how many cases were investigated per 100,000 reports referred from the NCMEC.

The table below shows the figures we were able to find. In some cases, only 2019 figures for investigated cases were available (the Philippines and Thailand); there, we compared the investigated cases to the reports referred in the same year.

# of Reports from NCMEC to Investigated Cases per 100,000

| Country | # of Reports from NCMEC (2019) | # of Reports from NCMEC (2020) | % Increase in Reports from NCMEC | # of Reports Investigated by Law Enforcement | # of Reports from NCMEC to Investigated Cases per 100,000 |
| --- | --- | --- | --- | --- | --- |
| India | 1,987,430 | 2,725,518 | 37.14 | 1,102 | 40 |
| Philippines | 801,272 | 1,339,597 | 67.18 | 160 | 20 |
| Pakistan | 1,158,390 | 1,288,513 | 11.23 | 103 | 8 |
| Indonesia | 840,221 | 986,648 | 17.43 | 635 | 64 |
| United States | 521,658 | 494,388 | -5.23 | 112,202 | 22,695 |
| Peru | 160,839 | 490,878 | 205.20 | 201 | 41 |
| Brazil | 398,069 | 432,196 | 8.57 | 58,934 | 13,636 |
| Thailand | 355,396 | 397,743 | 11.92 | 72 | 20 |
| Poland | 77,741 | 381,254 | 390.42 | 2,517 | 660 |
| Ecuador | 98,669 | 242,631 | 145.90 | 44 | 18 |
| Cambodia | 91,458 | 188,328 | 105.92 | 150 | 80 |
| Bolivia | 22,597 | 120,161 | 431.76 | 105 | 87 |
| South Korea | 83,322 | 100,709 | 20.87 | 5,186 | 5,149 |
| Germany | 87,895 | 92,768 | 5.54 | 7,643 | 8,239 |
| France | 71,422 | 89,871 | 25.83 | 12,186 | 13,559 |
| United Kingdom | 74,330 | 75,578 | 1.68 | 9,742 | 12,890 |
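
The final column is derived from the two before it: investigated cases divided by the number of reports referred from the NCMEC, scaled to 100,000 (with the 2019 report figures used as the denominator for the Philippines and Thailand, as noted above). A minimal sketch of that calculation, using two rows from the table (the function name is ours):

```python
def investigated_per_100k(investigated: int, referred: int) -> float:
    """Cases investigated by law enforcement per 100,000 reports referred from the NCMEC."""
    return investigated / referred * 100_000

# Two rows from the table above
print(round(investigated_per_100k(1_102, 2_725_518)))   # India: ~40
print(round(investigated_per_100k(112_202, 494_388)))   # United States: ~22,695
```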

What the above highlights is the dramatically low number of online child abuse cases investigated each year, particularly in Asian countries. Across all of the countries listed, 2,380 cases of online child exploitation were investigated per 100,000 reports submitted by the NCMEC.

In the US, the IC3 investigated 3,202 online crimes against children in 2020. The various Internet Crimes Against Children Task Forces (ICACs) also launched 109,000 investigations during the same period–around 112,200 investigations in total against nearly 495,000 cases referred to US law enforcement from the NCMEC. And with the majority of cases opened by individual ICAC departments across the US coming from the NCMEC, the US appears to have the best ratio of investigations to reports received of all the countries we could find data for.

Are ESPs “passing the buck” when it comes to online child exploitation?

What the above demonstrates is that a significant number of reports from the NCMEC appear to be disappearing into an unregulated, unlegislated, and uninvestigated black hole in many areas. While the likes of the US may be actively looking into the majority of cases, in some of the worst-hit countries for this content, the systems are unable to cope with the sheer volume of reports they receive.

ESPs may do their bit by referring cases to the NCMEC and the NCMEC may also play their role in referring these cases to the relevant law enforcement agency. But after that, many of these cases are simply slipping through the net with no authority having any responsibility to ensure these reports are adequately investigated. Plus, as we saw with Facebook’s content moderation policy, there are serious question marks over how well this material is spotted and reported in the first place.

And the figures available for 2021 so far suggest the problem is growing at an exponential rate. If the NCMEC sees the same increase as it did from 2019 to 2020, it could be facing as many as 28.7 million reports in 2021. But the fact that Facebook, Instagram, and TikTok had already far exceeded 2020’s entire totals in just six or nine months of 2021 could mean 2021’s figures are almost double 2020’s. That would mean a total of around 44 million pieces of content reported by ESPs alone.
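
The 28.7 million projection assumes the same year-on-year increase as the NCMEC saw from 2019 to 2020, applied to the 2020 total. Here is a minimal sketch of that kind of extrapolation (the function name and the example totals are hypothetical placeholders, not NCMEC’s actual figures):

```python
def project_next_year(previous_total: int, latest_total: int) -> int:
    """Extrapolate next year's total by applying the latest year-on-year growth rate."""
    growth = latest_total / previous_total
    return round(latest_total * growth)

# Hypothetical totals for illustration only
print(project_next_year(16_000_000, 22_000_000))  # 30,250,000
```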

However, with Facebook’s rollout of end-to-end encryption on its apps, could the figures fall? Many experts, including the director of the National Crime Agency in the UK, Rob Jones, are concerned that the encryption will enable child exploitation (in particular) to go undetected and untraced. Jones suggests that the encryption will prevent officers from accessing “incisive intelligence.” Therefore, it will be interesting to see what figures Facebook’s next quarterly report discloses. At first glance, a fall in figures may seem like a step in the right direction but may simply indicate that millions of reports of child exploitation on these platforms go unnoticed.

Whatever the figures may be for 2021, one thing’s for sure–many of the specialist organizations and law enforcement agencies around the world are simply overwhelmed by the number of online child exploitation reports they’re receiving. And that does leave us with one significant question–shouldn’t those who are at the root of the problem (the ESPs) be doing more to cut off this content at the source?

Methodology and limitations

We used content removal figures from each of the following social networks’ latest available transparency reports:

  • Facebook – from Q3 2018 to Q3 2021
  • Instagram – from Q2 2019 to Q3 2021
  • YouTube – from Q3 2018 to Q3 2021
  • Twitter – from July 2018 to Dec 2020
  • TikTok – from July 2019 to Q2 2021
  • Reddit – from Q1 2018 to Dec 2020
  • Snapchat – from July 2019 to June 2021
  • Discord – from July 2020 to June 2021

Some reports do not cover an entire year, as reporting may have started or ended mid-year. Annual totals are only based on the time frame in which child abuse was recognized as a category, so as to provide a fair comparison.

Facebook recently changed its reports, having previously included “fake accounts” in content removal figures. These are no longer included, which is why total content figures are lower than in our previous reports (but as this applies across all years, it still offers a fair comparison).

Snapchat only covers accounts removed, not individual pieces of content removed, in its report.

Twitter and Discord cover both banned accounts and specific content removals.

Some social networks, including Tumblr and Pinterest, did not have sufficient data for us to analyze.

Transparency reports really only became a trend among social media companies in 2018, so we don’t have a ton of historical data to go by. Furthermore, changes in content moderation policies might skew the numbers.

What tech companies are doing about it

Content removals are typically the result of either an automated filter or users of a service flagging content they think is inappropriate. In some cases, human moderators might be used to judge whether something should be removed, but it would be all but impossible for moderators to examine every single thing posted on a social network.

Child pornography is illegal in the United States (and pretty much all of the world) and is not protected under the First Amendment. However, under Section 230 of the Communications Decency Act, social media companies are protected from liability when their users post something illegal. So they can’t be sued when their users post child abuse content.

Still, social media companies certainly don’t want to be associated with child abuse, so they do what they can to remove such content as quickly as possible. So far, their tactics are largely reactive, not proactive. Pre-screening content would likely be too burdensome and come with serious privacy concerns.

Recently, Apple announced a system that hashes image files uploaded to users’ iCloud storage to see if they match known child abuse images in a database of hashes compiled by child protection organizations. This would allow Apple to scan users’ storage for child abuse material without actually viewing any of the users’ files. Some privacy advocates still take issue with the tactic, and it’s not perfect, but it might be a compromise that other tech companies decide to adopt.
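
As a rough illustration of the general idea–matching file hashes against a database of known abuse-image hashes without ever viewing the files–here is a minimal sketch. Apple’s actual system relies on perceptual hashing (which tolerates resizing and re-encoding) and additional cryptographic machinery; the cryptographic hash, function names, and placeholder values below are ours, for illustration only.

```python
import hashlib
from pathlib import Path

# Placeholder set of known abuse-image hashes, as might be supplied by a clearinghouse.
# (The value below is just the SHA-256 of an empty file, used purely as a placeholder.)
KNOWN_HASHES: set[str] = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def file_hash(path: Path) -> str:
    """Hash a file's raw bytes without interpreting or displaying its contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def flag_matches(paths: list[Path]) -> list[Path]:
    """Return only the files whose hashes appear in the known-hash database."""
    return [p for p in paths if file_hash(p) in KNOWN_HASHES]
```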

Data researcher: Rebecca Moody