The rising tide of child abuse on social media

In 2023, Meta flagged almost 72 million pieces of content under “child nudity and sexual exploitation”. Facebook reported 56.5 million pieces of content–a 44 percent decrease from 2022’s 101 million–while Instagram reported 15.4 million pieces–a 6 percent decrease from 2022’s 16.4 million.

Are we seeing a significant decline in online child sexual abuse material (CSAM)?

Not necessarily.

At first glance, Meta’s figures make for positive reading but, as our research has uncovered, this vast reduction may only highlight the growing discrepancies, underreporting, and lack of accountability when it comes to online child exploitation.

For example, the CyberTipline run by the National Center for Missing & Exploited Children (NCMEC) recorded an increase of roughly 4 million in the number of reports it received from 2022 to 2023. The majority of the reports the NCMEC receives come from electronic service providers (ESPs), such as Meta. Furthermore, the NCMEC received far fewer reports from Facebook in 2023 (just over 17.8 million compared to nearly 21.2 million in 2022) but far more from Instagram (11.4 million in 2023 compared to 5 million in 2022).

The lack of detailed information from ESPs on the reports they receive and how they deal with them makes it difficult to explain exactly what these increased reports to the NCMEC signify–particularly when some platforms report a decline in CSAM. Yes, it could indicate that ESPs are getting better at referring cases to the NCMEC while also seeing a reduction in the number of cases on their platforms. But with ESP reporting methods, categorization of CSAM, and figures often changing, it’s hard to tell. What is crucial, however, is what happens to these reports once the NCMEC receives them.

Our research takes a deep dive into the figures reported by ESPs and the NCMEC to try to find out what happens to all of these pieces of flagged content, why there are so many different figures, and how things are (or aren’t) changing year on year.

What are big tech companies proactively doing to combat the problem? How many of these cases are referred to law enforcement? And how successful are individual countries in investigating these cases?

Tech giant content removals for child exploitation

As we’ve already seen, Facebook’s child exploitation content removals dropped 44 percent from 2022 to 2023. It was a similar story for Instagram with a 6 percent drop (16.4 million pieces of content in 2022 compared to 15.4 million in 2023).
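
For reference, the year-on-year comparisons used throughout this section are simple percentage changes between two annual totals. Here’s a minimal sketch using the rounded Facebook and Instagram figures quoted above (the function name is our own):

```python
def percent_change(previous: float, current: float) -> float:
    """Year-on-year change as a percentage (negative means a decline)."""
    return (current - previous) / previous * 100

# Rounded totals quoted above, in millions of pieces of content removed
print(round(percent_change(101.0, 56.5)))  # Facebook: -44 (a 44 percent drop)
print(round(percent_change(16.4, 15.4)))   # Instagram: -6 (a 6 percent drop)
```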

TikTok also reported a 25 percent drop in removals from 2022 to 2023 (falling from 168.5 million pieces of flagged content to 125.6 million). However, TikTok changed how it reports CSAM from Q2 2023 onwards, so it’s hard to provide an exact comparison. That said, TikTok’s new reporting methods do give us an insight into the type of reports the platform receives.

Youth Safety & Well-Being reports are now broken into four categories: Nudity & Body Exposure, Sexually Suggestive Content, Youth Exploitation & Abuse, and Alcohol, Tobacco & Drugs. The last of these falls outside the scope of this study, but of the reports received across all four categories (from Q2 to Q4 of 2023), Nudity & Body Exposure accounts for the highest proportion (34%), followed by Alcohol, Tobacco & Drugs (28%), Sexually Suggestive Content (24%), and Youth Exploitation & Abuse (18%).

The number of reports rose significantly in Q3 and Q4 of 2023 (reaching 30.3 million and 31.1 million respectively, up from 22.2 million in Q2).

Elsewhere, Snapchat disclosed increases in CSAM enforcement across both account removals (rising by 41 percent, from 406,017 to 572,762) and content removals (up 25 percent, from 1.3 million to 1.6 million). Meanwhile, LinkedIn saw a sharp decrease in its CSAM removals (dropping by 78 percent to 433), despite having seen a staggering 757 percent increase in child exploitation content from 2021 to 2022 (from 226 pieces to 1,937).

YouTube was a mixed story. The number of accounts flagged for child safety rose by 12 percent (from 358,490 in 2022 to 401,211 in 2023), and the amount of content flagged increased by 79 percent (from 6.3 million to 11.3 million). But flagged comments saw a significant decrease of 44 percent, dropping from 396 million to 221 million.

Please note: in this update, Twitter has been removed from our comparisons due to inconsistent reporting that we are unable to compare with previous years.

What does this mean?

Even though some reporting figures decreased, the majority of platforms continue to receive an onslaught of reports of child sexual abuse material. The dip in numbers might seem positive, but one also has to ask whether it reflects less CSAM in circulation or simply less CSAM being flagged and reported by each platform.

YouTube has already provided us with data for Q1 2024, which suggests 2024 will be another astronomical year for CSAM content. The number of flagged accounts and content in Q1 2024 increased by 36 percent and 62 percent respectively, while flagged comments decreased by 29 percent.

What happens to the content that’s removed and the people who’ve posted it?

US companies are required by law to report CSAM to the NCMEC or risk a fine. In May 2024, President Biden signed the REPORT Act, which imposed higher fines on online service providers that fail to report online child sexual abuse. First-time offenders can now be fined up to $600,000, while repeat offenders face penalties of up to $1 million.

How thorough a company is at referring cases to the NCMEC, however, depends on its systems and protocols for detecting such material.

For example, a 2021 report found that Facebook would not report ‘images of girls with bare breasts to the NCMEC. Nor will images of young children in sexually suggestive clothing.’ Interviewees (Facebook content moderators) were also quoted describing a new “bumping up” policy–something none of them agreed with. The policy comes into play when a content moderator cannot determine whether the subject of a flagged CSAM photo is a minor (category B) or an adult (category C). When this happens, moderators have been told to assume the subject is an adult, which means fewer images are reported to the NCMEC.

The report also critiques Facebook’s use of the Tanner scale–a practice that, since our last study, remains unchanged. The scale was developed in the mid-1900s, when researchers led by James Tanner studied a large number of children through adolescence, photographing them at various points to define five stages of puberty. Facebook’s content moderators use this chart to categorize subjects according to the stage of puberty they appear to have reached.

However, as Krishna points out, ‘CSAM has to do with a child’s age, not pubertal stage. Two children of the same age but in different stages of puberty should not be treated unequally in the CSAM context, but relying on the Tanner scale means that more physically developed children are less likely to be identified as victims of sexual abuse… Even worse, the scale likely has a racial and gender bias. Because the subjects of the study were mostly white children, the Tanner scale does not account for differences in bodily development across race—nor does it attempt to.’

This suggests that the vast volume of content flagged by Facebook may only scratch the surface of the problem. But how many of these pieces of content are being referred to the NCMEC, and what happens after that?

As we have already noted, Facebook referred 17,838,422 pieces of content to the NCMEC in 2023, around 31.6 percent of the total flagged on the platform. Instagram referred 11,430,007, which equates to 74.2 percent of its 2023 total.

Where are all the other reports going?

Facebook suggests all of the child exploitation posts it flags are referred to the NCMEC, so the shortfall could consist of duplicated pieces of content or accounts. Facebook did release statistics in 2021 suggesting that 90 percent of the content it reported to the NCMEC was “the same as or visually similar to previously reported content,” and that of its latest 5.2 million referrals to the NCMEC, 5.1 million were of “shared or re-shared photos and videos that contained CSAM.”

The gap between Facebook’s NCMEC referrals and its content removals isn’t as wide as it is for some of the other top-reporting ESPs, however.

In 2023, YouTube saw almost 11.3 million pieces of content flagged as CSAM. Google submitted a total of 478,580 reports to the NCMEC (solely from YouTube) in the same reporting period–meaning only around 4.25 percent of the flagged content was referred.

Likewise, TikTok flagged just under 111.5 million pieces of CSAM content in 2023 but referred only 590,376 to the NCMEC–0.53 percent of its flagged content.
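
As a rough illustration of how these referral percentages are derived from the figures above (the helper function and the rounded platform totals are our own):

```python
def referral_rate(referred_to_ncmec: int, flagged_on_platform: int) -> float:
    """Percentage of a platform's flagged content that was referred to the NCMEC."""
    return referred_to_ncmec / flagged_on_platform * 100

print(round(referral_rate(478_580, 11_300_000), 2))     # YouTube: ~4.24
print(round(referral_rate(590_376, 111_500_000), 2))    # TikTok: ~0.53
print(round(referral_rate(17_838_422, 56_500_000), 1))  # Facebook: ~31.6
```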

Even though ESPs are mandated to report CSAM to the NCMEC, this raises the question of how they determine which content they should and shouldn’t refer to the organization.

Equally, ESPs don’t have to notify or send data to prosecutors or police in the country where the material or offender originates. This places an enormous amount of pressure on the NCMEC, a non-profit organization that had just 84 analysts in 2023.

Which countries are hotspots for child exploitation material?

Once the NCMEC receives reports from ESPs, it uses the geolocation data provided by the ESPs to refer them to the relevant law enforcement agency in each country. The NCMEC’s referral to law enforcement is described as “voluntary”: once the NCMEC refers a case to an agency, it is up to that agency whether to investigate, and there is no specific requirement for it to do so. And while the NCMEC may know whether and when a law enforcement agency looks into the data it sends, it doesn’t know any more than that.

In 2023, the number of reports referred to these agencies increased by 13 percent, climbing to 36.2 million from 32 million in 2022. Interestingly, some countries saw vast increases in the number of reports received, while others noted a significant decline.

For example, of the countries we explore in more detail below, India saw the biggest increase in reports (rising by over 57 percent), followed by France (37%), Tunisia (35%), Germany (26%), and Japan (20%). In contrast, Australia (-59%), Poland (-54%), Vietnam (-44%), the United Kingdom (-44%), and Canada (-37%) saw the biggest decreases.

The top ten countries for referred reports from NCMEC are:

  1. India: 8,923,738 reports–a 57 percent increase on 2022’s 5.7m referrals
  2. The Philippines: 2,740,905 reports–a 6 percent increase on 2022’s 2.6m referrals
  3. Bangladesh: 2,491,368 reports–a 16 percent increase on 2022’s 2.1m referrals
  4. Indonesia: 1,925,549 reports–a 2.5 percent increase on 2022’s 1.88m referrals
  5. Pakistan: 1,924,739 reports–a 7 percent decrease on 2022’s 2.1m referrals
  6. United States: 1,132,270 reports–a 28 percent decrease on 2022’s 1.6m referrals
  7. Saudi Arabia: 833,909 reports–a 38 percent increase on 2022’s 602,745 referrals
  8. Turkey: 817,503 reports–a 196 percent increase on 2022’s 276,331 referrals
  9. Algeria: 762,754 reports–a 4 percent increase on 2022’s 731,167 referrals
  10. Iraq: 749,746 reports–a 17 percent decrease on 2022’s 905,883 referrals

At least three of these countries (Iraq, Turkey, and Bangladesh) have inadequate legislation and/or procedures to combat online child pornography. For example, the Turkish penal code doesn’t contain the concept of child pornography and instead includes it under crimes of obscenity. Similarly, in Iraq, laws do not prohibit child pornography. In Bangladesh, the Digital Security Act is described as failing to contain any provisions on the online abuse of children.

Even when cybercrimes of this nature are investigated, those involved suggest law enforcement agencies lack the infrastructure and expertise to investigate them adequately.

Furthermore, in the countries where investigations are initiated and recorded by police authorities, the figures are a fraction of the number of cases referred by the NCMEC–and that’s before considering that the police won’t be dealing solely with NCMEC cases, but also with cases originating within the country and/or referred by other organizations.

How much content is successfully dealt with by local law enforcement agencies?

Due to a lack of information on how many of the cases referred by the NCMEC are actually looked into, it is difficult to determine exactly how many are successfully investigated and lead to convictions. However, our researchers looked through the top 50 countries (by number of referred cases in 2023) to see how many cases of online child exploitation were investigated by the relevant authorities in each country.

While we can’t accurately state what percentage of NCMEC cases were investigated, the figures give us some insight into how many cases each country investigated per 100,000 cases referred by the NCMEC.
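
For clarity, this rate is simply the number of recorded investigations scaled to 100,000 NCMEC referrals. A minimal sketch with made-up example numbers (the helper name is our own):

```python
def investigations_per_100k(cases_investigated: int, ncmec_referrals: int) -> float:
    """Cases investigated by local law enforcement per 100,000 cases referred by the NCMEC."""
    return cases_investigated / ncmec_referrals * 100_000

# Hypothetical country: 4,000 recorded investigations against 150,000 NCMEC referrals
print(round(investigations_per_100k(4_000, 150_000)))  # ~2667 per 100,000
```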

The table below describes the figures we were able to find. In some cases, only figures for 2021 or 2022 were available; where this is the case, the comparison is made against NCMEC referrals for the same year.

The above highlights the dramatically low number of online child abuse cases investigated per year, particularly in Asian countries. Across all of the countries, just 2,651 cases of online child exploitation were investigated per 100,000 reports submitted by the NCMEC.

In this update, Germany once again appeared to be the most proactive at investigating online child exploitation, with 80,719 cases investigated by local law enforcement per 100,000 cases referred by the NCMEC. Germany’s rate did decline this year; however, as noted above, Germany also received a far higher number of reports from the NCMEC.

Puerto Rico had the next highest rate of investigation (33,837 per 100,000).

France, which likewise received a far higher number of NCMEC reports, saw a significant decline in its rate of investigations to NCMEC reports (dropping from 20,955 per 100,000 in our previous study to just 4,129 per 100,000 in this one).

The United States, which received fewer NCMEC reports in 2023 than in 2022, also saw its investigation rate decline, falling from 19,424 per 100,000 cases in 2022 to 16,488 per 100,000 cases in 2023. Similarly, the IC3 investigated only 2,361 reports of online child exploitation this year compared to 2,587 last year (across all states and US territories). The Internet Crimes Against Children task force investigated 184,700 cases in 2023.

Are ESPs “passing the buck” when it comes to online child exploitation?

What the above demonstrates is that a significant number of reports from the NCMEC appear to be disappearing into an unregulated, unlegislated, and uninvestigated black hole in many areas. While the likes of Germany may be actively looking into the majority of cases, in some of the worst-hit countries for this content, the systems are unable to cope with the sheer volume of reports they receive.

ESPs may do their bit by referring cases to the NCMEC, and the NCMEC may play its role in passing these cases to the relevant law enforcement agency. But after that, many of these cases simply slip through the net, with no authority responsible for ensuring the reports are adequately investigated. Plus, as we saw with Facebook’s content moderation policy, there are serious question marks over how well this material is spotted and reported in the first place.

If the NCMEC sees the same increase it did from 2022 to 2023 (around 13 percent), its figures could exceed 40 million reports in 2024 (36.2 million × 1.13 ≈ 41 million).

Whatever the figures may be for 2024, one thing’s for sure–many of the specialist organizations and law enforcement agencies around the world are simply overwhelmed by the number of online child exploitation reports they’re receiving. And that does leave us with one significant question–shouldn’t ESPs be doing more to cut off this content at the source?

Methodology and limitations

We used content removal figures from each of the following social networks’ latest available transparency reports:

  • Facebook – from Q3 2018 to Q1 2024
  • Instagram – from Q2 2019 to Q1 2024
  • YouTube – from Q3 2018 to Q1 2024
  • Twitter – from July 2018 to December 2021
  • TikTok – from July 2019 to Q4 2023
  • Reddit – from Q1 2018 to Q4 2023
  • Snapchat – from July 2019 to December 2023
  • Discord – from July 2020 to Q4 2023
  • LinkedIn – from January 2019 to December 2023

This year we added two new transparency reports:

  • Twitch – from January to December 2023
  • Pinterest – from January 2021 to June 2023

Some reports do not cover an entire year, as reporting might have started or ended mid-year. In these cases, the overall figures for the year are based only on the time frame during which child abuse was recognized as a reporting category, so as to provide a fair percentage comparison.

Snapchat previously only covered account removals, but for H1 and H2 of 2023 it began providing figures for individual pieces of content removed.

Twitter ended its traditional transparency reporting in H1 2022 and introduced DSA Transparency Reports at the end of 2023. It also provides some figures via Transparency Reports in India. However, due to the lack of consistency across these reports, we have excluded Twitter from this update.

TikTok changed its reporting category from “Minor Safety” to “Youth Exploitation & Abuse” from Q2 2023 onwards, which may have impacted figures.

Discord introduced a new category, “Platform Manipulation,” from Q3 2023 onwards; this was excluded from our figures to keep results comparable.

Pinterest and Twitch provided sufficient data for this study, while Tumblr still does not release transparency data.

Transparency reports really only became a trend among social media companies in 2018, so we don’t have a ton of historical data to go by. Furthermore, changes in content moderation policies might skew the numbers.

In the United States, the IC3 tracks reports across the US, including territories such as Puerto Rico, Guam, and the Minor Outlying Islands. This year, we have only included the 50 states and Washington D.C. in US figures; other territories are reported individually.

What tech companies are doing about it

Content removals are typically the result of either an automated filter or users of a service flagging content they think is inappropriate. In some cases, human moderators might be used to judge whether something should be removed, but it would be all but impossible for moderators to examine every single thing posted on a social network.

Child sexual abuse imagery is illegal in the United States (and pretty much all of the world) and is not protected under the First Amendment. However, under Section 230 of the Communications Decency Act, social media companies are largely shielded from civil liability when their users post something illegal, so they generally can’t be sued when their users post child abuse content (though Section 230 does not shield platforms from federal criminal law, including their reporting obligations). The law continues to face scrutiny and sees numerous amendments proposed each year, but Big Tech companies and free speech advocates lobby against them each time they’re introduced.

Still, social media companies certainly don’t want to be associated with child abuse, so they do what they can to remove such content as quickly as possible. So far, their tactics are largely reactive, not proactive. Pre-screening content would likely be too burdensome and come with serious privacy concerns.

In 2021, Apple announced plans to hash image files uploaded to users’ iCloud storage to see if they match those in a database of known child abuse images. This would allow Apple to scan users’ storage for CSAM without actually viewing any of the users’ files. Privacy advocates took issue with the tactic, and Apple ultimately shelved the plan in late 2022, but hash matching of this kind remains a compromise that other tech companies have adopted.
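
To illustrate the general idea behind hash matching: the service computes a fingerprint of each file and compares it against a list of fingerprints of known abusive images supplied by a clearinghouse, without ever inspecting the image itself. Below is a minimal sketch using a plain SHA-256 digest (the function names and the empty hash list are our own placeholders); real systems such as Microsoft’s PhotoDNA or Apple’s proposed NeuralHash use perceptual hashes that still match after resizing or re-encoding.

```python
import hashlib
from pathlib import Path

# Placeholder for hex-encoded digests of known abusive images, as distributed
# by a clearinghouse such as the NCMEC. Left empty here for illustration.
KNOWN_HASHES: set[str] = set()

def file_digest(path: Path) -> str:
    """Return the SHA-256 digest of a file without interpreting its contents."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def matches_known_material(path: Path) -> bool:
    """True if the file's digest appears in the known-hash list."""
    return file_digest(path) in KNOWN_HASHES
```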

Facebook was criticized for its rollout of default end-to-end encryption in December 2023, which many critics have argued will let child abusers “hide in the dark.” The National Crime Agency estimates that encrypting messages will lead to a sharp reduction in abuse referrals to the NCMEC. Chief Executive of the Internet Watch Foundation (IWF), Susie Hargreaves OBE, also said: “This catastrophic decision to encrypt messaging services, without demonstrating how protection for children won’t be weakened, will lead to at least 21 million reports of child sexual abuse going undetected. Meta is effectively rolling out the welcome mat for paedophiles.” However, privacy advocates (including Comparitech) have argued in favor of end-to-end encrypted messaging as a necessity for user privacy and freedom of speech.

For Q1 of 2024, Facebook reported 15.2 million instances of CSAM. This is a decrease from the 18.1 million reports in Q4 of 2023 and 18.7 million in Q3 of 2023, which could suggest that less content is being flagged as CSAM because of message encryption–but there isn’t enough evidence to confirm this, as Facebook was already reporting an overall reduction in CSAM in 2023, before default end-to-end encryption was introduced.

The impact artificial intelligence may have on CSAM reporting

AI-generated child sexual abuse material (AIG-CSAM) also presents a new challenge to ESPs. For the first time, NCMEC provided figures on AIG-CSAM, receiving 4,700 reports via its CyberTipline in 2023. The Internet Watch Foundation (IWF) also discovered 20,254 AI-generated images posted to one dark web CSAM forum in just one month.

Nevertheless, while AI has the potential to intensify the problem of CSAM, it may also be part of the solution. For example, Google and Meta are both looking toward AI as a tool to find and combat CSAM, alongside tools developed by the likes of Thorn. These utilize machine learning algorithms to scan vast amounts of data, identify potentially harmful content, and alert authorities, often far more efficiently than manual methods.
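
The details of these proprietary classifiers aren’t public, but the general pattern is straightforward: a model assigns each uploaded item a risk score, high-scoring items are queued for human review, and confirmed material is removed, hashed, and reported. The sketch below is purely illustrative: the classifier, threshold, and field names are hypothetical stand-ins, not any vendor’s actual system.

```python
from dataclasses import dataclass
from typing import Callable

REVIEW_THRESHOLD = 0.7  # hypothetical score above which an item goes to human review

@dataclass
class UploadedItem:
    item_id: str
    score: float = 0.0          # risk score assigned by a classifier (0.0 to 1.0)
    needs_review: bool = False

def triage(items: list[UploadedItem],
           classify: Callable[[UploadedItem], float]) -> list[UploadedItem]:
    """Score each item and queue likely abusive content for human review.

    `classify` stands in for a trained model's scoring function; in a real
    pipeline, items confirmed by reviewers would then be removed, added to
    shared hash lists, and reported to the NCMEC.
    """
    queue = []
    for item in items:
        item.score = classify(item)
        if item.score >= REVIEW_THRESHOLD:
            item.needs_review = True
            queue.append(item)
    return queue
```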

Data researcher: Charlotte Bond