Article • November 14th, 2019
Justice • Innovation

Can AI help save our children from online sexual abuse?


This year marks the thirtieth anniversary of the United Nations Convention on the Rights of the Child (UNCRC). While much progress has been made to promote children's rights in the real world, safeguarding those rights online has been seriously neglected.

The exponential rise of the online sexual abuse of children over the last two decades is nothing short of a humanitarian crisis.

In 2018, there were 18.4 million reports of suspected online abuse of children - 6,000 times the number reported in 1998 - while in the last decade there has been a hundredfold increase in files containing child sexual abuse material (CSAM). A shocking and timely investigation by the New York Times showed that in a single year tech companies reported over 45 million online images and videos of children being sexually exploited. The Internet Watch Foundation found that 39% of CSAM online involves victims under the age of 10, and 43% depicts acts of extreme sexual violence.

The online sexual abuse of children is facilitated by technological progress, which provides low-cost platforms for peer-to-peer file sharing, fast livestreaming capabilities, and end-to-end encryption that allows perpetrators to remain anonymous.

Even more alarmingly, digital tech enables the creation of online communities where criminals can connect with peers and share “best practice” for their crimes. In such circumstances, it is unsurprising that 99% of CSAM goes undetected and only 1% of sex trafficking victims are rescued. 

The overlapping roles of law enforcement, policymakers, and big tech

Law enforcement agencies worldwide have been swamped by the sheer volume of data being circulated online, and are still largely relying on human analysts to classify CSAM. A paper recently published by the National Center for Missing and Exploited Children (NCMEC) described a system at “breaking point”, with reports of abusive images “exceeding the capabilities of law enforcement to take action”. Law enforcement agencies are finding themselves having to prioritise resources by allocating the bulk of them to younger and younger victims. 

Yet legislators and policymakers have failed to hold the big tech companies to account.

A story that grabbed media attention was that of Backpage, a classified ads website that became the largest marketplace for buying and selling sex, including the advertising and sale of minors. It took eight years, countless court cases, and hundreds of underage victims before federal law enforcement agencies were able to seize and shut down the platform in 2018. Backpage had previously defended itself from prosecution by citing section 230 of the Communications Decency Act, which holds that service providers are not liable for third-party content.

Broadcasters have strict rules and liability when it comes to the content they publish, and there are very real economic and legal ramifications for a utility provider that pollutes our ecosystem. Yet the same regulatory standards do not apply to the big tech firms, which act with impunity over the toxic content they release into our online ecosystem.

  • Facebook Messenger was responsible last year for nearly 12 million reports of CSAM, but Facebook's move towards full encryption of Messenger could provide yet another safe haven for perpetrators, allowing them to leave zero digital trace behind.
  • Apple does not scan its cloud storage, and it encrypts its messaging app, making detection virtually impossible.
  • Amazon does not even scan for such images in its cloud service.
  • Dropbox, Google and Microsoft's consumer products scan for illegal images only when someone shares them, not when they are uploaded.
  • Tumblr has often provided slow and equivocal responses to law enforcement agencies about its users' behaviour.

Might AI be able to help turn back the tide?

While technology is the source of the problem, it can also be the solution we are striving for. New forms of technology such as AI can pave the way to better prevention, detection and prosecution of these crimes. It can also reduce the burden on those engaged in reviewing the very disturbing CSAM images, especially those from the dark web.

The NCMEC paper suggests that future advancements in machine learning may be the only way to catch up with increasingly tech-savvy criminals. The United Nations Interregional Crime and Justice Research Institute is developing AI tools to prevent and detect crimes against vulnerable children, for example by identifying CSAM images. There are several AI techniques that can help combat CSAM online, of which computer vision, predictive AI, and natural language processing are especially relevant.

Computer vision tools include image classification software, such as Griffeye's, which scans images for characteristics such as age and nudity and assesses body motion to determine whether videos are explicit. Facial recognition, meanwhile, is used to identify known CSAM victims and offenders.

Predictive AI can build models for complex sex trafficking investigations that involve unstructured data. Thorn's Spotlight tool, for example, analyses web traffic and other data about sex ads on escort sites in order to identify potential victims, monitor potential sex trafficking networks, and generate leads for law enforcement.
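
To make the image-triage step concrete, here is a minimal sketch of how a two-class classifier might be used to rank images for human review. The model file and its output classes are hypothetical illustrations; production tools such as Griffeye combine many signals (age estimation, nudity detection, hash matching against known material) rather than a single classifier.

    # A minimal sketch, assuming a hypothetical two-class "triage" model
    # (benign vs. explicit) saved as triage_model.pt.
    import torch
    from torchvision import transforms
    from PIL import Image

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    def flag_image(path: str, model: torch.nn.Module, threshold: float = 0.9) -> bool:
        """Return True when the model's 'explicit' score exceeds the threshold,
        so human analysts review only high-confidence hits."""
        image = Image.open(path).convert("RGB")
        batch = preprocess(image).unsqueeze(0)          # shape [1, 3, 224, 224]
        with torch.no_grad():
            probs = torch.softmax(model(batch), dim=1)  # [p(benign), p(explicit)]
        return probs[0, 1].item() >= threshold

    # Usage (model loading is illustrative):
    # model = torch.load("triage_model.pt").eval()
    # needs_review = flag_image("photo.jpg", model)

The point of the threshold is operational: it lets an agency trade recall against the number of images its analysts must view, which is precisely the burden the NCMEC paper describes.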

By analysing text, natural language processing can alert investigators to suspicious or abusive language online. Language generation techniques can be used to create chatbots that mimic individuals' conversation and tone and can engage in online dialogues.
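
As a rough illustration of the text-analysis side, the sketch below trains a simple classifier to score new messages against previously labelled chat logs. The helper load_labelled_chats and the file name are hypothetical stand-ins for whatever labelled corpus an agency holds; real systems are considerably more sophisticated.

    # A minimal sketch, assuming a labelled corpus of past chat messages.
    # load_labelled_chats and "chats.csv" are hypothetical placeholders.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # labels: 1 = previously confirmed abusive/grooming language, 0 = benign
    messages, labels = load_labelled_chats("chats.csv")

    classifier = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), min_df=2),  # word and bigram features
        LogisticRegression(max_iter=1000),
    )
    classifier.fit(messages, labels)

    def risk_score(message: str) -> float:
        """Probability that a new message resembles known abusive language."""
        return classifier.predict_proba([message])[0, 1]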

Childsafe.ai has developed the world's first AI platform capable of observing millions of conversations and, using chatbots, intercepting those involving child abuse content, particularly when a transaction between a buyer and a seller of sex with minors is about to occur. However, tech firms like Childsafe.ai face a challenging environment, especially regarding the adoption of their solutions by law enforcement agencies and big tech platforms.
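
The interception pattern itself can be sketched in a few lines: score each message in a live conversation and hand off to a decoy chatbot or a human analyst once the risk crosses a threshold. Everything here is a hypothetical illustration; Childsafe.ai's actual pipeline is not public.

    # A hypothetical illustration of threshold-based interception.
    from dataclasses import dataclass, field
    from typing import Callable, List

    @dataclass
    class ConversationMonitor:
        score_fn: Callable[[str], float]   # e.g. the risk_score function sketched above
        threshold: float = 0.8
        history: List[float] = field(default_factory=list)

        def on_message(self, text: str) -> str:
            """Score an incoming message; one high-risk message is enough to escalate."""
            self.history.append(self.score_fn(text))
            if max(self.history) >= self.threshold:
                return "escalate"  # hand the thread to a decoy chatbot or an analyst
            return "observe"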

As long as agencies' metrics focus on detection and arrest rather than prevention - a far more comprehensive and meaningful indicator of success - preventive tools will struggle to gain traction, certainly in the US.

Stakeholders must work together to protect children

Tech firms need to bring in their AI know-how, allocating more resources to develop and expand AI solutions. Public authorities and private actors need to collaborate closely, not only on tools that improve detection but also on those that disrupt online abuse the moment it occurs. 

Among the most visible players using AI and other new technologies are nationally based child sexual abuse hotlines.

The NCMEC, for example, runs a CyberTipline that functions as a global clearinghouse for CSAM, making its reports available to US law enforcement and more than 100 law enforcement agencies worldwide.

Law enforcement agencies need to share intelligence on perpetrators and criminal activity trends, and civil society actors must offer their deep understanding of issues faced by victims. They should establish new forms of collaboration across sectors and borders to beat the massive global communities of perpetrators online.

There are also many regulatory and political challenges. Until we change the incentive schemes and legislation regulating technology, any solution to combat the online sexual abuse of children will be limited in scope. Stakeholders need to redefine legal frameworks and cooperation agreements to enable the secure use and sharing of data. Backed by supportive regulations and advanced AI software, they can work together to safeguard vulnerable children and defend their rights under the UNCRC.

 

Yalda Aoukar is the chairperson of the Bracket Foundation, which recently published a white paper entitled Artificial Intelligence: Combating Online Sexual Abuse of Children. The Bracket Foundation has also led workshops that aim to increase understanding of the challenges presented by abuse and to help align the different stakeholders in addressing the problem.

Written by:

Yalda Aoukar, Chairperson, Bracket Foundation