Canadian Centre for Child Protection warns of “growing wave” of online abuse material since the launch of public AI tools
Why It Matters
The Canadian Centre for Child Protection has been advocating for reforms to the country’s Criminal Code that would better protect victims in cases of sexually explicit deepfakes. At the same time, as the number of exploitative images on the internet continues to climb, the organization is considering ways that it can use AI to support its research.

Note: Future of Good is interested in exploring the impact of AI-generated or doctored images on non-profits, charities, and frontline staff. That could include being targeted with false images and dealing with misinformation, among other things. If this is something you are interested in speaking about, please reach out to sharlene@futureofgood.co.
A web crawler designed to track and report child sexual abuse material (CSAM) on the internet is detecting a sharp rise in cases.
Project Arachnid, a tool developed by the Canadian Centre for Child Protection (C3P) and deployed globally, has seen a “growing wave” of such images, said Jacques Marcoux, director of research and analytics at the C3P.
In particular, the C3P team observed a sharp rise “the moment that all of these AI tools went public, free and open-sourced about three years ago,” Marcoux added.
The web crawler works by matching images it encounters on the internet against an existing database of known, vetted CSAM imagery using a unique digital “fingerprint,” Marcoux said.
The problem with AI-generated imagery, however, is that it cannot be matched with anything in C3P’s database because each image is new.
“If an image of child sexual abuse imagery was recently created and uploaded online, and we find it and extract its fingerprint, we won’t have a match, because that corresponding fingerprint is not in our database,” Marcoux said.
“So we refer to images, therefore, as known images and unknown images.” The latter, he added, if suspected to be CSAM, are added to a queue for manual review by a person, as the web crawler itself is not designed to distinguish between real and AI-generated CSAM images.
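The known/unknown triage Marcoux describes can be sketched in a few lines. This is an illustrative simplification only: Project Arachnid’s actual fingerprinting is not public here, and real systems use perceptual hashes that survive re-encoding and cropping, whereas this sketch uses a cryptographic hash purely as a stand-in. All function and variable names below are hypothetical.

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    """Derive a stable 'fingerprint' for an image.

    Illustrative only: a cryptographic hash stands in for the
    perceptual hashing a real crawler would use.
    """
    return hashlib.sha256(image_bytes).hexdigest()

def triage(image_bytes: bytes, known: set[str], review_queue: list[str]) -> str:
    """Classify an image as 'known' (matched the vetted database)
    or 'unknown' (queued for manual human review)."""
    fp = fingerprint(image_bytes)
    if fp in known:
        return "known"
    # Unknown images cannot be matched automatically, so they are
    # routed to a queue for review by a person, as described above.
    review_queue.append(fp)
    return "unknown"
```

The key point the sketch captures is that matching only works for images already in the database; a newly generated image always falls into the "unknown" path.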
Canadian legislation must catch up to deepfakes
Project Arachnid “works with images that are known,” Marcoux said. “The problem with AI is that AI creates images out of the blue that previously, up until that moment, did not exist.”
He added that there are different categories of AI-generated images, including those generated on public platforms and those produced when individuals train their own generative AI models to output CSAM in the likeness of a specific person.
Several researchers have highlighted differences in girls’ and boys’ experiences of being online.
In January, C3P released data that showed an “alarming” spike in extreme online violence targeting girls. Research published by Internet Matters found that girls were more likely to be contacted by strangers on the internet than boys, and more likely to have received abusive or upsetting messages from people they know.
At C3P, an AI-generated image of CSAM is treated with the same level of severity and urgency as a real image, Marcoux said.
The organization has been advocating for reform to the country’s Criminal Code to cover the distribution of sexually explicit deepfakes as an offence.
Bill C-16 would mandate that any organization providing an internet service, including online and social media platforms, must report any CSAM or exploitation material to law enforcement in Canada.
“We are very concerned by what we see with regards to Grok, and the cavalier approach that a lot of tech companies or tech company leaders have shown with regards to how they roll out this technology, and very limited desire to have safeguards in place,” Marcoux said.
“I think the crazed, well-funded race to just capture market share is probably one of the main causes of this, and not having had any legal framework in place in advance is a huge problem.”
C3P runs a national tip line, Cybertip.ca, where members of the public and victims can report online sexual exploitation of children. The organization also offers guidance and steps victims can take to regain control if an abusive or exploitative photograph of them has been published online.