April 23rd 2024
- First-of-its-kind analysis shows three- to six-year-old children being manipulated into sexual activities, including penetrating themselves, bestiality, and sadism or degradation, via webcams and camera devices.
- ‘Opportunistic’ predators strike while children are online on phones and devices often used within the family home.
- IWF welcomes Ofcom consultation on new detection methods, but says companies should act immediately rather than wait for regulations to take effect.
- Call for children under six to be warned about online dangers.
- Record number of companies taking services from IWF to stop child sexual abuse images circulating online.
Children under six are being manipulated into “disturbing” acts of sexual abuse while parents think they are playing safely on household devices, as a new report highlights the need for more protections online.
New data from the Internet Watch Foundation (IWF) reveals that thousands of images and videos of three- to six-year-old children who have been groomed, coerced and tricked into sexually abusive acts are now being found on the open internet. The analysis, published today, shows for the first time* how three- to six-year-old children are being targeted by “opportunistic” internet predators who manipulate them into sexual activities.

The abuse, which analysts have seen ranging from sexual posing and masturbation to sadism, degradation, and even sexual acts with animals, is directed by perpetrators and often recorded without the child’s knowledge. This so-called “self-generated” child sexual abuse imagery, where the perpetrator is remote from the victim, is then shared far and wide on dedicated child sexual abuse websites.
The IWF, which is the UK’s front line against online child sexual abuse, welcomes Ofcom’s upcoming consultation on the use of automated content classifiers driven by artificial intelligence and machine learning techniques to detect illegal and harmful content, including previously undetected child sexual abuse material. However, it urges companies both in and out of scope of the Online Safety Act to introduce these measures immediately, rather than waiting for the regulations to take effect later this year.
Susie Hargreaves OBE, Chief Executive of the IWF, said: “The opportunistic criminals who want to manipulate your children into disturbing acts of sexual abuse are not a distant threat – they are trying to talk to them now on phones and devices you can find in any family home.
“If children under six are being targeted like this, we need to be having age-appropriate conversations, now, to make sure they know how to spot the dangers. A whole-society approach is needed.
“The Online Safety Act also needs to work because these online harms are getting worse. It is imperative that we all take this threat seriously and that we are all doing our bit to prevent the spread of new and previously unseen child sexual abuse imagery. We can’t afford to wait until these codes come in. The harms are happening to children now, and our response must be immediate.”
The IWF stands ready to work with companies to develop solutions that keep platforms safe, offering unparalleled tagged datasets and expertise that can help build new and effective tools.
Security Minister, Tom Tugendhat said: “This deeply disturbing report shows that predators are targeting younger and younger victims. My message to parents is to speak to your children about their use of social media, because the platforms you presume safe may pose a risk. It’s vital that technology companies implement stronger safeguards to prevent abuse, and work with us to bring predators to justice and keep our children safe.”
Secretary of State for Science, Innovation and Technology Michelle Donelan said: “As this shocking report shows, there is truly no time to lose to keep our children safe online. We are one of the first countries in the world to put in place laws which will protect children from illegal, harmful, and age-inappropriate content. This is a vital step in protecting children from sexual exploitation and abuse and achieving our shared goal of making the UK the safest place to be online.
“We’ve consistently been at the forefront of child protection and will continue to build on this work. But we are very clear: companies should not wait and should act now to protect children.”
Ian Critchley, NPCC lead for Child Protection, said: “The work of the IWF is crucial in the identification, removal and reporting of child sexual abuse material. This latest report shows how offenders are gaining access to even younger children, which is simply unimaginable for us all. But this isn’t just the responsibility of parents and carers – the biggest change must come from the tech companies and online platforms. Companies are still failing to protect children and continue far too often to put profit before child safety.
“I welcome the Online Safety Act, but it should not have required this legislation to change the negligent approach to child safety taken by too many companies.”
According to research published last week by Ofcom, about two in five parents of five- to seven-year-olds (42%) say they use social media sites and apps together with their child, while a third (32%) report that their child uses social media independently. The IWF says this only underlines the need for companies to be required to take action now.
Ms Hargreaves added: “We need a full society approach to make sure children are not groomed like this in the first place, but we also need to see measures in place to make sure this imagery cannot spread on the open web. We stand ready to help Ofcom and the technology sector find solutions.”
In 2023, nearly all of the webpages the IWF discovered (92% – or 254,070 URLs) contained self-generated images or videos where the victim had been coerced, blackmailed, or groomed into performing sexual acts over a webcam for an internet predator in a remote location.
Today’s first-of-its-kind analysis gives a startling insight into the three- to six-year-old children who were abused in this way. IWF analysts discovered 2,401 individual self-generated images and videos of children in this age category, which were hashed this year. Of these, 91% were of girls.
Analysts witnessed abuse happening in domestic locations including bathrooms and bedrooms, kitchens and dining rooms. They saw soft toys, games, books and bedding featuring cartoon characters appearing in the background of imagery depicting some of the most extreme kinds of sexual abuse.
The most extreme (Category A) forms of sexual abuse involving three- to six-year-olds who had been groomed or coerced in this way featured in 15% of these images and videos (356 images and videos).
The provision of IWF datasets and technical tools to companies in scope of regulation will be vital to the successful implementation of the Online Safety Act.
Every instance of this imagery found by IWF analysts was hashed – a process where the image is assigned a digital fingerprint – and added to the IWF’s Hash List, which is distributed to technology companies to prevent the upload, sharing and storage of this imagery on their services.
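For readers in the technology sector, a rough sketch of the general mechanism described above: an uploaded file is hashed and its digest checked against a blocklist before it is accepted. This is an illustrative example only, not the IWF’s actual tooling or hash format; the hash list, file handling and function names here are hypothetical, and real deployments use perceptual hashing (such as PhotoDNA or PDQ) supplied under agreement, rather than a plain cryptographic digest, so that resized or re-encoded copies still match.

```python
import hashlib
from pathlib import Path

# Hypothetical blocklist of known-bad digests. In practice this would be a
# hash list supplied by a trusted body such as the IWF, using perceptual
# hashes rather than SHA-256 values.
KNOWN_ABUSE_HASHES: set[str] = set()


def sha256_of_file(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks to bound memory use."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def should_block_upload(path: Path) -> bool:
    """Reject an upload whose digest appears on the hash list."""
    return sha256_of_file(path) in KNOWN_ABUSE_HASHES
```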
More than 200 companies from across the world currently partner with IWF to disrupt and stop the spread of child sexual abuse imagery online.
The IWF is discovering more child sexual abuse imagery online than ever before in its history. Overall, in 2023 the IWF found 275,652 webpages containing child sexual abuse – a record-breaking number. Each webpage can contain thousands of images or videos.
As well as highlighting the targeting of younger victims, today’s annual report also reveals that child sexual abuse imagery online is becoming increasingly extreme.
Today’s analysis shows a 22% increase in webpages containing Category A child sexual abuse material, rising from 51,369 URLs in 2022 to 62,652 in 2023. This makes 2023 the most extreme year on record.
The trend over the past three years has been one of rising Category A material: between 2021 and 2023, the IWF saw a 38% increase in Category A imagery.
* This study analysed individual images and videos – with granular data provided through the IWF’s bespoke new hashing tool IntelliGrade. This is the first time a whole year’s worth of data on self-generated imagery among three- to six-year-olds has been published – shedding light on a growing and disturbing problem.