The mother of a girl whose photo was used in AI-generated naked images says hundreds of parents have told her their children are also victims.

Miriam Al Adib's daughter was one of several children from a Spanish town who had indecent images created from photos of them fully clothed. She says parents around the world have told her their children have also been targeted.

The Internet Watch Foundation said it was “not surprising” the practice was so widespread. The town of Almendralejo hit headlines in September after more than 20 girls, aged between 11 and 17, had AI-generated indecent images of them shared online without their knowledge.

Ms Al Adib was among a group of parents who set up a support group for those affected, which she said led to many other parents contacting her with their own concerns.

“Hundreds of people have written to me saying ‘how lucky you have been [to have support] because this same thing has happened to us, it happened to my daughter, or it happened to me, and I haven't had any support',” she told Wales Live.

“If any girl is affected, please tell your parents.” Ms Al Adib said the group had helped the village's mothers and fathers support each other and their children.

She added: “This helped many girls to come forward to also say what had happened to them. It is important to know, because many girls do not dare to talk about this with their parents.”

She said the combination of access to social networks, pornography and artificial intelligence was a “weapon of destruction”. The UK's first AI safety summit last week heard Home Secretary Suella Braverman commit to clamping down on AI-generated child sexual abuse material.

The UK government said: “AI-generated child sexual exploitation and abuse content is illegal, regardless of whether it depicts a real child or not.

“The Online Safety Act will require companies to take proactive action in tackling all forms of online child sexual abuse – including grooming, live-streaming, child sexual abuse material and prohibited images of children – or face huge fines.”

Susie Hargreaves, chief executive of the Internet Watch Foundation, said child sexual abuse material generated through AI needed to be addressed “urgently”. She said she was concerned there could be a “tsunami” of images created in the future.

“That's because it's not something that's about to happen. It is happening,” she said. In its October 2023 report, the foundation said more than 20,000 AI-generated images had been found in a single month on one forum used to share child sexual abuse material.

Comments on the forum included users congratulating creators on the realism of the pictures, and others saying they had created images from photos they had taken of children in a park.

Dr Tamasine Preece, who leads health and wellbeing at Bryntirion Comprehensive School in Bridgend, said social media and smartphones meant her role had changed “immeasurably” since she started teaching.

She said it was “absolutely vital schools play a pivotal role” in working with children on topics like the dangers of AI. Wales Live showed her an advert claiming an app could generate nude photos, which she described as “heart-breaking”.

“We as adults can bring them out into the foreground in a safe way, rather than these subjects being taboo and discussed amongst themselves, sharing misinformation,” she added.

The Lucy Faithfull Foundation, which works with offenders to tackle child sexual abuse, said it was bracing itself for an “explosion” of child sexual abuse material created by AI.
