
Unicef Urges Global Action Against AI-Generated Child Sexual Imagery


The United Nations Children’s Fund (UNICEF) has expressed growing alarm over reports of AI-generated sexualized images involving children, urging governments and the AI industry to act to prevent such content. In a statement, UNICEF said that “the harm from deepfake abuse is real and urgent,” adding that children cannot wait for legal frameworks to catch up.

Deepfakes, meaning images, videos, or audio generated or manipulated with artificial intelligence to look real, are increasingly being used to produce sexualized material involving children through techniques such as ‘nudification,’ in which AI tools strip or alter clothing in photos to create fabricated nude or sexually explicit images. UNICEF described the situation as unprecedented, saying it presents significant new challenges for prevention, education, legal frameworks, and support services for children.

The organization noted that current prevention efforts are insufficient given the potential misuse of AI-powered image and video generation tools to produce sexualized material. A recent large-scale study conducted by UNICEF alongside the child rights group ECPAT and Interpol found that, across eleven countries, at least 1.2 million children reported having their images manipulated into sexually explicit deepfakes using AI tools in the past year.

In some of these countries, this equates to roughly one in every twenty-five children, or about one child per classroom. The research also showed that up to two-thirds of children in these countries expressed concern about the possibility of AI-generated fake sexual images or videos.

While UNICEF welcomed the efforts of AI developers that are implementing safety-by-design approaches and robust guardrails against misuse, it stressed that many AI models still lack adequate safeguards. The risks are compounded when generative AI tools are embedded directly into social media platforms, where manipulated images can spread rapidly.

UNICEF has called for urgent action to address the escalating threat of AI-generated child sexual abuse material (CSAM). It urged all governments to expand their definitions of CSAM to include AI-generated content and to criminalize its creation, procurement, possession, and distribution. It also called on AI developers to build safety-by-design approaches and robust guardrails against misuse into their products.

The organization emphasized that digital companies must prevent the circulation of AI-generated child sexual abuse material—not merely remove it after the fact—and invest in detection technologies to ensure such material can be removed immediately upon reporting by victims or their representatives.

News Desk
