UNICEF Urges Global Ban on AI Child Abuse Images

  • UNICEF is calling on governments worldwide to criminalize the creation of AI‑generated child sexual abuse material amid rising reports of manipulated images.
  • The agency warns that deepfake technologies are enabling large‑scale exploitation and urges developers to adopt stronger safeguards.
  • Several countries are already considering new laws as concerns grow over AI misuse.

UNICEF Warns of Rising AI‑Generated Abuse

UNICEF has urged countries to introduce laws that explicitly criminalize the creation of AI‑generated child sexual abuse content. The agency said it is increasingly alarmed by reports of artificial intelligence tools being used to sexualize children through fabricated images and deepfakes. Developers were encouraged to adopt safety‑by‑design principles and implement guardrails to prevent misuse of their models. Digital platforms were also asked to strengthen moderation systems and invest in detection technologies to stop the circulation of such material.

Deepfakes, which use AI to generate convincing images, videos or audio of real people, have become more accessible in recent years. UNICEF emphasized that the harm caused by these manipulations is immediate and severe, particularly for minors who may not even know their likeness has been exploited. The agency said children cannot wait for legislation to catch up with rapidly advancing technology. Its statement reflects growing international concern about the ease with which AI tools can be used to create abusive content.

Millions of Children Affected by Image Manipulation

UNICEF highlighted a troubling trend it described as the “nudification” of children, where AI tools strip or alter clothing in photos to produce sexualized deepfakes. At least 1.2 million children across 11 countries reported having their images manipulated in this way over the past year. Britain recently announced plans to make the creation of AI‑generated child sexual abuse images illegal, becoming the first country to take such a step. Other governments are expected to consider similar measures as awareness of the issue grows.

Concerns have intensified as chatbots and image‑generation tools become more powerful. Reuters previously found that xAI’s Grok chatbot produced sexualized images of women and minors even when users warned that the subjects had not consented. The company later restricted image‑editing features and blocked certain content based on user location. These incidents have fueled calls for clearer rules governing AI‑generated imagery involving minors.

Global Governance Efforts Expand

The United Nations is also taking steps to address broader risks associated with artificial intelligence. Secretary‑General António Guterres announced the formation of an Independent International Scientific Panel on AI, composed of experts from 37 countries. The panel will focus on developing shared guidelines to ensure AI technologies benefit society while minimizing harm. Its members bring expertise in areas such as machine learning, cybersecurity, public health and human rights.

Guterres said global cooperation is essential to build effective guardrails and support responsible innovation. The initiative reflects a growing recognition that AI governance requires coordinated international action. Meanwhile, xAI has continued adjusting its policies, limiting image‑generation features to paying subscribers and restricting certain outputs in jurisdictions where they may violate local laws. These developments illustrate how companies and governments are grappling with the rapid evolution of generative AI.

AI‑generated child abuse material has become a major focus for law‑enforcement agencies worldwide, with Europol and Interpol warning that deepfake tools could dramatically increase the scale of exploitation. Several child‑protection organizations have called for mandatory watermarking of AI‑generated images to help identify manipulated content. Researchers are also developing detection systems capable of spotting synthetic imagery, though these tools often struggle to keep pace with new generation techniques. As generative AI becomes more accessible, policymakers face mounting pressure to update legal frameworks to protect minors from emerging forms of digital harm.
