US, China Decline Military AI Pledge

Pictured: Dutch Defence Minister Ruben Brekelmans
  • A global summit on military AI governance ended with only 35 of 85 participating nations signing a non‑binding declaration on responsible AI use.
  • The United States and China chose not to endorse the document, reflecting geopolitical tensions and differing strategic priorities.
  • The agreement highlights growing concern that rapid advances in AI could outpace safeguards meant to prevent accidents or escalation.

Major Powers Withhold Support for AI Governance

A military AI summit held in A Coruña, Spain, concluded with fewer than half of the attending countries signing a declaration on responsible use of artificial intelligence in warfare. The United States and China, the world’s two most influential military powers, opted not to join the agreement. Delegates said strained transatlantic relations and uncertainty about future geopolitical alignments made some governments hesitant to commit to joint principles. The outcome underscores how strategic competition complicates efforts to establish global norms for emerging technologies.

The declaration reflects growing concern that rapid advances in AI could outpace existing rules governing military deployment. Officials worry that autonomous systems could increase the risk of miscalculation or unintended escalation. Dutch Defence Minister Ruben Brekelmans (pictured) described the situation as a “prisoner’s dilemma,” with governments torn between adopting responsible limits and maintaining strategic advantages. He noted that rapid progress by Russia and China adds urgency to both innovation and oversight.

Only 35 of the 85 participating nations endorsed the document. Signatories included Canada, Germany, France, the United Kingdom, the Netherlands, South Korea and Ukraine. Their support signals a willingness to pursue shared standards even without backing from the largest military actors.

What the Declaration Calls For

The declaration outlines 20 principles intended to guide the development and deployment of military AI systems. These include affirming human responsibility over AI‑enabled weapons and ensuring clear chains of command and control. Governments are encouraged to share information about national oversight mechanisms when security considerations allow. The document also emphasizes the importance of risk assessments, rigorous testing and training for personnel operating AI‑driven capabilities.

This year’s agreement is more detailed than the “blueprint for action” endorsed at last year’s summit in Seoul, which built on the “call to action” from the inaugural meeting in The Hague. Those earlier documents, supported by around 60 nations including the United States, were more general and carried no legal weight. Despite remaining non‑binding, the new declaration introduces more concrete expectations. Some countries were reluctant to endorse it for precisely that reason, according to Yasmin Afina of the U.N. Institute for Disarmament Research.

The framework aims to reduce the likelihood of accidents involving autonomous systems. It also seeks to promote transparency among nations deploying AI in military contexts. Supporters argue that even voluntary commitments can help shape future norms. Critics counter that without participation from major powers, the impact may be limited.

Geopolitical Tensions Shape Participation

The absence of the United States and China reflects broader geopolitical dynamics. Both countries are investing heavily in military AI and may be wary of constraints that could limit their strategic flexibility. Their decision not to sign does not necessarily indicate opposition to responsible AI use, but it highlights the difficulty of achieving consensus in a competitive environment. Some delegates suggested that upcoming political developments could further influence transatlantic cooperation.

Governments worldwide are grappling with the rapid evolution of AI technologies. Many fear that autonomous systems could be deployed before adequate safeguards are in place. The summit’s organizers hope that continued dialogue will encourage more nations to adopt shared principles over time. They argue that early agreements, even among a subset of countries, can lay the groundwork for broader frameworks.

The summit also highlighted the role of smaller and mid‑sized nations in shaping AI governance. Countries such as the Netherlands and South Korea have been vocal advocates for responsible military AI standards. Their participation helps maintain momentum even when major powers remain cautious. The organizers plan to continue refining the principles at future meetings.

The Responsible AI in the Military Domain (REAIM) initiative began in 2023 and has grown into one of the leading international forums on military AI governance. While participation varies, the discussions have helped clarify areas where nations agree — such as the need for human oversight — and where significant gaps remain. Future summits may explore legally binding options, though experts say political alignment will be essential before such measures can succeed.


 
