The Third Summit on Responsible AI in the Military Domain (REAIM)

Countries will come together this week in A Coruña, Spain, to hold the third Summit on Responsible AI in the Military Domain (REAIM). Taking place from Feb. 4 to 5, the summit follows the first such gathering in The Hague in February 2023 and a second REAIM Summit in Seoul in September 2024, bringing together representatives from government, industry, academia, and civil society. The third REAIM Summit will operate alongside the more formal and inclusive (from a State perspective) United Nations General Assembly process, which will hold its first informal meeting in Geneva in June.

Unlike at the previous summits, the United States will not attend REAIM this year, despite having been a key founder of the initiative under the Biden administration. The absence of the United States and Russia will lead some to question the third summit’s potential impact. For others, however, the summit could still be significant because it brings together a wide range of States, non-governmental organizations, academia, and industry.

The Summit’s Objectives

The REAIM Summit aims to ensure that awareness and understanding of responsible AI in the military domain remain a priority for policymakers across the globe. The summit will be structured around three interlinked clusters, starting with foundational understanding, moving to real-world applications, and ending with the evolving architecture of governance.

Cluster 1, entitled “Understanding AI and Ensuring Responsible AI in the Military Domain at the Technical Level,” aims to establish a shared understanding of what responsible AI must entail from a technical perspective. Discussions will focus on mechanisms to ensure systems are explainable, traceable, testable, and aligned with human control and judgment.

Cluster 2, on “Applications of AI in the Military Domain,” serves as the bridge between technical safeguards and governance. It will discuss how AI is being applied by armed forces for both combat and non-combat purposes, from autonomous weapons to smart logistics and from cyber operations to peacekeeping.

Cluster 3 is called “Governance of AI in the Military Domain in Motion.” It aims to build on the technical and practical lessons by discussing norm development, ethical frameworks, and transparency. The focus will be on how responsible governance can be operationalized through confidence-building measures, public engagement, education, and multilateral coordination. Importantly, it will also discuss synergies with other similar processes.

The summit will have public and closed sessions. Day 1 will be made up of plenary and parallel sessions, with each cluster allotted a timeslot alongside six parallel side events. Day 2 will have a High-Level Segment, where ministers will finalize the “Pathways to Action” outcome document; non-government participants will take part in further plenary sessions and side events. The summit will culminate in a high-level final session with statements and the announcement of the outcome document and of which States have endorsed it.

The Summit’s Underpinnings

This year’s REAIM Summit builds on the last summit’s outcome document, the Blueprint for Action (itself a follow-up to the first summit’s Call to Action). The document contains 20 actions, split across three areas: 1) the impact of AI on international peace and security; 2) implementing responsible AI in the military domain; and 3) envisaging future governance of AI in the military domain.

Over 60 countries endorsed the Blueprint for Action. Notably, China did not support the text, objecting mainly to language on maintaining human involvement in decisions concerning the employment of nuclear weapons.

Although the Blueprint for Action represented an important step forward in building awareness of the issues, it was light on how to operationalize the various actions. Organizers of the third REAIM Summit have recognized this inadequacy and are planning to devote the outcome document – the Pathways to Action – to the implementation of legal and policy principles.

The summit will also build upon the work of the Global Commission on Responsible Artificial Intelligence in the Military Domain (GC REAIM), which had a two-year mandate from the Dutch government. The Commission published a report entitled Responsible by Design in September 2025. The report seeks to translate the REAIM Summit declarations into actionable guidance and thus serves as a useful framing document for the third REAIM Summit and its stated aim of operationalizing the actions from the previous two summits. The report contains recommendations for the different sets of stakeholders (States, militaries, and industry) and advocates a layered approach to global governance of AI in the military domain.

The U.S. Pivot, a Paradigm Shift

The main difference between the upcoming summit and the two previous ones is that the United States will not actively participate. This is part of a larger policy change by the United States. The Artificial Intelligence Strategy for the Department of War, released on Jan. 9, mentions responsible AI only in a subtitle and says that “utopian idealism” should be replaced by “hard-nosed realism.” The strategy’s only stated constraint on the use of AI is that it be “any lawful use,” which is relatively unspecific.

Similarly, the Trump administration’s new National Defense Strategy calls the “rules-based international order” a “cloud-castle” abstraction. The document confirms President Donald Trump’s recent statement that he does not “need international law.” Taken together, these documents make clear that the current U.S. administration is interested neither in international norm-building nor in supporting others in implementing principles of responsible AI.

In October 2025, the United States voted against the new U.N. resolution on responsible AI in the military domain, which it had previously supported. In its explanation of vote, the Trump administration stated that the resolution “risks starting down the unwelcome and unhelpful path of creating a global governance regime designed to institute centralized control over a critical technology and the United States of America’s warfighters.” It went on to oppose the resolution on the grounds that determining the future of AI at the U.N. was a gross violation of national sovereignty and would stifle innovation.

The current U.S. position stands in stark contrast to its previous international leadership on responsible AI. The United States was actively involved in the first two REAIM Summits. It had also launched its own complementary initiative, the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, at the 2023 Hague Summit. That process saw a plenary meeting and the establishment of working groups. With the change of administration in 2025, however, the process stopped.

The change in the United States’ position is a paradigm shift for the REAIM process, as it means that the summit has lost a high-profile supporter. Other countries can drive the process. But unless China steps up, which seems rather unlikely, no major military power is strongly engaged in it. Achieving a global norm cascade has thus become even harder, a challenge further complicated by the fact that a large portion of the world’s AI technology comes from the United States.

Nevertheless, the REAIM Summit can fill the void on the operational issues of responsible AI left by the discontinuation of the U.S. Political Declaration. As such, the summit can continue to strengthen the international sharing of knowledge and practices for achieving responsible AI by States and other stakeholders. This can lead to norm clarification and internalization among stakeholders.

Related U.N. Processes

The REAIM Summit is intertwined with other normative processes, notably at the U.N. Immediately following the 2024 REAIM summit, the co-hosts brought their initiative to the General Assembly by introducing to its First Committee a new resolution entitled “Artificial intelligence in the military domain and its implications for international peace and security.” The resolution created a new agenda item and requested the U.N. Secretary-General (UNSG) to seek the views of States and civil society on the topic and produce a report. The resolution was supported by a large majority of States.

In the resulting report, published in August, the secretary-general recommended the “establishment of a dedicated and inclusive process to comprehensively tackle the issue of AI in the military domain and its implications for international peace and security.” The REAIM organizers followed up on this recommendation at the First Committee in October with a resolution mandating a three-day meeting in Geneva in 2026 for informal exchanges on the report. This meeting, scheduled for June, is not a new formal U.N. process, but it appears to be paving the way for an Open-Ended Working Group (OEWG).

Bringing the discussion of AI in the military domain to the General Assembly could be seen as a move away from the summit approach toward a more inclusive approach (from a State perspective) through the U.N. However, it seems that the REAIM sponsors want the summits to continue, with the U.N. track running in parallel.

The move to bring the responsible AI discussion into the General Assembly mirrors what has happened with the only formal intergovernmental process to date related to the military use of AI: the Group of Governmental Experts on Lethal Autonomous Weapon Systems (GGE on LAWS). At the General Assembly’s First Committee in 2023, Austria introduced a new resolution entitled “Lethal Autonomous Weapons Systems,” which requested the UNSG to seek the views of States and civil society. The resulting report came out in August 2024. However, this did not turn out to be a launch pad for a new General Assembly process. The two subsequent resolutions, adopted in 2024 and 2025, modestly called for two days of consultations in New York and for the GGE on LAWS to complete its mandate, which expires at the end of 2026.

What happens next to the GGE on LAWS is highly relevant to what happens to the REAIM process. A new mandate for the GGE to start negotiating a legally binding instrument would remove the need for a General Assembly process. If the next mandate of the GGE on LAWS falls short of a full negotiating mandate, however, the calls to pursue a treaty through the General Assembly will be loud. If States go down the General Assembly route, it is unlikely that they would also agree to another formal process on the broader question of AI in the military domain. The question then would be whether to merge the two processes or to have REAIM take a back seat to allow a focus on LAWS.

The trend of bringing topics related to AI in the military domain to the General Assembly continued in October, when Mexico introduced a new resolution entitled “Possible risks of the integration of artificial intelligence into command, control and communications systems of nuclear weapons.” The resolution called for the issue to be urgently addressed in disarmament fora. It was not supported by any of the States that possess nuclear weapons. The resolution did not create a new process, but it has put the issue on the General Assembly’s agenda.

As we anticipated in our article ahead of the Seoul summit, the General Assembly is now playing a greater role in this area. This shift is a response to criticisms that the processes outside the General Assembly do not sufficiently engage all States. Another reason for creating U.N. processes is that it is likely easier to adopt legally binding instruments there, without the consensus rule that applies at the GGE on LAWS.

Outlook

The fact that the summit track is continuing in parallel with the General Assembly process means that it will continue to play a significant role as a forum where industry, academia, and policy experts can discuss AI in the military domain with governments and militaries.

Summit participants will need to discuss how to make the best use of this forum and how to continue to foster norms and operationalize responsible AI in the military domain. Future REAIM summits could remain an annual focal point for stakeholders to check in with each other on the latest developments and share insights across professional fields and disciplines. More ambitiously, the summit process could expand its scope by creating a permanent, year-round multi-stakeholder dialogue, free from the restrictions of U.N. formality, preserving the informality that enables the annual summits to establish global norms.

The third summit will consolidate the considerable policy, scholarly, and operational progress that the REAIM process has achieved over recent years. With the U.S. change of position, the goal should now be to maintain momentum and set a strategic vision for how best to advance responsible AI at the global level.

FEATURED IMAGE: A hand points at a futuristic digital map (via Getty Images)
