Artificial Intelligence and Nuclear Weapons

IFSH Discussion Event on the Sidelines of the NPT PrepCom in Geneva

The Second Session of the Preparatory Committee for the 2026 Non-Proliferation Treaty Review Conference (NPT PrepCom) is taking place at the United Nations in Geneva from July 22 to August 2. Prompted by recent developments in AI, IFSH contributed to the conference's side events with a panel discussion on "AI and Nuclear Decision-Making" on the first day of the conference. Following welcoming remarks by Susanne Riegraf, Deputy Commissioner of the German Federal Government for Disarmament and Arms Control (Federal Foreign Office), three brief presentations offered a comprehensive overview of the topic.

Alice Saltini, Policy Fellow at the European Leadership Network (ELN), first outlined what is currently known about applications of AI in nuclear command, control, and communications (NC³) and the apparent plans of nuclear weapon states in this regard. Wilfred Wan, Director of the Weapons of Mass Destruction Programme at the Stockholm International Peace Research Institute (SIPRI), complemented this with his assessment of the security policy implications. Jana Baldus, Researcher at the Peace Research Institute Frankfurt (PRIF), concluded the presentations by discussing arms control options in various forums that could address the particular risks of AI in NC³.

Lydia Wachs, a doctoral candidate at Stockholm University, moderated the subsequent discussion with the audience. Anja Dahlmann and Theres Klose from the IFSH office in Berlin organized the event, which was attended by around 80 people.


Will algorithms soon decide on the use of nuclear weapons? Recent technological advances in artificial intelligence (AI) have raised concerns that AI-enabled systems could be integrated into certain nuclear operations. The potential use of AI in nuclear command, control, and communications (NC³) in particular, but also for remote reconnaissance, has become the subject of growing debate. Some argue that AI could support rapid decision-making in a crisis and give decision-makers a clearer picture of the situation. Others counter that AI may lead to hasty decisions based on unverifiable data.