Although multiple nuclear arms control treaties have collapsed in recent years, analogies drawn from them have returned as possible inspiration for managing risks stemming from advances in artificial intelligence (AI).

Some welcome nuclear arms control analogies as an important aid to understanding strategic competition in AI; others dismiss them as an irrelevant distraction that weakens the focus on new frameworks for managing AI’s unique and unprecedented aspects.

This debate is often framed too narrowly or too selectively: a wider examination of the geopolitics of arms control can identify both irrelevant and valuable parallels to inform global security governance for AI.

Great power leaders frequently equate AI advancement with arms racing, reasoning that powers which lag behind will soon see their great power status erode. This logic intensifies competition, risking a spiral into ever more unsafe AI practices.

The processes of global norm institutionalization that established the nuclear taboo could likewise stigmatize unethical AI practices. Emphasizing reciprocal risk reduction offers a pragmatic starting point for great power management of AI safety.

This research is part of the Reignite Multilateralism via Technology (REMIT) project, funded by the European Union’s Horizon Europe research and innovation programme under Grant Agreement No. 101094228.
