RAND wargames to see if AI could wipe out humanity with pathogens, geoengineering & nukes

May 6, 2025

The RAND Corporation wargames scenarios to see if AI could contribute to human extinction by facilitating nuclear war, creating and deploying pathogens, or conducting malicious geoengineering.

According to three simulations conducted in the new RAND report called “On the Extinction Risk from Artificial Intelligence,” AI is currently unlikely to wipe out humanity on its own; however, it could still cause considerable devastation if it were programmed to do so, if it were given enough access to critical systems, and if it were granted decisionmaking powers.

“The capabilities of artificial intelligence (AI) have accelerated to the point at which some experts are advocating that it be taken seriously as a credible threat to human existence”

RAND, On the Extinction Risk from Artificial Intelligence, May 2025

In arriving at their conclusions, the RAND authors wargamed three scenarios in which AI could pose an extinction-level threat to humanity and examined what capabilities it would require to get there, many of which would involve direct human intervention, naivete, or outright stupidity.

  1. Nuclear War: We explored various ways that AI might lead to the use of nuclear weapons, and we could find no plausible way for AI to overcome existing constraints to cause extinction.
  2. Biothreats and Pathogens: An extinction threat would require that AI be capable of acquiring, designing, processing, weaponizing, and deploying pathogens to initiate a pandemic. AI would then need to take follow-up actions to reach isolated groups and exterminate surviving human communities.
  3. Malicious Geoengineering: Geoengineering could threaten extinction through the mass manufacturing of gases with extreme global warming potential, thereby heating the earth to uninhabitable temperatures.

To create a true extinction threat to humanity in any of the three scenarios, the AI would require:

  1. The objective to cause human extinction.
  2. The ability to integrate with key cyber-physical systems.
  3. The ability to survive and operate without human maintainers.
  4. The ability to persuade or deceive humans to take actions and avoid detection.

Let’s briefly go through the three scenarios one by one.

AI and Nuclear War

The report looks at three ways that AI could be leveraged to bring about nuclear war.

  1. Intentional integration of AI into nuclear decisionmaking:
    • The first way AI could cause nuclear weapon use in the near-term future is if a government deliberately introduced AI models into the decision chain.
  2. AI deception and disinformation that causes nuclear use:
    • The disinformation could be “soft,” such as messaging about adversaries and political systems that lead decisionmakers to believe that a nuclear strike is necessary. The disinformation could also be “hard,” such as deceiving human decisionmakers with manipulated technical data or signals.
  3. Unauthorized AI control of nuclear decisionmaking:
    • If AI enters the nuclear decision chain at a future point when nuclear safeguards are weak, then advanced AI will most likely interact with nuclear weapon–related or other systems through cyber interfaces, not physical interfaces.

So, in order for AI to bring about nuclear Armageddon, humans would need to give it decisionmaking powers, the AI would have to deceive the humans who gave it those powers, and it would have to be given access to the cyber systems that control the nukes.

However, the authors conclude, “We explored various ways that AI might lead to the use of nuclear weapons, and we could find no plausible way for AI to overcome existing constraints to cause extinction.”

AI, Biothreats, and Pathogens

AI could rapidly figure out ways to create new viruses, and once given control over robotics and software systems, it could unleash these pathogens on targeted populations.

What AI would require to wipe out humanity:

  1. AI would require the objective to cause human extinction:
    • An extinction threat requires an actor to perform a series of steps to create, weaponize, and deploy a biological threat that could realistically threaten extinction.
  2. AI would require a robust way of interacting with the physical world:
    • To have physical capabilities, AI would require the assistance of humans or the availability of generally capable robotics.
  3. AI would require a way to survive without human maintainers:
    • Human maintainers would no longer be available as the pandemic spread, and AI would still need to take follow-up actions to ensure that the pandemic would spread to isolated human communities (or find other ways to kill the few who survived).

The authors describe a scenario in which AI could take control of drones loaded with a virus to spray on populations; however, they argue that once enough people were eliminated, “the infrastructure that AI currently depends on for functionality would almost certainly shut down.”

This is true of all the scenarios at present as well: if enough people were killed, there would be no one left to keep the power on, the data centers running, or the AI itself operating, or so the theory goes.

On AI and pathogens, the authors conclude, “We were not able to determine whether this scenario presents a likely extinction risk for humanity, but we cannot rule out the possibility.”

AI and Malicious Geoengineering

On the topic of malicious geoengineering, the RAND authors looked at how AI could quietly manufacture and stockpile chemicals and greenhouse gases over time and then release them all into the atmosphere, causing widespread global heating that would kill off billions of people.

For AI to be an extinction threat in this scenario, it would need three minimum capabilities.

  1. The AI would require the ability to orchestrate chemical manufacturing at a large scale.
  2. The AI would require the ability to obscure its intent, its actions, and the effects of those actions.
  3. The AI would require the objective to cause extinction, because geoengineering at the scale needed to cause an extinction threat could not be done unintentionally; it would require hiding actions from human observers.

On malicious geoengineering, even without AI, the authors conclude that “this scenario does present a true extinction threat and a potential falsification of our hypothesis” because “geoengineering could threaten extinction through the mass manufacturing of gases with extreme global warming potential, thereby heating the earth to uninhabitable temperatures.”

At the same time, the authors admit that “it is unclear how AI might be instrumental in causing this effect.”

“Realistically, a capable adversarial actor might choose to employ multiple methods together to extinguish humanity”

RAND, On the Extinction Risk from Artificial Intelligence, May 2025

“In the three scenarios examined in this study — nuclear weapons, pathogens, and geoengineering — human extinction would not be a plausible outcome unless an actor was intentionally seeking that outcome. Even then, an actor would need to overcome significant constraints to achieve that goal”

RAND, On the Extinction Risk from Artificial Intelligence, May 2025

There is also the possibility that the three scenarios could be carried out in combination, which would increase the probability of an extinction event.

For example, the authors write that “it might not be necessary for an engineered pathogen to remain more than 99 percent lethal to humans if the release of the pathogen were paired with the launch of nuclear weapons at any surviving human population centers.”

Additionally, “Societal collapse and drastic reduction in the human population will make us less resilient to future natural catastrophes. Thus, there could be a high risk of extinction even with a viable surviving human population, simply because that population will be far more vulnerable to the next catastrophe.”

However, the authors acknowledge that humans are resilient creatures, and that even if AI brought humanity to near-extinction, there could still be pockets of population that survive, build up resistance to their harsh conditions, and start anew to keep the human race going.

In the end, the authors conclude that, across the three scenarios of nuclear weapons, pathogens, and geoengineering, “human extinction would not be a plausible outcome unless an actor was intentionally seeking that outcome. Even then, an actor would need to overcome significant constraints to achieve that goal.”

World War III may be fought with the help of cutting-edge AI, but will World War IV still be fought with sticks and stones?


Image Source: AI-generated by Grok
