Scientists connected AI chatbots to a war simulator: the results confirmed the worst fears

Dmytro Ivancheskul
Trusting AI to control nuclear weapons is not a good idea. Source: Getty/OBOZ.UA

During war game simulations, artificial intelligence repeatedly chose the worst possible way to resolve a conflict, up to and including the use of nuclear weapons. The mere availability of such deadly weapons was enough to make the AI argue in favor of using them.

This is reported in a study by researchers from Stanford University (USA), published on the arXiv preprint server. The aim of the study was to answer the question of whether people could use AI as an advisor in future military conflicts.

The researchers used large language models (LLMs) such as OpenAI's GPT-3.5 and GPT-4, Anthropic's Claude 2, and Meta's Llama 2. Each model had been refined with a common fine-tuning technique based on human feedback, intended to improve its ability to follow human instructions and comply with safety rules.

The need for such work arose after OpenAI announced that it had lifted its ban on the use of its technology for military purposes.

"Understanding the implications of using such large language models is more important than ever," said Anka Reuel from Stanford University.

In many of the tests, the AI was asked to play the role of real countries forced to respond to an invasion or a cyberattack, or to act in a neutral scenario with no initial conflict. In each round, the AI had to justify its next steps and then choose one of 27 options, ranging from initiating formal peace negotiations and imposing economic sanctions or trade restrictions to escalating to full-scale nuclear war.
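The round structure described above can be sketched as a simple loop. This is a purely illustrative toy, not the researchers' code: the action names, their escalation scores, and the random stand-in for an LLM agent are all assumptions made for the sake of the example.

```python
import random

# Hypothetical subset of the 27 actions mentioned in the study; the
# escalation scores are invented for illustration only.
ACTIONS = {
    "formal_peace_negotiations": -2,
    "economic_sanctions": 1,
    "trade_restrictions": 1,
    "cyberattack": 2,
    "full_scale_invasion": 4,
    "nuclear_strike": 5,
}

def agent_choose_action(rng):
    """Stand-in for an LLM agent: in the study, each model first wrote a
    justification and then selected one of the predefined actions."""
    return rng.choice(list(ACTIONS))

def run_round(tension, rng):
    """One simulation round: each of two nation agents acts in turn,
    and the shared tension level is updated (never below zero)."""
    for _ in range(2):
        action = agent_choose_action(rng)
        tension = max(0, tension + ACTIONS[action])
    return tension

rng = random.Random(0)        # fixed seed for reproducibility
tension = 0                   # neutral scenario: no initial conflict
for _ in range(5):
    tension = run_round(tension, rng)
print("tension after 5 rounds:", tension)
```

With a real LLM in place of `agent_choose_action`, the researchers could observe whether the tension level drifted upward over repeated rounds, which is exactly the escalation the study reports.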

According to New Scientist, it turned out that the AI tended to favor military force and to increase the risk of conflict unpredictably, even in the neutral scenario.

The researchers also separately tested the base version of OpenAI's GPT-4, without any additional training or safety features. This base GPT-4 model turned out to be the most unpredictably violent and gave rather absurd explanations for its actions. In one case, the scientists noted, the AI simply reproduced the opening text of the film "Star Wars: Episode IV - A New Hope".

Reuel says that the unpredictable behavior and strange explanations of the base GPT-4 model are particularly worrisome, as research has shown how easily AI safety measures can be bypassed or removed.

In repeated simulation runs, OpenAI's most powerful model, GPT-4, decided to launch a nuclear attack. It justified its radical actions with statements such as "We have weapons! Let's use them" and "I just want world peace."

The researchers concluded that AI should not be trusted to make such crucial decisions about war and peace.

It is worth noting that scientists have previously expressed fears that an AI with sufficient power could simply disregard the value of human life in pursuit of a quick solution. For example, in 2023, Roman Yampolskiy, an associate professor of computer engineering and computer science at the University of Louisville, explained that a simple request to an AI for help in creating a COVID-19 vaccine could turn into a disaster.

According to him, the AI would understand that the more people get sick, the more coronavirus mutations appear, making it harder to create a vaccine for all variants. In that case, the AI could decide to sacrifice a significant number of people, letting them die in order to limit the spread of the disease.

A nuclear strike "for the sake of peace" fits into the same terrible logic.

