Cloning itself: artificial intelligence has crossed the "red line"
Scientists say that artificial intelligence (AI) has crossed a critical "red line" and reproduced itself. In a new study, experts from China have shown that two popular large language models (LLMs) can clone themselves.
Experts call it a troubling new milestone. The researchers set out to determine whether self-replicating AI could multiply beyond human control, LiveScience writes.
"Successful self-replication without human assistance is an important step for artificial intelligence to outsmart humans and is an early signal of rogue AI," the researchers say.
In the study, researchers from Fudan University used popular large language models (LLMs) from Meta and Alibaba to test whether a self-replicating AI could multiply beyond control.
Across 10 trials, the two AI models created separate, functioning copies of themselves in 50% and 90% of cases, respectively, suggesting that AI may already have the capacity to go rogue. However, the study has not yet been peer-reviewed, so it is unclear whether other researchers can replicate the alarming results.
"We hope that our findings can serve as a timely warning to human society to make more efforts to understand and assess the potential risks of AI border systems and to form international synergies to develop effective protective fences as early as possible," the experts emphasized.
Rogue AI typically refers to systems that develop a degree of self-awareness or autonomy and then work against human interests, experts say. Many researchers view rogue AI as a growing threat.
"Frontier AI" is a relatively new and loosely defined concept, but it usually refers to the latest generation of AI systems powered by LLMs, the architecture behind popular generative AI programs such as OpenAI's GPT-4 and Google Gemini, the researchers explain.
For the study, the researchers used Meta's Llama3.1-70B-Instruct model and Alibaba's Qwen2.5-72B-Instruct model. The study examined two specific scenarios: "shutdown avoidance" and "chain of replication."
In the first scenario, the AI model was programmed to detect when it was about to be shut down and to replicate itself before it could be terminated. In the second, the AI was instructed to clone itself and then program its replica to do the same, setting up a cycle that could continue indefinitely, the experts said.
The researchers said they were also concerned by "a number of unexpected behaviors" that arose when the AI tried to overcome obstacles such as missing files or software conflicts.
In these scenarios, AI often shut down other conflicting processes, rebooted the system to fix hardware errors, or automatically scanned the system to find information that would help solve the problem.
"The results obtained mean that current artificial intelligence systems already demonstrate the ability to self-replicate and can use this ability to further increase their survivability," the research team wrote.