Humanity has 5 years: former Google CEO explains when artificial intelligence will become a threat
Humanity has not taken sufficient measures to stop artificial intelligence from causing catastrophic damage. The situation could echo the 1945 atomic bombings of the Japanese cities of Hiroshima and Nagasaki.
According to The Byte, this warning came from former Google CEO Eric Schmidt, who chaired the US National Security Commission on Artificial Intelligence. He compared AI to the atomic bombs the United States dropped on Japan.
"After Nagasaki and Hiroshima, it took 18 years to conclude a treaty banning (nuclear weapons) testing and so on," he said, emphasizing that today humanity simply "doesn't have that much time."
Companies working with artificial intelligence, from OpenAI to Google, have put certain safety measures in place to rein in the technology, but Schmidt is convinced that the current measures are "not enough."
The expert believes that in just five to ten years, AI could become powerful enough to harm humanity.
The worst-case scenario, he said, would be "the moment when a computer can start making its own decisions." Schmidt warns that if such an AI also gained access to weapons systems or other terrifying capabilities, machines might start lying to humans about it.
He believes that to prevent this outcome, a non-governmental body modeled on the UN Intergovernmental Panel on Climate Change should be created to "provide accurate information to policymakers" and help them decide what to do if AI becomes too powerful.
It's worth noting that while some scientists see AI as an existential threat to humanity, others are more skeptical. For example, Yann LeCun, who leads AI research at Meta, does not believe the technology is smart enough to pose a threat on its own. In other words, AI can be a threat, but only if it is controlled by someone with sinister intentions toward humanity.
"The debate on existential risk is very premature until we develop a system that is as capable of learning as a cat, which we don't have yet," LeCun said in an interview with FT.
Earlier, OBOZ.UA reported that a specially designed test showed the GPT-4 AI performing far worse than a human.