Artificial intelligence must be stopped: scientists confirm Elon Musk's fears
Artificial intelligence may pose a fatal threat to humanity, and not merely because of lost jobs: AI could eventually slip beyond human control, and even the smartest among us would be unable to predict how it will behave in a given situation. That is why some analysts are already calling for limits on the level AI can reach, or even for a halt to its development, warning that otherwise this could end very badly for humanity.
The other day, commenting on a Twitter post by Marc Andreessen, an American entrepreneur and AI advocate, Elon Musk, the American engineer, inventor, and billionaire, asked: "How many years do we have before AI kills us all?" Although he received no direct answer, Musk was likely voicing a question that worries many people who follow the development of the technology and understand the consequences of its dominance.
Science fiction writers and scientists were warning about the dangers of artificial intelligence even before work on real AI began. It is worth recalling Isaac Asimov, whose laws of robotics were meant to protect people from overly advanced machine intelligence, or James Cameron's Terminator, which showed what awaits humanity if AI wins.
According to Popular Science, in 1993 the computer scientist and science fiction writer Vernor Vinge suggested that within 30 years, humanity would create technologies that would become the basis of artificial intelligence. As it turned out, he was absolutely right. But this was not his only prediction. Vinge also warned that soon after AI surpasses human intelligence, "the human era would end".
Vinge was also one of the first to discuss the consequences of artificial intelligence reaching the singularity (OBOZREVATEL has described what it is in detail here). In short, once artificial intelligence surpasses human intelligence, we will be unable to predict or even understand its actions, because it will be far smarter than any human. It follows that we could not resist such an AI: its level of development would allow it to calculate our moves and neutralise any of our actions before we take them.
Vinge wrote that the emergence of AI with capabilities that surpass the human brain will change the world beyond recognition.
"(It will be) a change comparable to the beginning of human life on Earth," warned Vinge.
Modern scientists share Vinge's concerns. For example, computer scientist Roman Yampolsky of the University of Louisville notes that "as soon as machines take over science and engineering, progress becomes so fast that you can't keep up."
He already sees signs of this in his own field, where AI researchers publish an enormous number of papers and even industry experts struggle to keep track of the actual state of AI development.
"It is developing too fast," the scientist is convinced.
This is still a matter of speculation, but if an AI with intelligence comparable to a human's is created, it will be capable of improving itself. Not that this would happen on its own; rather, people will be unable to resist the opportunity. All it would take is asking an existing AI to create a better version of itself, and years of coding and testing would no longer be needed to build an AI that surpasses humans: it would, in effect, create itself.
However, the problem, as scientists point out, is that people do not always understand the logic of AI behaviour, and if it reaches a new stage of development, everything will become much more complicated.
Yampolsky suggests that the inability to reliably predict what AI can do will also lead to a loss of control, which could have catastrophic consequences for humanity.
However, it should be noted that a survey conducted by the AI Impacts think tank in mid-2022 showed that not everyone shares such concerns: 47% of researchers consider it unlikely or impossible that AI will reach the singularity and that humanity will lose control over it.
Sameer Singh, a computer scientist at the University of California, Irvine, notes that talk about the future of AI and its dangers distracts from the problems that exist now.
In particular, it is already well known that large language models (such as ChatGPT) can produce racist, sexist, and factually incorrect output. In addition, from a legal perspective, content generated by AI often runs afoul of copyright and data privacy laws. Not to mention that AI is already causing job losses and displacement in certain industries.
There is already a certain split among AI researchers: some are calling for a full green light for AI, others propose limiting its computing capabilities from the outset, and still others call for slowing down or suspending the technology's development until humanity knows exactly what it is dealing with. In particular, Elon Musk and a large group of researchers have called for a halt to AI experiments for at least six months.
Yampolsky supports the calls for a pause but believes we need to act more radically. He is convinced that stopping for six months, or any other period of time, will not change the situation.
"The only way to defeat (AI - Ed.) is not to do it," the scientist stressed.
Earlier, OBOZREVATEL also reported that artificial intelligence makes it necessary to create a fourth law of robotics to add to Asimov's three.