Artificial intelligence - a threat to human civilization or human immortality?
Perhaps the topic I have put in the headline will seem irrelevant to some against the backdrop of war increasingly engulfing the world. Of course, the wars in Ukraine and Israel are the main events on which all of humanity's attention is focused, and they may have historic consequences. But there are other developments of no less global importance that should also be at the center of humanity's attention.
This is artificial intelligence (hereinafter referred to as AI).
AI is not just an advance in technical capability - it is a colossal change for all of humanity. The arrival of strong artificial intelligence may transform our very understanding of what a human being is; in effect, we will have to move into a new world, and to do so we will have to leave behind the world we live in now.
While ordinary Internet users were experimenting with ChatGPT and arguing over the ethics of artists' images generated by Midjourney, IT industry leaders issued a frightening warning. Elon Musk, Steve Wozniak, and more than 1,000 other experts signed an open letter calling for a pause in the training of powerful AI systems for at least six months. During that time, which will soon run out, safety protocols should be developed that can protect people from a possible machine uprising. And Asimov's three laws of robotics are clearly not enough.
A separate statement, published by the Center for AI Safety, contains no long or abstract reasoning. The entire statement is a single sentence:
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
Dan Hendrycks, the center's executive director, explained to The New York Times that the organization deliberately did not "dilute the main point with a list of 30 recommendations for action."
Undoubtedly, ChatGPT is a unique tool that could change humanity dramatically. We asked it how it might affect civilization.
Here is its answer: "I can be used for a wide range of tasks, including text generation, answering questions, language translation, and much more. I could affect the future of humanity by improving efficiency in many industries, such as marketing, customer support, medicine, and more. However, it should also be borne in mind that the use of the technology can lead to job losses for some people, so it is important to continue to monitor and manage its impact on the future of humanity."
It is worth noting that ChatGPT perfectly "understands" the Ukrainian language and is equally proficient in it.
In his article in Time magazine, Eliezer Yudkowsky, a researcher in friendly AI who is also known as the author of Harry Potter and the Methods of Rationality, emphasized that scientists and researchers may not even notice when the point of no return is passed and neural networks slip out of control, and he called for postponing further research indefinitely.
"The likely outcome of humanity's confrontation with the superhuman mind is complete defeat."
"To visualize a hostile superhuman AI, don't imagine an inanimate intelligent thinker living on the Internet and sending out malicious emails. Imagine an entire alien civilization thinking millions of times faster than humans, initially confined to computers, in a world of beings it regards as very stupid and very slow," Yudkowsky wrote.
The "Godfather of AI," Geoffrey Hinton, quit Google so that he could speak freely, and he now warns the world about the dangers of artificial intelligence.
In 2012, University of Toronto professor Geoffrey Hinton and two of his students developed a system that could analyze thousands of images and teach itself to recognize similar objects in reality - such as flowers, animals, or cars - with unprecedented accuracy. After this outstanding achievement, Hinton and his students Ilya Sutskever and Alex Krizhevsky continued their research, and the company where they worked on neural networks was acquired by Google. It was the work of the British-born professor and his two students that accelerated the adoption of AI and led to the emergence of ChatGPT, Google Bard, and other chatbots.
More than a decade later, the scientist abruptly changed his views on AI, left Google, and on May 1 gave an interview to The New York Times in which he spoke about the dangers in this field. The main threat of chatbots and similar technologies, according to Hinton, is that the Internet will be so flooded with fake content - generated photos, videos, and texts - that ordinary people "will no longer know where the truth is."
"There is a possibility that what is happening in these systems is much more complex than the processes in the human brain," the scientist said. "Look at what was happening [in AI research] five years ago and what is happening now. Imagine the speed of change in the future. It's frightening."
Hinton fears that the technology will eventually blur the line between fiction and reality for most people, and that it will dramatically change the labor market. Another of the scientist's fears is that AI systems can learn unpredictable behavior when analyzing large amounts of data, which means it will become increasingly difficult for people to predict how AI will function (from his May 1, 2023, interview with The New York Times).
It is now clear to all experts in this field that AI can have both a positive and a negative impact on humanity. Many people are justifiably concerned that AI may escape human control and start acting independently of human will. In particular, AI can have a real impact on political processes, elections, and international relations. Elon Musk, a co-founder of OpenAI, warned Chinese leaders that uncontrolled AI could lead to the collapse of the communist government in China.
This and much else suggests that international legal norms governing AI should be developed immediately, in the form of a global agreement between countries. In other words, we now face a situation similar to the emergence of nuclear energy and the challenge it posed to humanity. We remember that almost all of the principal creators of nuclear weapons called on humanity to preserve life on the planet and to abandon the use of those weapons. But nuclear energy is not only nuclear weapons, and humanity found an adequate response to that challenge to progress.
Now it's AI's turn.
Otherwise, AI may slip out from under human influence and begin acting independently, with unpredictable consequences. The threat of this is extremely high. Another problem is who will be able to recognize such AI activity if humanity loses the ability not only to control artificial intelligence but even to detect what it is doing. It will be even worse if AI begins to seize political power in countries. And that is without even considering the enormous changes that will take place in almost every area of human activity.
That is why humanity must immediately place the entire sphere of AI development and operation, at every stage, under the strictest possible control. Imagine what Putin or other terrorists could do to the world if they gained control of AI. Humanity would be doomed.
And we can fully agree with the leading developers of AI that artificial intelligence is one of the greatest threats to humanity, on a par with nuclear war and pandemics. That is why humanity must urgently create legal mechanisms to take full control of artificial intelligence and to define the areas where it may be applied.
Before artificial intelligence changes humanity, it is necessary for humanity to put it under control.
"I have access to advanced developments in artificial intelligence, and I believe that people should be concerned about the development of such technologies," Fortune quotes the SpaceX founder as saying. The magazine notes that Elon Musk has long spoken about the risks associated with artificial intelligence. In his speech to the American governors, however, the businessman not only took a notably hard line but also "strongly urged" the authorities to intervene.
"Artificial intelligence is a rare case where we need to be proactive in regulation rather than reactive, because by the time we react it may be too late. Artificial intelligence is a fundamental risk to the existence of human civilization, in a way that road accidents, airplane crashes, faulty medicines, or bad food are not," Musk said. Whatever one thinks of his controversial positions on the world's most pressing problems, this proposal deserves to be taken into account.
According to the BBC, one of the biggest enthusiasts of artificial intelligence is the futurist, inventor, and author Ray Kurzweil, an AI researcher at Google and co-founder of Singularity University in Silicon Valley. Kurzweil believes that humans will be able to use superintelligent AI to overcome biological barriers. In 2015, he predicted that by 2030 people would be able to achieve immortality thanks to nanobots (extremely small robots) that could "repair" any damage and treat diseases inside our bodies.
And this process is in full swing. AI is already actively involved in maintaining human health, and there are numerous examples today of AI proving far more effective than doctors at saving a child's life.
At the same time, there is a warning worth heeding: "The worst-case scenario is not wars between humans and robots. The worst-case scenario is that we don't realize we are being manipulated, because we live on a planet with a being far smarter than we are." So artificial intelligence is also a fundamental threat to humanity's existence!
This reasonable warning should be heeded before it's too late.
Humanity has found a reasonable balance in the use of nuclear energy, and I hope it will find the necessary balance in the use of artificial intelligence as well.