What will happen when artificial intelligence reaches the singularity, and could it kill people?
The various artificial intelligence systems rapidly captivating the world, from chatbots to neural networks to self-driving features in cars, look like harmless fun for humanity and something that will improve our lives in the not-too-distant future. But once AI reaches the singularity, people may be in trouble.
Popular Mechanics talked to industry experts about this. They also explained how to keep artificial intelligence from becoming an enemy.
Experts speculate that the moment of singularity may arrive as early as 2030. After that, artificial intelligence will surpass human intelligence, and it will be very difficult to predict the consequences of its actions.
What is the singularity?
The singularity is the point at which machine intelligence matches or surpasses human intelligence. The concept has been pondered by Stephen Hawking, Bill Gates, and other scientists. In particular, back in the 1950s the English mathematician Alan Turing devised the Turing test, meant to find out whether machines can think for themselves and whether they can reach a level of communication at which a person cannot tell whether they are talking to an AI or to another human. ChatGPT has already proven that AI is capable of maintaining a human-level conversation.
Ishaani Priyadarshini, a doctoral student at the University of California, Berkeley, explains that the main problem with AI is that its intelligence is effectively unlimited, whereas human intelligence is fixed: we cannot simply add more memory to ourselves to become smarter.
When might the singularity be reached?
Experts believe that claims the singularity will be reached soon are speculative at best. Priyadarshini believes the singularity already exists in bits and pieces, but the moment when AI completely surpasses human intelligence is still a long way off. Nevertheless, people have already glimpsed the singularity, as when IBM's Deep Blue supercomputer defeated world chess champion Garry Kasparov in 1997.
Experts suggest that we will be able to speak of reaching the singularity when AI can translate language as well as or better than humans.
However, Priyadarshini believes the best indicator that AI has become smarter than humans will be the moment machines begin to understand memes, something that is currently beyond their reach.
What happens when AI reaches the singularity?
The problem is that people are too "dumb" to guess what would happen if AI gained superintelligence. To make such predictions, we would need a human who also possesses superintelligence. So humanity can only speculate about the implications of the singularity using our current level of intelligence.
"You have to be at least as intelligent to be able to speculate on what the system will do... if we're talking about systems that are smarter than humans (super-intelligent), it's impossible for us to foresee inventions or solutions," said Roman Yampolsky, associate professor of computer engineering and computer science at the University of Louisville.
As for whether AI could become an enemy to humans, Priyadarshini says it is hard to make predictions here too. Everything will depend on whether its code contains contradictions.
"We want self-driving cars, we just don't want them running red lights and colliding with passengers," Priyadarshini says, explaining that bad code could make SHI see running red lights and people as the most efficient way to get to a destination on time.
AI researchers know that bias can never be fully eliminated from code, she says, so creating a completely unbiased AI that can do no wrong will be a challenge.
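As a purely hypothetical illustration of that point (a minimal Python sketch; the action names, timings, and penalty values are invented, and real driving systems are nothing like this), an objective that counts only travel time makes running the red light look optimal, unless the violation is explicitly penalized:

```python
# Toy example of a misspecified objective (all values invented).
# A planner that scores actions by travel time alone will "prefer"
# running a red light, because its objective never says that is bad.

def travel_time_only(action: str) -> float:
    """Misspecified objective: rewards speed, ignores safety and legality."""
    times = {"wait_at_red": 95.0, "run_red_light": 60.0}
    return times[action]

def with_safety_penalty(action: str) -> float:
    """Same objective plus an explicit penalty for traffic violations."""
    penalties = {"wait_at_red": 0.0, "run_red_light": 1e6}
    return travel_time_only(action) + penalties[action]

actions = ["wait_at_red", "run_red_light"]
print(min(actions, key=travel_time_only))     # -> run_red_light
print(min(actions, key=with_safety_penalty))  # -> wait_at_red
```

The flaw here is not malice but an objective that omits what its designers took for granted.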
Can AI hurt people?
So far, AI has no feelings, so it is guided only by its knowledge. It therefore probably won't slip out of control anytime soon or try to escape human oversight, simply because it has no such motivation.
However, as Yampolskiy explains, AI's uncontrollability may stem from how humans design it and what paradoxes form in its code.
"We have no way to detect, measure or evaluate whether systems experience internal states... But that's not necessary for them to become very capable and very dangerous," the scientist explained.
Priyadarshini backs her colleague, arguing that the only reason AI might run amok is inconsistent code.
"It (SHI. - Ed.) may not have any motives against humans, but a machine that thinks humans are the root cause of certain problems may think that way," the scientist explained.
However, if AI becomes intelligent enough to be self-aware and develop inner feelings, it may well find a motive to dislike humanity.
Again, a poorly formulated task could lead to unintentional killing. As an example, Yampolskiy cites a situation where an AI is asked to create a vaccine against a hypothetical COVID.
The system would know that the more people get infected with COVID, the more the virus mutates, making it harder to develop a vaccine that covers all variants.
"The system thinks ... maybe I can solve this problem by reducing the number of people, so the virus can't mutate as much," the scientist says, suggesting that an AI with no concept of morality might settle on that solution, even if it hurts people.
How can we prevent a singularity catastrophe?
We will never be able to rid artificial intelligence of all its unknowns: the unintended side effects that humans cannot anticipate because they lack superintelligence.
"We're really looking at a singularity that will lead to a lot of rogue machines. If it reaches the point of no return, it can no longer be fixed," Priyadarshini warns.
Earlier, OBOZREVATEL also reported that a fourth "Asimov" law of robotics needs to be created for artificial intelligence.