
What will happen when AI reaches the singularity, and will it kill people?

Dmytro Ivancheskul
When AI gains superintelligence, humanity will no longer be able to predict its actions

The various artificial intelligences rapidly taking over the world (from chatbots and neural networks to autopilots in cars) may seem like entertainment for humanity and a promise of easier lives in the not-too-distant future. However, once AI reaches the singularity, people may be in trouble.

This is the subject of an article by Popular Mechanics, which talked to industry experts. They also explained how to avoid turning artificial intelligence into an enemy.

Experts suggest that the singularity may arrive by 2030. After that, artificial intelligence will surpass human intelligence, and it will be very difficult to predict the consequences of its actions.

What is singularity?

The singularity is the moment when machine intelligence equals or surpasses human intelligence. This concept was once considered by Stephen Hawking, Bill Gates, and other prominent thinkers. In particular, the English mathematician Alan Turing developed the Turing test back in the 1950s to find out whether machines are capable of thinking independently and whether they can communicate well enough that a person cannot tell whether they are talking to an AI or to another human. ChatGPT has already proved that AI is capable of maintaining a human-level conversation.

Ishaani Priyadarshini, a doctoral student at the University of California, Berkeley, explains that the main problem with AI is that its intelligence is virtually unlimited, while human intelligence is fixed as we cannot simply add more memory to ourselves to become smarter.

When will the singularity be achieved?

Experts believe that statements about the imminent arrival of the singularity are speculative at best. Priyadarshini believes that the singularity already partially exists, but the moment when AI surpasses human intelligence in its entirety will not come soon. Nevertheless, people have already seen glimpses of it, such as in 1997, when IBM's Deep Blue supercomputer defeated the reigning world chess champion, Garry Kasparov, for the first time.

Experts suggest that it will be possible to talk about achieving the singularity when AI can translate languages as well as or better than humans.

However, Priyadarshini believes that the best indicator that AI has become smarter than humans will be when machines start to understand memes that are currently beyond their reach.

What will happen when AI reaches singularity?

The problem is that humans are too "dumb" to predict what will happen if AI gains superintelligence. To make such predictions, we would need a human who also has superintelligence. Therefore, humanity can only speculate about the consequences of the singularity using its current level of intelligence.

"You have to be at least as smart as you are to be able to predict what the system will do... if we are talking about systems that are smarter than humans (superintelligent), then it is impossible for us to predict inventions or solutions," Roman Yampolsky, an associate professor of computer engineering and computer science at the University of Louisville, said.

As for whether AI can become an enemy of humans, Priyadarshini believes this is also difficult to predict. Everything will depend on whether or not its code contains contradictions.

"We want self-driving cars, we just don't want them to pass red lights and collide with passengers," Priyadarshini says, explaining that bad code can make AI see running red lights and people as the most efficient way to get to their destination on time.

According to her, AI researchers know that we can't eliminate bias from code 100% of the time, so creating a completely unbiased AI that can do no wrong will be a challenge.
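To make this concrete, here is a minimal, purely hypothetical Python sketch (not from the article or from any real autopilot code; the names and numbers are invented) of how an objective that only rewards arriving quickly, with no penalty for traffic violations, makes the "reckless" plan look optimal:

```python
# Illustrative sketch only: a misspecified objective can make
# "run the red light" look like the best plan.

def route_cost(plan, penalize_violations=True):
    """Hypothetical cost function for a self-driving route planner."""
    cost = plan["travel_time"]  # seconds to destination
    if penalize_violations:
        # A safety-aware objective makes violations prohibitively expensive.
        cost += 1_000_000 * plan["red_lights_run"]
    return cost

plans = [
    {"name": "lawful", "travel_time": 300, "red_lights_run": 0},
    {"name": "reckless", "travel_time": 240, "red_lights_run": 2},
]

# With the safety term omitted, the planner picks the reckless route;
# with it included, the lawful route wins.
best_unsafe = min(plans, key=lambda p: route_cost(p, penalize_violations=False))
best_safe = min(plans, key=lambda p: route_cost(p, penalize_violations=True))

print(best_unsafe["name"])  # reckless
print(best_safe["name"])    # lawful
```

The point of the sketch is that nothing in the "unsafe" objective is malicious: the harmful behavior simply never appears in the cost the system is told to minimize.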

Can AI harm people?

Currently, AI has no feelings, so it is guided only by its own knowledge. Thus, it is unlikely to become uncontrollable in the near future or to try to escape human control, simply because it has no motivation to do so.

However, as Yampolsky explains, the uncontrollability of AI can arise from the way it is created by humans and the paradoxes that can form in its code.

"We have no way to detect, measure, or evaluate whether systems experience internal states. But this is not necessary for them to become very capable and very dangerous," the scientist explained.

Priyadarshini agrees with her colleague, arguing that the only thing that could lead to an AI rebellion is inconsistency in its code.

"It (AI - Ed.) may not have any motives against humans, but a machine that believes that humans are the root cause of certain problems may think so," the scientist explained.

However, if an AI becomes intelligent enough to become self-aware and acquires internal feelings, it may have a motive to dislike humanity.

Again, a poorly defined task could lead to unintentional killing. As an example, Yampolsky cites a situation in which an AI is asked to create a human vaccine against a virus such as COVID-19.

The system will know that the more people get COVID-19, the more the virus will mutate, making it difficult to develop a vaccine for all variants.

"The system thinks that it can solve this problem by reducing the number of people so the virus can not mutate as much," the scientist says, suggesting that AI, which has no concept of morality, can choose a more effective solution even if it hurts some people.

How can we prevent a singularity catastrophe?

We will never be able to rid artificial intelligence of all its unknowns. These are unintended side effects that humans cannot predict because they do not have superintelligence.

"We are really looking at a singularity that will lead to the emergence of many rogue machines. If it reaches a point of no return, it will be impossible to fix," Priyadarshini warns.

