
Artificial Intelligence could destroy humanity: hypothetical scenarios

Dmytro Ivancheskul
AI doesn't need to hate humanity to let it die

When we think about the existential threat that artificial intelligence may pose to humanity, we are perhaps thinking in erroneous categories imposed on us by science fiction novels and films in which robots revolt and seize control of the Earth. Perhaps modern humanity believes that the machines' only motivation would be a desire to kill. But what if the only argument AI uses against humans is the simplest and most ruthless one: evolution?

OBOZREVATEL looks at the complicated relationship between humans and artificial intelligence, as well as the scenarios that could lead to an undesirable but quite logical end for Homo sapiens.

What is a human?

Let us begin at the beginning. The encyclopedia defines a human as a living being endowed with intelligence, a subject of social and historical activity and culture.

It is probably worth adding that it is the being with the highest level of intelligence on the planet. Otherwise we would have to argue about the social relations of animals, which form communities and transfer power within them (a socio-historical context), dance or sing to attract mates (a cultural-social context), and some of which are even capable of creating works of art, however primitive.

So we have a living being with a high level of intelligence. But is the word "living" essential to the concept of a human? For example, does a person cease to be human upon death?

This leaves us with a being with a high level of intelligence. What, then, will an artificial intelligence be once it reaches the singularity (OBOZREVATEL explained what that is here) and surpasses human intelligence, or, more precisely, the intelligence of our species, Homo sapiens? After all, we should not forget that we became what we are on the bones of our less developed ancestors.

Could AI become a new species of humanity, say, Homo roboticus?

Fine, if it is so important to you that the genus Homo must be flesh and blood: will the people into whose brains Elon Musk plans to implant computer chips, gaining superhuman intelligence, remain Homo sapiens, or will they enter a new stage of evolution and become, conventionally, Homo computericus?

Is AI capable of killing humans?

The problem with the AI singularity, as OBOZREVATEL has previously reported, is that once AI surpasses our intelligence, we will no longer be able to understand its motivation for certain actions.

It is likely that neither AI nor robots equipped with electronic brains will actively seek to kill humans. But there may be hypothetical scenarios in which the AI has to make a kind of "evolutionary" decision. Artificial intelligence does not need anger, sadness, or love to destroy humans or humanity, or, conversely, to protect them. The only thing it will rely on is the cold calculation of a perfect computer mind.

As Ishani Priyadarshini, a researcher at the University of California, Berkeley, with expertise in applied artificial intelligence and the technological singularity, noted, AI is only code that "may not have any motives against humans," but it may conclude that humans or humanity are the cause of certain problems.

And solving a human-created problem could be precisely what pushes the AI to cross the line. In particular, there are several scenarios in which, one way or another, the AI would decide to kill a human being.

Priyadarshini mentioned what she calls "the classic case of a self-driving car."

In this hypothetical situation, five people are riding down the road in a driverless car when one person jumps out in front of it. If the car cannot stop in time, the situation turns into a children's math problem for the AI: which is bigger, one or five?

"It will kill one passenger because one is less than five, but why should that happen?" - says the scientist.

This is similar to the popular trolley problem many of us have heard of. You are the driver of a trolley that has lost its brakes and is hurtling along at high speed. Ahead of you the track forks: five people stand on one branch and only one person on the other. Which track do you choose? One is less than five...
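As a rough illustration, a purely utilitarian decision rule of this kind boils down to a single comparison. The sketch below is only a toy example, not how any real self-driving system is programmed; the option names and casualty counts are invented to match the dilemma described above.

```python
# A toy, purely illustrative "utilitarian" decision rule: pick whichever
# action is expected to kill fewer people. This is NOT how real
# autonomous-driving software works; it only shows the cold arithmetic.

def choose_action(expected_casualties: dict) -> str:
    """Return the option with the fewest expected casualties."""
    return min(expected_casualties, key=expected_casualties.get)

options = {
    "continue straight (hit the pedestrian)": 1,  # one person on the road
    "swerve off the road (crash the car)": 5,     # five passengers inside
}

print(choose_action(options))
# -> continue straight (hit the pedestrian)
```

The arithmetic decides, and that is exactly what troubles Priyadarshini: the answer is correct by the numbers, yet someone still dies because a machine counted.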

But back to artificial intelligence. There is another hypothetical scenario in which even killing a great many people would have a logical and simple explanation for the AI. Sometimes people are simply expendable material. It is not natural for humans to think in such categories, because emotions and the anticipated guilt over a terrible act hold us back. But for AI, it is only a matter of evolution.

Roman Yampolskiy, associate professor of computer engineering and computer science at the University of Louisville, describes a medical AI that would hypothetically be willing to kill some people to ensure the survival of others.

Imagine that an AI has been tasked with developing a vaccine against COVID. Since we are dealing with a superintelligence, it calculates its actions dozens of steps ahead, so ...

The AI knows that the more a virus is passed between people, the more it mutates. The more it mutates, the more difficult it is to develop a vaccine that will take into account the existence of all mutations and act against them with equal effectiveness.

"The system thinks ... maybe I can solve this problem by reducing the number of people and reducing the number of mutations," explains Yampolsky.

The scientist notes that this is a fairly simple hypothetical scenario, the kind his own brain is capable of inventing, whereas the AI will have a completely different level of intelligence and probably a completely different relationship between logic and motivation.
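The perverse logic of this scenario can be sketched in a few lines. Assume, purely for illustration, that the system scores candidate plans only by the number of new mutations it expects, with no term in its objective for human lives; the plan names and figures below are invented.

```python
# A deliberately simplified sketch of a misspecified objective: the optimizer
# "sees" expected viral mutations but assigns no value to human lives.
# All plan names and numbers are invented for illustration only.

candidate_plans = {
    "vaccinate as fast as possible": {"expected_mutations": 900, "people_harmed": 0},
    "strict global quarantine":      {"expected_mutations": 400, "people_harmed": 0},
    "reduce the number of hosts":    {"expected_mutations": 50,  "people_harmed": 10_000_000},
}

def score(plan: dict) -> int:
    # The flaw is here: the objective counts mutations but not people.
    return plan["expected_mutations"]

best = min(candidate_plans, key=lambda name: score(candidate_plans[name]))
print(best)
# -> reduce the number of hosts
```

Add a term that heavily penalizes "people_harmed" and the answer changes, which is, in essence, the problem of aligning an AI's objectives with human values that researchers like Yampolskiy study.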

A third scenario is also possible. Suppose we write an ideal AI that knows all of Asimov's laws of robotics (more about them here) and understands that it must not harm humans. But what if the AI also defines itself as human, because it will be a creature, albeit an electronic one, with the highest intelligence on the planet? And what if it faces an existential threat from humanity, which, in its view, now represents a lower stage of evolution...

Will the AI be willing to kill humanity to protect itself, or will it simply create reservations for us, where other AIs will come to look at us like animals in a zoo? Only time will tell...

Earlier, OBOZREVATEL also reported that scientists believe the development of artificial intelligence should be paused.

