An AI Fukushima is inevitable: scientists name the pitfalls of artificial intelligence and warn of the danger
Scientists have been working with artificial intelligence for years. However, the latest generation of algorithms has shown that there are many pitfalls in AI.
Some scientists are convinced that an AI Fukushima is inevitable. Google DeepMind CEO Demis Hassabis emphasized that to make breakthroughs and avoid such failures, researchers must identify the right problems, collect the right data, build the right algorithms, and apply them correctly, The Guardian reports.
Artificial intelligence is "not a magic bullet," Demis Hassabis said. "But if we get it right, it should be an incredible new era of discovery and a new golden age, maybe even a kind of new renaissance," he said.
By contrast, Siddhartha Mukherjee, a cancer researcher at Columbia University in New York, emphasizes the pitfalls of AI if the guiding programs fall into the wrong hands. "I think it's almost inevitable, at least in my lifetime, that there will be some version of AI Fukushima," he said, referring to the nuclear accident caused by the 2011 tsunami in Japan.
Still, most AI researchers are optimistic. In Nairobi, for example, nurses are performing AI-guided ultrasound scans of pregnant women without needing years of specialist training. And London-based company Materiom uses artificial intelligence to develop 100% bio-based materials, bypassing petrochemicals, experts say.
According to the scientists, artificial intelligence has transformed medical imaging, climate modeling, and weather forecasting, and is learning to contain plasma for nuclear fusion.
Demis Hassabis and his colleague John Jumper received the Nobel Prize in Chemistry for a program that predicts the structures and interactions of proteins. It is used across biomedical science, particularly in drug development.
The Swiss pharmaceutical company Novartis has gone further. Beyond developing new drugs, it uses AI to speed up patient enrollment in clinical trials, reducing a potentially multi-year process to months.
However, according to scientists, a major challenge for researchers is the black box problem: many AI programs can make decisions but cannot explain them, which makes it difficult to trust the systems.
But this may change, Hassabis said, thanks to the equivalent of an AI brain scan. "I think that in the next five years, we will get out of the black box era we are in now," the scientist emphasized.