
AI hallucinations: why artificial intelligence chatbots can display false information

ObozrevatelNews
Artificial intelligence. Source: Pixabay

Google's new search feature, AI Overviews, has drawn a negative reaction after users pointed out factually inaccurate answers to their queries. Experts have explained why artificial intelligence (AI) chatbots can display false or misleading information.

AI Overviews, launched two weeks ago, displays answers compiled from various sources on the internet at the top of the Google search page. The new feature is intended to help users with "difficult questions," according to Google's blog.

The system has given false answers: for example, it advised users to glue cheese to pizza if it slides off and to eat rocks to improve their health, and claimed that former US President Barack Obama is a Muslim, a conspiracy theory that has long been debunked.

A study by Vectara, a startup that develops generative AI, found that chatbots invent information in anywhere from 3% to 27% of cases.


What are artificial intelligence hallucinations?

Large language models (LLMs), which power chatbots such as OpenAI's ChatGPT and Google's Gemini, learn to predict responses based on patterns. Hanan Ouzan, partner and head of generative AI at Artefact, said that a model calculates the most likely next word in its answer to your question based on what is in its training data.

"This is exactly how we work as humans. We think before we speak," he said in an interview with Euronews.

Sometimes, the model's training data can be incomplete or biased, leading to incorrect answers or "hallucinations" on the part of the chatbot.

According to Oleksandr Sukharevskyi, senior partner at QuantumBlack, AI by McKinsey, it is more accurate to call AI a "hybrid technology" because chatbot answers are "mathematically calculated" based on the data the models have observed.

Google says there is no single reason why hallucinations occur: causes can include insufficient training data, incorrect assumptions made by the model, or hidden biases in the data the chatbot uses. Google has identified several types of such failures, including incorrect predictions of events that may not actually happen, false positives that detect nonexistent threats, and false negatives that miss real ones.

The company acknowledged that such hallucinations can have significant consequences, for example when a medical AI model misidentifies a benign skin lesion as malignant, leading to "unnecessary medical interventions."

According to Igor Sevo, Head of AI at HTEC Group, an international product development firm, it all depends on what artificial intelligence is used for.

"In creative situations, hallucinations are good. The question is how to teach models to distinguish between creativity and truthfulness," he explained, noting that AI models can write new pieces of text or emails in a certain voice or style.

It's all about data

According to Ouzan, the accuracy of a chatbot depends on the quality of the data set it is fed.

"If one of the data sources is not 100 percent accurate, the chatbot may say something wrong. This is the main reason why we experience hallucinations," he said.

Currently, according to Ouzan, AI companies use large amounts of data from the internet and open sources to train their models.

OpenAI, in particular, is also signing deals with media organizations such as Axel Springer and News Corp, and publications such as Le Monde, to license their content and train its models on more reliable data. According to Ouzan, it is not that AI needs more data to formulate accurate answers; rather, the models need high-quality input data.

Sukharevskyi says he is not surprised that AI chatbots make mistakes: they have to, so that the people behind them can improve the technology and its data sets as they go.

