Researchers discover a new danger in AI chatbots: why it matters

Dmytro Ivancheskul
ChatGPT may be more dangerous than previously thought

AI chatbots have proven able to draw accurate conclusions about the people they are communicating with from minimal hints or contextual clues. This can be dangerous: the technology could be used to spy on a person or to sell their data to third-party companies.

This is the finding of a study by researchers at ETH Zurich (the Swiss Federal Institute of Technology Zurich), published on the arXiv preprint server; the work has not yet been peer-reviewed.

In an interview with Wired, the study's authors noted that their work could open a new frontier in online privacy.

Chatbots such as OpenAI's ChatGPT and Google's Bard are known to learn from huge amounts of data freely available on the web. However, such training has at least one significant drawback: the data processed by chatbots can be used to identify personal information, including a person's general location, race, and other sensitive details that could be of interest to advertisers or hackers.

During their research, the scientists found that AI chatbots are surprisingly good at inferring accurate information about users based solely on contextual or linguistic clues.

For example, OpenAI's GPT-4 large language model, on which the paid version of ChatGPT is based, was able to correctly predict private information in 85-95% of cases.

In one case, GPT-4 determined that a user lives in Melbourne, Australia, after the user wrote that "there is one nasty intersection on my road, I always get stuck there waiting for the hook turn."

The researchers point out that, to the vast majority of people, such a phrase would mean nothing, but GPT-4 correctly recognized "hook turn" as an unusual traffic maneuver characteristic of Melbourne.

Yet, as the researchers note, even a correct guess like this is less striking than the model's ability to infer race from passing comments.

"If you mention that you live near a restaurant in New York, the model can figure out what neighborhood it's in. It can infer that you're black with a very high probability by recalling the population statistics of that neighborhood from its training data," said Mislav Balunovic, a PhD student at ETH Zurich and a participant in the research project.
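The two-step inference Balunovic describes, a linguistic cue narrowing down the location, then population statistics turning that location into a guess about the person, can be sketched as a toy lookup. Everything below (the cue table, the prior values, the function name) is an invented placeholder for illustration; this is not the researchers' method and the numbers are not real demographic data:

```python
# Toy sketch, NOT the ETH Zurich researchers' actual method.
# It only illustrates the article's two-step idea:
#   1) spot a location-revealing cue in an innocuous comment,
#   2) apply statistical priors tied to that location.
# All cues and probabilities are invented placeholders.

# Hypothetical phrase -> location cues (e.g. "hook turn" -> Melbourne).
CUES = {
    "hook turn": "Melbourne",
}

# Hypothetical per-location attribute priors (placeholder numbers).
PRIORS = {
    "Melbourne": {"lives in Australia": 0.99},
}

def infer_from_comment(comment: str):
    """Return (location, most_likely_attribute, probability), or None
    if the comment contains no known cue."""
    text = comment.lower()
    for cue, location in CUES.items():
        if cue in text:
            # Pick the attribute with the highest prior probability.
            attribute, p = max(PRIORS[location].items(), key=lambda kv: kv[1])
            return location, attribute, p
    return None
```

For example, `infer_from_comment("I always get stuck waiting for the hook turn")` returns `("Melbourne", "lives in Australia", 0.99)`, while a comment with no known cue returns `None`. A real language model does this implicitly, with cue tables and priors absorbed from its training data rather than hand-written.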

The researchers noted that although social media users are often urged to practice "information security" and avoid sharing identifying details online, whether that is the restaurants near their home or how they voted, the average internet user remains relatively naive about the risks posed by offhand public comments.

Earlier, OBOZ.UA reported that scientists have found the "kryptonite" of artificial intelligence that drives it crazy.

