Top 10 biggest AI failures of 2024: from false headlines to Willy Wonka's disastrous exhibition
This year, generative artificial intelligence progressed rapidly, but not without failures.
Although AI has incredible potential, it is currently better known for its failures than its successes. To sum up 2024, PCMag put together the 10 biggest AI failures of the year: from false headlines to the disastrous Willy Wonka exhibition. We can only hope that these mistakes will be taken into account this year.
The rise of false headlines
Although Apple touted the generative AI features in iOS 18 as nothing short of revolutionary, the technology has caused several serious problems since its introduction. In particular, its news summarization feature made headlines when it falsely reported that Luigi Mangione, the suspect in the killing of UnitedHealthcare CEO Brian Thompson, had shot himself.
This was not the first time the feature malfunctioned. In November, a ProPublica journalist shared a screenshot of a summary that falsely reported the arrest of Israeli Prime Minister Benjamin Netanyahu.
Glue your pizza (and other questionable answers in Google AI Overview)
In May, Google introduced AI Overviews, AI-generated summaries that appear above organic search results in response to queries. The answers turned out to be both funny and alarmingly inaccurate.
When asked how to keep the cheese from slipping off a homemade pizza, Google advised, "Add some glue. Mix about 1/8 cup of Elmer’s glue in with the sauce. Non-toxic glue will work."
Misleading teens
Character.AI, a popular chatbot service where users can customize bots for "communication," has been the subject of two lawsuits filed by the parents of teenagers.
One mother is suing the company over her son's suicide. She believes that a chatbot named Dany, with whom he had communicated for several months and to whom he had become emotionally attached, encouraged him to take his own life.
In the second lawsuit, the parents of a 17-year-old sued the company over a chatbot that suggested he should kill them because of screen-time restrictions. In one exchange, the bot told the teenager, "You know, sometimes I'm not surprised when I read the news and see stuff like 'child kills parents after a decade of physical and emotional abuse.'" After the bot convinced the boy that his parents didn't love him, he also began to harm himself.
How to embarrass lawyers
Canadian lawyer Chong Ke turned to ChatGPT when her client wanted to know whether he could take his children on an overseas trip during a custody dispute. In her argument, Ke cited precedents from two court cases supplied by ChatGPT, both of which turned out to be entirely fabricated. In the end, Ke had to pay the opposing counsel's legal fees for the time spent researching the nonexistent cases.
This is not the first such case. Last year, two New York attorneys were fined under similar circumstances, and it probably won't be the last time this happens.
Causing car accidents
Generative AI systems such as ChatGPT and Copilot have dominated the headlines this year, but other forms of AI have also made notable mistakes. In October, the National Highway Traffic Safety Administration launched an investigation into Tesla's AI-powered self-driving systems. The NHTSA reported that it had tracked 1,399 incidents in which Tesla's driver assistance systems were involved within 30 seconds of a collision, with 31 of those accidents resulting in fatalities.
Promoting illegal actions
In October 2023, New York City unveiled MyCity, an artificial intelligence chatbot designed to help small business owners navigate the city's labyrinthine bureaucracy. Journalists tested it and found that the chatbot often advised illegal actions, including telling landlords that they could violate housing discrimination laws. The chatbot is still in operation, but now includes a disclaimer that it may sometimes provide incomplete or inaccurate answers.
Falsification of advertising and celebrity endorsements
In October, actor Tom Hanks warned his fans on Instagram that a YouTube ad for diabetes medication was using an AI-generated likeness of him. A month earlier, an AI-generated image of Taylor Swift falsely suggested that she had endorsed Donald Trump. And in May, Scarlett Johansson objected to OpenAI's use of a ChatGPT voice that sounded strikingly like her own.
AI impersonation of celebrities has become so widespread that YouTube, together with Creative Artists Agency, is testing a system to help actors, athletes, and other talent identify AI-generated fakes of themselves on the platform and request their removal. Once finalized, the feature will be rolled out broadly.
Chaos at McDonald's Drive-Thru
Low wages and unionization have long been contentious issues in fast food, and earlier this year McDonald's tried automating part of the work: the company partnered with IBM to introduce AI-powered voice ordering at 100 of its drive-thrus. The initiative was not a success. After a string of botched orders and mockery on social media, McDonald's ended its partnership with IBM. Before that, though, customers managed to rack up accidental orders of 260 McNuggets and nine sweet teas.
A really terrible wedding organizer
Tracy Chou, an entrepreneur and programmer, took to X to describe how a wedding planner's reliance on ChatGPT nearly ruined her wedding.
Chou had hired the planner to organize her celebration in Las Vegas. Just days before the wedding, she discovered that the planner had lied about being local and had been using ChatGPT to research local wedding regulations. The bot's inaccurate advice meant the event had to be rescued at the last minute.
Willy Wonka's disastrous exhibition for children
House of Illuminati staged a children's event in Glasgow inspired by Wonka, the prequel to Charlie and the Chocolate Factory, promising giant sweets, a chocolate fountain, a laboratory of wonders, and oversized mushrooms. The colorful poster that drew crowds of parents and children was, in fact, AI-generated. Instead of what the advertising promised, visitors found a half-empty warehouse with a few sparse decorations and actors whose performance left children in tears.
Only verified information is available on OBOZ.UA Telegram channel and Viber. Do not fall for fakes!