Humanity faces a new danger: what crimes could be pulled off with the help of AI

The age of artificial intelligence brings new challenges to humanity. We have been using AI tools for a long time, from website algorithms to cars with traffic sign recognition. AI is a powerful tool for improving efficiency and for processing and sorting large amounts of data.
However, it is not all good news. Skeptics are sounding the alarm: the latest technology could be used by fraudsters for criminal purposes. The Conversation explains what kinds of crimes could be pulled off with the help of AI.
"Phishing hook."
There is growing public concern that criminals will use AI to steal people's personal data. OpenAI's ChatGPT and Google's Bard are already being actively used to write all kinds of texts. Artificial intelligence produces effective marketing copy and can help criminals sound more believable when they contact potential victims.
Phishing is a numbers game: approximately 3.4 billion spam emails are sent every day. Spam phishing emails are usually easy to recognize, so users delete them immediately. However, experts say AI can make these messages far more convincing, helping attackers extract information from victims. Users may simply be unable to tell a phishing email from a legitimate one, and attackers will take advantage of this.
Automated interactions
One of the earliest uses of artificial intelligence was automating interactions between customers and services, such as chatbot messaging and help desks. AI has enabled faster responses to customers and improved business efficiency. Criminals can use the same tools to automate interactions with many more potential victims. For example, fraudsters pose as bank employees trying to find out card details, an old and unfortunately effective scheme.
Deepfake technology
Deepfake technology is already being actively used by criminals. "AI is really good at creating mathematical models that can be 'taught' on large amounts of real-world data," the scientists note. With the help of deepfakes, criminals create fake videos, audio recordings, text messages, voice imitations and more.
Gaining access to passwords
Criminals call this technique "brute forcing." AI can rapidly try many combinations of characters to crack a password, which is why long, complex passwords are more secure and harder to guess with this method. Brute forcing works much faster if attackers already know something about the victim: the names of family members or pets, dates of birth and so on.
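To illustrate why length and complexity matter, here is a minimal Python sketch (not from the article, purely illustrative) that counts how many candidate passwords a brute-force search would have to try for different lengths and character sets; the alphabet sizes used are assumptions for the example.

```python
# Illustrative only: how the brute-force search space grows with
# password length and alphabet size. Assumes every combination must
# be tried; real attacks also exploit wordlists and personal details.

def combinations(alphabet_size: int, length: int) -> int:
    """Number of candidate passwords a brute-force attack must consider."""
    return alphabet_size ** length

LOWERCASE = 26                      # a-z only
MIXED = 26 + 26 + 10 + 32           # upper + lower + digits + common symbols

for length in (6, 8, 12):
    print(f"{length}-character lowercase password: "
          f"{combinations(LOWERCASE, length):,} combinations")
    print(f"{length}-character mixed password:     "
          f"{combinations(MIXED, length):,} combinations")
```

Even this rough arithmetic shows that each extra character multiplies the search space, which is why a long, varied password resists automated guessing far better than a short one.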
Earlier, OBOZREVATEL reported that, according to recent research, AI could take jobs away from 27% of workers.