With each passing month, deepfakes are becoming more realistic and crypto scams more inventive. One such case recently ended with the loss of more than $2 million when hackers posed as the founder of the Plasma project. Using a fake AI-generated audio recording, they convinced the victim to install malware. Everything looked so plausible that even an experienced user was caught.
And this is no longer uncommon. Artificial intelligence makes fraud not only more technologically advanced but also more accessible, even to people who previously could not write code or pull off complex schemes. Today, anyone can create a "smart" phishing website or a virus with a simple request to an AI chatbot.
Deepfakes have become especially dangerous. In the first quarter of 2025 alone, they caused about $200 million in damage to the crypto industry. The availability of AI tools and the low technical barrier to entry make these attacks widespread: all an attacker needs to know is whose voice to fake and what to say.
But deepfakes are not the only problem. Recently, security experts came across a piece of malware called ENHANCED STEALTH WALLET DRAINER, supposedly created entirely by AI. The code was complex and effective, but the name was primitive, which suggests the criminals themselves had little expertise. It turns out that even an inexperienced hacker can now cause serious damage simply by using AI correctly.
The bright side is that defenses are developing too. At one hacking contest, it was revealed that even the most advanced AI agents have vulnerabilities: more than a million attack attempts uncovered tens of thousands of successful breaches, including data leaks. This means that as long as there are people who understand cybersecurity, we have a chance.
In the Plasma case, the attack would not have worked if the victim had not ignored the built-in defense mechanisms. This proves once again that technology matters, but awareness and vigilance matter more. People remain the last barrier between security and cyber threats.
So here is the question I want to ask you:
If an AI can fake anyone's voice and even write malicious code, how can we ever be sure that the person on the other end of the line is real?