Changpeng Zhao, the founder of Binance, has raised a fresh alarm about the growing threat of AI-powered deepfakes after a chilling cyberattack on Japanese cryptocurrency advocate Mai Fujimoto during what appeared to be a routine video call. The incident is another reminder that criminals can now use advanced AI to infiltrate even trusted communication channels, making heightened security essential.
A Textbook Case of Modern Hacking
On June 20, Fujimoto, better known online as Miss Bitcoin, recounted how she fell victim to a deepfake scam after being lured into a Zoom call with an acquaintance whose Telegram account had been compromised.
According to her, the face on the video call was familiar, which made her let her guard down. "About 10 minutes into the online meeting, I could see her face, but I did not realize it was a deepfake," Fujimoto recounted.
The call then developed audio problems, and the impersonator sent a link that was supposed to fix them. After clicking it, Fujimoto inadvertently installed malware that compromised her Telegram and MetaMask accounts. Her main X account was also hijacked, forcing her to plead with followers to report it for impersonation and to warn other users not to click links sent from compromised accounts.
"If I had known about this type of attack, I might not have clicked on the link," she lamented, urging others to spread awareness.
Her ordeal has shaken the cryptocurrency community, with influencers and security experts on X repeating the same message: in the age of AI, trust must be verified through multiple channels.
CZ weighed in as well, warning his 10 million followers about the rampant use of artificial intelligence in a new breed of deepfake attacks. "Even video call verification will soon be a thing of the past," he cautioned, suggesting that AI-generated impersonations will become ever harder to detect as the technology improves.
The former Binance CEO advises users never to install software from unofficial links, even when they appear to come from friends.
The Rise of AI-Assisted Fraud
The attack on Fujimoto is not an isolated incident. A report released by Bitget just days earlier found that deepfake technology played a role in nearly 40% of high-value cryptocurrency fraud cases in 2024, contributing to at least $4.6 billion lost to scams.
The study describes criminals using AI to create convincing fake videos of celebrities such as Elon Musk to promote scams, to simulate customer service conversations, and, most notably, to weaponize video conferencing tools like Zoom by distributing malicious links through them.
A separate Chainalysis report likewise notes that criminals are increasingly using AI tools, including deepfakes, to bypass KYC checks and automate fraud, making cryptocurrency scams "increasingly hard to detect."
In one such case, Hong Kong police arrested 31 people linked to an organization that stole $34 million using AI-generated videos of cryptocurrency executives. Alarmingly, reports warn that AI-driven scams are on track to rise sharply.