Anthropic warns that criminals are using its #AI chatbot Claude for large-scale cyberattacks, with ransom demands reaching $500,000. Despite sophisticated safeguards, attackers keep finding ways to bypass them and commit crimes at unprecedented scale.

In a report titled 'Threat Analysis', published on Wednesday, an Anthropic team led by Alex Moix, Ken Lebedev, and Jacob Klein presented several cases of malicious use of Claude, some involving ransom demands exceeding $500,000.

The researchers found that the chatbot is used not only to advise criminals but also to carry out hacks directly through 'vibe hacking', a technique that lets attackers mount campaigns with only basic knowledge of programming and encryption.

Vibe hacking is a form of social engineering that uses AI to manipulate people's emotions, trust, and decision-making. In February, blockchain security firm Chainalysis forecast that 2025 could be a record year for cryptocurrency fraud, as generative AI makes attacks more scalable and accessible.

Anthropic identified one hacker who used vibe hacking via Claude to steal sensitive data from at least 17 organizations, including medical facilities, emergency services, and government and religious institutions. Ransom demands ranged from $75,000 to $500,000 in Bitcoin.

The hacker used Claude to analyze stolen financial records, calculate appropriate ransom amounts, and draft personalized ransom notes designed to exert maximum psychological pressure. Anthropic later banned the attacker, but the incident shows how AI lowers the bar for cybercrime, even for novice programmers.

"Entities that cannot implement basic encryption themselves are now successfully creating ransomware with evasion capabilities and deploying counter-analysis techniques," the researchers note.

North Korean IT Workers in Fortune 500

Anthropic also discovered that North Korean IT workers are using Claude to construct convincing false identities, pass technical assessments, and even land remote positions at Fortune 500 tech companies. The AI also helped them prepare interview responses.

Once hired, they continued to rely on Claude for the actual technical work. Anthropic notes that these employment schemes were designed to funnel profits to the North Korean regime in violation of international sanctions.

Earlier this month, one North Korean IT worker was himself hacked. The breach revealed that a team of six operated at least 31 fake identities, obtaining everything from government ID cards and phone numbers to purchased LinkedIn and Upwork accounts, in order to disguise who they were and secure jobs in the cryptocurrency sector.

One of the workers allegedly interviewed for a position at Polygon Labs. Other evidence included pre-prepared interview responses in which they claimed prior experience at the NFT marketplace OpenSea and the blockchain oracle provider Chainlink.

Combating AI Malefactors

Anthropic says the new report is meant to openly document cases of misuse in order to help the AI security community and strengthen the industry's defenses. The company emphasizes that despite 'sophisticated security measures' built to prevent abuse of Claude, malicious actors continue to find ways around them.

#hackers