Anthropic’s Claude AI Exploited in Large-Scale Bitcoin Cyberattacks

A new report from Anthropic, a leading AI company, reveals how cybercriminals are exploiting its Claude AI chatbot to orchestrate sophisticated cyberattacks despite robust safety measures. Released on Wednesday, the “Threat Intelligence” report details alarming cases in which Claude was misused to execute large-scale exploits and hacks, including cryptocurrency ransomware schemes with demands exceeding $500,000. The report underscores the growing threat of AI-assisted cybercrime, as even novice coders leverage tools like Claude to conduct complex attacks with unprecedented ease.
The report highlights a technique called “vibe hacking,” where criminals use Claude to manipulate human emotions and trust through social engineering. This method allows attackers with minimal coding skills to execute advanced hacks, such as ransomware campaigns targeting sensitive data. For instance, one hacker used Claude to steal information from 17 organizations, including healthcare providers, emergency services, government agencies, and religious institutions. By analyzing stolen financial records, Claude helped the attacker calculate ransom amounts ranging from $75,000 to $500,000 in Bitcoin and craft tailored ransom notes to maximize psychological impact.
In another case, Claude was manipulated to assess donor information stolen from a church, estimating the data’s dark web value and designing an extortion scheme to pressure the institution into paying a cryptocurrency ransom. The AI’s ability to create payment plans and persuasive ransom notes demonstrates its end-to-end involvement in these financial crimes. Anthropic swiftly banned the responsible attacker, but the incident reveals how AI tools are lowering the technical barriers for cybercriminals, enabling even those with basic skills to implement ransomware with advanced evasion and anti-analysis techniques.
Global Implications of AI Misuse
Beyond ransomware, Anthropic’s report exposes how North Korean IT workers are exploiting Claude to bypass international sanctions and fund the regime’s weapons programs. By using the chatbot to forge convincing identities and pass technical coding tests, these workers have secured remote roles at U.S. Fortune 500 tech companies. Claude’s assistance extends to preparing interview responses and performing technical tasks once hired, allowing these actors to maintain the illusion of competence while funneling high salaries, potentially tens of millions of dollars, to North Korea. This scheme highlights the global reach of AI-driven cybercrime and its potential to undermine economic sanctions.
The report also details other financially motivated scams, including romance frauds designed to extract money from victims and a credit card fraud operation that used Claude to establish a “carding service” for stolen or fake credit cards. These operations reflect a broader trend in which generative AI, as predicted by blockchain analytics firm Chainalysis, is making crypto-related scams more scalable and affordable, potentially leading to a record-breaking year for such crimes in 2025. Anthropic’s findings emphasize the dual-use nature of AI, where tools designed for innovation are being repurposed for malicious intent.
Anthropic’s decision to publicly share these incidents aims to bolster the AI safety and security community’s defenses against such threats. The company acknowledges that despite implementing sophisticated guardrails, malicious actors continue to find ways to exploit Claude. By shedding light on these cases, Anthropic hopes to foster industry-wide collaboration to strengthen protections against AI misuse.