“Agentic AI systems are being weaponized.”

That’s one of the first lines of Anthropic’s new Threat Intelligence report, out today, which details the wide range of cases in which Claude (and likely many other leading AI agents and chatbots) is being abused.

First up: “Vibe-hacking.” One sophisticated cybercrime ring that Anthropic says it recently disrupted used Claude Code, Anthropic’s AI coding agent, to extort data from at least 17 organizations around the world within a single month. The targets included healthcare organizations, emergency services, religious institutions, and even government entities.

“If you’re a so …

Read the full story at The Verge.