Anthropic, the company behind the popular AI model Claude, said in a new Threat Intelligence report that it disrupted a "vibe hacking" extortion scheme. In the report, the company detailed how the attack was carried out and how it allowed hackers to scale up a mass attack against 17 targets, including entities in government, healthcare, emergency services and religious organizations.
(You can read the full report in this PDF file.)
Anthropic says that its Claude AI technology was used as both a "technical consultant and active operator, enabling attacks that would be more difficult and time-consuming for individual actors to execute manually." Claude was used to "automate reconnaissance, credential harvesting, and network penetration at scale," the report said.
Making the findings more disturbing is that so-called vibe hacking had been considered a future threat, with some experts believing it was not yet possible. What Anthropic shared in its report could represent a major shift in how AI models and agents are used to scale up large cyberattacks, ransomware schemes or extortion scams.
Separately, Anthropic has also recently been dealing with other AI issues, namely settling a lawsuit brought by authors who claimed Claude was trained on their copyrighted materials. Another company, Perplexity, has been dealing with its own security issues, as its Comet AI browser was shown to have a major vulnerability.
