When Anthropic published a report Wednesday detailing genAI attacks that entirely bypassed humans, as opposed to human attackers using AI tools as aids, it was the realization of what many CISOs have long anticipated.
But it shows that preparations for AI-only attacks need to be accelerated as the detectable patterns from human attacks become irrelevant.
When attack groups consist of “solopreneurs or an extremely small numbers of employees executing at this level,” said Rob Lee, chief of research at the SANS Institute, “it represents a complete shift in that [attackers] who are not restrained by nation-state politics or potential attribution will feel more emboldened to do scalable damage and can logarithmically increase the number of ransomware attacks at levels we have not seen before.”
Another cybersecurity consultant, Brian Levine, a former federal prosecutor who today serves as the executive director of a directory of former government and military specialists called FormerGov, observed that there’s a window of opportunity in which defenders can head off this kind of attack.
“I think there’s going to be a time period — it may be short — where the AI might be effective at facilitating cyber attacks, but not so focused on concealing its tracks or the identity of the person behind the keyboard,” Levine said. “That may be an opportunity to use that fact to apprehend all of these amateurs who are relying entirely on the AI to get it right.”
Levine also sees a mainstream workforce phenomenon hitting attackers. “We are seeing the criminal world continue to mirror the legitimate business world. Criminals are going to get laid off from their criminal jobs,” he said. “There has been a lot of specialization in these attack groups: Someone to write the code, someone in charge of encryption, defeating the antivirus, making sure that the cards are legitimate numbers, money launderers. There have been a lot of specialized roles. What will happen with those people?”
AI is replacing attackers
The Anthropic report said that the company is finding more evidence that genAI tools are no longer helping cyberattackers so much as they are replacing them.
“A cybercriminal used Claude Code to conduct a scaled data extortion operation across multiple international targets in a short timeframe. This threat actor leveraged Claude’s code execution environment to automate reconnaissance, credential harvesting, and network penetration at scale, potentially affecting at least 17 distinct organizations in just the last month, across government, healthcare, emergency services, and religious institutions, “ the report said.
It added, “the operation demonstrates a concerning evolution in AI-assisted cybercrime, where AI serves as both a technical consultant and active operator, enabling attacks that would be more difficult and time-consuming for individual actors to execute manually. This approach, which security researchers have termed vibe hacking, represents a fundamental shift in how cybercriminals can scale their operations. The actor demonstrated unprecedented integration of artificial intelligence throughout their attack lifecycle, with Claude Code supporting reconnaissance, exploitation, lateral movement, and data exfiltration.”
Anthropic is far from alone in these discoveries; security firm ESET reported a similar development in an analysis of some ransomware the day before Anthropic released its findings.
Phil Brunkard, an executive counselor at Info-Tech Research Group UK, agrees with the Anthropic report, but said that he is far more worried about what the report did not find.
“There is potentially a lot of this activity we’re not seeing. Anthropic being open about their platform being used for malicious activities is significant, and OpenAI has recently shared the same as well. But will others open up about what is already likely happening?” Brunkard asked. “Or maybe they haven’t shared because they don’t yet have effective controls in place? We need to know the answer to ‘What are the big AI vendors doing to prevent their code from being weaponized for targeted cybercrime?’ And are open-source models creating even more exposure?”
Much more to worry about
As encouraging as these reports are, Brunkard said, there is far more to worry about.
“Yes, OpenAI and Anthropic have both confirmed that their platforms were misused and that they’re taking steps to detect and ban bad actors. But that’s still reactive,” he said. “The real challenge is moving upstream. If the tools are powerful enough to run an attack from start to finish, you need to know who is using them and why.”
Asked what CISOs should do differently to defend against these AI-only attacks, few experts had anything concrete to suggest.
“Runtime AI defense will need to keep pace with the evolution of attacker infrastructure created with modern AI tools,” said Will Townsend, VP/principal analyst for Moor Insights & Strategy. “The good news is that many cybersecurity solution providers are embracing things like automated red teaming, prompt injection prevention, input validation, threat intelligence integration and other techniques to bolster defense. DNS security controls can also proactively identify suspect domains and others that can be weaponized in the future to deliver AI infused malware payloads.”
Another Moor analyst added that it is critical for enterprise CISOs to keep their focus on the newest threats.
“AI enables criminals to move beyond script kiddies to a much more scalable business model with agentic thugware. Enterprises worried about quantum security should not ignore the more urgent threat of AI-assisted hacks,” said Bill Curtis, analyst in residence. “One tactic for escaping the mean streets of black hat versus white hat AI gang warfare is to disconnect mission-critical systems from the internet. Hence, the importance of air-gapped datacenters. It’s not a big deal yet, but watch this space.”
No Responses