Ignoring AI in the threat chain could be a costly mistake, experts warn

Tags:

As AI adoption accelerates across enterprises — and among digital adversaries — a heated debate has erupted over whether AI’s role in the cyber threat chain should be a top concern for CISOs.

A vocal handful of experts, along with one cybersecurity vendor, insist that warnings about AI-enhanced threats are exaggerated hype pushed by cyber-intel firms and AI companies eager to sell new defensive tools.

“You have all these people worrying about hypothetical scenarios in which AI just magically bypasses all cybersecurity policies and technologies,” Marcus Hutchins, principal threat researcher at Expel, tells CSO. “What you actually have is executives moving away from tried and tested cybersecurity policies, tools, and mitigations, and gravitating toward generative AI products that are unproven and most likely aren’t going to work when it actually comes down to it.”

But most frontline practitioners and veteran threat-intel leaders sharply disagree. They argue that AI-assisted threats are not speculative — they’re already here — and that dismissing them puts organizations at risk as increasingly agile adversaries experiment with AI to speed and scale their attacks.

“We are absolutely seeing AI used in capabilities that traditional malware doesn’t have,” Steve Stone, SVP of threat discovery and response at SentinelOne, tells CSO. “We see AI being used to refine malware much quicker, used as a sidekick to generate code, or deployed for social engineering. Across the attack lifecycle, attackers are using AI.”

Two recent research reports underscore the view that AI is a growing — and potentially more dangerous — part of the cyberattack cycle, and suggest that CISOs might be running out of time to assess how well they can defend against adversaries who currently hold a significant speed advantage.

Evidence of AI usage in the attack chain is mounting

Although many leading cybersecurity and AI companies, including Microsoft and OpenAI, have issued reports detailing how AI can enhance cyberattacks, two recent research reports add weight to this view, suggesting that adversaries are moving beyond AI for simple productivity gains and beginning to integrate it more directly into their operational tooling.

On Nov. 5, Google Threat Intelligence Group (GTIG) released a report concluding that threat actors have entered a new operational phase of AI abuse, extending beyond the traditional productivity use of AI to create better phishing emails or write code faster, and are using tools that dynamically alter behavior mid-execution. According to the report, “government-backed threat actors and cyber criminals are integrating and experimenting with AI across the industry throughout the entire attack lifecycle.”

Google identified five recent malware samples that were developed using AI, including the first use of “just in time” AI in experimental malware families, such as PROMPTFLUX and PROMPTSTEAL, that use large language models (LLMs) during execution.

“Productivity tools are probably, in terms of the overall picture, the biggest slice of the pie that we’re seeing today, in terms of how [threat actors] are using LLMs and other gen AI tools for enabling their own capabilities,” Billy Leonard, GTIG’s global head of analysis of state-sponsored hacking and threats, tells CSO.

Leonard sees a day coming soon when threat actors engage in prompt injection, where they manipulate an AI’s model input to leak information or generate harmful content. So far, the AI-assisted attacks his group has witnessed don’t reach these highly sophisticated levels.

But, he warns, “we should expect to start seeing threat actors deploying their own AI agents, which gets us closer to that sort of autonomous system [attacks that some fear]. There are a number of open-source tools now for doing AI red teaming and other things. Threat actors are likely using those for non-red teaming purposes. Over the next 12 months, we should start to see more of that.”

The Google report came under initial criticism as fostering needless fear by Hutchins and other researchers, although Hutchins, for one, later retracted his complaints, suggesting how uncharted the new AI cyber threat terrain is.

“The research report we released was used as both the talking point for the AI [cyber threat] is garbage camp as well as the sky is falling AI viewpoint,” Leonard says. “They both pointed to the same report and the same findings as their justification for their side of the argument. It’s like, alright, you got to pick a side.”

Just a week after GTIG issued its report, on Nov. 13, AI company Anthropic issued a bombshell report in which it claimed to have discovered the first orchestrated cyber espionage by a Chinese state-sponsored group that manipulated the company’s Claude Code tool into trying to infiltrate around 30 global targets, succeeding in a small number of cases.

The attack relied on several features of AI models that did not exist, or were in much more nascent form, just a year ago, according to Anthropic, even though much of the attack involved traditional human intervention at various stages during the process. Anthropic said it is sharing this case publicly to help “those in industry, government, and the wider research community strengthen their own cyber defenses.”

Critics of AI-enabled threat reports quickly seized on Anthropic’s decision not to release indicators of compromise (IOCs), claiming the omission undercuts the value of the research.

But experienced threat leaders say this criticism misunderstands the nature of AI-driven attacks — and the realities of disclosure.

“Researchers always want to see all the IOCs,” Morgan Adamski, PwC principal and former executive director of US Cyber Command, tells CSO. “But there might be very specific reasons those weren’t included. Detailing how an adversary actually conducted it could essentially give the playbook to our adversaries.”

Rob T. Lee, chief AI officer at the SANS Institute, is even more blunt. “Anthropic is not a cybersecurity company like Mandiant or Google, so give them a break. And what indicators of compromise are actually going to help defenders? If they were very clear about how they detected this, that’s on their end. So what are they supposed to do — release IOCs only they can use? It’s ridiculous.”

For its part, Anthropic is playing its cards close to the vest. “Releasing IOCs, prompts, or technical specifics can give threat actors a playbook to use more widely,” the company tells CSO. “We weigh this tradeoff case by case, and in this instance, we are sharing directly with industry and government partners rather than publishing broadly.”

How CISOs could cut through the confusion

The conflicting narratives around AI threats leave many CISOs struggling to reconcile hype with operational reality.

Given the emergence of AI-enabled cyber threats amid pushback from some cyber experts who contend these threats are not real, Sophos CEO Joe Levy tells CSO that AI is becoming a “Rorschach test, meaning that however individuals will choose to look at it, that is the pattern that they will find there.”

However, Levy cautions that leaders need to take a more balanced view of the situation. “There is indeed novelty in the use of AI and the threat of agentic AI being used in a much more scalable way by attackers than we’ve seen through previous forms of either manual attacks or even automated attacks,” he says. “That element of it is certainly real. But I don’t think to this point we’ve seen a significant escalation that inhibits our ability to use our current set of defenses to the same level of effectiveness.”

PwC’s Adamski stresses that CISOs should be prepared to turn around new defenses on a dime, given how fast the new AI era will be. “From a defensive perspective, it’s going to have to be seconds,” she says.

She also believes it’s important to dispel any confusion that AI threats are not real. “The bottom line is that it is an emerging technology and capability that our adversaries can leverage. It exists, and we know that there are people out there testing it, deploying it, and quite honestly being successful in its use,” she says.

Clyde Williamson, senior product security architect at Protegrity, agrees that it’s dangerous to assume attackers won’t exploit generative AI and agentic tools. “Anybody who has that hacker mindset when presented with an automation tool like what we have now with generative AI and agentic models, it would be ridiculous to assume that they’re not using that to improve their skills,” he tells CSO.

Jimmy Mesta, CTO and co-founder of RAD Security, says CISOs should be preparing their boards now for difficult budget decisions. “Boards will have to be presented with the options of being insecure or being secure, what it’s going to cost, and what it’s going to take,” he tells CSO. “CISOs aren’t going to be able to walk in and say we must do everything to 100%. There will be more trade-offs than ever.”

Even as CISOs prepare for the coming wave of AI-assisted attacks, they must maintain focus on cybersecurity fundamentals, Alexandra Rose, global head of government partnerships and director of CTU threat research at Sophos, tells CSO. “We come back to the basics so often because they’re the most effective at stopping what we see — from every level of sophistication, including threat actors experimenting with AI,” she says.

Categories

No Responses

Leave a Reply

Your email address will not be published. Required fields are marked *