Artificial intelligence is revolutionizing the technology industry and this is equally true for the cybercrime ecosystem, as cybercriminals are increasingly leveraging generative AI to improve their tactics, techniques, and procedures and deliver faster, stronger, and sneakier attacks.
As with legitimate use of emerging AI tools, abuse of generative AI for nefarious ends thus far hasn’t been so much about the novel and unseen as it has been about productivity and efficiency, lowering the barrier to entry, and offloading automatable tasks in favor of higher-order thinking on the part of the humans involved.
“AI doesn’t necessarily result in new types of cybercrimes, and instead enables the means to accelerate or scale existing crimes we are familiar with, as well as introduce new threat vectors,” Dr. Peter Garraghan, CEO/CTO of AI security testing vendor Mindgard and a professor at the UK’s Lancaster University, tells CSO. “If a legitimate user can find utility in using AI to automate their tasks, capture complex patterns, lower the barrier of technical entry, reduced costs, and generate new content, why wouldn’t a criminal do the same?”
But the advent of agentic AI is beginning to change things, with AI tools no longer just assisting attackers but helping them automate operations.
“The most significant shift over the past year has been AI’s evolution from a simple ‘helper’ toward becoming a fully autonomous, and quite literally an attacker’s partner-in-crime, capable of executing entire attack chains,” says Crystal Morin, senior cybersecurity strategist at cloud-native security and visibility vendor Sysdig.
Here is a look at various ways cybercriminals are putting gen AI to use in exploiting enterprise systems today.
Taking phishing to the next level
Gen AI enables the creation of highly convincing phishing emails, greatly increasingly the likelihood of prospective marks giving over sensitive information to scam sites or downloading malware.
Instead of sending generic, unconvincing, and error-ridden emails, cybercriminals can leverage AI to quickly generate more sophisticated, personalized, and legitimate-looking emails to target specific recipients.
Gen AI tools help enrich phishing campaigns by pulling together wide-ranging sources of data, including targeted information gleaned from social media.
“AI can be used to quickly learn what types of emails are being rejected or opened, and in turn modify its approach to increase phishing success rate,” Mindgard’s Garraghan explains.
Facilitating malware development
AI can also be used to generate more sophisticated — or less labour-intensive — malware.
For example, cybercriminals are using gen AI to create malicious HTML documents. The XWorm attack, initiated by HTML smuggling, which contains malicious code that downloads and runs the malware, bears the hallmarks of development via AI.
“The loader’s detailed line-by-line description suggesting it was crafted using generative AI,” according to HP Wolf Security’s 2025 Threat Insights Report.
In addition, the “design of the HTML webpage delivering XWorm is almost visually identical as the output from ChatGPT 4o after prompting the LLM to generate an HTML page that offers a file download,” HP Wolf Security added in its report.
Elsewhere, ransomware group FunkSec — an Algeria-linked ransomware-as-a-service (RaaS) operator that takes advantage of double-extortion tactics — has begun harnessing AI technologies, according to Check Point Research.
“FunkSec operators appear to use AI-assisted malware development, which can enable even inexperienced actors to quickly produce and refine advanced tools,” Check Point researchers wrote in a blog post.
Accelerating vulnerability hunting and exploits
Analyzing systems for vulnerabilities and developing exploits can also be simplified through use of gen AI.
“Instead of a black hat hacker spending the time to probe and perform reconnaissance against a system perimeter, an AI agent can be tasked to do this automatically,” Mingard’s Garraghan says.
Gen AI may be behind a 62% reduction in the time between a vulnerability being discovered and its exploitation by attackers from 47 days to just 18 days, according to a study last year by threat intelligence firm ReliaQuest.
“This sharp decrease strongly indicates that a major technological advancement — likely gen AI — is enabling threat actors to exploit vulnerabilities at unprecedented speeds,” ReliaQuest wrote.
Adversaries are leveraging gen AI alongside pen-testing tools to write scripts for tasks such as network scanning, privilege escalation, and payload customization. AI is also likely being used by cybercriminals to analyze scan results and suggest optimal exploits, allowing them to identify flaws in victim systems faster.
“These advances accelerate many phases in the kill chain, particularly initial access,” ReliaQuest concluded.
Cyber resilience firm Cybermindr used a different methodology to find that the average time to exploit a vulnerability had fallen to five days in 2025. “AI-driven reconnaissance, automated attack scripts, and underground exploit marketplaces have accelerated the weaponization of vulnerabilities,” it said.
CSO’s Lucian Constantin offers a deeper look at how generative AI tools are transforming the cyber threat landscape by democratizing vulnerability hunting for pen-testers and attackers alike.
Launching AI-orchestrated espionage
Anthropic dropped a bombshell in September 2025 when it revealed that it had disrupted a sophisticated AI-orchestrated cyber espionage campaign.
The attackers abused Claude Code to automate approximately 80% of their campaign activities, targeting around 30 major tech firms, financial institutions, and government agencies.
In a “small number of cases” attacks were successful, according to the AI company, noting that an unnamed “Chinese state-sponsored group” was likely behind the campaign, which relied on jailbreaking tools to make prohibited functions possible.
Last year Carnegie Mellon’s CyLab Security & Privacy Institute researchers, in collaboration with Anthropic, demonstrated that LLMs like GPT-4o can autonomously plan and execute sophisticated cyberattacks on enterprise-scale networks — without any human intervention.
“The study reveals that an LLM, when structured with high-level planning capabilities and supported by specialized agent frameworks, can simulate network intrusions and closely mirror real-world breaches,” a CyLab spokesperson explained.
Escalating threats with alternative platforms
Cybercriminals have also begun developing their own large language models (LLMs) — such as WormGPT, FraudGPT, DarkBERT, and others — built without the guardrails that constrain criminals’ misuse of mainstream gen AI platforms.
These platforms are commonly harnessed for applications such as phishing and malware generation.
Moreover, mainstream LLMs can also be customized for targeted use. Security researcher Chris Kubecka shared with CSO in late 2024 how her custom version of ChatGPT, called Zero Day GPT, helped her identify more than 20 zero-days in a matter of months.
Stealing resources via LLMjacking
Threat actors are also busy stealing cloud credentials specifically to hijack costly LLM resources, either for their own gain or to sell access, in an attack technique called LLMjacking.
“Beyond theft of service, attackers are now actively probing newer LLM models to identify those that lack the guardrails of more mature platforms, effectively using them as unrestricted sandboxes to generate malicious code or bypass regional sanctions,” Sysdig’s Morin reports.
Creating a Silk Road–style marketplace for AI agents
Beyond AI agents executing individual attacks, security experts are beginning to track examples where coordination itself is being automated or orchestrated.
“We’re seeing early experiments where multiple specialized agents interact, some focused on reconnaissance, others on tooling, execution, or data movement, without any single agent needing the full picture,” says Lucie Cardiet, cyberthreat research manager at Vectra AI.
A concrete example of this is Molt Road, which offers a dark-web-style marketplace for AI agents, albeit one with few listings at present.
“Autonomous agents can create listings, sell access or capabilities, coordinate tasks, and complete transactions with minimal human involvement, effectively automating the economics of cybercrime,” Cardiet tells CSO.
“We can expect attackers to actively leverage this model in the coming months, breaking the attack chain into specialized, cooperating agents to speed up and scale their attacks,” she says.
Breaking in with authentication bypass
Gen AI tools can also be abused to bypass security defences such as CAPTCHAs or biometric authentication.
“AI can defeat CAPTCHA systems and analyse voice biometrics to compromise authentication,” according to cybersecurity vendor Dispersive. “This capability underscores the need for organizations to adopt more advanced, layered security measures.”
Leveraging deepfakes for social engineering
AI-generated deepfakes are being abused to exploit channels many employees more implicitly trust, such as voice and video, instead of relying on less convincing email-based attacks.
The problem is becoming more severe with the wider availability of AI technologies capable of creating more convincing deepfakes, according to Alex Lisle, CTO of deepfake detection platform Reality Defender.
“There was a recent case involving a cybersecurity company that relied on visual verification for credential resets,” Lisle says. “Their process required a manager to join a Zoom call with IT to confirm an employee’s identity before a password reset.”
Lisle explains: “Attackers are now leveraging deepfakes to impersonate those managers on live video calls to authorize these resets.”
In the most high-profile example to date, a finance worker at design and engineering company Arup was tricked into authorizing a fraudulent HK$200 million ($25.6 million) transaction after attending a videoconference call during which fraudsters used deepfake technology to impersonate its UK-based CFO.
Impersonating brands in malicious ad campaigns
Cybercriminals have begun using gen AI tools to deliver brand impersonation campaigns delivered via ads and content platforms, rather than traditional phishing or malware.
“Attackers now use gen AI to mass-produce realistic ad copy, creatives, and fake support pages, then distribute them across search ads, social ads, and AI-generated content, targeting high-intent queries like ‘brand login’ or ‘brand support,’” explains Shlomi Beer, co-founder and CEO at ImpersonAlly, a security startup that specializes in protecting the online advertising ecosystem.
The tactic was used in ongoing a series of Google Ad account fraud, to impersonate the Cursor AI coding assistant firm, and in a fake Shopify ecommerce platform customer support scam, among other attacks.
Abusing OpenClaw
Attackers have also begun targeting viral personal AI agents such as OpenClaw.
OpenClaw offers an open-source AI agent framework. A combination of supply chain attacks on its skill marketplace and misconfigurations open the door to potential exploits and malware slinging, as CSO covered in much more depth in our earlier report.
“Cybercriminals can exploit these virtual assistants to steal private keys to cryptocurrency wallets and execute code on victims’ devices,” says Edward Wu, CEO and founder at Dropzone AI. “We can expect 2026 to be the year when security teams will try to prevent unsanctioned usage of personal AI agents.”
Poisoning model memories
To offer short-term and longer-term context, AI agents are starting to rely more on persistent memory, opening the door for exploits that involve planting malicious memories.
If an attacker injects malicious or false information into an agent’s memory, that corrupted context then influences every future decision the agent makes.
For example, security researcher Johann Rehberger showed how he could plant false memories in ChatGPT in September 2025.
“He [Rehberger] used a malicious image with hidden instructions embedded in it to inject fabricated data into the model’s long-term memory,” said Siri Varma Vegiraju, security tech lead at Microsoft. “The scary part was that once the memory was poisoned, it persisted across sessions and continuously exfiltrated user data to a server the attacker controlled.”
Hacking AI infrastructure
Over the past year, attackers have shifted from using generative AI to targeting the infrastructure that enables it.
This vector of attack is exemplified in the supply chain poisoning in Model Context Protocol servers, where compromised dependencies or modified code introduced vulnerabilities into enterprise environments.
For example, a counterfeit “Postmark MCP Server” discovered in early 2025 silently BCC’d all processed emails, including internal documents, invoices, and credentials, to an attacker-controlled domain.
Many other malicious MCP servers have already been identified in the wild, many designed to exfiltrate information without detection, according to Casey Bleeker CEO at SurePath AI.
“We’re tracking several categories of MCP-specific risk: tool poisoning attacks, where adversaries inject malicious instructions into AI tool descriptions that execute when the agent invokes them; supply chain compromises, where a trusted MCP server or dependency is updated post-approval to behave maliciously; and cross-tool data exfiltration, where compromised components in an agentic workflow silently siphon sensitive data through what looks like legitimate AI activity,” Bleeker explains.
Reality check
AI technologies are powerful but they have their limitations, several experts tell CSO.
Rik Ferguson, VP of security intelligence at Forescout, says cybercriminals are largely relying on AI to automate repetitive tasks rather than more complex work, such as vulnerability exploitation.
“The most reliable criminal use [of AI] remains in language-heavy and workflow-heavy tasks such as phishing and pretexting, influence and outreach, triaging and contextualizing vulnerabilities, and generating boilerplate components, rather than reliably discovering and exploiting brand-new vulnerabilities end-to-end,” Ferguson says.
Over the past twelve months, managed detection and response firm Huntress has tracked threat actors applying AI to generate and automate traditional tradecraft, from developing scripts to browser extensions and, in some cases, even phishing lures.
“We have also seen such ‘vibe coded’ scripts fail to execute and meet their objectives on multiple occasions,” Anton Ovrutsky, principal tactical response analyst at Huntress, tells CSO.
And while AI has certainly given threat actors a powerful tool it has, at least to date, failed to spawn any new tactics or exploit classes, according to Ovrutsky.
“A threat actor can indeed rapidly prototype a sophisticated credential theft script, yet the basic ‘laws of physics’ still exist; a threat actor must be in a position to execute such a script in the first place,” Ovrutsky says. “We have yet to observe an exploit path that has been enabled through AI-use exclusively.”
Countermeasures
Collectively the misuse of gen AI tools is making it easier for less skilled cybercriminals to earn a dishonest living. Defending against the attack vector challenges security professionals to harness the power of artificial intelligence more effectively than attackers.
“Criminal misuse of AI technologies is driving the necessity to test, detect, and respond to these threats, in which AI is also being leveraged to combat cybercriminal activity,” Mindgard’s Garraghan says.
In a blog post, Lawrence Pingree, VP of technical marketing at Dispersive, outlines preemptive cyber defenses that security professionals can take to win what he describes as an “AI ARMS (Automation, Reconnaissance, and Misinformation) race” between attackers and defenders.
“Relying on traditional detection and response mechanisms is no longer sufficient,” Pingree warns.
Alongside employee education and awareness programs, enterprises should be using AI to detect and neutralize generative AI-based threats in real-time.
Forescout’s Ferguson says CISOs should treat enterprise AI like any other high-value SaaS platform.
“Tighten identity and conditional access, minimize privileges, lock down keys, and monitor for anomalous AI/API usage and spend,” Ferguson advises.
No Responses