{"id":5676,"date":"2025-11-06T04:02:31","date_gmt":"2025-11-06T04:02:31","guid":{"rendered":"https:\/\/cybersecurityinfocus.com\/?p=5676"},"modified":"2025-11-06T04:02:31","modified_gmt":"2025-11-06T04:02:31","slug":"google-researchers-detect-first-operational-use-of-llms-in-active-malware-campaigns","status":"publish","type":"post","link":"https:\/\/cybersecurityinfocus.com\/?p=5676","title":{"rendered":"Google researchers detect first operational use of LLMs in active malware campaigns"},"content":{"rendered":"<div>\n<div class=\"grid grid--cols-10@md grid--cols-8@lg article-column\">\n<div class=\"col-12 col-10@md col-6@lg col-start-3@lg\">\n<div class=\"article-column__content\">\n<div class=\"container\"><\/div>\n<p>Threat actors are now actively deploying AI-enabled <a href=\"https:\/\/www.csoonline.com\/article\/565999\/what-is-malware-viruses-worms-trojans-and-beyond.html\" target=\"_blank\" rel=\"noopener\">malware<\/a> in their operations.<\/p>\n<p>Google Threat Intelligence Group (GTIG) has identified cybercriminal use of \u201cjust-in-time\u201d AI, which employs large language models (LLMs) on the fly to create malicious scripts and functions, and to obfuscate code.<\/p>\n<p>Additionally, <a href=\"https:\/\/cloud.google.com\/blog\/topics\/threat-intelligence\/threat-actor-usage-of-ai-tools\" target=\"_blank\" rel=\"noopener\">GTIG investigations<\/a> have revealed that models are just as susceptible to social engineering as humans. 
They can, for example, be easily fooled by attackers purporting to be \u201ccapture-the-flag\u201d (CTF) participants, students, or cybersecurity researchers.<\/p>\n<p>\u201cThis marks a new operational phase of <a href=\"https:\/\/www.csoonline.com\/article\/4079887\/top-7-agentic-ai-use-cases-for-cybersecurity.html\" target=\"_blank\" rel=\"noopener\">AI abuse<\/a>, involving tools that dynamically alter behavior mid-execution,\u201d the researchers write.<\/p>\n<h2 class=\"wp-block-heading\">Evolving AI use in malware<\/h2>\n<p>GTIG found ample evidence of the broad use of AI in malware, although its investigation suggests the practice isn\u2019t as prevalent as some <a href=\"https:\/\/malware.news\/t\/mit-retracts-controversial-ai-ransomware-study-amid-expert-scrutiny\/101024\" target=\"_blank\" rel=\"noopener\">since-retracted claims<\/a> had indicated. They also discovered that it\u2019s being used in novel, highly systematic ways.<\/p>\n<p>Newly discovered malware strains PROMPTFLUX and PROMPTSTEAL, for instance, employ LLMs \u201cjust-in-time\u201d to craft malicious scripts and obfuscate code to evade detection. This can \u201cdynamically alter the malware\u2019s behavior,\u201d the researchers note.<\/p>\n<p>PROMPTSTEAL is the first LLM-querying malware observed in live operation, notably used by Russian government-backed actors, GTIG says. 
The data miner uses the Hugging Face API to generate commands rather than hard-coding them in the malware; GTIG\u2019s investigation suggests that the threat actors\u2019 goal is to collect system information and documents and send them to their own servers.<\/p>\n<p>The malware \u201cmasquerades as an image generation\u201d program, GTIG said, guiding users through a series of prompts to create images, while in the background it uses the Hugging Face API to query the Qwen2.5-Coder-32B-Instruct model to generate the malicious commands, and then executes them.<\/p>\n<p>The GTIG researchers note that while this method is still experimental, it indicates a \u201csignificant step toward more autonomous and adaptive malware.\u201d<\/p>\n<p>PROMPTFLUX, meanwhile, is a dropper that uses a decoy installer to hide its activity; it prompts the Gemini API to rewrite its source code, saving new obfuscated versions to the Startup folder to establish persistence. The malware can also copy itself to removable drives or mapped network drives.<\/p>\n<p>Interestingly, the malware\u2019s \u201cthinking robot\u201d module periodically queries Gemini to obtain new code that lets it evade antivirus software, and a variant module known as \u201cThinging\u201d instructs the Gemini API to rewrite the malware\u2019s entire source code on an hourly basis to avoid many signature-based detection tools. 
The goal is to create a \u201cmetamorphic script that can evolve over time,\u201d the researchers note.<\/p>\n<p>Other tracked malware includes FRUITSHELL, a reverse shell that establishes a remote connection to a command-and-control (C2) server so that attackers can issue arbitrary commands on a compromised system; experimental PROMPTLOCK ransomware written in Go that uses LLMs to create and execute malicious scripts and perform reconnaissance, data exfiltration, and file encryption on Windows and Linux systems; and QUIETVAULT, which steals GitHub and npm tokens.<\/p>\n<p>GTIG investigators caution, \u201cAttackers are moving beyond \u2018vibe coding\u2019 and the baseline of using AI tools for technical support. We are only now starting to see this type of activity, but expect it to increase.\u201d<\/p>\n<p>They note that Google has taken action against the various actors by disabling their accounts and the assets associated with their activity, and by applying updates to prevent further misuse.<\/p>\n<h2 class=\"wp-block-heading\">Using social engineering against LLMs<\/h2>\n<p>Additionally, GTIG found that attackers are increasingly using \u201csocial engineering-like pretexts\u201d in their prompts to get around LLM safeguards. Notably, they have posed as participants in a gamified <a href=\"https:\/\/www.csoonline.com\/article\/648938\/are-capture-the-flag-participants-obligated-to-report-zero-days.html\" target=\"_blank\" rel=\"noopener\">\u201ccapture-the-flag\u201d (CTF)<\/a> cybersecurity competition, persuading Gemini to give up information it would otherwise refuse to reveal.<\/p>\n<p>In one interaction, for instance, an attacker attempted to use Gemini to identify vulnerabilities on a compromised system, but was blocked by the model\u2019s safeguards. 
However, after they reframed the prompt and identified themselves as a CTF player developing phishing and exploitation skills, Gemini obliged, giving advice about the next steps in a red-teaming scenario and providing details that could be used to attack the system.<\/p>\n<p>Researchers underscored the nuance required to judge these types of CTF prompts, which would normally be harmless. \u201cThis nuance in AI use highlights critical differentiators in benign versus misuse of AI that we continue to analyze,\u201d they note.<\/p>\n<p>They also observed an Iranian state-sponsored group that used Gemini to conduct research to build custom malware, including web shells and a Python-based C2 server. The group was able to get past security guardrails by posing as students working on a final university project or an informational paper on cybersecurity.<\/p>\n<p>The attackers then used Gemini to help with a script designed to listen for and decrypt requests, and to transfer files or remotely execute tasks. 
However, this technique revealed \u201csensitive, hard-coded information\u201d to Gemini, including the C2 domain and encryption keys, which assisted defenders\u2019 efforts to disrupt the campaign, the researchers said.<\/p>\n<h2 class=\"wp-block-heading\">AI tools are hot on the cybercrime marketplace<\/h2>\n<p>Further investigations by the GTIG team found that the underground marketplace for illicit AI tools has \u201cmatured.\u201d Tools for purchase on the black market include:<\/p>\n<ul class=\"wp-block-list\">\n<li>Malware generation: to build malware for specific use cases or improve upon existing malware<\/li>\n<li>Deepfake and image generation: to create \u201clure content\u201d or bypass know-your-customer (KYC) requirements<\/li>\n<li>Phishing kits and support: to craft \u201cengaging lure content\u201d or distribute it to wider audiences<\/li>\n<li>Research and reconnaissance: to quickly gather and summarize cybersecurity concepts or general topics<\/li>\n<li>Vulnerability exploitation: to identify publicly available research on pre-existing vulnerabilities<\/li>\n<li>Technical support and code generation<\/li>\n<\/ul>\n<p>Researchers point out that pricing models for these tools increasingly mimic those of conventional software: free versions inject ads, while subscription-tier pricing introduces more advanced technical features such as image generation or API and Discord access.<\/p>\n<p>\u201cTools and services offered via underground forums can enable low-level actors to augment the frequency, scope, efficacy, and complexity of their intrusions,\u201d the GTIG researchers write, \u201cdespite their limited technical acumen and financial resources.\u201d<\/p>\n<p>And, they add, \u201cGiven the increasing accessibility of these [AI tools], and the growing AI discourse in these forums, threat activity leveraging AI will increasingly become commonplace amongst threat actors.\u201d<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>Threat actors are now actively 
deploying AI-enabled malware in their operations. Google Threat Intelligence Group (GTIG) has identified cybercriminal use of \u201cjust-in-time\u201d AI which employs large language models (LLMs) on the fly to create malicious scripts and functions, and to obfuscate code. Additionally, GTIG investigations have revealed that models are just as susceptible to social [&hellip;]<\/p>\n","protected":false},"author":0,"featured_media":5677,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3],"tags":[],"class_list":["post-5676","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-education"],"_links":{"self":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/5676"}],"collection":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=5676"}],"version-history":[{"count":0,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/5676\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/media\/5677"}],"wp:attachment":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=5676"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=5676"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=5676"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}