{"id":6745,"date":"2026-01-29T00:26:08","date_gmt":"2026-01-29T00:26:08","guid":{"rendered":"https:\/\/cybersecurityinfocus.com\/?p=6745"},"modified":"2026-01-29T00:26:08","modified_gmt":"2026-01-29T00:26:08","slug":"crooks-are-hijacking-and-reselling-ai-infrastructure-report","status":"publish","type":"post","link":"https:\/\/cybersecurityinfocus.com\/?p=6745","title":{"rendered":"Crooks are hijacking and reselling AI infrastructure: Report"},"content":{"rendered":"<div>\n<div class=\"grid grid--cols-10@md grid--cols-8@lg article-column\">\n<div class=\"col-12 col-10@md col-6@lg col-start-3@lg\">\n<div class=\"article-column__content\">\n<div class=\"container\"><\/div>\n<p>For years, CSOs have worried about their IT infrastructure being used for unauthorized cryptomining. Now, say researchers, they\u2019d better start worrying about crooks hijacking and reselling access to exposed corporate AI infrastructure.<\/p>\n<p><a href=\"https:\/\/www.pillar.security\/blog\/operation-bizarre-bazaar-first-attributed-llmjacking-campaign-with-commercial-marketplace-monetization#heading-8\" target=\"_blank\" rel=\"noopener\">In a report released Wednesday<\/a>, researchers at Pillar Security say they have discovered campaigns at scale going after exposed large language model (LLM) and Model Context Protocol (MCP) endpoints \u2013 for example, an AI-powered support chatbot on a website.<\/p>\n<p>\u201cI think it\u2019s alarming,\u201d said report co-author <a href=\"https:\/\/www.linkedin.com\/in\/arielfogel\/\" target=\"_blank\" rel=\"noopener\">Ariel Fogel<\/a>. \u201cWhat we\u2019ve discovered is an actual criminal network where people are trying to steal your credentials, steal your ability to use LLMs and your computations, and then resell it.\u201d<\/p>\n<p>\u201cIt depends on your application, but you should be acting pretty fast by blocking this kind of threat,\u201d added co-author <a href=\"https:\/\/www.linkedin.com\/in\/eilon-cohen\/\" target=\"_blank\" rel=\"noopener\">Eilon Cohen<\/a>. 
\u201cAfter all, you don\u2019t want your expensive resources being used by others. If you deploy something that has access to critical assets, you should be acting right now.\u201d<\/p>\n<p><a href=\"https:\/\/www.linkedin.com\/in\/kellman\/\" target=\"_blank\" rel=\"noopener\">Kellman Meghu<\/a>, chief technology officer at Canadian incident response firm DeepCove Security, said that this campaign \u201cis only going to grow to some catastrophic impacts. The worst part is the low bar of technical knowledge needed to exploit this.\u201d<\/p>\n<p>How big are these campaigns? In the past couple of weeks alone, the researchers\u2019 honeypots captured 35,000 attack sessions hunting for exposed AI infrastructure.<\/p>\n<p>\u201cThis isn\u2019t a one-off attack,\u201d Fogel added. \u201cIt\u2019s a business.\u201d He doubts a nation-state is behind it; the campaigns appear to be run by a small group.<\/p>\n<p>The goals: to steal compute resources for unauthorized LLM inference requests, to resell API access at discounted rates through criminal marketplaces, to exfiltrate data from LLM context windows and conversation history, and to pivot to internal systems via compromised MCP servers.<\/p>\n<h2 class=\"wp-block-heading\">Two campaigns<\/h2>\n<p>The researchers have so far identified two campaigns: one, dubbed Operation Bizarre Bazaar, targets unprotected LLMs; the other targets Model Context Protocol (MCP) endpoints.<\/p>\n<p>It\u2019s not hard to find these exposed endpoints. 
The threat actors behind the campaigns are using familiar tools: the Shodan and Censys IP search engines.<\/p>\n<p>At risk: organizations running self-hosted LLM infrastructure (such as Ollama, an inference server that handles requests to the LLM behind an application; vLLM, similar to Ollama but built for high-performance environments; and local AI implementations) or those deploying MCP servers for AI integrations.<\/p>\n<p>Targets include:<\/p>\n<p>exposed endpoints on default ports of common LLM inference services;<\/p>\n<p>unauthenticated API access without proper access controls;<\/p>\n<p>development\/staging environments with public IP addresses;<\/p>\n<p>MCP servers connecting LLMs to file systems, databases and internal APIs.<\/p>\n<p>Common misconfigurations leveraged by these threat actors include:<\/p>\n<p>Ollama running on port 11434 without authentication;<\/p>\n<p>OpenAI-compatible APIs on port 8000 exposed to the internet;<\/p>\n<p>MCP servers accessible without access controls;<\/p>\n<p>development\/staging AI infrastructure with public IPs;<\/p>\n<p>production chatbot endpoints (customer support, sales bots) without authentication or rate limiting.<\/p>\n<p><a href=\"https:\/\/www.linkedin.com\/in\/georgegerchow\/\" target=\"_blank\" rel=\"noopener\">George Gerchow<\/a>, chief security officer at Bedrock Data, said Operation Bizarre Bazaar \u201cis a clear sign that attackers have moved beyond ad hoc LLM abuse and now treat exposed AI infrastructure as a monetizable attack surface. What\u2019s especially concerning isn\u2019t just unauthorized compute use, but the fact that many of these endpoints are now tied to the Model Context Protocol (MCP), the emerging open standard for securely connecting large language models to data sources and tools. 
MCP is powerful because it enables real-time context and autonomous actions, but without strong controls, those same integration points become pivot vectors into internal systems.\u201d<\/p>\n<p>Defenders need to treat AI services with the same rigor as APIs or databases, he said, starting with authentication, telemetry, and threat modelling early in the development cycle. \u201cAs MCP becomes foundational to modern AI integrations, securing those protocol interfaces, not just model access, must be a priority,\u201d he said.<\/p>\n<p>In an interview, Pillar Security report authors Eilon Cohen and Ariel Fogel couldn\u2019t estimate how much revenue the threat actors might have pulled in so far. But they warn that CSOs and infosec leaders had better act fast, particularly if an LLM is accessing critical data.<\/p>\n<p>Their report described three components of the Bizarre Bazaar campaign:<\/p>\n<p><strong>the scanner<\/strong>: a distributed bot infrastructure that systematically probes the internet for exposed AI endpoints. Every exposed Ollama instance, every unauthenticated vLLM server, every accessible MCP endpoint gets cataloged. Once an endpoint appears in scan results, exploitation attempts begin within hours;<\/p>\n<p><strong>the validator<\/strong>: once scanners identify targets, infrastructure tied to an alleged criminal site validates the endpoints through API testing. During a concentrated operational window, the attacker tested placeholder API keys, enumerated model capabilities and assessed response quality;<\/p>\n<p><strong>the marketplace<\/strong>: discounted access to 30+ LLM providers is being sold on a site called <em>The Unified LLM API Gateway<\/em>. 
It\u2019s hosted on bulletproof infrastructure in the Netherlands and marketed on Discord and Telegram.<\/p>\n<p>So far, the researchers said, those buying access appear to be people building their own AI infrastructure and trying to save money, as well as people involved in online gaming.<\/p>\n<p>Threat actors may not only be stealing AI access from fully developed applications, the researchers added. A developer prototyping an app who carelessly leaves a server unsecured could be victimized through credential theft as well.<\/p>\n<p><a href=\"https:\/\/josephsteinberg.com\/cybersecurityexpertjosephsteinberg\/\" target=\"_blank\" rel=\"noopener\">Joseph Steinberg<\/a>, a US-based AI and cybersecurity expert, said the report is another illustration of how new technology like artificial intelligence creates new risks and the need for new security solutions beyond traditional IT controls.<\/p>\n<p>CSOs need to ask themselves if their organization has the skills needed to safely deploy and protect an AI project, or whether the work should be outsourced to a provider with the needed expertise.<\/p>\n<h2 class=\"wp-block-heading\">Mitigation<\/h2>\n<p>Pillar Security said CSOs with externally facing LLMs and MCP servers should:<\/p>\n<p><strong>enable authentication on all LLM endpoints<\/strong>. Requiring authentication eliminates opportunistic attacks. Organizations should verify that Ollama, vLLM, and similar services require valid credentials for all requests;<\/p>\n<p><strong>audit MCP server exposure<\/strong>. MCP servers must never be directly accessible from the internet. Verify firewall rules, review cloud security groups, and confirm authentication requirements;<\/p>\n<p><strong>block known malicious infrastructure<\/strong>. Add the 204.76.203.0\/24 subnet to deny lists. 
For the MCP reconnaissance campaign, block AS135377 ranges;<\/p>\n<p><strong>implement rate limiting<\/strong>.\u00a0Stop burst exploitation attempts. Deploy WAF\/CDN rules for AI-specific traffic patterns;<\/p>\n<p><strong>audit production chatbot exposure<\/strong>. Every customer-facing chatbot, sales assistant, and internal AI agent must implement security controls to prevent abuse.<\/p>\n<h2 class=\"wp-block-heading\">Don\u2019t give up<\/h2>\n<p>Despite the number of news stories in the past year about AI vulnerabilities, Meghu said the answer is not to give up on AI, but to keep strict controls on its usage. \u201cDo not just ban it, bring it into the light and help your users understand the risk, as well as work on ways for them to use AI\/LLM in a safe way that benefits the business,\u201d he advised.<\/p>\n<p>\u201cIt is probably time to have dedicated training on AI use and risk,\u201d he added. \u201cMake sure you take feedback from users on how they want to interact with an AI service and make sure you support and get ahead of it. Just banning it sends users into a shadow IT realm, and the impact from this is too frightening to risk people hiding it. Embrace and make it part of your communications and planning with your employees.\u201d<\/p>\n\n<\/div>\n<\/div>\n<\/div>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>For years, CSOs have worried about their IT infrastructure being used for unauthorized cryptomining. Now, say researchers, they\u2019d better start worrying about crooks hijacking and reselling access to exposed corporate AI infrastructure. 
In a report released Wednesday, researchers at Pillar Security say they have discovered campaigns at scale going after exposed large language model (LLM) [&hellip;]<\/p>\n","protected":false},"author":0,"featured_media":6746,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3],"tags":[],"class_list":["post-6745","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-education"],"_links":{"self":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/6745"}],"collection":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=6745"}],"version-history":[{"count":0,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/6745\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/media\/6746"}],"wp:attachment":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=6745"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=6745"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=6745"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}