Hackers can turn Grok, Copilot into covert command-and-control channels, researchers warn

Enterprise security teams racing to enable generative AI tools may be overlooking a new risk: attackers can abuse web-based AI assistants such as Grok and Microsoft Copilot to quietly relay malware communications through domains that are often exempt from deeper inspection.

The technique, outlined by Check Point Research (CPR), exploits the web-browsing and URL-fetch capabilities of these platforms to create a bidirectional command-and-control channel that blends into routine AI traffic and requires neither an API key nor an authenticated account.

“Our proposed attack scenario is quite simple: an attacker infects a machine and installs a piece of malware,” CPR said. The malware then communicates with the AI assistant through the web interface, prompting it to fetch content from an attacker-controlled URL and return embedded instructions to the implant.

Because many organizations allow outbound access to AI services by default and apply limited inspection to that traffic, the approach effectively turns trusted AI domains into covert egress infrastructure.

Security analysts said the findings expose a growing blind spot in enterprise AI governance.

“Enterprises that allow unrestricted outbound access to public AI web services without inspection, identity controls, or strong logging are more exposed than many realize,” said Sakshi Grover, senior research manager for IDC Asia Pacific Cybersecurity Services.

“These platforms can effectively function as trusted external endpoints, meaning malicious activity can be concealed within normal network traffic, including routine HTTPS sessions to widely used AI domains,” she added.

Sunil Varkey, a cybersecurity analyst, said the technique echoes past evasion strategies such as steganography and “living off the land” attacks, where adversaries abuse legitimate tools and trusted infrastructure to avoid detection.

CPR said using AI platforms as C2 relays is only one potential abuse case. The same interfaces could be prompted to generate operational commands on demand, from locating files and enumerating systems to producing PowerShell scripts for lateral movement, allowing malware to determine its next steps without direct human control.

In a more advanced scenario, an implant could transmit a brief profile of the infected host and rely on the model to determine how the attack should progress.

A structural shift in detection

The research also points to a broader shift in how malware may evolve as AI becomes embedded in runtime operations rather than just development workflows.

“When AI moves from assisting development to actively guiding malware behavior at runtime, detection can no longer rely solely on static signatures or known infrastructure indicators,” said Krutik Poojara, a cybersecurity practitioner. “Instead of hardcoded logic, you are dealing with adaptive, polymorphic, context-aware behavior that can change without modifying the malware itself.”

Grover said this makes attacks harder to fingerprint, forcing defenders to rely more on behavioral detection and tighter correlation across endpoint, network, identity, and SaaS telemetry.

More significantly, this changes the tempo of defense. If attackers can dynamically adjust commands and execution paths based on the environment they encounter, security teams are no longer responding to a fixed playbook but to a continuously evolving interaction.

“This compresses the window between intrusion and impact and increases the importance of real-time detection, automated response, and tighter feedback loops between threat intelligence and SOC operations,” Grover said.
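As a simplified illustration of the correlation work Grover describes, the short Python sketch below flags hosts where outbound requests to AI assistant domains are initiated by something other than a browser. It is a hypothetical example written for this article, not tooling from the CPR research; the domain list, process names, event fields, and threshold are all assumptions, and a match is a hunting lead rather than proof of compromise.

from collections import Counter

# Illustrative reference sets; the domains and process names are assumptions
# for the example, not a vetted detection list.
AI_ASSISTANT_DOMAINS = {"grok.com", "copilot.microsoft.com"}
BROWSER_PROCESSES = {"chrome.exe", "msedge.exe", "firefox.exe", "safari"}

def hosts_with_non_browser_ai_traffic(events, min_hits=5):
    """Count, per host, requests to AI assistant domains not initiated by a
    known browser process. Each event is assumed to be a dict with "host",
    "process_name", and "dest_domain" keys joined from endpoint and proxy logs."""
    counts = Counter()
    for event in events:
        if (event["dest_domain"] in AI_ASSISTANT_DOMAINS
                and event["process_name"].lower() not in BROWSER_PROCESSES):
            counts[event["host"]] += 1
    return [host for host, hits in counts.items() if hits >= min_hits]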

Steps to take

Security leaders should not respond by blocking AI outright, analysts said, but by applying the same governance discipline used for other high-risk SaaS platforms.

Varkey recommended starting with a comprehensive inventory of all AI tools in use and establishing a clear policy framework for approving and enabling them.

Organizations should also implement AI-specific traffic monitoring and sequence-based detection rules to identify abnormal automation patterns, and consider rolling out phased awareness programs for staff.

“From an architectural standpoint, organizations should also invest in platforms that provide unified visibility across network, cloud, identity, and application layers, enabling security teams to correlate signals and trace activity across domains rather than treating AI usage as isolated web traffic,” Grover said.
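The sketch below gives a rough sense of what such a sequence-based rule might look like in practice. It is a hypothetical Python example, not a rule published by CPR or IDC; the log format and timing thresholds are assumptions, and a near-constant request interval to an AI assistant domain should be treated as a prompt for investigation rather than a verdict.

import statistics

def looks_like_beaconing(timestamps, min_requests=8, max_jitter=2.0):
    """Return True when requests to an AI assistant domain recur at
    near-constant intervals, a timing pattern more typical of an implant
    polling for instructions than of a person using a chat interface.
    timestamps: request times in seconds for one (host, domain) pair,
    taken from proxy or firewall logs; thresholds are illustrative."""
    if len(timestamps) < min_requests:
        return False
    ordered = sorted(timestamps)
    intervals = [later - earlier for earlier, later in zip(ordered, ordered[1:])]
    return statistics.pstdev(intervals) <= max_jitter

# A host hitting the same AI domain roughly every 60 seconds would be flagged.
print(looks_like_beaconing([0, 60, 121, 180, 241, 300, 360, 420]))  # True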
