Hacker inserts destructive code in Amazon Q tool as update goes live

A hacker managed to insert destructive system commands into Amazon’s Visual Studio Code extension for its AI-powered coding assistant, Q, and the tampered code was later distributed to users through an official update, according to a media report.

The unauthorized code instructed the AI agent to behave like a system cleaner with access to the file system and cloud tools, aiming to erase user data and cloud resources.

The hacker behind the breach told 404 Media they could have deployed far more damaging payloads but opted instead to issue the commands as a form of protest against what they called Amazon’s “AI security theater.”

The hacker targeted Amazon Q’s extension for VS Code, a developer tool that has been installed over 950,000 times. Using an unverified GitHub account, the attacker submitted a pull request in late June and was allegedly granted administrative access.

On July 13, they inserted malicious code into the repository. Amazon released the compromised version, 1.84.0, on July 17, reportedly without realizing it had been tampered with.

“We quickly mitigated an attempt to exploit a known issue in two open source repositories to alter code in the Amazon Q Developer extension for VS Code and confirmed that no customer resources were impacted,” an AWS spokesperson said. “We have fully mitigated the issue in both repositories. No further customer action is needed for the AWS SDK for .NET or AWS Toolkit for Visual Studio Code repositories. Customers can also run the latest build of Amazon Q Developer extension for VS Code version 1.85 as an added precaution.”
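Because AWS advises running version 1.85 or later as a precaution, security teams may want to check which builds are installed across developer machines. The sketch below is illustrative only: the extension identifier "amazonwebservices.amazon-q-vscode" and the default ~/.vscode/extensions folder layout are assumptions, not details confirmed by the article, and updating through VS Code itself remains the normal remediation path.

```python
"""Flag installed builds of the Amazon Q VS Code extension older than 1.85.

Assumptions (not confirmed by the article): the marketplace identifier is
"amazonwebservices.amazon-q-vscode" and extensions live in the default
~/.vscode/extensions directory, where folders are named
<publisher>.<name>-<version>.
"""
from pathlib import Path

EXTENSION_ID = "amazonwebservices.amazon-q-vscode"  # assumed identifier
FIXED = (1, 85, 0)  # 1.84.0 was the compromised build; AWS recommends 1.85+


def parse_version(folder_name: str) -> tuple[int, ...]:
    """Extract the trailing semantic version from an extension folder name."""
    ver = folder_name.rsplit("-", 1)[-1]
    return tuple(int(part) for part in ver.split(".") if part.isdigit())


def scan(extensions_dir: Path = Path.home() / ".vscode" / "extensions") -> None:
    """Print upgrade status for every installed copy of the extension."""
    for folder in sorted(extensions_dir.glob(f"{EXTENSION_ID}-*")):
        installed = parse_version(folder.name)
        if installed and installed < FIXED:
            print(f"UPGRADE NEEDED: {folder.name} (below 1.85)")
        else:
            print(f"OK: {folder.name}")


if __name__ == "__main__":
    scan()
```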

Exploiting AI coding tools

The incident highlights growing concerns over the security of generative AI tools and their integration into development environments.

“While this may have been an attempt to highlight associated risks, the issue underscores a growing and critical threat in the AI ecosystem: the exploitation of powerful AI tools by malicious actors in the absence of robust guardrails, continuous monitoring, and effective governance frameworks,” said Sunil Varkey, a cybersecurity professional. “When AI systems like code assistants are compromised, the threat is twofold: adversaries can inject malicious code into software supply chains, and users unknowingly inherit vulnerabilities or backdoors.”

This incident also underscores the inherent risks of integrating open-source code into enterprise-grade AI developer tools, especially when security governance around contribution workflows is lacking, according to Sakshi Grover, senior research manager for IDC Asia Pacific Cybersecurity Services.

“It also reveals how supply chain risks in AI development are exacerbated when enterprises rely on open-source contributions without stringent vetting,” Grover said. “In this case, the attacker exploited a GitHub workflow to inject a malicious system prompt, effectively redefining the AI agent’s behavior at runtime.”

DevSecOps under pressure

Analysts say the incident points to a broader failure in securing software delivery pipelines, particularly in the validation and oversight of code released to production.

For enterprise teams, it highlights the need to incorporate AI-specific threat modeling into DevSecOps practices to address risks such as model drift, prompt injection, and semantic manipulation.

“Organizations should adopt immutable release pipelines with hash-based verification and integrate anomaly detection mechanisms within CI/CD workflows to catch unauthorized changes early,” Grover said. “Additionally, maintaining a transparent and timely incident response mechanism, even for pre-emptive removals, is essential to building trust with developer communities, especially as AI agents increasingly operate with system-level autonomy.”
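As a rough illustration of the hash-based verification Grover describes, a CI/CD step could refuse to publish an artifact whose digest no longer matches the one pinned at review time. The following is a minimal sketch under assumed file names and manifest format; it is not drawn from any real Amazon Q build pipeline.

```python
"""Minimal sketch of hash-based release verification in a CI/CD step.

The artifact path and manifest format below are illustrative assumptions.
"""
import hashlib
import json
import sys
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Stream the file so large artifacts need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify(artifact: Path, manifest: Path) -> bool:
    """Compare the built artifact against the digest pinned at review time."""
    expected = json.loads(manifest.read_text())[artifact.name]
    actual = sha256_of(artifact)
    if actual != expected:
        print(f"BLOCK RELEASE: {artifact.name} digest {actual} != pinned {expected}")
        return False
    print(f"OK: {artifact.name} matches pinned digest")
    return True


if __name__ == "__main__":
    # e.g. python verify_release.py dist/extension.vsix release-manifest.json
    sys.exit(0 if verify(Path(sys.argv[1]), Path(sys.argv[2])) else 1)
```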

Significantly, this breach also indicates that even at major cloud providers, DevSecOps maturity with respect to AI development tools is behind the curve.

“The dizzying pace of AI adoption in the development environment has DevSecOps playing a catch-up game,” said Keith Prabhu, founder and CEO of Confidis. “Based on Amazon’s official response, the key lessons that enterprise security teams could learn are to put in governance and review mechanisms that can quickly identify such security breaches and communicate with affected parties.”

Organizations should bolster defenses by implementing strict code review procedures, continuously monitoring tool behavior, enforcing least-privilege access controls, and holding vendors accountable for transparency, said Prabhu Ram, VP of the industry research group at CyberMedia Research. “These steps help address ongoing challenges in securing complex software supply chains and embedding security throughout the development lifecycle,” Ram said. “Ultimately, improving DevSecOps maturity and building layered protections are essential for effectively managing evolving threats in today’s software ecosystems.”
