The importance of reviewing AI data centers’ policies


Investment in AI data center development is increasing exponentially: in June 2025, Amazon announced a $20 billion investment in AI data center campuses in Pennsylvania alone, and in July 2025, Meta announced that its first multi-gigawatt data center, Prometheus, will come online in 2026. US political support is also removing regulatory barriers for companies and operators: President Trump's new AI Action Plan encourages AI tech stack and data center development in the US and abroad.

One challenge that nearly all stakeholders have identified is the increased energy demand – an estimated 612 terawatt-hours of electricity over the next five years – and the associated exacerbation of global warming, with a projected 3-4% increase in global carbon emissions.

The less-advertised challenge, though, is the expanded set of cyber threats that AI data centers face, which creates increased reputational, financial, and regulatory risks for operators and enterprise users.

Like traditional data centers, AI data centers contain hardware, network, storage, data, and software components, making them targets of common cyberattacks: distributed denial of service (DDoS), ransomware, supply chain, and social engineering attacks. Data centers are also notorious for being vulnerable to side-channel attacks – attacks that infer sensitive information from a system's observable behavior, such as timing, power consumption, or electromagnetic emissions – because data center hardware, from fans to central processing units (CPUs), can reveal sensitive information about CPU-level activity, data architecture, and usage. For example, in July 2025, AMD disclosed four new processor vulnerabilities that would allow side-channel attacks.
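The principle behind such attacks can be sketched with a toy timing channel. This is an illustrative example only: the function names and the five-byte secret are hypothetical, and real side-channel attacks measure physical signals such as power draw or electromagnetic emissions rather than a loop counter.

```python
# Illustrative toy, not a data-center exploit: why early-exit
# comparisons leak secrets through how much work they do.

def naive_compare(secret: bytes, guess: bytes):
    """Early-exit comparison. Returns (match, work), where 'work' counts
    loop iterations -- a deterministic stand-in for elapsed time."""
    work = 0
    if len(secret) != len(guess):
        return False, work
    for s, g in zip(secret, guess):
        work += 1
        if s != g:
            return False, work
    return True, work

def recover_secret(length: int, oracle):
    """Recover a secret byte by byte: the candidate byte that makes the
    comparison do the most work extends the matching prefix."""
    recovered = bytearray(length)
    for i in range(length):
        best_byte, best_work = 0, -1
        for candidate in range(256):
            recovered[i] = candidate
            match, work = oracle(bytes(recovered))
            if match:  # full secret found
                best_byte = candidate
                break
            if work > best_work:
                best_byte, best_work = candidate, work
        recovered[i] = best_byte
    return bytes(recovered)

SECRET = b"gpu42"
timing_oracle = lambda guess: naive_compare(SECRET, guess)
print(recover_secret(len(SECRET), timing_oracle))  # recovers b'gpu42' without ever reading SECRET
```

Constant-time primitives (such as Python's `hmac.compare_digest`) close this particular software channel; hardware-level leaks, like the AMD processor vulnerabilities above, are harder to eliminate because the signal is emitted by the silicon itself.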

The risks AI data centers face

Compared to traditional data centers, though, AI data centers face an expanded set of threats due to differences in hardware, data, and purpose.

While traditional large data centers use CPUs and graphics processing units (GPUs), AI data centers always use GPUs because AI workloads require more compute power and because GPUs allow for parallel operations. Application-specific integrated circuits (ASICs) and field-programmable gate arrays (FPGAs) are also powerful hardware that can be customized to compute and process AI workloads efficiently. Google developed an ASIC specific to AI and deep learning called the Tensor Processing Unit (TPU). These more powerful resources are vulnerable to side-channel attacks just like CPUs: in January 2025, researchers disclosed TPUXtract, a TPU-specific side-channel attack that exploits electromagnetic leakage and allows threat actors to infer an AI model's hyperparameters.

In addition to common cyberattacks and side-channel attacks, GPUs are more vulnerable to memory-level attacks because they don't always enforce sufficient memory isolation. Memory contents can erroneously persist from one process to the next, giving threat actors a chance to access information about AI model weights and training data. There is also GPU-specific malware that runs malicious code in a GPU's memory, bypassing traditional CPU-focused security tooling.

In terms of data, because AI data centers house AI models, weights, and training data, they face risks of model exfiltration, sensitive data loss, and model-level threats. A leak of model information compromises the model's confidentiality and integrity. Model-level threats include data poisoning and model poisoning attacks that can corrupt the model, as well as model inversion and model stealing attacks that reveal information about the model and its training data. Corrupted models can produce biased and false outputs that impact clients' operations.

Finally, given AI’s significance for national security and economic competitiveness, the global race to build AI capabilities and secure sovereign AI – the ability of a country to develop, use, and govern AI models and supporting infrastructure – has started. This race makes AI data centers targets for sophisticated foreign adversaries’ cyber activities.

Even before AI data centers are operational, they are at risk of supply chain attacks and sabotage, since many components are exclusively developed by Chinese companies. Once operational, state-sponsored threat actors have the sophisticated capabilities to infiltrate AI data centers and steal models, especially since many commercial data center operators are not equipped to defend against such operations. Lastly, many of these AI data centers are being built around the world, such as the joint US-United Arab Emirates partnership to build the largest AI data center outside the US. The concern is that the Persian Gulf is part of China's Digital Silk Road 2.0, with Chinese 5G technology and city-wide surveillance programs that could give the Chinese government access to AI data centers built in the region.

What cybersecurity leaders need to consider

Given these expanded threats, cybersecurity leaders and decision makers must closely scrutinize whether their AI data center operators implement corporate policies that mandate technical measures across all layers of security: hardware, data, and geopolitical. Examples of such policies include:

- closely inspecting hardware deployed in AI data centers to lower supply chain security risks;
- deploying Faraday cages or shielded chambers around compute resources to mitigate side-channel attacks;
- executing continuous AI audits and monitoring to identify backdoors and vulnerabilities in models and to defend against AI self-exfiltration; and
- investing in hiring processes to ensure that foreign threat actors do not infiltrate AI data centers.

Additionally, before deploying AI tools, cybersecurity leaders need to understand where the data center hosting the AI workload is located and map that data center's supply chain. This information should be used to assess whether the geographical location and supply chain create heightened security risks due to potential state-sponsored activity or surveillance.
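As a minimal sketch, this location and supply chain review can even be encoded as a repeatable due-diligence check. All region and supplier names below are hypothetical placeholders, and the watchlists are assumptions to be populated from your own threat model, not an authoritative assessment:

```python
# Hypothetical sketch: encoding a data-center due-diligence check as data.
from dataclasses import dataclass, field

# Assumptions: fill these in from your own threat model and supply chain mapping.
HIGH_SCRUTINY_REGIONS = {"example-region"}
WATCHLISTED_SUPPLIERS = {"ExampleComponentCo"}

@dataclass
class DataCenter:
    name: str
    region: str
    suppliers: list[str] = field(default_factory=list)

def heightened_risk_flags(dc: DataCenter) -> list[str]:
    """Return the reasons this deployment needs extra review."""
    flags = []
    if dc.region in HIGH_SCRUTINY_REGIONS:
        flags.append(f"located in high-scrutiny region: {dc.region}")
    for supplier in dc.suppliers:
        if supplier in WATCHLISTED_SUPPLIERS:
            flags.append(f"watchlisted supplier in chain: {supplier}")
    return flags

dc = DataCenter("ai-campus-1", "example-region",
                ["ExampleComponentCo", "TrustedFabInc"])
print(heightened_risk_flags(dc))  # two flags: region and supplier
```

The value of this shape is less the code than the discipline it forces: the watchlists become explicit, versioned artifacts that can be reviewed and updated as the geopolitical picture changes.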

Especially as government oversight of AI data center security recedes with the release of President Trump's new AI Action Plan, the responsibility of corporate cybersecurity leaders to review and assess AI data centers' corporate and technical policies only grows.
