Insider risk in an age of workforce volatility

Economic pressures, AI-driven job displacement, and relentless organizational churn are driving insider risk to its highest level in years. Workforce instability erodes loyalty and heightens grievances. The accelerating deployment of powerful new tools, such as AI agents, amplifies the threats from within, both human and machine.

In 2025, according to RationalFX and other job trackers, the global technology sector saw roughly 245,000 layoffs announced across hundreds of companies. These figures, while concentrated in the tech industry, reflect broader trends seen across other sectors, including manufacturing, retail, finance, energy, and government, where employers announced more than 1.17 million job cuts through November 2025 in the US, according to Challenger, Gray & Christmas.

This surge, up significantly from prior years, creates fertile ground for disgruntlement: financial stress, resentment over automation, and opportunistic behavior, from negligence and careless data handling to deliberate malevolent actions like data exfiltration and credential monetization.

All of this makes trusted insiders the prime vector for serious incidents across sectors and geographies.

The emerging machine threat: AI agents as a volatile vector

Compounding the human element is the rapid rise of AI agents, which Palo Alto Networks has identified as one of the most acute and evolving insider risks for 2026.

Autonomous agents with privileged system access, superhuman execution speed, and decision-making at scale are no longer mere productivity boosters. They are becoming exploitable vectors for silent data exfiltration, disruption, or unintended catastrophe.

This is particularly concerning when volatility reduces human oversight and rushes deployment without commensurate controls. Palo Alto Networks’ 2026 cybersecurity predictions emphasize that these agents introduce vulnerabilities such as goal hijacking, tool misuse, prompt injection, and shadow deployment, often amplified by the very churn that drives their adoption across multinational organizations.
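One concrete mitigation for the tool-misuse and goal-hijacking risks named above is a hard allowlist between an agent and its tools. The sketch below is illustrative only; the names (`ToolGuard`, `ALLOWED_TOOLS`, `ToolCall`) are assumptions, not part of any specific agent framework or vendor product.

```python
# Minimal sketch of a tool-call guard that enforces least privilege on an
# autonomous agent: only approved tools, each with a per-session call cap.
from dataclasses import dataclass, field

# Allowlist with hard caps; anything absent here is denied outright.
ALLOWED_TOOLS = {"search_docs": 50, "summarize": 20}

@dataclass
class ToolCall:
    name: str
    args: dict

@dataclass
class ToolGuard:
    allowlist: dict
    counts: dict = field(default_factory=dict)

    def check(self, call: ToolCall) -> bool:
        """Return True only if the call is allowlisted and under its cap."""
        if call.name not in self.allowlist:
            return False  # tool misuse: unapproved tool requested
        used = self.counts.get(call.name, 0)
        if used >= self.allowlist[call.name]:
            return False  # possible runaway loop or hijacked goal
        self.counts[call.name] = used + 1
        return True
```

A guard like this turns a prompt-injected request for an unapproved tool into a denied call and an auditable event, rather than a silent action at machine speed.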

Security leaders are taking note. Surveys indicate that 60% of organizations express high concern over AI misuse enabling or amplifying insider risks, according to Secureframe’s Q4 2025 cybersecurity statistics compilation and related reports. Meanwhile, hybrid and remote work models rank as the top emerging driver of insider risk over the next three to five years, cited by 75% of respondents in Cybersecurity Insiders’ 2025 Insider Risk Report. These decentralized environments further blur visibility and control, making it harder to detect anomalous behavior from either humans or machines in global operations.

Early warnings: The machine as insider risk/threat

These dynamics are not emerging in a vacuum. They represent the culmination of warnings that have been building for years.

As early as 2021, in my CSO opinion piece “Device identity: The overlooked insider threat,” I quoted Rajan Koo (then chief customer officer at DTEX Systems, now CTO): “There needs to be more application of the insider threat framework toward devices at the same level as we do with humans.” That insight highlighted how machine identities such as APIs, bots, scripts, and robotic process automation (RPA) were already serving as conduits for both intentional and unintentional incidents, deserving the same scrutiny as human insiders.

This perspective was reinforced in 2022 in “Machine as insider threat: Lessons from Kyoto University’s backup data deletion,” which analyzed a real-world automation failure as “a classic case of the machines being the insider threat.” The incident, where an unchecked scripting error led to the permanent deletion of critical backup data, demonstrated that the outcome, catastrophic loss, was identical to what a malicious insider could achieve.

By mid-2023, the conversation shifted to the positive potential in the CSO feature “When your teammate is a machine: 8 questions CISOs should be asking about AI,” which explored AI as a collaborative force in cybersecurity workflows, while tempering that optimism with the need for a firm understanding of what’s under the hood. Today, that teammate has proliferated: Palo Alto Networks forecasts that machine identities and autonomous agents will outnumber humans by ratios as high as 82:1 in many enterprises, turning early cautions into urgent 2026 reality.

The compounding effect: Human churn meets machine proliferation

The convergence of these factors — human volatility driven by layoffs and economic stress combined with the unchecked scaling of machine agents — creates a compounding effect. Organizations facing cost pressures often prioritize speed of AI adoption over governance, leading to shadow AI deployments and insufficient monitoring. At the same time, displaced or disgruntled employees may monetize access, exfiltrate sensitive data, or simply neglect controls as they disengage. We witnessed this in the KnownSec incident, where an insider exposed how the company served as an adjunct of the Chinese government’s offensive cyber operations infrastructure. While the leak was no doubt welcomed by many cyberdefenders for its insight into China’s capabilities, it also demonstrates that no entity is immune from the volatility factor.

There is no doubt that anxiety from ongoing layoffs and role uncertainty can lead to nervous mistakes, privilege hoarding, or rushed workarounds that expose data without any intent to harm. Yet the harm is real all the same. The result is a heightened insider risk landscape, one that is amplified when the interplay between human churn and machine proliferation is overlooked.

Toward coherent strategies: Holistic mitigation in a volatile era

This is where coherence in insider risk strategy becomes essential. Holistic approaches must integrate behavioral analytics that monitor both human patterns (for example, sentiment shifts during restructuring or after-hours data collection) and machine behaviors (for example, anomalous API calls or agent activity spikes).
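The same analytic primitive can cover both populations: baseline each identity, human or machine, against its own history and flag sharp deviations. The sketch below is a minimal illustration of that idea using a simple z-score; the threshold, data shapes, and function names are assumptions, not any vendor’s method.

```python
# Illustrative sketch: flag identities (human or machine) whose activity
# for the day deviates sharply from their own historical baseline.
from statistics import mean, stdev

def anomalous_identities(history, today, z_threshold=3.0):
    """history: {identity: [daily API-call counts]}; today: {identity: count}.

    Returns identities whose count today exceeds z_threshold standard
    deviations above their own mean."""
    flagged = []
    for identity, counts in history.items():
        if len(counts) < 2:
            continue  # not enough baseline to judge
        mu, sigma = mean(counts), stdev(counts)
        if sigma == 0:
            sigma = 1.0  # avoid divide-by-zero on perfectly flat baselines
        z = (today.get(identity, 0) - mu) / sigma
        if z > z_threshold:
            flagged.append(identity)
    return flagged
```

In practice the same loop could score sentiment signals for people and API-call or tool-invocation rates for agents, feeding one queue for analysts rather than two disconnected programs.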

Reskilling programs can help retain talent and reduce resentment by positioning employees as partners in AI-augmented roles rather than casualties of displacement. Strong governance of machine identities, requiring authentication, least-privilege access, and continuous monitoring, extends zero-trust principles to the non-human domain. And crucially, organizations need to bridge HR and security functions to detect early indicators of volatility before they manifest as threats.
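Extending zero trust to the non-human domain can start with two mechanical checks: does each machine identity hold only the scopes its role justifies, and are its credentials fresh? The sketch below is a hedged illustration under assumed data structures (`ROLE_BASELINE`, `MachineIdentity`, a 90-day rotation window); real governance tooling would draw these from an identity provider.

```python
# Hedged sketch: least-privilege and credential-hygiene review for
# machine identities (bots, agents, service accounts).
from dataclasses import dataclass
from datetime import date, timedelta

# Maximum scopes each machine role should hold (least privilege).
ROLE_BASELINE = {
    "rpa-bot": {"read:invoices", "write:reports"},
    "ci-runner": {"read:repo", "write:artifacts"},
}
MAX_CREDENTIAL_AGE = timedelta(days=90)  # assumed rotation policy

@dataclass
class MachineIdentity:
    name: str
    role: str
    scopes: set
    credential_issued: date

def review(identities, today):
    """Return governance findings: excess scopes and stale credentials."""
    findings = []
    for ident in identities:
        excess = ident.scopes - ROLE_BASELINE.get(ident.role, set())
        if excess:
            findings.append((ident.name, "excess-scopes", sorted(excess)))
        if today - ident.credential_issued > MAX_CREDENTIAL_AGE:
            findings.append((ident.name, "stale-credential", None))
    return findings
```

Run continuously, a review like this surfaces the over-privileged or forgotten agent before churn turns it into an unattended insider.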

Without these proactive, integrated measures, the cascade could be significant. A single exploited AI agent could exfiltrate terabytes of data at speeds no human could match. As history has shown, a disgruntled employee may use lingering credentials to plant backdoors, steal or sell information, or cause deliberate destruction. The stakes are no longer confined to isolated incidents. They now span the entire ecosystem, from supply chains to critical infrastructure.

The path forward

As we enter 2026, the message is clear: Insider risk is no longer primarily a human problem. It is a volatility problem, one that economic pressures, AI displacement, and organizational churn are intensifying at an unprecedented pace. Addressing it requires the same rigor we apply to external threats, but applied inward, with foresight, coherence, and a willingness to evolve.
