Is your perimeter having an identity crisis?

Tags:

For years, you’ve operated on a fundamental and fragile assumption that with enough training and the right tools, you could trust your employees to be the first line of defense. You taught them to spot the typos in phishing emails, to hover over suspicious links, to question the unusual requests and to report anything suspicious. However, the nature of identity, the bedrock of security models, is being systematically challenged. 

You’re now living in a world of digital doppelgangers, enhanced by the emergence of gen-AI and fueled by the terabytes of personal data siphoned from countless security breaches. The threat is no longer just a well-crafted email. Threats can now speak with the voice of your CFO, write with the syntax of your Head of Legal and potentially send text messages that reference real and recent conversations. 

Your digital life, including your employment history, your personal anecdotes from social media and your highly sensitive data, can become the raw material for your own malicious digital doppelgangers. 

Consider the recent wave of business email compromise attacks where attackers used AI-generated voice messages to authorize fraudulent wire transfers. Or the sophisticated campaigns where threat actors scraped social media profiles to create messages that reference colleagues by name or recent company projects. These aren’t isolated incidents; they represent a serious shift in how attackers operate, with a much lower barrier to entry. 

What once required advanced technical skills and extensive investigating can now be achieved by anyone with access to widely available AI tools and the time to probe through public data about an individual, AKA: their target.

How AI is automating impersonation

Threat actors are not just automating attacks; they are automating impersonation by leveraging AI to:

Craft phishing, smishing and vishing. Creating hyper-personalized emails that can be grammatically perfect, contextually aware and emotionally resonant. These messages no longer demonstrate the telltale signs of traditional phishing like broken English or generic greetings.

Synthesize trust. Using voice-cloning AI to leave a quick, urgent voicemail from a trusted executive, bypassing the skepticism you’ve trained into your employees.

Orchestrate multi-channel attacks. An attack might start with a benign-looking text, be followed up by a seemingly legitimate email and culminate in a phone call that seals the deal. Each step reinforces the fabricated identity.

Exploit contextual awareness. With access to your data, AI can now analyze your communication patterns, typical working hours, frequent contacts and even your writing style to create messages that feel authentically “you.” They can reference recent meetings, ongoing projects or shared experiences scraped from internal communications or social platforms.

These capabilities aren’t theoretical — they’re operational today. 

The result is a severe identity crisis, not just for your employees, but for your infrastructure. When an attacker can convincingly mimic a trusted identity, your traditional defenses can begin to look alarmingly thin. Multi-factor authentication (MFA) is critical, but what happens if the user is socially engineered into approving the push notification? Network segmentation is essential, but what if illegitimate credentials traverse those segments and are considered legitimate?

Your security mandate, default to distrust 

This requires a fundamental mindset shift. Your mandate must be Never trust, always verify. Your strategy must now pivot to a more dynamic, intelligent and identity-centric model:

Assume compromise. You should operate as if the initial point of entry has already been breached. Your focus should shift from solely prevention to rapid detection and response as well. It’s no longer just about keeping them out; it’s about finding them the second they’re in.

Radical visibility. You cannot fight an enemy you cannot see. You need unified visibility across every touchpoint, from the endpoint to the network core, from the cloud instance to the mobile device. You need to see not just traffic, but behavior. Is this user acting like the real person? Why is the CFO suddenly accessing files from an anomalous geolocation at 3 am?

Modernize authentication. Organizations should aggressively move towards phishing-resistant authentication methods like FIDO2, which can help make it harder to steal the ‘keys to the kingdom.’

Harness intelligence. Leverage threat intelligence that is broad, deep and predictive. As I know from my own organization’s work on the annual Verizon Business Data Breach Investigations Report (DBIR), understanding the overarching tactics and techniques of your adversaries is paramount to building a defense that anticipates, rather than just reacts.

This identity crisis is a defining challenge of your generation of security leaders. You are no longer just protecting data; you are defending the concept of legitimate identity against an autonomous, tireless adversary. The path forward requires the courage to fight AI with AI and the vision to build resilient, identity-first security frameworks.

And just as you began to grapple with this, the next evolution is already looming: Agentic AI — autonomous systems capable of independent reasoning and decision-making.

The next frontier: Agentic AI

Current generative AI models require a human operator, and they can generate a flawless email or clone a voice on command. Agentic AI has the potential to become an almost-independent threat actor. Given a simple goal, agentic agents can independently reason, plan and execute the complex, multi-step attack required to achieve malicious goals. This autonomous threat actor can operate 24/7, adapt to obstacles in near real-time and scale its operations with terrifying efficiency. Human defenders are now against agentic threat actors that do not sleep and can execute a sophisticated campaign at a speed and scale you have never witnessed before.

So the question you must all now ask yourself is not if your employees will be targeted by a digital doppelgänger, but how quickly you will be able to tell the difference.

As you prepare to defend against malicious AI agents, you must also consider the security implications of deploying your own. This requires applying a familiar security principle: the principle of least privilege. You wouldn’t give a new employee the keys to the entire kingdom, and the same logic must apply to AI. An agent’s access to networks and data must be strictly circumscribed by its role and context, operating within a well-defined hierarchy.

Implementing secure agentic AI requires establishing clear guardrails from the outset. Assign explicit limitations for each agent. Establish a process that can enable an immediate shutdown if agents begin operating outside defined parameters. Monitor and log every action, allowing for close reviews of any unusual behaviors. Most critically, maintain close human oversight over agent patterns and outcomes — with feedback loops that allow your security team to identify when there may be manipulation at play or when decision-making processes yield unexpected results.

Ultimately, just as you train your human teams to recognize and resist social engineering, you must continuously test the resilience of your AI agents. This is where red teaming becomes critical, not just for your infrastructure, but for your own autonomous systems, helping to verify that they cannot be tricked by an adversary. 

Organizations that can strike this balance will be best positioned to both secure and defend against autonomous systems in this new reality.

This article is published as part of the Foundry Expert Contributor Network.
Want to join?

Categories

No Responses

Leave a Reply

Your email address will not be published. Required fields are marked *