Can you prove the person on the other side is real?

In my role, I spend a lot of time thinking about what “trust” means when money, grief and identity collide. By 2026, the real competition in our space won’t be who automates fastest or offers the most AI features. It will be who can still tell a legitimate executor, beneficiary or family representative from a manufactured persona.

We’re building with AI because the benefits are undeniable. But we’re also watching that same technology change the economics of impersonation. Synthetic identities and deepfake-enabled scams have moved from edge cases into constant pressure that slowly wears down controls we used to trust. On paper, identity programs still look strong. Under real-world attack conditions, too many become a thin perimeter that collapses once an adversary applies realism at scale.

In estate and identity-related work, that erosion of trust carries extra weight. A synthetic identity can easily misdirect distributions, delay rightful claims and drag families into disputes because the evidence trail looks “complete” even when the person isn’t real.

The rise of the digital ghost

Synthetic identity fraud is the manufacture of entire “people” who never existed: digital ghosts. Generative models can produce government-style documents, plausible histories and supporting media that clear routine checks, allowing a fake identity to look consistent across systems, channels and time.

That’s why the cost can be far higher than a single loss event. A synthetic identity can enter an ecosystem, behave normally long enough to blend in and surface later at the exact moment a claim, profile change or payout is needed. When it succeeds, it pollutes the baselines we used to depend on—risk models, case triage and the patterns analysts learn to trust.

Deepfakes raise the stakes further by collapsing the boundary between “digital” and “human.” Video calls, voice verification and live interactions used to feel like stronger proof. Now an adversary can show up with a face, a voice and a coherent story that hold together long enough to pass a rushed review.

If identity is spoofable, every downstream control runs on contaminated truth, even when the process is compliant and well-documented.

Exploiting the deceased and the dormant

Attackers follow leverage. Dormant, legacy and deceased identities create leverage because they already come with history, which serves as scaffolding for a synthetic persona to climb.

I have seen how quickly a dormant record can become an entry point. An adversary pairs an older account or identity footprint with newly generated documents and a polished support interaction. They request a profile change, a contact update or a payment redirection. They push for a new credential, a new device or a new channel. Each looks like a minor detail in isolation. In sequence, it’s a takeover that feels earned because the activity resembles real life.

Traditional trust signals struggle here. Device fingerprinting, behavioral analytics and static biometrics can help, but AI now targets those signals directly. Typing rhythm can be imitated. Mouse movement can be simulated. Voices can be cloned well enough to fool humans who are tired or rushed. Even experienced reviewers lose their advantage because the obvious seams appear less often.

That is why this threat feels different inside estate-facing workflows. There is usually a compelling story attached to the request. There is often urgency. There is often emotion. Attackers understand that human pressure and procedural pressure create openings that technical controls alone do not close.

Establishing a new standard of proof

There’s no plug-and-play fix for synthetic identity. Addressing it means moving past “Who is this?” to a more forensic question such as “How did this identity—and its digital footprint—come to exist?”

That shift raises the standard of proof. It prioritizes provenance, issuer verification and cross-channel consistency over surface-level plausibility. It also changes how teams operate. We can’t keep identity signals scattered across separate tools, queues and owners. We need a shared risk view built from independent signals that either reinforce confidence or reveal contradictions.

In practice, we examine where artifacts came from, how they were created and whether they were altered. We require stronger proof when risk changes, and we correlate evidence across channels instead of trusting a single checkpoint.
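
To make that concrete, here is a minimal sketch of the correlation logic in Python. Everything in it, the signal names, the thresholds and the three-way verdict, is a hypothetical illustration, not a description of any specific product we run.

```python
# Minimal sketch: correlating independent identity signals into one verdict.
# Signal names, thresholds and verdicts are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Signal:
    name: str          # e.g. "document_provenance", "issuer_verification"
    source: str        # which channel or tool produced the signal
    confidence: float  # 0.0 (contradicts the identity) .. 1.0 (strongly supports it)

def assess(signals: list[Signal], floor: float = 0.6) -> str:
    """Independent signals must reinforce each other; one strong
    contradiction outweighs an otherwise plausible surface."""
    if not signals:
        return "escalate"  # no evidence is not the same as good evidence
    if len({s.source for s in signals}) < 2:
        return "escalate"  # a single channel can be spoofed end to end
    if min(s.confidence for s in signals) < 0.2:
        return "deny"      # one hard contradiction beats many soft passes
    avg = sum(s.confidence for s in signals) / len(signals)
    return "allow" if avg >= floor else "escalate"
```

The design choice worth copying is the deny branch: a single hard contradiction should outweigh any number of soft passes, because realism at scale makes soft passes cheap.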

We also have to tighten internal access and auditability. If attackers can impersonate external claimants, they can also target internal workflows. Privileged actions need least privilege, just-in-time access and forensic-grade trails.
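
What just-in-time access with an evidence trail can look like is sketched roughly below, again with hypothetical names (grant_access, AUDIT_LOG) rather than any particular vendor’s API.

```python
# Minimal sketch of just-in-time privileged access with an audit trail.
# All names and fields are illustrative assumptions, not a product's API.
import time
import uuid

AUDIT_LOG: list[dict] = []  # in practice: append-only, tamper-evident storage

def grant_access(actor: str, role: str, reason: str, ttl_seconds: int = 900) -> dict:
    """Issue a short-lived grant tied to a named human and a stated reason."""
    now = time.time()
    grant = {
        "grant_id": str(uuid.uuid4()),
        "actor": actor,                     # a named person, never a shared account
        "role": role,
        "reason": reason,                   # required: no reason, no grant
        "issued_at": now,
        "expires_at": now + ttl_seconds,    # access evaporates by default
    }
    AUDIT_LOG.append({"event": "grant", **grant})
    return grant

def is_valid(grant: dict) -> bool:
    """Every privileged action re-checks the grant instead of trusting a session."""
    return time.time() < grant["expires_at"]
```

The detail that matters is the expiry: access that evaporates by default turns “who had access when” from an argument into a query.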

Engineering accountability into internal workflows

Continuous verification has to be a deliberate design choice. A mature program ties the level of proof to the risk of what’s happening right now. A new device shouldn’t be treated like a routine login. A request to change payout instructions should face a higher threshold than a simple record view or address update.
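
One way to express that discipline is a simple policy table mapping each action to a required assurance level. The actions and tiers below are illustrative assumptions, not a standard.

```python
# Minimal sketch: tying required proof to the risk of the action, not the session.
# Action names and assurance tiers are hypothetical.
REQUIRED_ASSURANCE = {
    "view_record":    1,  # routine: an existing session is enough
    "update_address": 2,  # step up: re-verify a possessed factor
    "add_device":     3,  # strong: fresh document check plus liveness
    "change_payout":  4,  # strongest: out-of-band confirmation by a second human
}

def proof_needed(action: str, current_assurance: int) -> bool:
    """Return True when the caller must step up before the action proceeds."""
    required = REQUIRED_ASSURANCE.get(action, 4)  # unknown actions default to strongest
    return current_assurance < required
```

Defaulting unknown actions to the strongest tier is deliberate; an attacker’s favorite request is the one nobody classified.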

That same discipline must apply internally. High-impact roles and machine identities need named owners, documented credential succession plans and access trails that can be reconstructed without guesswork. When something goes wrong, you don’t want an argument about who might have done it. You want evidence.
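
Even the ownership requirement can be written down as data rather than held as tribal knowledge. A toy registry entry, with illustrative field names and placeholder accounts, might look like this:

```python
# Toy sketch: an ownership record for a machine identity, so succession and
# reconstruction never depend on memory. Fields and values are placeholders.
SERVICE_ACCOUNT_REGISTRY = {
    "svc-payout-processor": {
        "owner": "jane.doe@example.com",    # a named human, reviewed on a schedule
        "deputy": "john.roe@example.com",   # documented succession, not guesswork
        "credential_rotated_at": "2025-11-02",
        "allowed_actions": ["read_claims", "queue_payout"],
    },
}
```

However it is stored, the test is the same: when something goes wrong, the answer to “who owned this credential” should already be written down.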

Regulators and boards are moving this way because the old model assumes identity stays stable once “verified.” AI breaks that assumption. Organizations that treat identity assurance as measurable, with clear risk appetite and regular adversarial testing, will be best positioned to defend their decisions with confidence.

The 2026 readiness test

As we look toward 2026, I keep coming back to one question. Can you prove, at any moment, that the identities behind your highest-impact actions belong to real, accountable humans?

If the answer is vague, then AI accelerates the wrong things. It accelerates decisions based on polluted data. It accelerates workflows that can be hijacked by believable fakes. It accelerates outcomes that look compliant until the moment you need to defend them.

In our business, we are not just managing accounts and records. We are protecting legacies, resolving obligations and serving people who often have one chance to get it right. Synthetic identity turns that responsibility into a security problem, a governance problem and a human problem all at the same time. Let’s treat it like the new ground truth.

This article is published as part of the Foundry Expert Contributor Network.