Tags: Digital doppelgangers, Voice cloning, AI agents, Deepfakes, Identity verification, AI impersonation, Synthetic persons

Digital doppelgangers: a modern-day Ship of Theseus

AI Usage: 14%

Glenn · April 27, 2026 · 4 min read


What if your identical twin were created by accident, by someone who never even knew you existed? Are you entitled to your own likeness? Could you become the next voice of Coca-Cola?

The Ship of Theseus asks whether a ship is still the same ship after all of its original parts have been replaced. That question used to be a philosophical puzzle. It is not anymore.

As technology becomes better at mimicking others, voice cloning is one of the clearest early examples. You can clone someone's voice to near perfection from only a tiny audio sample.

Imagine we are in the near future. A technician generates voice clips for commercials from a home studio, armed only with a laptop and an internet connection. He orchestrates and tweaks new voices using a specialized AI agent. His setup exposes every variable of the human voice: a knob for pitch, one for speed, one for timbre, one for breathiness, one for prosody.
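To make the "knobs" concrete, here is a toy sketch in Python of what such a parameter space might look like. Everything here is hypothetical: real voice-synthesis systems expose far more (and far messier) dimensions than these five, and `nudge` just takes a random walk through the space, the way the technician in the story stumbles onto a voice by accident.

```python
from dataclasses import dataclass
import random


@dataclass(frozen=True)
class VoiceParams:
    """Hypothetical voice knobs, each normalized to [0.0, 1.0]."""
    pitch: float
    speed: float
    timbre: float
    breathiness: float
    prosody: float


def nudge(v: VoiceParams, rng: random.Random, step: float = 0.05) -> VoiceParams:
    """Take one small random step through the voice space, clamped to [0, 1]."""
    def clamp(x: float) -> float:
        return max(0.0, min(1.0, x))
    return VoiceParams(**{
        name: clamp(getattr(v, name) + rng.uniform(-step, step))
        for name in ("pitch", "speed", "timbre", "breathiness", "prosody")
    })


rng = random.Random(42)
voice = VoiceParams(0.5, 0.5, 0.5, 0.5, 0.5)
for _ in range(100):
    voice = nudge(voice, rng)  # the technician working the virtual knobs
```

The unsettling part is that nothing in this space marks any point as "taken": Brad Pitt's voice, if it exists here at all, is just another coordinate you can wander into.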

He is trying to find a voice for an upcoming Coca-Cola advertising campaign. The client calls for something calm, authoritative, sexy, and trustworthy all at once. While he works the virtual knobs, he accidentally stumbles upon the exact voice composition of Brad Pitt.

Questions race through his head. Is Brad Pitt entitled to this sound, exclusively? Can I use this voice in an ad? Could I use this voice to prank friends into thinking Brad Pitt is calling them? Where exactly is the line drawn?

After some deliberation he decides to use it. He can prove this is his own creation. There are no Brad Pitt recordings in his training data. It is just a coincidence.

It is not only voice that can be replicated by accident or with ease. Pictures and text follow the same logic. If someone scrapes all of your online comments, an AI agent could replicate your writing style with ease. Think about all your TikToks and Instagram posts over the last few years. A malicious actor could have hours of training data on you already, and could build a digital doppelganger with very little effort.
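How little effort "very little effort" can be is worth spelling out. The sketch below builds a crude stylistic fingerprint from scraped comments using nothing but word frequencies and cosine similarity; the texts and names are invented for illustration. A real attacker would fine-tune a language model instead, but even this toy version can tell whose style a piece of text is closer to.

```python
from collections import Counter
import math


def style_vector(texts: list[str]) -> Counter:
    """Crude stylistic fingerprint: lowercase word frequencies."""
    words: Counter = Counter()
    for t in texts:
        words.update(t.lower().split())
    return words


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two frequency vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


# Made-up scraped comments standing in for years of posts.
victim = style_vector(["honestly this is fine", "honestly not my problem"])
clone = style_vector(["honestly this seems fine to me"])
stranger = style_vector(["buy cheap widgets now"])
```

With these inputs, the clone scores far closer to the victim than the stranger does, which is the whole game: the more of your text is public, the sharper the fingerprint gets.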

This opens a new take on privacy and modern-day infringement. And it branches into two distinct threats that are easy to confuse but important to separate.

The copy and the invention

A digital doppelganger is built from a real person. It uses cloned or forged elements of their appearance, voice, and behavioral patterns to create something functionally indistinguishable from the original. The person exists. The doppelganger is an unauthorized copy.

A synthetic person is something else entirely. It is 100% computer-generated, invented from scratch, with no real human behind it. It is designed to be deployed into online spaces without oversight, to spread propaganda, run scams, or simply pollute discourse at scale. The synthetic person does not copy anyone. It pretends to be someone.

Both are already here. AI is already writing and responding to social media posts. Most of it is people copy-pasting AI-generated text, so there is at least some oversight. But it will not be long before online spaces fill up with content that no human wrote or reviewed. It is a question of when, not if.

When it acts on its own

Most of us know what a bot looks like. Until now, bots have been limited to text and have been easy to expose with a follow-up question. An AI agent can stay in character long enough to make that test a lot harder.

What makes both threats significantly more dangerous is agency. A digital doppelganger or synthetic person that simply exists as a static image or audio clip is concerning. One that can act is a different problem entirely.

Autonomous AI agents can operate on behalf of a persona across platforms, responding to messages, posting content, building relationships, maintaining consistency over time. A doppelganger equipped with an agent does not need someone to operate it. It runs. It learns. It becomes more convincing the longer it is active. It can do all of this without you knowing it exists.

Your voice becomes less valuable with each false copy. Authenticity, which we currently take for granted as something anchored to a physical person, becomes a claim that needs to be proven rather than assumed. Reputational damage, fraud, harassment, phony endorsements: these are not hypothetical harms. They are the predictable results of a world where identity can be forged at scale and provenance is nearly impossible to verify.

How many bits?

Doppelgangers in the real world exist by chance, and we do not sue each other over genetic resemblance. And a Snapchat filter that gives you a funny mustache is still you. The question now becomes: how many bits must we change before it is no longer considered your likeness?

This is the modern version of the Ship of Theseus. Only this time it is no longer a fun philosophical puzzle. We need an answer, and we need it soon.

And how could we even start to govern this? Provenance can be erased with screenshots, copies, metadata removal, and distributed hosting. Even if you try to catch creators of doppelgangers, it is far too easy to claim accident or coincidence. You could use a different machine, feed it pictures of a person, ask a model to generate a prompt describing that face in great detail, then claim the result based on that prompt was an independent creation.

And it is not just about catching the creators. The checks we use to verify that someone is human online are already failing. Liveness tests, CAPTCHA variants, behavioral signals: all of them are under pressure. A three-finger liveness check that seemed like reliable proof of humanity only a few years ago is a solved problem for a well-trained model today. If a doppelganger can pass as human in a video call, the assumption that online identity maps to a real person collapses entirely.

Act now

So would you recognise a perfect copy of your own child? At this point, perhaps you would sense that something is off. But how about in five years when the technology is even better? How about your grandmother? Would she recognise a perfect copy of you? Probably not.

So, establish a code word with your closest family. A short but unique phrase only the core of the family knows. If your child calls you in tears asking for money right now, you are not going to run an identity check. You are reacting. That is the window a doppelganger opens. A code word or phrase does not solve the underlying problem. It just gives you a second to think.

The Ship of Theseus is no longer a thought experiment; it is a practical problem of identity in a digital age. Digital doppelgangers strip away the intuitive boundary between you and a copy. Synthetic persons erode the assumption that there is a human behind a profile at all. Autonomous agents mean neither requires ongoing human involvement to cause harm.

If we fail to act, the confusion could cascade into new forms of harm that are harder to trace and address than anything we have seen before.

The infrastructure to exploit this is being built right now. Time will tell whether the rules get there before the damage does.

Further reading

Voice cloning and impersonation

AI agents and synthetic content

Liveness detection and verification