In our latest blog, Head of Cyber & FinTech at CyberNorth, Jon Holden, shares his thoughts on AI.
In Blade Runner, the line between human and replicant is deliberately blurred. Replicants looked real, acted real, and even believed they were real. The tension wasn’t just about technology — it was about identity, trust, and what it means to be authentic.
Fast forward to today, and we’re in our own Blade Runner moment. AI image generation has given us digital “replicants” — photorealistic faces, voices, and videos that are convincing enough to fool not just our eyes, but entire organisations.
Replicants in the Wild
We don’t have to look far to see how these synthetic creations are already shaping our world:
- A finance worker in Hong Kong wired $25m after attending a video call where every other colleague on screen was an AI-generated deepfake.
- Fake LinkedIn recruiters — built on AI-generated headshots — are harvesting CVs and luring candidates into scams.
- Political deepfakes are circulating globally, not to persuade but to destabilise trust in democracy itself.
- Romance scams now come with AI-generated lovers who look, sound, and even video call like the real thing.
These aren’t tomorrow’s problems — they’re here now. And just like replicants, they can pass as human unless you know what to look for.
The Nano Banana Moment
Google’s “Nano Banana” release (Gemini 2.5 Flash Image) is our equivalent of Tyrell Corp saying: “More human than human is our motto.”
On one hand, it’s extraordinary: creative potential for education, accessibility, and design at everyone’s fingertips. On the other, it’s a reminder that we’re handing society tools that blur reality even further.
The positive twist? Google has baked in SynthID watermarking so every image carries an invisible marker. It’s not perfect, but it’s a responsible move — and a sign big tech knows the stakes.
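To make the idea of an invisible marker concrete, here is a deliberately simplified sketch. SynthID itself embeds a learned, imperceptible signal into the image; this toy version instead hides a tag in the least-significant bits of a few pixels. The watermark bits, function names, and pixel values are all illustrative, not Google's actual scheme.

```python
# Toy invisible watermark: hide a tag in pixel least-significant bits (LSBs).
# This is NOT how SynthID works internally; it only illustrates the principle
# that a marked image can look identical to the eye yet carry a detectable tag.

WATERMARK = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical 8-bit tag

def embed(pixels, mark=WATERMARK):
    """Write the tag into the LSBs of the first len(mark) pixels."""
    out = list(pixels)
    for i, bit in enumerate(mark):
        out[i] = (out[i] & ~1) | bit  # clear the LSB, then set it to the tag bit
    return out

def detect(pixels, mark=WATERMARK):
    """Report whether the tag is present in the LSBs."""
    return [p & 1 for p in pixels[: len(mark)]] == mark

image = [200, 13, 97, 54, 180, 33, 240, 66, 120]  # toy grayscale pixels
marked = embed(image)

print(detect(marked))  # True: the marked copy carries the tag
print(detect(image))   # False: the unmarked original does not
```

Each pixel changes by at most 1, which is invisible to a viewer, yet the detector recovers the tag reliably. Real schemes like SynthID are designed to survive cropping, compression, and filters, which this naive version would not.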
Denmark’s Version of the Voight-Kampff Test
In Blade Runner, the Voight-Kampff test was used to tell replicants from humans. Denmark is proposing something similar in law: giving people copyright over their own likeness.
That means you could demand takedowns of AI images using your face or voice without consent — and even claim compensation. It reframes identity as a right, not just a feature, and could be the start of an international playbook.
The Bigger Picture
Researchers have been warning for years about the “liar’s dividend” — when even the truth can be dismissed as fake. In other words, once everyone knows replicants exist, nobody trusts anything.
That’s the real danger. It’s not just about scams or fraud; it’s about eroding the very idea of truth.
So What Do We Do?
- Awareness: Teach staff and communities that seeing isn’t believing.
- Verification: Add call-backs and multi-channel checks for critical actions.
- Tech Standards: Back efforts like watermarking and C2PA provenance.
- Policy: Support regulation that treats identity as a protected right.
- Culture: Build resilience now, before trust gets eroded beyond repair.
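The verification step above can be made concrete in policy logic. Here is a minimal hypothetical sketch: a payment request above a threshold is only approved if it was confirmed on a second, independent channel, so a deepfaked video call cannot also answer the call-back. All names and the threshold are illustrative, not a real payments API.

```python
# Hypothetical sketch of a multi-channel verification rule for critical actions.
from dataclasses import dataclass
from typing import Optional

CALLBACK_THRESHOLD = 10_000  # illustrative: amounts above this need out-of-band confirmation

@dataclass
class PaymentRequest:
    amount: float
    requested_via: str                 # channel the request arrived on, e.g. "video_call"
    confirmed_via: Optional[str] = None  # second channel used to verify, if any

def approve(req: PaymentRequest) -> bool:
    """Approve only if high-value requests were verified out-of-band."""
    if req.amount <= CALLBACK_THRESHOLD:
        return True
    # The confirmation must come from a *different* channel than the request:
    # an attacker who controls the video call should not control the call-back too.
    return req.confirmed_via is not None and req.confirmed_via != req.requested_via

print(approve(PaymentRequest(25_000_000, "video_call")))                      # False
print(approve(PaymentRequest(25_000_000, "video_call", "phone_callback")))    # True
```

The Hong Kong case above is exactly the scenario this rule blocks: a convincing video call, on its own, is never sufficient authority to move money.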
Blade Runner asked whether replicants were “real enough” to be human. We face a different but equally important question: are AI-generated images “real enough” to fool us — and what happens when they do?
The challenge isn’t stopping the technology. Like replicants, it’s here, and it won’t go away. The challenge is deciding how we live alongside it — with ethics, transparency, and resilience — before trust itself becomes extinct.