The rise of generative AI has transformed how we work and communicate, and how fraud is committed. We’re no longer dealing only with crude document edits or false claims on CVs. Now, candidates are attending interviews via AI-generated avatars, complete with lip-syncing glitches, mismatched audio, and faces that don’t quite align with reality.
These aren’t isolated incidents. Research suggests that 85% of identity fraud attempts in 2024 involved some form of AI generation or manipulation. Fraudsters are using deepfakes, synthetic faces, and tampered documents to infiltrate hiring systems, particularly those that rely on remote or digital onboarding.
And while the tools may be new, the consequences are familiar:
• Regulatory risk when identity or Right to Work checks are compromised
• Safeguarding gaps if unverified individuals gain access to sensitive roles
• Operational disruption when fraud is discovered too late
• A breakdown in trust, both internally and with candidates
This is a challenge facing every employer who hires remotely, not just those in regulated industries. If your onboarding process relies on visual checks or static documents, it may already be exposed.
Traditional screening was never designed for this
Identity checks have traditionally focused on whether a document looks legitimate or if a face matches a photo. But when both can be convincingly manipulated using AI, those defences are no longer reliable.
The goal of screening is to verify that someone is who they claim to be. That requires going beyond appearance and introducing mechanisms that detect deception, rather than relying on document collection or video interviews alone.
What better screening looks like today
Modern screening technology must match the sophistication of today’s threats. This includes:
- Liveness detection to confirm a real person is present
- Biometric facial matching that cross-references official records
- Tamper detection across documents, including MRZ check-digit validation (see the sketch after this list)
- Deepfake detection capable of spotting face morphing and injected synthetic media
- Ongoing adaptation to respond to emerging fraud techniques
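To make the MRZ point concrete, here is a minimal Python sketch of the ICAO Doc 9303 check-digit rule that tamper checks build on: each character in the machine-readable zone maps to a numeric value, values are multiplied by a repeating 7-3-1 weight, and the sum modulo 10 must match the printed check digit. The function names are illustrative, not any particular library’s API; only the algorithm and the specimen passport number are from the ICAO standard.

```python
def mrz_char_value(ch: str) -> int:
    """Map an MRZ character to its numeric value per ICAO Doc 9303."""
    if ch.isdigit():
        return int(ch)
    if ch == "<":                     # filler character counts as 0
        return 0
    return ord(ch) - ord("A") + 10    # A=10, B=11, ... Z=35


def mrz_check_digit(field: str) -> int:
    """Weighted sum of character values (weights 7, 3, 1 repeating) mod 10."""
    weights = (7, 3, 1)
    return sum(mrz_char_value(c) * weights[i % 3] for i, c in enumerate(field)) % 10


def field_is_consistent(field: str, printed_check_digit: str) -> bool:
    """A mismatch suggests the field was altered after issue (or misread)."""
    return mrz_check_digit(field) == int(printed_check_digit)


# Specimen passport number from ICAO Doc 9303: "L898902C3", check digit 6.
assert field_is_consistent("L898902C3", "6")
```

A forged document that changes a passport number or date of birth without recomputing these digits fails the check immediately, which is why MRZ validation remains a cheap first line of tamper detection.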
When designed well, these tools do not slow down the process. In many cases, they speed it up by automatically clearing genuine candidates and flagging only high-risk cases for further review.
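As a rough illustration of how that auto-clearing might work, the sketch below combines the signals from the list above into a clear-or-review decision. The signal names, scales, and thresholds are hypothetical, not any specific vendor’s scoring model; the point is only that genuine candidates pass through untouched while humans review a small flagged minority.

```python
from dataclasses import dataclass


@dataclass
class ScreeningSignals:
    liveness_score: float    # 0.0-1.0, from the liveness check
    face_match_score: float  # 0.0-1.0, biometric match against the document
    tamper_flags: int        # count of failed document-integrity checks (e.g. MRZ)
    deepfake_score: float    # 0.0-1.0, likelihood the video feed is synthetic


def triage(s: ScreeningSignals) -> str:
    """Auto-clear clearly genuine candidates; escalate everything else."""
    if (s.liveness_score >= 0.9
            and s.face_match_score >= 0.9
            and s.tamper_flags == 0
            and s.deepfake_score <= 0.1):
        return "clear"           # genuine candidate proceeds without delay
    return "manual_review"       # a specialist reviews only the flagged cases


# Example: a candidate with strong signals is cleared automatically.
print(triage(ScreeningSignals(0.97, 0.95, 0, 0.02)))  # -> "clear"
```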
And it must work for candidates too
As fraud becomes more sophisticated, so do candidate expectations. Screening must be secure, but also intuitive, mobile-friendly, and inclusive.
More than half of jobseekers say a poor candidate experience has influenced their decision to turn down a role. Screening is often the first meaningful interaction a candidate has with an organisation. If it feels clunky or confusing, that impression lingers.
That’s why we’ve focused on making identity verification a well-designed gateway. It must work seamlessly for candidates using any device, offer support for non-native English speakers, and accommodate different accessibility needs.
What comes next?
Fraud is not standing still, and neither can we.
Employers have a responsibility to protect more than just compliance. They must also safeguard their culture, reputation, and people. That starts by verifying identity correctly from the very beginning.
We should be asking ourselves:
Are our current checks fit for today’s risks?
Can they distinguish a real candidate from an AI-generated one?
Are we confident in the very first decision we make: who we let in?
If the answer is not a clear yes, then it is time to act.