Louise Bruder never forgets a face. That's not only a handy skill at parties; it has also helped her carve out a career.

She has the fabulous job title of super-recogniser, and her work at UK digital ID firm Yoti involves comparing the photo on an identity document with an uploaded selfie, to determine whether they show the same person.

But Yoti, in common with other ID firms, faces a new threat: spotting so-called deepfakes, fake images created using AI-powered software. Louise tells me that she hasn't yet been asked to assess deepfakes as part of her day job, but the firm is well aware of the threat and is actively working on technology to help spot them.

Putting her skills to the test with the BBC's own deepfake quiz, she scored seven out of eight. "There's a deadness in people's eyes that really means they don't look real," says Louise.

Ben Colman is the boss of Reality Defender, a US firm that aims to provide technology to spot deepfakes, and he thinks Louise may soon struggle to tell real from fake.

"I'd say that in the last nine months it's become next to impossible for even the best experts to tell real versus AI-generated. We need a software solution to do this," he says. Mr Colman differentiates between highly sophisticated deepfakes, which may be deployed by a nation state to create disinformation, and what he calls "cheapfakes", whereby criminals use off-the-shelf AI software.

Worryingly, even the cheapfakes "are still good enough to fool people, particularly within images and audio," he says. Video, though, "is still a little more challenging, and requires a lot more computation".

The solution his firm offers can scan an image, video or audio file and flag signs of AI generation. Clients include the Taiwanese government, Nato, media organisations and large banks.

While video and image deepfakes more often grab the headlines, audio-only scams are also growing. For example, criminals send recordings cloned from a person's voice, saying things like "Mum, I've lost my phone, please pay money to this account now".

Collecting voice clips from someone's social media account or YouTube is an easy job, and just a few seconds of audio is enough to clone a voice and make it say sentences the person never uttered. Some off-the-shelf software even lets users "dial up" stress levels in a voice, a technique that has been used in real cases to convince parents that their child had been kidnapped.

Siwei Lyu is a professor at the University at Buffalo in the US who has studied deepfakes for many years, with the ultimate goal of developing algorithms to help automatically identify and expose them. The algorithms are trained to spot tiny differences: eyes that might not be looking in quite the right direction or, in the case of an artificially created voice, a lack of evidence of breath.
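Prof Lyu's detectors are trained machine-learning models, but the breath cue can be illustrated with a far simpler sketch. The Python snippet below is a hypothetical heuristic, not his actual algorithm: it measures what fraction of a clip sits in near-silent frames, the gaps where breaths and pauses naturally occur in real speech. The function name and the 0.02 threshold are illustrative choices, not validated detector settings.

```python
# Illustrative sketch only: real speech tends to contain regular low-energy
# gaps (breaths, pauses), while some synthetic voices do not. This toy
# heuristic reports how much of a clip falls in such quiet frames.
import numpy as np

def low_energy_ratio(samples: np.ndarray, sample_rate: int,
                     frame_ms: int = 25, threshold: float = 0.02) -> float:
    """Fraction of short frames whose RMS energy falls below `threshold`.

    `samples` is a mono float array scaled to [-1, 1]. A clip with almost
    no quiet frames may deserve closer scrutiny; the threshold here is an
    arbitrary value chosen for the demonstration.
    """
    frame_len = int(sample_rate * frame_ms / 1000)
    n_frames = len(samples) // frame_len
    frames = samples[: n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    return float(np.mean(rms < threshold))

if __name__ == "__main__":
    # Synthetic example: a constant tone has no breath-like gaps at all,
    # so the ratio comes out near zero.
    sr = 16_000
    t = np.linspace(0, 2.0, 2 * sr, endpoint=False)
    tone = 0.5 * np.sin(2 * np.pi * 220 * t)
    print(f"low-energy ratio: {low_energy_ratio(tone, sr):.2f}")
```

A production detector would of course learn such cues from data rather than hard-code them; the point is only that "no evidence of breath" is a measurable property of an audio signal.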
