Deepfakes aren’t just something you have to worry about your uncle falling prey to while browsing his Facebook newsfeed.
As AI continues to evolve, recruiters now have to worry about candidates using deepfake AI filters during interviews. Just ask Bettina Liporazzi, recruiting lead at Make, a remote digital studio that hires internationally (Liporazzi, for one, lives in Argentina).
She was contacted in mid-March by someone claiming to be an out-of-work software engineer. Though Make wasn’t hiring for their skill set, the candidate wanted to connect in case any relevant roles turned up in the future. After a quick glance at their résumé, Liporazzi invited them to an introductory call.
“I’ll be honest, their résumé looked okay. I didn’t look into it with much detail, especially with how legit the companies were,” Liporazzi told HR Brew. “They reached out to me saying, ‘I’m out of a job,’ and with the market as it is, you are a human being. You want to help people.”
But from that point on, something felt off. They never accepted the Google Meet invite, and though they joined the 10-minute interview (possibly via a private browser like Incognito, leading Google to flag them as suspicious), they did not turn on their camera, declining Liporazzi’s repeated requests to do so and claiming it was broken. After Liporazzi suggested rescheduling, they offered to restart their computer to fix the issue.
“I don’t know anyone who works remotely and whose camera is broken,” she said. “That’s something that, in the past, I would have said, ‘Okay, no problem. Let’s have a call without the video today.’ I cannot afford to do that. It’s not safe, unfortunately.”
Once they came on camera, things got fishier. Their eye and mouth movements appeared unnatural and not in sync with their speaking, and the edges of their face were distorted, as if they were using a sophisticated version of Snapchat’s face swap filter. These also happen to be signs of deepfake technology.
Liporazzi, who captured and shared this footage in a now-viral LinkedIn post, recalled reading about a similar situation on the platform.
“The filter was basically very similar; the eyes, the mouth, everything looked quite similar,” she said. “If I hadn’t seen that post before, I don’t know if I would have noticed.”
When she asked the candidate to wave their hand in front of their face—a tactic that can disrupt the filter—they immediately left the call. They’d only been on camera for 30 seconds, but she was shaken.
“It was very creepy,” she said. “You have no idea who you’re talking to.”
Zoom out. Deepfake job candidates are becoming more common, in part because job seekers are hiring professional interviewers to go through the hiring process for them, according to Ben Walker, COO of Glider, an AI skills-assessment and interview platform whose tools scan for AI use.
In less common cases, the people behind deepfakes may be bad actors. Last summer, for example, cybersecurity firm KnowBe4 shared that someone from North Korea had used a stolen US-based identity to pose as an IT worker throughout the hiring process. The firm caught them before they could access data or compromise systems, but others haven’t been as lucky.
The best recruiters can do is watch for deepfaker red flags. Many deepfakers target IT or tech roles, which are more likely to be remote and well paid.
“That’s where the big dollars are. That’s where it’s worth taking a bigger risk,” Walker said. “If I want to attempt something fraudulent, I’m going to make a lot more money in software engineering.”
Glider is currently developing tools to detect deepfakes, though Walker is still coaching clients on human interventions, such as asking candidates to turn on their cameras and watching for unnatural eye or mouth movements or voice distortions.
“Everyone needs to have a heightened awareness, unfortunately, that this is, I hate to say a new normal, but with the distributed remote workforce being a thing that’s probably not going to go away en masse, we have to brace for the worst,” he said.
Stay vigilant. After her deepfake encounter, Liporazzi noticed red flags she had previously missed, like outreach that appeared to be AI-generated and repeated: the same person had reached out two years prior. She also spotted suspiciously similar messages from another “candidate,” sent on the same days in 2023 and 2025.
Now, she warns other recruiters to be aware of the signs and best practices for remote interviews.
“We have to help our community and we have to talk to one another and say, ‘Hey, this is happening. Just be careful.’ Because if you have no idea what the intentions are or the real reason behind it, you don’t have an understanding of how risky it would be if you let that person in the organization,” she said.
This report was written by Paige McGlauflin and was originally published on HR Brew.
This story was originally featured on Fortune.com