This co-worker does not exist: FBI warns of deepfakes interviewing for tech jobs


A lot of people are worried about the prospect of competing with AI for their jobs, but this probably isn’t what they were expecting. The FBI has warned of an uptick in cases where “deepfakes” and stolen personal information are being used to apply for jobs in the U.S. — including faking video interviews. Don’t dust off the Voight-Kampff test just yet, though.

The shift to remote work is great news for lots of people, but, like any other change in methods and expectations, it is also a fresh playground for scammers. Security standards are being updated, recruiters are adapting, and of course the labor market is wild enough that hiring companies and applicants alike are trying to move faster than ever.

In the midst of these ongoing changes, today’s FBI public service announcement warns that deepfakes are once again being employed for nefarious purposes — in this case imitating people whose identities have been stolen to apply for jobs:

Complaints report the use of voice spoofing, or potentially voice deepfakes, during online interviews of the potential applicants. In these interviews, the actions and lip movement of the person seen interviewed on-camera do not completely coordinate with the audio of the person speaking. At times, actions such as coughing, sneezing, or other auditory actions are not aligned with what is presented visually.
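The mismatch the FBI describes boils down to two signals that should rise and fall together: how open the speaker's mouth is and how loud the audio is at that moment. A toy sketch of that check, assuming made-up per-frame numbers in place of the face tracking and audio feature extraction a real detector would need:

```python
# Toy illustration of the audio/visual sync signal the FBI complaint describes:
# when per-frame mouth openness doesn't track the audio's loudness envelope,
# the video may be dubbed or deepfaked. Real detectors use facial landmarks
# and spectral audio features; these lists are hypothetical stand-ins.

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def looks_out_of_sync(mouth_openness, audio_energy, threshold=0.5):
    """Flag a clip when lip motion and loudness barely correlate."""
    return pearson(mouth_openness, audio_energy) < threshold

# In a genuine clip the two signals move together...
print(looks_out_of_sync([0.1, 0.8, 0.9, 0.2, 0.7],
                        [0.2, 0.9, 0.8, 0.1, 0.6]))  # False

# ...while in a dubbed or deepfaked clip they drift apart.
print(looks_out_of_sync([0.1, 0.8, 0.9, 0.2, 0.7],
                        [0.9, 0.1, 0.2, 0.8, 0.1]))  # True
```

The threshold and frame values here are illustrative only; a production system would align the two signals in time and tolerate natural lag between articulation and sound.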

You can imagine the process from start to finish: A U.S. citizen has their license, name, address and other important info stolen in some hack or database leak. A deepfake can be created by just about anyone with a good picture or two of the person, then used to record a fake video of the target talking, or even to impersonate them live (with mixed results, as we’ve seen). Combined with seemingly legitimate application data, this could very well be enough for a rushed hiring manager to sign on a new contractor.

Why? There are plenty of reasons. Maybe the hacker can’t work in the U.S. but wants to be paid in dollars. Maybe they want access to the data visible only to employees of that company. Maybe it’s just a test run to develop tools to do this at a larger scale and land an even bigger cache of marketable data. As the FBI writes: “… some reported positions include access to customer PII, financial data, corporate IT databases and/or proprietary information.”

It could even be a nation-state intelligence or funding operation; North Korea has been observed using falsified credentials to land U.S. jobs, especially in the cryptocurrency sector where enormous thefts can be effected with few repercussions.


This is not the first time this sort of thing has been reported. Anecdotes of fake employees and co-workers have been around for years, and of course working under a false identity is one of the oldest tricks in the book. The twist here is the use of AI-powered imagery to get through the interview process.

Fortunately, the quality is not particularly convincing … for now. While deepfakes have in some ways become remarkably good, they’re a far cry from the real thing and humans are extremely good at spotting such things. Having 10 seconds of uninterrupted video that doesn’t trigger some kind of eye-narrowing by a viewer is hard enough — half an hour of live conversation seems impossible with current tools, assuming the interviewer is paying attention.

It’s disappointing that the FBI did not include any obvious best practices for avoiding this kind of scam, but it does note that background checks have identified stolen PII, and people have reported their identity, address, email, etc. being used without their knowledge.

And the fact is there’s not too much anyone can do about it. Someone whose identity has been stolen can only remain alert and be on the lookout for suspicious things like strange emails and calls. Small businesses are unlikely to be targeted because they don’t have much of value other than wages. Enterprises likely have fairly cumbersome hiring processes that involve traditional background checking.

If anything, it is perhaps startups and SaaS companies that are at the most risk: potentially lots of data or access to it but comparatively little security infrastructure compared with the enterprises they serve or are attempting to displace. That applies to hiring them to improve your security as well — startups get hacked constantly! It seems to be a rite of passage.

It’s probably too much to ask your interviewees to hold up today’s paper (unlikely anyone applying for a remote job in IT gets one delivered), but if you’re hiring in a potentially high-risk sector such as security or health tech, maybe just be a little more careful. Use strong encryption and modern access controls, and listen to security professionals. Don’t say the FBI didn’t warn you.
