Dive Brief:
- It takes as little as 70 minutes for a researcher with no image manipulation experience to create a fake job candidate that could pass for a real person on a video interview, according to a report published Monday by cybersecurity firm Palo Alto Networks.
- The vendor said manipulated videos or digital representations produced by machine learning techniques — also known as deepfakes — have been used to dupe companies into hiring malicious actors, including those working for states such as North Korea. Once hired, the actors could steal sensitive data, including customer and client information, experts have said.
- Employers can counter suspected fake interviewees by asking them to pass a hand over their faces or by watching for simple telltale signs, such as glitches during rapid head movements, sudden changes in lighting and delays between lip movements and speech, Palo Alto Networks said. HR teams also may consider tactics such as recording interviews, with consent, for forensic analysis or implementing a comprehensive identity verification process.
Dive Insight:
Deepfake candidates have emerged as a challenge for HR teams amid the proliferation of consumer-facing artificial intelligence tools. Federal agencies, including the FBI, have warned employers of the potential dangers of remote-work fraud for years, and the issue has gained urgency thanks in part to the targeted actions of states like North Korea.
Several major U.S. companies have inadvertently hired North Korean nationals as information technology workers, Cybersecurity Dive reported last year. One 2024 U.S. Department of Justice indictment alleged that an Arizona woman conspired with foreign individuals and entities, including actors tied to North Korea, to defraud U.S. companies of $6.8 million via a remote hiring scheme. The operation allegedly involved falsified employment eligibility forms and wage and benefits information for IT workers whose identities were stolen or borrowed from U.S. persons.
The FBI has offered a reward of up to $5 million for information leading to the disruption of activities intended to support North Korea, including the use of workers from the country to generate revenue.
Candidate fraud is likely to become a bigger problem for recruiters in the near future. In February, researchers at consulting firm Gartner predicted that as many as 1 in 4 job candidate profiles globally would be fake by 2028, while nearly one-third of recruitment teams would use AI agents to complete some activities, according to a report viewed by HR Dive. Palo Alto Networks noted that real-time deepfake technology can allow operators to interview for the same position multiple times using different fake personas.
Additionally, a survey of hiring managers by Resume Genius, a career website, found that 76% of respondents believe AI has made it harder to assess candidate authenticity.
Palo Alto Networks said HR teams may consider implementing checks such as document authenticity verification using automated forensic tools and ID verification. HR can also train recruiters and interviewing teams to spot suspicious patterns in video interviews, the firm continued, and get interviewers comfortable asking candidates to perform profile turns, hand gestures near the face or rapid head movements.
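Some of the cues the firm describes lend themselves to simple automated triage of recorded footage. As a rough illustration only, and not drawn from Palo Alto Networks' report, the Python sketch below scans a consensually recorded interview for abrupt frame-to-frame brightness jumps, a naive proxy for the "sudden changes in lighting" cue; the file path and threshold are hypothetical placeholders.

```python
# Minimal sketch, for illustration only: flag abrupt global lighting
# shifts in a recorded interview, a naive proxy for the "sudden changes
# in lighting" cue noted above. VIDEO_PATH and BRIGHTNESS_JUMP are
# hypothetical placeholders, not values from Palo Alto Networks' report.
import cv2
import numpy as np

VIDEO_PATH = "interview.mp4"   # consensually recorded interview footage
BRIGHTNESS_JUMP = 25.0         # mean-intensity delta (0-255) treated as suspicious

cap = cv2.VideoCapture(VIDEO_PATH)
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS metadata is missing
prev_mean = None
frame_idx = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Average pixel intensity of the grayscale frame approximates scene brightness.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    mean_brightness = float(np.mean(gray))
    if prev_mean is not None and abs(mean_brightness - prev_mean) > BRIGHTNESS_JUMP:
        print(f"Possible lighting discontinuity at {frame_idx / fps:.2f}s")
    prev_mean = mean_brightness
    frame_idx += 1

cap.release()
```

A heuristic this simple will produce false positives (a candidate switching off a lamp, for example), so it is suited only to flagging moments for human review alongside the identity verification steps described above.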