Researchers have come up with a surprisingly simple trick for catching anyone using live deepfake software on a video call: ask them to turn their head to the side.
According to a new blog post by AI software company Metaphysic, many contemporary deepfake programs aren’t great at recreating the profile of somebody’s face. As soon as these deepfakes’ heads turn 90 degrees to either side, the otherwise photorealistic illusion falls apart.
The trick could come in handy in the near future. Think of this as a potential CAPTCHA for deepfakes. As the Better Business Bureau and even the FBI recently warned us, scammers could easily make use of the technology to steal money or sensitive information.
The trick works because most deepfake models are trained on facial alignments that are overwhelmingly front-on. Profile shots are rare in the imagery available for training, simply because people seldom bother to capture them.
“The profile is a big problem for current deepfake technologies,” Siwei Lyu, professor of computer science and engineering at the University at Buffalo, told Metaphysic.
“Probably the best an algorithm can do is to roughly estimate the profile, particularly if the person is enacting various expressions, or taking requests from the other correspondent in a video call, or from an automated liveness detection system,” Lyu added. “In a case like that, profile estimation is going to be kind of a ‘guess’, even if you were to have some depth information from some of the more recent sensors in smartphones.”
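To make the idea of an automated liveness check concrete, here is a minimal sketch in Python of how a video-call system might verify that a caller has actually turned to a near-profile angle. It assumes a separate facial-landmark detector (such as dlib or MediaPipe) already supplies six 2D landmark positions per frame; the generic 3D face model, camera parameters, and the 70-degree threshold are illustrative assumptions, not details from Metaphysic's or Lyu's work.

```python
# Sketch of a head-turn liveness check: estimate head yaw from facial
# landmarks with OpenCV's solvePnP, then confirm the caller reached a
# near-profile angle at least once during the challenge.
import numpy as np
import cv2

# Approximate 3D coordinates (arbitrary units) of six landmarks on a
# generic head model: nose tip, chin, outer eye corners, mouth corners.
MODEL_POINTS = np.array([
    (0.0, 0.0, 0.0),        # nose tip
    (0.0, -63.6, -12.5),    # chin
    (-43.3, 32.7, -26.0),   # left eye, outer corner
    (43.3, 32.7, -26.0),    # right eye, outer corner
    (-28.9, -28.9, -24.1),  # left mouth corner
    (28.9, -28.9, -24.1),   # right mouth corner
], dtype=np.float64)


def estimate_yaw_degrees(image_points, frame_width, frame_height):
    """Roughly estimate head yaw (left/right turn) from six 2D landmarks."""
    # Crude pinhole-camera intrinsics: focal length approximated by frame width.
    focal = frame_width
    center = (frame_width / 2.0, frame_height / 2.0)
    camera_matrix = np.array([
        [focal, 0.0, center[0]],
        [0.0, focal, center[1]],
        [0.0, 0.0, 1.0],
    ], dtype=np.float64)
    dist_coeffs = np.zeros((4, 1))  # assume no lens distortion

    ok, rvec, _tvec = cv2.solvePnP(
        MODEL_POINTS, np.asarray(image_points, dtype=np.float64),
        camera_matrix, dist_coeffs, flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        return None

    rot_matrix, _ = cv2.Rodrigues(rvec)
    # Extract the rotation about the vertical axis (approximate yaw).
    yaw = np.degrees(np.arctan2(-rot_matrix[2, 0],
                                np.hypot(rot_matrix[0, 0], rot_matrix[1, 0])))
    return float(yaw)


def passed_profile_challenge(yaw_history, threshold=70.0):
    """Pass if the head turned to a near-profile angle at least once."""
    return any(abs(y) >= threshold for y in yaw_history if y is not None)
```

In practice, a system like this would collect a yaw estimate per frame while prompting the caller to turn, then combine the head-turn check with artifact detection on the profile frames themselves, since the head turn is only useful if someone (or something) is inspecting how the face holds up at that angle.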
The trick has loopholes, however. The entire head could, for example, be replaced with a photorealistic CGI model, and with 3D body-scanning technologies a scammer could conceivably swap their own likeness for somebody else's.
While deepfake technology may have made huge leaps in recent years, there are still some surprisingly easy ways to tell truth from fiction. Expect, however, that deepfake developers will eventually find a way around this trick as well.