In Short:
Tech companies are racing to detect real-time deepfakes, with Intel’s FakeCatcher, launched in 2022, leading the way by analyzing blood flow in faces. Researchers at NYU are developing CAPTCHA-like tests to stop AI bots in video calls. As deepfake technology advances, protecting yourself is vital: traditional spotting methods may soon become unreliable. Future improvements in AI detection could one day make video verification seamless.
Intel has taken significant strides in deepfake detection, having launched its FakeCatcher tool in 2022. The technology analyzes subtle variations in blood flow across a person’s face to determine whether video footage is authentic. However, the tool remains unavailable to the public.
Academic Research Initiatives
Alongside corporate efforts, academic researchers are exploring diverse methods to counter the deepfake threat. Govind Mittal, a PhD candidate in computer science at New York University, notes that deepfake generation has become so accessible that even a small amount of data can be abused for such purposes. “If I have 10 pictures of me on Instagram, somebody can take that. They can target normal people,” he remarked.
Emerging Solutions
Mittal’s research, conducted in collaboration with professors Chinmay Hegde and Nasir Memon, proposes a challenge-based approach: video CAPTCHA tests that verify participants are human before they join a video call, screening out AI-driven impostors.
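The challenge-response idea can be sketched in a few lines of Python. Note that the task list, latency threshold, and function names below are illustrative assumptions for the sake of the sketch, not the NYU team’s actual protocol:

```python
import secrets
import time

# Hypothetical challenge set; real video CAPTCHAs would use richer tasks
# that are easy for humans but hard for a live deepfake pipeline to render.
CHALLENGES = [
    "turn your head to the left",
    "cover part of your face with your hand",
    "hold up three fingers",
]

def issue_challenge():
    """Pick an unpredictable task so an attacker cannot pre-render a response."""
    return secrets.choice(CHALLENGES), time.monotonic()

def verify_response(issued_at, responded_at, passed_check, max_latency=5.0):
    """Accept only if the task was completed correctly and quickly.

    A real-time deepfake system adds rendering latency and tends to break
    on unusual inputs (occlusion, profile views), so combining a correctness
    check with a response deadline raises the bar for impostors.
    """
    return passed_check and (responded_at - issued_at) <= max_latency
```

The key design point is unpredictability: because the challenge is drawn at random when the call starts, an attacker cannot prepare a convincing response in advance.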
Collaborative Efforts in Deepfake Mitigation
Reality Defender is working to improve the detection capabilities of its models. According to Coleman, access to sufficient data remains a formidable challenge, a sentiment echoed by many AI startups, and he expects to form new partnerships in the near future to close these gaps. Notably, following a deepfake voice call impersonating US President Joe Biden, the AI-audio startup ElevenLabs partnered with Reality Defender to combat potential misuse of its technology.
Advice for Consumers
To safeguard against video call scams, individuals are advised to remain vigilant. As with the recommendations against AI voice scams, it is crucial not to rely solely on your own ability to spot deepfakes: the technology is advancing continuously, and the telltale signs recognized today may become less reliable as generation models improve.
The Future of Video Authentication
As Coleman aptly noted, “We don’t ask my 80-year-old mother to flag ransomware in an email,” reflecting the need for user-friendly solutions in cybersecurity. If AI detection technology continues to advance and proves reliable, real-time video authentication could become as commonplace as the malware scanners that run quietly in the background of our email systems.