Deepfake Detection For Forensic Analysis
If you want to skip the introduction and see our deepfake detection solution for forensic analysis in action, please follow this link: deepfake detection tool for forensic analysis in action
The rise of synthetic media and deepfakes is forcing us towards an important and unsettling realization: our historical belief that video and audio are reliable records of reality is no longer tenable.
We use the word “historical” because this belief is an accident of how technology has evolved. Even today, we trust a phone call from a friend or a video clip featuring a well-known politician simply because we recognize their voice or face. Until recently, no commonly available technology could synthesize such media with comparable realism, so we treated it as authentic by definition. With the development of synthetic media and deepfakes, this is no longer the case. Every digital communication channel our society is built upon, whether audio, video, or even text, is at risk of being subverted.
Since its foundation in 2018, Sensity AI has been dedicated to researching the evolving capabilities and threats of deepfakes, providing crucial intelligence for enhancing our detection technology for forensic analysis. In our daily work, we collect insights from our intelligence infrastructure that reveal the real-world impact of deepfakes. In doing so, we provide a continuing overview of the current state of deepfakes, cutting through some of the hyperbole surrounding the topic.
Our work has revealed that the deepfake phenomenon is growing rapidly online, with the number of deepfake videos almost doubling over the last seven months to more than 200,000. This increase is supported by the growing commodification of tools and services that lower the barrier for non-experts to create deepfakes. Perhaps unsurprisingly, we observed a significant contribution to the creation and use of synthetic media tools from web users in China and South Korea, even though all of our sources come from the English-speaking Internet.
Deepfakes are also making a significant impact on the political and judicial spheres. Two landmark cases from Gabon and Malaysia that received minimal Western media coverage saw deepfakes linked to an alleged government cover-up and a political smear campaign. One of these cases was related to an attempted military coup, while the other continues to threaten a high-profile politician with imprisonment. Taken together, these examples are perhaps the most powerful indications of how deepfakes are already destabilizing political processes.
Outside of politics, the weaponization of deepfakes and synthetic media is reshaping the cybersecurity landscape, enhancing traditional cyber threats and enabling entirely new attack vectors. Notably, 2019 saw reports of cases in which synthetic voice audio and images of non-existent, synthetic people were used to enhance social engineering attacks against businesses and governments.
Deepfakes are here to stay, and their impact is already being felt on a global scale. We hope this report stimulates further discussion on the topic, and emphasizes the importance of developing a range of countermeasures to protect individuals and organizations from the harmful applications of deepfakes.
For these reasons, Sensity provides cutting-edge deepfake detection solutions for forensic analysis. Deepfakes will remain a contested subject at the legal level, so it will be crucial for courts and lawyers to rely on robust and comprehensive deepfake detection solutions for forensic analysis.