Deepfake Detection API: The automated solution for identifying fake faces

In this post, we’re proud to share our deepfake detection API, with this edition focusing on automating the identification of fake faces. Keep an eye out for future posts, where we will explore more of the API’s detection capabilities.


Last week, Reuters news agency published a story on Oliver Taylor (pictured above), a British student at the University of Birmingham who had written half a dozen op-eds on Jewish affairs, including for The Jewish Times and the Times of Israel.

However, the Reuters investigation focused not on Taylor’s writing, but on the fact that he is not a real person. The persona was found to be entirely fabricated, and the key giveaway was the ‘profile picture’ provided for Taylor: the image is synthetically generated, and the man it depicts does not exist.

The rise of fake faces

This case is the latest in a growing number of cases where bad actors have used highly realistic synthetic images of non-existent people for malicious purposes:

- The Daily Beast exposed a network of 19 fake authors, each using a synthetic profile picture of a non-existent person, who wrote op-eds for dozens of conservative news publications.

- A Facebook investigation identified a global network of fake accounts engaged in inauthentic activity, many of which used synthetic profile pictures.

- A Twitter account using a synthetic profile picture posed as a Bloomberg journalist and attempted to extract information from Tesla short-sellers.

- A fake LinkedIn profile using a synthetic profile picture was found interacting with US civil servants and politicians, in what experts believe could be part of a state espionage campaign.

As with Oliver Taylor, many of these images succeeded in deceiving at least some of their targets before being discovered. At Sensity, we have seen a sharp increase in the number of these images being deployed as part of malicious activities, with notable examples contributing to fraud, espionage, and coordinated disinformation operations. The potential damage these activities could cause to individuals and organisations is significant, and, unlike some other forms of deepfakes, these images are already an established problem.

Why now?

So why have we seen this rapid increase in the exploitation of fake faces for malicious uses since 2019?

One key reason is that these synthetic images are now realistic enough to deceive people who aren’t familiar with synthetic media. Nvidia’s release of the generative model StyleGAN in 2018 saw the realism of generated facial images significantly improve on previous techniques, with StyleGAN2 enhancing this image quality even further in 2019. While visual flaws still exist in StyleGAN2 images, it is only a matter of time before many of these are also trained out, making it near impossible to reliably identify fake images with the human eye.

This enhanced realism is combined with the ease of accessing and deploying the generated images. The latest generative models have been open-sourced, enabling technically proficient users to download the models and build their own interfaces for practical usage. This has led to websites such as thispersondoesnotexist.com, where refreshing the page generates a brand-new image, as well as commercial services selling images that can be customized by age, gender, ethnicity, facial expression, and background. The result is that, within a few clicks, bad actors can download and deploy realistic images of fake faces via social media, dating apps, and other channels as part of large-scale operations.


Commercial companies are extending the capabilities of generative models for consumer services. Image credits: Generated Media, Inc. and Rosebud AI, Inc.

 

Deepfake Detection API at scale

As the first-to-market deepfake detection product, Sensity’s RESTful API provides the leading automated solution for detecting deepfakes, including the fake-face images generated by StyleGAN. It is powered by Sensity’s proprietary deep learning technology, which identifies the “unnatural fingerprints” that generators leave in an image’s pixels. If a face is present in the input image, the API reports whether it was generated by a GAN, together with a confidence score for the analysis. The detector can also attribute a fake image to a specific GAN implementation, e.g. StyleGAN2.



The output of Sensity’s API for the above image of Oliver Taylor, returning essentially perfect confidence that the image is GAN-generated and can be attributed to a StyleGAN2 generator.

The API is designed to be seamlessly integrated within a customer’s multimedia platform, enabling automated detection and filtering at scale. Users can submit collections of images by URL or file upload and obtain analysis results in under a second per image.
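As a sketch of what such an integration might look like, the snippet below submits an image URL and summarizes the result. Note that the endpoint, header names, and response fields here are illustrative placeholders, not Sensity’s documented API; consult the official API reference for the real schema.

```python
import json
from urllib import request

# Hypothetical endpoint and credentials -- placeholders, not the real API.
API_URL = "https://api.example.com/v1/detect"
API_KEY = "YOUR_API_KEY"


def detect_by_url(image_url: str) -> dict:
    """Submit an image URL for analysis (illustrative request shape)."""
    payload = json.dumps({"url": image_url}).encode()
    req = request.Request(
        API_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )
    with request.urlopen(req) as resp:
        return json.load(resp)


def summarize(result: dict) -> str:
    """Turn a (hypothetical) response into a one-line verdict."""
    if not result.get("face_detected"):
        return "no face found"
    verdict = "GAN-generated" if result["is_gan"] else "likely authentic"
    line = f"{verdict} (confidence {result['confidence']:.2%})"
    if result.get("generator"):
        # Attribution to a specific generator implementation, when available.
        line += f", attributed to {result['generator']}"
    return line


# Mocked response resembling the Oliver Taylor case described above:
sample = {"face_detected": True, "is_gan": True,
          "confidence": 0.999, "generator": "StyleGAN2"}
print(summarize(sample))
# → GAN-generated (confidence 99.90%), attributed to StyleGAN2
```

For batch filtering, a platform would call `detect_by_url` (or the file-upload equivalent) for each incoming image and route anything flagged as GAN-generated above a chosen confidence threshold to review or rejection.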

Request API Access


 
