10 things you should know about deepfakes if you work in a defense or intelligence agency in 2024


Deepfakes and AI have become a ubiquitous reality in all aspects of daily life, permeating the web for purposes both legal and illegal. The weaponization of these technologies against national security, and their overall impact on intelligence and counterintelligence operations, is expected to increase significantly in the coming years.

Here are 10 facts about deepfakes that members of the defense and intelligence community should be aware of.

01 Deepfakes and generative AI will grow exponentially

In a 2023 report, the National Security Agency (NSA) and partner U.S. federal agencies highlighted the threats that deepfakes pose to the U.S. Government and critical infrastructure organizations. The report predicts a rapid evolution of deepfake and synthetic media technologies, with an estimated 35% annual market growth. By 2030, the total market value is projected to surpass $100 billion, a scale that will steadily drive down the cost of these technologies. This trend is already visible, notably in the proliferation of cheapfakes.
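
As a back-of-the-envelope check on those figures, compounding 35% annual growth from an assumed 2023 base of roughly $12.5 billion (an illustrative value, not one taken from the report) does cross the $100 billion mark by 2030:

```python
# Illustrative compound-growth check: a 35% CAGR from an assumed
# ~$12.5B base in 2023 (hypothetical starting value, not a figure
# from the NSA report) crosses $100B by 2030.

base_value_usd_bn = 12.5   # assumed 2023 market size, in billions of USD
growth_rate = 0.35         # 35% annual growth cited in the report

value = base_value_usd_bn
for year in range(2023, 2031):
    print(f"{year}: ${value:,.1f}B")
    value *= 1 + growth_rate   # apply one year of compound growth
```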

02 Deepfake detection tools will also evolve

The exponential surge of AI technologies will also drive the rapid development of the deepfake detection industry. Detection vendors are expected to offer a diverse range of products designed to address the distinct challenges posed by manipulated video, audio, and images, including passive forensic techniques and authentication methods tailored to the needs of public and private organizations. While government, defense, and intelligence agencies will have a plethora of detection software options, the challenge lies in selecting the solutions that best meet their unique requirements.
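
One family of passive forensic techniques looks for statistical artifacts that generators leave behind. Below is a minimal sketch of one such heuristic, assuming a grayscale-converted image and a hypothetical input file; it is a toy illustration, not a production detector. GAN-style upsampling often inflates energy in the high-frequency part of an image's Fourier spectrum:

```python
# Toy passive-forensics heuristic: measure how much spectral energy
# sits outside the low-frequency core of the image. Unusually high
# values can flag a file for closer forensic review.

import numpy as np
from PIL import Image

def high_freq_energy_ratio(path: str) -> float:
    """Share of Fourier-spectrum energy outside the low-frequency core."""
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    r = min(h, w) // 8  # size of the "low-frequency" square at the centre
    low = spectrum[cy - r:cy + r, cx - r:cx + r].sum()
    return 1.0 - low / spectrum.sum()

# Any decision threshold is data-dependent and must be calibrated
# against a reference set of genuine camera images.
score = high_freq_energy_ratio("suspect_frame.png")  # hypothetical file
print(f"high-frequency energy ratio: {score:.3f}")
```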


03 Deepfakes can have a huge impact on the political process

The dissemination of deepfakes by adversarial powers can disrupt the political processes of entire nations by misleading the public and influencing collective decisions. One of the latest examples occurred during the 2023 Slovak parliamentary elections, when an allegedly manipulated audio recording implicating Progressive Slovakia leader Michal Šimečka in discussions about election rigging went viral just before election day. Despite denials and indications of AI manipulation, the recording's timing within a pre-election media silence period, and its audio-only format, exploited loopholes in election rules and in Meta's manipulated-media policy. The election ended in a narrow victory for Progressive Slovakia's rival party, raising concerns about the episode's actual impact on the outcome and highlighting the vulnerability of European elections to deepfake manipulation of public opinion.

04 The impact of deepfakes on privacy

Apart from potentially disrupting the political process, the proliferation of deepfakes has created a series of new legal challenges, particularly around privacy. This is primarily due to AI's capability to replicate an individual's physical characteristics, potentially leading to identity theft or reputational damage. Such concerns extend beyond celebrities, who are frequently the primary targets of deepfakes, to ordinary citizens as well, whose images can be exploited in various fraudulent schemes, such as phishing attempts, increasing their vulnerability.

05 The growing effectiveness of deepfake audio

Deepfakes don't need to be particularly sophisticated to be effective. This is especially true of AI-generated voices, as shown in a 2023 study published in PLOS ONE, in which researchers asked a large group of participants to identify deepfake audio across two experiments, one in English and one in Mandarin. Regardless of the language, listeners correctly spotted the deepfakes only 73% of the time, and "training" them with additional deepfake samples did not meaningfully improve their performance. The researchers therefore underlined the need for strong deepfake detection tools, given that speech synthesis algorithms are set to become far more convincing in the future.
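
Since even trained listeners plateau well below reliable accuracy, automated detectors are the natural complement. The sketch below shows a classical machine-learning baseline, assuming a small labeled corpus of real and synthetic clips; the file names are hypothetical, and this is not the method of the PLOS ONE study:

```python
# Minimal synthetic-speech detector baseline: mean MFCC features fed
# into logistic regression. Purely illustrative; real detectors use
# far richer features, models, and properly held-out evaluation data.

import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def mfcc_features(path: str) -> np.ndarray:
    """Summarize a clip as the per-coefficient mean of its MFCCs."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

real_files = ["real_0.wav", "real_1.wav"]   # hypothetical labeled corpus
fake_files = ["fake_0.wav", "fake_1.wav"]

X = np.stack([mfcc_features(p) for p in real_files + fake_files])
y = np.array([0] * len(real_files) + [1] * len(fake_files))

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("P(fake):", clf.predict_proba(X)[:, 1])  # scores on training clips
```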

06 Deepfakes and text to video models

Among the most promising generative AI technologies are text-to-video models, which could become a significant source of deepfake videos in the future. A prominent example, which experts believe will have widespread implications across almost every industry, is Sora, an AI application unveiled by OpenAI in February 2024 that follows written instructions to turn text prompts into video. According to experts, text-to-video technology combined with voice cloning is set to become the new frontier of AI manipulation, opening up a whole new series of challenges in deepfake detection.

07 Deepfakes and critical infrastructure 

One of the many goals of intelligence agencies is to protect critical infrastructure such as energy, transport, financial networks, and internet connectivity. All of these assets are potentially vulnerable to a variety of deepfake and AI attacks, ranging from spreading disinformation through manipulated communication channels to impersonating the leadership of these infrastructures. Manipulated content, whether video or audio, could exploit weaknesses in security systems. One such attempt occurred in January 2023, when the UK National Cyber Security Centre warned of targeted spear-phishing campaigns against British organizations and individuals carried out by cyber actors based in Iran.


08 Deepfakes and satellite imagery

In recent war scenarios, AI has often been used to understand frontline dynamics in the absence of visual evidence. For instance, amid the conflict in Ukraine, OSINT (open-source intelligence) experts have paired AI-generated images with detailed reports to provide a better understanding of events, such as the construction of a "30Km freight train wagon wall" in the Donetsk region. On the other hand, the proliferation of manipulated geographic and satellite imagery could harm national security, raising safety risks as well as impeding humanitarian activities, where reliance on fake imagery and data can be detrimental. In this respect, various intelligence agencies and researchers are collaborating to develop machine-learning tools to detect fake satellite images.
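
As a rough illustration of what such machine-learning tools involve, the sketch below fine-tunes an off-the-shelf image classifier on labeled real and fake satellite tiles. The dataset layout and training details are assumptions for the example, not any agency's actual pipeline:

```python
# Minimal sketch: fine-tune a ResNet-18 to classify satellite tiles
# as real vs. fake. Expects a hypothetical folder layout of
# tiles/real/*.png and tiles/fake/*.png (torchvision ImageFolder format).

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

data = datasets.ImageFolder("tiles", transform=tf)
loader = DataLoader(data, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # real-vs-fake head

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:   # one illustrative training epoch
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
```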

09 Deepfakes can jeopardize the gathering of relevant information

The intelligence community is designed to collect and process vast amounts of data crucial to national defense, drawing on both electronic and human sources. This critical and costly activity is vulnerable to adversarial entities seeking to hinder it by flooding the field with false or irrelevant information. Deepfakes and AI-generated material are effective instruments for this purpose, generating significant amounts of "noise" that distracts collectors from pertinent information.

10 Deepfakes can erode the trust in institutions

Trust between intelligence agencies and the public is essential to their survival. Holding accurate and reliable information is key for intelligence operators, and exposure to deepfake campaigns, along with potential data breaches, threatens their credibility. When the public perceives a vulnerability to manipulated content, the agencies' ability to advise and warn on national security matters is undermined. This potential impact on public trust underlines the importance of developing robust strategies for detecting and countering deepfake threats within the intelligence community.

AI-generated images are on the rise and can be challenging to discern. Don't be fooled.