Deep fake technology is a recent development that allows artificial intelligence (AI) to render near-realistic images and videos in which individuals are placed into environments, situations, or circumstances that never existed. Deep fake technology can be utilized for “entertainment purposes; beneficial purposes; or nefarious purposes” (Harris & Sayler, 2023). Additionally, deep fakes can be used for “education, art, and . . . autonomy” (Chesney & Citron, 2019, p. 1769).
Yet deep fake technology also poses a significant threat to U.S. foreign policy. In an age of misinformation and government propaganda, deep fakes form a volatile compound capable of compromising objective and essential communications. Deception, however, has been an instrument of malice throughout recorded history; its existence must therefore be presupposed, and necessary precautions must be taken to deter it (2 Thess. 2:3–4).
Deep fakes use generative adversarial networks (GANs), which pit two distinct “neural networks” against each other (Harris & Sayler, 2023). In one corner, the generator replicates “the properties of the original data set”; in the opposing corner, the discriminator identifies “counterfeit data” (Harris & Sayler, 2023). When the generator and the discriminator come head-to-head, they “compete—often for thousands or millions of iterations—until the generator improves its performance such that the discriminator can no longer distinguish between real and counterfeit data” (Harris & Sayler, 2023).
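The adversarial loop Harris and Sayler describe can be illustrated with a toy sketch. The hypothetical Python example below replaces the neural networks with simple one-dimensional statistics: the “generator” tunes two parameters until its counterfeits match the real data’s mean and spread, which is the gap the “discriminator” exploits. All names and numbers here are illustrative assumptions, not part of any real deepfake system.

```python
import random

random.seed(0)

REAL_MU, REAL_SIGMA = 4.0, 1.25          # the "original data set"

def real_batch(n=64):
    # Samples drawn from the fixed "real" distribution.
    return [random.gauss(REAL_MU, REAL_SIGMA) for _ in range(n)]

class Generator:
    def __init__(self):
        self.mu, self.sigma = 0.0, 3.0   # starts far from the real data
    def batch(self, n=64):
        return [random.gauss(self.mu, self.sigma) for _ in range(n)]

def mean_std(xs):
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / len(xs)
    return m, v ** 0.5

def discriminator_gap(real, fake):
    # The "discriminator" flags counterfeits by how far their summary
    # statistics sit from the real data's statistics.
    rm, rs = mean_std(real)
    fm, fs = mean_std(fake)
    return abs(rm - fm) + abs(rs - fs)

gen, lr = Generator(), 0.05
for step in range(2000):                  # thousands of competitive iterations
    rm, rs = mean_std(real_batch())
    fm, fs = mean_std(gen.batch())
    # Generator update: nudge parameters to shrink the gap the
    # discriminator exploits (a stand-in for gradient descent).
    gen.mu += lr * (rm - fm)
    gen.sigma = max(gen.sigma + lr * (rs - fs), 0.1)

gap = discriminator_gap(real_batch(512), gen.batch(512))
print(f"final gap: {gap:.3f}")            # small gap: counterfeits now blend in
```

Real GANs replace these parameter nudges with gradient descent over neural-network weights, but the dynamic is the same: the loop runs until the discriminator’s gap shrinks toward zero and counterfeit data becomes indistinguishable from real data.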
DEFENSE
It is rationally defensible to argue that deep fakes do not pose a significant threat to U.S. foreign policy (USFP), as counterfeit depictions of individuals “can often be detected without specialized detection tools” (Harris & Sayler, 2023). Moreover, “[d]eepfakes have a number of worthy applications” (Chesney & Citron, 2019, p. 148).
Arguably, deep fakes are free speech, and the government is therefore constitutionally barred from infringing upon their creation. As a matter of logic, the argument that “the fact that deepfake content production can theoretically take place in tandem with blatantly immoral and illegal acts cannot be used to demonstrate the illegitimacy of Deepfake content production itself” is sound (Harris & Sayler, 2023). Domestically, defamation law can be used to remove deep fakes that compromise the integrity of an individual’s identity or character (Chesney & Citron, 2019, p. 1791).
Social media platforms can “expand the means of labeling and/or authenticating content” (Harris & Sayler, 2023). Nor should the nefarious potential of deep fakes deter the technology’s advancement, since detection research is already under way: “DARPA has had two programs devoted to the detection of deep fakes: Media Forensics (MediFor) and Semantic Forensics (SemaFor)” (Harris & Sayler, 2023).
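As a concrete illustration of what “authenticating content” can mean in practice, one standard cryptographic building block is a keyed hash over a media file’s bytes. The sketch below is a minimal, hypothetical Python example under a shared-secret assumption; real provenance systems (such as C2PA content credentials) instead use public-key signatures so that anyone can verify without holding a secret.

```python
import hashlib
import hmac

# Hypothetical shared secret, for illustration only; never hard-code
# real keys in source code.
KEY = b"publisher-secret-key"

def label(content: bytes) -> str:
    # Compute an HMAC-SHA256 tag over the raw file bytes.
    return hmac.new(KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, tag: str) -> bool:
    # Constant-time comparison guards against timing attacks.
    return hmac.compare_digest(label(content), tag)

video = b"\x00\x01 raw video bytes stand-in"
tag = label(video)
print(verify(video, tag))                    # untouched content passes
print(verify(video + b"tampered", tag))      # any alteration is detected
```

The design point for platforms is that the tag travels with the content: if even one byte of the video is altered after labeling, verification fails, which makes post-publication tampering detectable.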
Additionally, the World Economic Forum reports that more than half of all cyberattacks involve “credentials of former employees whose accounts hadn’t been disabled,” with millions of credentials for sale on the dark web (WEForum, 2024).
CRITIQUE
Algorithms wield significant influence in the field of foreign affairs, including “distortion[s] of democratic discourse” and “damage to international relations” (Chesney & Citron, 2019, p. 1777). A careful balance must therefore be achieved to ensure resilient relations with other world leaders. Sam Gregory of the human rights group Witness warns that deep fakes produce what is known as the “liar’s dividend,” whereby it becomes progressively “easy to claim a true video is falsified and place the onus on people to prove it’s authentic” (Allyn, 2022).
Problems. Deep fakes can be weaponized to embarrass or blackmail foreign leaders, accuse them of war crimes, declare war, or depict a false surrender (Harris & Sayler, 2023). Such weaponization bears the potential to escalate tensions between nations, ultimately inducing conflict and prolonging it. Anonymous users can access “freely available software” and “rent processing power through cloud computing” (Harris & Sayler, 2023). Moreover, deep fakes are becoming “increasingly realistic, rapidly created, and cheaply made” (Harris & Sayler, 2023). To remedy this problem, communication lines need to remain open and diplomacy must be observably consistent.
Limits. For now, the artifacts of deep fakes can often still be observed without additional technology, so detection remains possible, though the technology is advancing quickly. In 2022, “a rendering of the Ukrainian president [Zelensky] appear[ed] to tell his soldiers to lay down their arms and surrender the fight against Russia” in a “deepfake that ran about a minute long” (Allyn, 2022). In this instance, users were able to detect the artifacts and discern the deception; specifically, Zelensky’s “accent was off and . . . his head and voice did not appear authentic upon close inspection” (Allyn, 2022). But the detection of deep fakes still requires “close inspection,” and as their quality increases, detection may come to require specialized technology. Soon, a declaration of war, or of surrender, will need to be confirmed as legitimate; worse, reports of offensive attacks must be verified as authentic before a nation retaliates defensively.
The cessation of deep fake technology is unviable. To ban the creation of deep fakes outright, the U.S. government would have to monitor and scan every piece of content uploaded to the internet by American citizens, and such open access to personal information is a concept foreign to America. Instead, deep fake detection software must be focused explicitly on USFP, and diplomacy must be relied upon as a primary source of information, beyond digital images and video. The president’s direct access to world leaders is essential to preclude the impacts of deep fake technology.
Gaps. The World Economic Forum (WEF) describes deepfake detection software as fighting “fire with fire,” that is, “defeating deepfake and AI technologies with rapidly advancing biometric technology” (WEForum, 2024). Scripture decrees that “[m]any will come in my name, saying, ‘I am he!’ and they will lead many astray. And when you hear of wars and rumors of wars, do not be alarmed. This must take place, but the end is not yet” (Mark 13:6–7, ESV).
Similarly, “algorithm-based detection tools could lead to a cat-and-mouse game, in which the deep fake generators are rapidly updated to address flaws identified by detection tools” (Harris & Sayler, 2023). Therefore, to account for this gap, new technologies must be established and normalized to assist human detection and best determine what is real (Mark 13:5–7). Laurie Harris and Kelley Sayler predict that “the sophistication of the technology is rapidly progressing to a point at which unaided human detection will be very difficult or impossible” (Harris & Sayler, 2023). Thus, “the current disinformation wars may soon look like the propaganda equivalent of the era of swords and shields” (Chesney & Citron, 2019, p. 148).
CONCLUSION
In sum, U.S. foreign policy must consider the immediate effects of deep fake technology and work to mitigate its harms without infringing on individual civil rights. This effort will likely need to begin at the local, regional, and national levels, sounding the alarm that this technology exists and preparing citizens for a potential encounter with deep fakes.
Bibliography
Allyn, B. (2022, March 16). A deepfake video showing Volodymyr Zelenskyy surrendering worries experts. NPR. https://www.npr.org/2022/03/16/1087062648/deepfake-video-zelenskyy-experts-war-manipulation-ukraine-russia
Chesney, B., & Citron, D. (2019). Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security. California Law Review, 107(6), 1753–1820. https://www.jstor.org/stable/26891938
Chesney, R., & Citron, D. (2019). Deepfakes and the New Disinformation War: The Coming Age of Post-Truth Geopolitics. Foreign Affairs, 98(1), 147–155. https://www.jstor.org/stable/26798018
Harris, L. A., & Sayler, K. M. (2023). Deep fakes and national security (CRS In Focus IF11333). Congressional Research Service. https://www.congress.gov/crs-product/IF11333
Mingst, K. A., & McKibben, H. E. (2021). Essentials of international relations (9th ed.). W. W. Norton. Kindle edition.
Sturino, F. S. (2023). Deepfake Technology and Individual Rights. Social Theory and Practice, 49(1), 161–187. https://www.jstor.org/stable/48747289
World Economic Forum (WEForum). (2024, January). In an increasingly fake world, biometrics technology can help you prove your identity. https://www.weforum.org/stories/2024/01/in-an-increasingly-fake-world-biometrics-technology-can-help-you-prove-your-identity/

