When International Law Meets Deepfakes: The Coming Crisis of Evidence
AI-generated synthetic media threatens to undermine the evidentiary foundations of international justice. As deepfakes become indistinguishable from reality, courts face an unprecedented challenge: how do you prosecute war crimes when seeing is no longer believing?
There was a time when video evidence carried inherent authority. A photograph could be contested, but a moving image, especially one showing events in real time, held a credibility that few other forms of proof could rival. That era is ending. The rapid rise of AI-generated deepfakes now threatens to destabilise one of the most fundamental pillars of international justice: the reliability of evidence.
Deepfakes already circulate widely in conflict zones and on online platforms: fabricated videos depict statements, attacks, or admissions that never occurred. In emotionally charged contexts, such material travels quickly, shaping perceptions long before forensic verification is possible. And whilst expert analysis can often detect manipulation, the pace of misinformation outstrips the ability of courts and institutions to respond.
The legal challenge is profound. International courts rely heavily on authenticated audiovisual materials, especially in cases involving mass atrocities, state responsibility, or violations of humanitarian law. If these evidentiary foundations become unstable, accountability becomes harder to achieve. How can investigators distinguish genuine footage from AI-fabricated content? How should courts weigh evidence when authenticity itself is uncertain?
The Erosion of Evidentiary Trust
Sam Gregory, programme director at WITNESS, an organisation specialising in using video for human rights advocacy, has warned repeatedly about the dual crisis facing visual evidence. In a 2023 article for the Harvard Human Rights Journal, Gregory noted that we face both the "liar's dividend", where real footage is dismissed as fake, and the proliferation of actual synthetic media designed to deceive. This creates what he describes as a "truth decay" that undermines the entire evidentiary ecosystem.
The problem is not merely theoretical. During the ongoing conflict in Ukraine, deepfake videos purportedly showing President Zelensky calling for surrender circulated on social media. Whilst quickly debunked, the incident demonstrated how synthetic media could be weaponised during active hostilities. Similarly, fabricated images from Gaza have muddied waters already turbulent with contested narratives. As Hany Farid, a digital forensics expert at the University of California, Berkeley, observed in his 2023 testimony to the US Senate, the technology to create convincing deepfakes is now accessible to anyone with a laptop and an internet connection.
This accessibility marks a sharp departure from earlier eras of propaganda. Previously, sophisticated video manipulation required state-level resources or specialised equipment. Now, open-source tools allow anyone to generate convincing synthetic media within hours. The democratisation of deception has arrived.
International Law's Inadequate Arsenal
International courts, from the International Court of Justice to the International Criminal Court, have developed elaborate procedures for authenticating evidence. These protocols, however, were designed for an analogue world. They assume that whilst evidence might be forged, such forgery would leave detectable traces or require significant resources that would limit its spread.
Professor Rebecca Hamilton of American University's Washington College of Law has argued that international criminal law faces a fundamental "epistemic crisis" as synthetic media becomes indistinguishable from reality. In her 2024 article for the European Journal of International Law, Hamilton contends that the Rules of Procedure and Evidence at the ICC, last substantially revised in 2013, contain no specific provisions for AI-generated content. The court can question authenticity, certainly, but it lacks systematic protocols for digital forensics or technical verification of audiovisual materials.
The UN Human Rights Council has begun to grapple with these issues. In a 2024 report, the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression noted that deepfakes pose "a fundamental threat to the integrity of information ecosystems and democratic discourse". Yet the report stopped short of proposing concrete legal mechanisms for verification, instead calling for further study and multi-stakeholder dialogue.
This cautious approach reflects genuine uncertainty. No international consensus exists on how to regulate synthetic media, much less how to authenticate evidence in legal proceedings. Different jurisdictions are experimenting with varied approaches, from mandatory disclosure requirements for AI-generated content to blockchain-based authentication systems. The fragmentation itself creates opportunities for bad actors to exploit jurisdictional gaps.
Towards New Evidentiary Doctrines
The law will need new doctrines, potentially including AI provenance standards, technological chain-of-custody markers, and independent digital forensics units embedded within international organisations. The integrity of future trials may depend on these innovations.
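To make the idea of a technological chain-of-custody marker concrete, the sketch below records each handling of an evidence file in a hash-linked log, so that any later alteration of the file or of the log itself breaks the chain. It is a minimal illustration in Python; the field names and structure are assumptions for the example, not any court's or investigative body's actual standard.

```python
import hashlib
import json
from datetime import datetime, timezone

def file_digest(path: str) -> str:
    """Return the SHA-256 digest of an evidence file."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def append_custody_entry(log: list, path: str, actor: str, action: str) -> dict:
    """Append a tamper-evident entry: each record hashes the previous one,
    so editing any earlier record invalidates everything that follows."""
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,                  # e.g. "field investigator"
        "action": action,                # e.g. "collected", "transferred"
        "file_sha256": file_digest(path),
        "prev_entry_hash": prev_hash,
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry

def verify_chain(log: list) -> bool:
    """Recompute every hash link; a single altered record breaks the chain."""
    prev = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        if body["prev_entry_hash"] != prev:
            return False
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if recomputed != entry["entry_hash"]:
            return False
        prev = entry["entry_hash"]
    return True
```

The particular format matters less than the property it delivers: tampering with either the file or its handling history becomes detectable without having to trust any single custodian.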
Several proposals have emerged. Dr Yvonne McDermott Rees of Swansea University has advocated for the creation of a standing digital evidence laboratory within the ICC Office of the Prosecutor. In a 2023 article for the Journal of International Criminal Justice, she argues that investigators need real-time access to forensic tools and expertise, rather than relying on external consultants who may take months to produce reports. Her proposal includes training prosecutors in basic digital verification and establishing clear standards for the admissibility of synthetic media evidence.
Meanwhile, technologists have proposed cryptographic solutions. The Coalition for Content Provenance and Authenticity (C2PA), an industry group including Adobe, Microsoft, and the BBC, has developed technical standards for embedding metadata in digital media at the moment of creation. This "content credentials" system would theoretically allow courts to trace the origin of any image or video, though widespread adoption remains uncertain. As Bruce Schneier, a cryptographer and security expert, noted in a 2024 blog post, such systems are only as strong as their weakest link, and determined adversaries will find ways to strip or forge metadata.
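The underlying public-key idea can be illustrated in a few lines of code. What follows is a simplified sketch of a content-credential scheme, not the C2PA specification itself (which embeds signed manifests directly within the media file): a capture device signs the file's digest together with creation-time claims, and a court can later check both the signature and whether the file still matches the signed digest. The example assumes the third-party Python cryptography library; the claim fields are illustrative.

```python
import hashlib
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def make_credential(media_bytes: bytes, claims: dict,
                    signing_key: Ed25519PrivateKey) -> dict:
    """Bind creation-time claims (device, time, etc.) to the media's digest
    and sign the bundle with the capture device's private key."""
    manifest = {
        "media_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "claims": claims,  # e.g. {"captured_at": "...", "device": "..."}
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    return {"manifest": manifest, "signature": signing_key.sign(payload).hex()}

def verify_credential(media_bytes: bytes, credential: dict, public_key) -> bool:
    """Check that the media still matches the digest recorded at capture
    time and that the signature over the manifest is valid."""
    manifest = credential["manifest"]
    if hashlib.sha256(media_bytes).hexdigest() != manifest["media_sha256"]:
        return False  # the file was altered after the credential was issued
    payload = json.dumps(manifest, sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(credential["signature"]), payload)
        return True
    except InvalidSignature:
        return False

# Illustrative use: a capture device signs at creation; a court verifies later.
device_key = Ed25519PrivateKey.generate()
video = b"...raw video bytes..."
cred = make_credential(video, {"captured_at": "2024-05-01T10:00:00Z"}, device_key)
print(verify_credential(video, cred, device_key.public_key()))              # True
print(verify_credential(video + b"x", cred, device_key.public_key()))       # False
```

The limitation Schneier identifies is visible even in this toy version: if the credential is stripped, verification does not return "fake", only "no provenance", and the whole scheme is only as trustworthy as the keys held on capture devices.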
Perhaps more promising are proposals for shifting the burden of proof. Professor Danielle Citron of the University of Virginia School of Law has suggested that in cases involving contested audiovisual evidence, parties should be required to produce the original digital files, complete with metadata and provenance information. Failure to do so would create a presumption against authenticity. This approach, outlined in her 2023 article for the Yale Law Journal, would place the onus on those seeking to introduce potentially synthetic evidence rather than on sceptics trying to disprove it.
The Liar's Dividend and the Paradox of Doubt
Yet even the most sophisticated authentication systems face a deeper problem: the "liar's dividend" that Gregory identified. When any video can potentially be fake, all videos become suspect. Bad faith actors need not create convincing deepfakes; they need only sow sufficient doubt about genuine evidence to escape accountability.
This dynamic has already appeared in domestic legal contexts. In the United States, defence attorneys have begun claiming that legitimate audio or video evidence against their clients might be deepfakes, despite no actual evidence of manipulation. As legal scholar Riana Pfefferkorn of Stanford Law School observed in a 2023 article, this "deepfake defence" threatens to become a get-out-of-jail-free card for anyone caught on camera.
The implications for international law are stark. If state actors can plausibly deny authentic footage of war crimes or crimes against humanity by invoking the spectre of deepfakes, the entire architecture of accountability crumbles. We risk entering an era of evidentiary nihilism, where seeing is no longer believing and nothing can be conclusively proven.
Building Resilient Systems
The solution lies not in a single technical fix but in building resilient, multi-layered verification systems. International institutions must invest in forensic capacity, develop clear legal standards for digital evidence, and establish rapid-response mechanisms for assessing contested materials. Courts will need to become comfortable with probabilistic assessments of authenticity rather than the binary determinations that characterised earlier eras.
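What a probabilistic assessment might look like, in its simplest form, is sketched below: each forensic test contributes a likelihood ratio, and Bayes' rule combines them with a prior into a posterior probability that a clip is authentic. The detector outputs and numbers are entirely hypothetical, and the assumption that the tests are independent is a strong simplification.

```python
import math

def posterior_authentic(prior: float, likelihood_ratios: list[float]) -> float:
    """Combine a prior belief that a clip is authentic with likelihood ratios
    from independent forensic tests, via Bayes' rule in log-odds form.
    Each ratio is P(test result | authentic) / P(test result | synthetic)."""
    log_odds = math.log(prior / (1 - prior))
    for lr in likelihood_ratios:
        log_odds += math.log(lr)
    return 1 / (1 + math.exp(-log_odds))

# Hypothetical figures only: a neutral prior, metadata consistent with the
# claimed device (LR 4.0), a sensor-noise pattern match (LR 6.0), and a weak
# face-warping signal flagged by a detector (LR 0.5).
print(round(posterior_authentic(0.5, [4.0, 6.0, 0.5]), 3))  # 0.923
```

The output is not a verdict but a weight, which is precisely the shift courts would need to accept: evidence assessed by degree of confidence rather than declared simply genuine or fake.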
Crucially, this evolution must happen quickly. The technology is not waiting for the law to catch up. Every day's delay creates more opportunities for synthetic media to pollute the evidentiary record and undermine the pursuit of justice. The integrity of international law itself may depend on how successfully we navigate this transition.
The coming crisis of evidence is not hypothetical. It is here, unfolding in real time across conflict zones and courtrooms alike. How we respond will shape the future of accountability, truth, and justice in the digital age. The question is no longer whether deepfakes will challenge international law, but whether international law can adapt fast enough to meet the challenge.

