University of Chicago Legal Forum

Abstract

DALL-E. ChatGPT. GPT-4. Words that did not exist in the English lexicon just a few years ago are now commonplace. With the widespread availability of Artificial Intelligence (AI) tools, specifically Generative AI, whether in the context of text, audio, video, imagery, or even combinations of these, it is inevitable that trials related to national security will involve evidentiary issues raised by Generative AI. We must confront two possibilities: first, that evidence presented is AI-generated and not real and, second, that other evidence is genuine but alleged to be fabricated. Technologies designed to detect AI-generated content have proven to be unreliable,1 and also biased.2 Humans have also proven to be poor judges of whether a digital artifact is real or fake.3 There is no foolproof way today to classify text, audio, video, or images as authentic or AI-generated, especially as adversaries continually evolve their deepfake generation methodology to evade detection. Thus, the generation and detection of fake evidence will continue to be a cat-and-mouse game. These are not challenges of a far-off future; they are already here. Judges will increasingly need to establish best practices to deal with a potential deluge of evidentiary issues.

We will discuss the evidentiary challenges posed by Generative AI using a civil lawsuit hypothetical. The hypothetical describes a scenario involving a U.S. presidential candidate seeking an injunction against her opponent for circulating disinformation in the weeks leading up to the election. We address the risk that fabricated evidence might be treated as genuine and genuine evidence as fake. Through this scenario, we discuss the best practices that judges should follow to raise and resolve Generative AI issues under the Federal Rules of Evidence.

We will then provide a step-by-step approach for judges to follow when they grapple with the prospect of alleged AI-generated fake evidence. Under this approach, judges should go beyond a showing that the evidence is merely more likely than not what it purports to be. Instead, they must balance the risks of negative consequences that could occur if the evidence turns out to be fake. Our suggested approach ensures that courts schedule a pretrial evidentiary hearing far in advance of trial, where both proponents and opponents can make arguments on the admissibility of the evidence in question. In its ruling, the judge should admit evidence, allowing the jury to decide its disputed authenticity, only after considering under Rule 403 whether its probative value is substantially outweighed by the danger of unfair prejudice to the party against whom the evidence will be used.4 Our suggested approach thus illustrates how judges can protect the integrity of jury deliberations in a manner that is consistent with the current Federal Rules of Evidence and relevant case law.
