Generative AI for Video Evidence? No Thank You. Not Yet, Anyway.
Noah Brozinsky has been practicing criminal law since 2013. From 2018 to 2021 he supervised dozens of new lawyers at the Miami Public Defender’s Office, teaching jury selection, trial advocacy, and advanced Fourth Amendment topics. He has delivered countless training presentations, including several on behalf of the Florida Association of Criminal Defense Lawyers.
An Arizona trial court recently allowed a victim’s family to show an AI-generated video of a beyond-the-grave impact statement at the defendant’s sentencing hearing. The family created a video of what the deceased might have said if given the chance to confront his attacker. The defense objected, but the video was played, and now the defendant is appealing that ruling.
Elsewhere, last year, NBC News reported that a judge in King County, Washington issued a first-of-its-kind order barring the use of AI-enhanced cellphone video in a criminal trial. The defense tried to introduce a “cleaned-up” (that is, computer-enhanced) version of a cellphone video to support a self-defense argument. The prosecutors argued the video wasn’t faithful to the original. The defense countered that the prosecutors’ worries about the video augmenting reality were overblown. Ultimately the judge precluded the evidence because its admission would create a “trial within a trial” about the reliability of the AI’s additions to the actual cellphone footage. One hopes the judge would have made the same ruling had the prosecutors proffered similarly edited video evidence supported by a government-paid, so-called expert witness.
This was a good decision, and one hopes it signals broader judicial skepticism about generative AI, because trials should never turn on which side has access to more advanced technology. Without strict rules for the use of generative AI, courtrooms will too easily become a battleground for experts who testify more about the predictive power of their software, or about “deep fakes,” than about the true facts of the case.
The current consensus among lawyers seems to be that using generative AI to write briefs is a bad idea. Some judges have begun ordering parties to disclose whether they’ve used AI in briefs, and other judges prohibit the practice entirely because of hallucinations and bias. This is good: AI “hallucinations” (output that is completely fabricated or inaccurate, such as citations to cases that don’t exist) are reason enough for lawyers not to trust generative AI to do their work for them.
But AI is now much more advanced than a mere shortcut for brief writing and doc review. That technology is several years old at this point.
The newest frontier in generative AI that lawyers must contend with is the creation of visual evidence that would not exist but for predictions based on a user’s inputs. This is a dangerous thing. I’m not talking about predictive Excel sheets or data sets extrapolated from statistical trends. I completely accept the mathematical reality that we can predict, say, what a business should’ve charged its customers, and that an algorithm might detect anti-competitive prices or fraud (see, for instance, “Benford’s Law”). That’s not really a problem.
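For readers unfamiliar with that example: Benford’s Law predicts that in many naturally occurring data sets the leading digit d appears with frequency log10(1 + 1/d), so “1” leads about 30% of the time and “9” under 5%. Fabricated figures often deviate from that curve, which is why auditors use it as a screening tool. Here is a minimal sketch of that kind of test in Python; the invoice amounts and the flagging threshold are hypothetical, chosen only to illustrate the technique:

```python
import math
from collections import Counter

# Expected leading-digit frequencies under Benford's Law: P(d) = log10(1 + 1/d).
BENFORD = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def leading_digit(x: float) -> int:
    """Return the first significant digit of a nonzero number."""
    x = abs(x)
    while x >= 10:
        x /= 10
    while x < 1:
        x *= 10
    return int(x)

def benford_chi_square(amounts):
    """Pearson chi-square statistic comparing observed leading-digit counts
    against the Benford expectation; larger values mean larger deviation."""
    digits = [leading_digit(a) for a in amounts if a != 0]
    n = len(digits)
    counts = Counter(digits)
    return sum((counts.get(d, 0) - n * p) ** 2 / (n * p) for d, p in BENFORD.items())

# Hypothetical invoice amounts; a real audit would use thousands of entries,
# since the chi-square test is unreliable on samples this small.
invoices = [4821.50, 1029.99, 1743.00, 902.10, 3310.75, 1188.20, 2540.00, 1975.40]
stat = benford_chi_square(invoices)
# 20.09 is the chi-square critical value for 8 degrees of freedom at alpha = 0.01.
verdict = "flag for review" if stat > 20.09 else "consistent with Benford"
print(f"chi-square = {stat:.2f}: {verdict}")
```

The point is that this kind of prediction is transparent and checkable: the formula, the inputs, and the size of the deviation are all on the table. A generated video offers the jury no such audit trail.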
The problem is pictures and videos—so-called “deep fakes” and pixelated predictions of what “might have happened” out of the camera’s view.
Imagine a situation where the government and the defense both proffered contradictory AI-generated videos to demonstrate what could have happened out of frame. The jury would need to be cautioned to weigh not “whose video was better” (as in, more realistic, more cinematic) but whose video was more likely a reflection of reality, given the quality of the AI that produced it. How could any juror honestly separate those concepts in their mind?
Currently, some better-resourced prosecutors’ offices employ “Anti-CSI” witnesses whose job it is to explain to jurors how rare DNA evidence is, how a hazy, pixelated CCTV image sometimes really is the best evidence available, and how you can’t actually zoom in on CCTV footage with the ludicrous clarity seen on TV. If trial attorneys aren’t closely circumscribed by well-delineated rules for the use of generative AI before juries, we can expect both sides to field experts just like this in every case from now on. And who wants that?
Ultimately, if AI-generated visual evidence is admitted, the court should immediately caution the jury that what they are about to see is not something that was captured in real time, but something generated after the fact for the purpose of litigation. Then, lawyers should ask for closing jury instructions to reinforce that fact.
Some cautionary jury instructions I’ve thought up could look like this:
To be read if the record includes video or photographic evidence the parties stipulate is AI-generated:
You must consider some exhibits with more caution than others. This is particularly true where, as here, you have been presented with evidence the parties agree was generated by artificial intelligence.
I have told you which [videos/photos] are AI-generated, and which are not.
At the time you viewed this AI evidence, I cautioned you that it did not depict reality but was, instead, created by a computer based on computer-generated predictions, which themselves were based upon inputs provided, for the purpose of creating that evidence, by the party seeking to admit it. When weighing the value of this evidence you should consider, among other things, (1) that the evidence was created by the party that presented it to you for the purpose of these proceedings; and (2) whether and to what extent the party introducing this AI evidence has explained, to your satisfaction, the accuracy of the computer’s predictions.
To be read if there is no stipulation as to whether a video or photograph was AI-generated or captured reality in situ:
You must consider some exhibits with more caution than others. At issue in this case is whether a video you have seen is reality or was created by a computer and presented to you as reality. As the finders of fact, only you can decide what the evidence is. You may believe the video evidence is an accurate reflection of reality, or you may believe it is not. Even if you believe a video you have been shown does depict reality, you must still find the defendant “not guilty” unless the government has proven its case beyond and to the exclusion of every reasonable doubt.
***
Unless AI-generated evidence comes with a stern judicial warning that the material jurors are about to watch was fabricated, and comes with a massive caveat about the statistical (un)likelihood of its accuracy, I just can’t see the utility in allowing AI to opine on what might have been captured by the camera had the viewfinder been steadier or the lighting better, or had the action not continued around the corner. Whenever litigants present a jury with something that’s created for the purpose of litigation, they’re asking “whose fiction is better?” I think it’s unseemly, and contrary to the truth-seeking function of courts.
The use of fabricated, invented evidence—that is, the creation of material that doesn’t actually exist—also implicates heaps of ethical rules, not least of which is that lawyers can’t tamper with evidence. Like golfers, lawyers have to play the ball as it lies, and for good reason.
To be sure, creating evidence is different from compiling evidence or presenting it in creative ways. There’s obviously some play in the joints, but there’s surely a difference between presenting closing argument with a sleek PowerPoint of key documents overlaid on a dynamic timeline, versus generating material that never existed.
There are deep societal concerns to weigh when evaluating the usefulness of any new technology, but particularly so when that technology can be employed by the State to imprison people. If it’s better to let a thousand guilty people go free than to convict an innocent person—and it is—it’s better to assess cases on the actual evidence available, not the generated evidence of what could have happened. And it’s better to hold off on using that technology until well-reasoned rules about its use are in place and the playing field can be leveled.
October 9, 2025