
Deepfakes Are About To Make Evidence A Hell Of A Lot More Suspect

Courts
May 2023


"Is this your client?" Those four words in conjunction with an easily swayed jury are the stuff of nightmares for a trial attorney. As often as it may be the case that some dude off the street was unfairly charged with a crime by some power-drunk cop, it is a lot harder to get traction on that argument when there's a picture or video of your client committing the act(s) they were accused of -- you can go from zealous defender of the downtrodden to a real-time rendition of Shaggy's "It Wasn't Me" that way. But lo, the reality of that worry is approaching quickly.

[D]eepfakes, which have become more sophisticated and easier to create given the democratization of generative AI tools like Midjourney and DALL-E, are inevitably poised to permeate the legal process.

Within the last year, at least two separate trials have included claims from opposing parties about the evidence presented being a deepfake--in a Tesla lawsuit involving Elon Musk, and in a case related to the Jan. 6 riots involving former President Donald Trump. While both judges determined that the evidence was not manufactured, attorneys and AI experts believe the instances are likely the prologue to a much longer problem.

To be sure, it's likely more AI-generated images will come into court as evidence. While some deepfakes will be caught by the first line of defense against inauthenticity--e-discovery professionals well-versed in the Federal Rules of Evidence (FRE)--others may be gatekept by a tech-savvy trial judge. Some deepfakes, however, will end up causing protracted "battles of experts," or leaving unwanted impressions on a jury.

This bit of apparent sci-fi is worth taking seriously. Let's remember that AI-generated photos went from looking like real-time stroke symptoms:

Name one thing in this photo pic.twitter.com/zgyE9rL2XP

-- dumbass ass idiot (@melip0ne) April 23, 2019

To this in a very short amount of time:

Were you fooled by these AI-generated images of Pope Francis looking stylish in a puffer jacket? And should what some are calling "the first real mass-level AI misinformation case" be a cause for concern? https://t.co/UJViDIntAj

-- New Scientist (@newscientist) March 27, 2023

In passing, I'm sure we've all seen photos of politicians like this made by people who think referring to someone as Drump or Byeden constitutes a colorable argument:

agreed
tell it to Trump......slamming DeSantis while kissing Newsome's ass
DESPICABLE pic.twitter.com/FJ5pfcYdXq

-- Edmund Wright (@CEdmundWright) April 24, 2023

But do you want to be the attorney in a divorce proceeding a few years from now when tech even better than this produces a picture of your client French kissing their not-spouse? I'll admit that unless your client has a lot of game, the judge is not likely to believe that your client was actually caught kissing Mila Kunis or something, but what if the deepfake is something much more plausible, like one of your client's coworkers? And as easy as it is to just believe that if people really pay attention, they'd be able to distinguish the real from the fakes...

"I applied as a cheeky monkey, to find out, if the competitions are prepared for AI images to enter."

A German artist has rejected an award from a prestigious international photography competition after revealing that his submission was generated by AI. https://t.co/4qGjDVbrGb pic.twitter.com/JFXE7dfkgL

-- CNN (@CNN) April 19, 2023

The deepfake threat isn't limited to just photographs either:

Hackers can mimic people you know by using AI to copy their voice and an app to change the caller ID.

"When I do that type of attack, every single time, the person falls for it," said Rachel Tobac, an ethical hacker trying to raise awareness about scams. https://t.co/1cbZIDUUXj pic.twitter.com/xdTZ0sArTk

-- 60 Minutes (@60Minutes) May 21, 2023

If you want to stake your win/loss record on the notion that juries, on average, are better at determining the veracity of photographs or audio than the people judging prestigious international photography competitions, or those with their hard-earned cash on the line, be my guest.

This is just the start:

Lee Tiedrich, a professor of ethical technology at Duke Law and a former partner at Covington & Burling, told Legaltech News that the quality of audio and visual deepfakes is only improving, and the judicial authentication process isn't necessarily equipped to cope just yet.

"At the end of the day, jurors are humans," she said. And "first impressions" are hard to shake, similar to the impact of the Pope in the puffer jacket, or fake pictures of former President Trump being arrested, she said.

"Not only do I worry that a jury won't be able to unwind" the emotions they may feel seeing a fake image of someone attacking someone else, or a recording saying something threatening, but "if this gets to the point where we don't have ways to quickly authenticate [evidence], I worry about access to justice issues, prolonged trials and ultimately, you end up with an expensive 'Battle of Experts.'"

Fortunately, the penchant for analogical and adaptive thinking that runs through lawyers like a pox has already led to some potent defenses against fabricated evidence:

Ron Hedges, a former magistrate judge for the District of New Jersey and the principal at Ronald J. Hedges, told Legaltech News that he believes the main issue around deepfakes in court is going to end up being about authentication, and then about admissibility.

The "gatekeeper" so to speak when it comes to authentication would be the e-discovery teams, he said. Whereas when it comes to admissibility, it would have to be the judge.

"Number one: we've got existing rules that courts are going to have to use, because I don't see any new rules coming down," Hedges said, referring specifically to Federal Rules of Evidence Rule 901. "That's a whole series of rules about authentication.

For now, Hedges believes the current FRE 901 series--along with Rules 902(13) and (14), which address certified records generated by electronic processes--may be enough to guide judges dealing with potential deepfake evidence.

The problem is likely going to be about long, drawn-out e-discovery battles, especially among technical experts, he said.

Until the courts figure this out, try not to piss off anybody who's really good with Photoshop or Midjourney.

Deepfakes Are Coming to Courts. Are Judges, Juries and Lawyers Ready? [Law.com]


Chris Williams became a social media manager and assistant editor for Above the Law in June 2021. Prior to joining the staff, he moonlighted as a minor Memelord(TM) in the Facebook group Law School Memes for Edgy T14s. He endured Missouri long enough to graduate from Washington University in St. Louis School of Law. He is a former boatbuilder who cannot swim, a published author on critical race theory, philosophy, and humor, and has a love for cycling that occasionally annoys his peers. You can reach him by email at cwilliams@abovethelaw.com and by tweet at @WritesForRent.
