0:00
This is John Quinn, and this is Law Disrupted, and today we're going to be talking about AI.
0:09
It's all AI these days, right?
0:11
But this is a particular part of AI that we're going to be talking about, an area of
0:15
the law that's developing.
0:17
We'll see how far it goes, and that's the application of defamation law to the output
0:23
of large language models.
0:25
We're starting to see some cases that are being filed, where people are saying, I was defamed by the output of one of these models.
0:31
And here to talk to us about this is my partner, Bobby Schwartz, and an associate in our firm,
0:38
Marie Hyra Petian, I hope I said that right, or close enough, Marie.
0:44
So tell us, Bobby. We're starting to see some cases filed. Give us some
0:50
examples of some of the cases, and what's the theory of the case?
0:53
Yeah, we've had some cases filed, one case has resulted in a defense summary judgment
0:58
win, and we'll talk about that.
1:01
And some of the issues that arise in these cases are, for example, who's the speaker?
1:07
Is it the person who prompted the AI bot to produce an output that turned out to be false
1:13
and otherwise defamatory? And how do you deal with public figures, where you have to prove
1:19
constitutional malice through evidence, either that the speaker knew the statement was
1:23
false or acted in reckless disregard for the truth?
1:26
If the speaker is an AI bot, how are you ever going to show that? Bots don't have intent.
1:33
Let me give you the fact pattern of some of the cases.
1:38
One was against OpenAI. A fellow, Mr. Walters, sued OpenAI for defaming him, because
1:46
somebody queried ChatGPT to ask about a lawsuit to which he was a party, and after
1:55
multiple prompts, the ChatGPT output said that, as the CFO, the plaintiff in this case
2:02
was an embezzler, and that lawsuit had nothing to do with embezzlement.
2:08
He had never embezzled anything, and so it was obviously accusing him of a crime.
2:16
And so he sues OpenAI. Now, how does he become aware of this output?
2:20
Is this something where he put in the prompt, or somebody else did and forwarded it to him?
2:27
How did he become aware of it?
2:28
I don't want to derail the conversation.
2:29
Can I ask Marie to jump in and give us a little background on the case?
2:33
That's a great question, John.
2:35
And I had to go back into the complaint when I asked myself the same question, and there
2:41
was really nothing from the plaintiff explaining how he even found out about it.
2:47
It was in the motion for summary judgment and the court's order granting summary judgment
2:54
where it became clear to me that the plaintiff found out after the journalist went to the
3:04
company and asked if that was true, or went to the plaintiff and asked if that was true.
3:10
And the plaintiff said, no, it's not true.
3:13
It's still unclear to me how Walters found out about it.
3:19
So a journalist put his prompt into an LLM.
3:23
This was the output.
3:24
So the journalist does his job, because he starts investigating it, and the result is a lawsuit.
3:30
Now, what were the grounds on which the court granted summary judgment?
3:33
There were two grounds, one of which I'm not 100% sure of, and the other seems more sustainable.
3:40
The first was that OpenAI successfully argued that the output was not defamatory.
3:47
As a matter of law, it did not communicate a defamatory meaning.
3:53
And that's based on what the judge in Georgia focused on: the legal principle that
4:02
says that the statement has to be reasonably understood as describing actual facts about
4:09
a plaintiff, and that no reasonable reader would have, under these circumstances, concluded
4:20
that what ChatGPT was outputting were actual facts.
4:25
And that's because there were all kinds of warnings to the journalist making the queries
4:32
saying, just remember, this tool is not flawless. And there were additional warnings that said,
4:38
I can't go back far enough in time to pull up the complaint you're asking me about.
4:44
And this is the LLM saying I can't do this.
4:50
So the LLM disclosed its limitations to the journalist, and that should have put the journalist on notice,
4:58
or was sufficient to put the journalist on notice, that what ChatGPT was outputting
5:02
were not actual facts.
5:04
That doesn't sound exculpatory to me.
5:07
Now, if I were OpenAI, I'd be a little worried about whether that's going to survive
5:12
appellate review, if there is any appellate review.
5:14
But there's maybe some unusual facts here where the reporter already had the complaint or
5:19
a copy of the complaint.
5:20
Well, let's do some other examples of cases.
5:24
The other cases haven't been resolved.
5:25
But here are some other instances.
5:26
And this one, I think, has more legs.
5:27
This one's against Google.
5:30
And a business is suing Google for defamation because its AI Overview tool
5:40
spit back a result that said the Minnesota State Attorney General was suing the plaintiff
5:46
for deceptive practices.
5:48
And that resulted in a lot of lost business, anywhere from 100 to 200 million dollars in damages.
5:54
How did everybody learn about this?
5:57
That's a good question.
5:59
So when you search something on Google, the very first thing that pops up is the AI Overview.
6:08
And anyone in that time frame that Googled it apparently read that particular fact pattern.
6:15
So that's a Gemini paragraph at the top, right? True. So a lot of contracts started falling through.
6:25
If you're seeing that every time you search this business, up on top, that they're
6:29
being investigated by the Attorney General.
6:32
Before we plunge into the elements, give us just another example.
6:35
So we have a flavor of what's happening in the courts with AI and defamation.
6:42
Sexualized deep-faked images. These are based on photos, not of naked people, but let's
6:50
say somebody posts a photograph of themselves on social media.
6:56
And then somebody else runs a query, and it's a text-to-image query, and says, show me photographs
7:03
of so-and-so in the nude, or not in the nude, whatever.
7:08
And that's happened on Grok, which is the X platform's AI tool, and so they've
7:16
been sued anonymously by a woman who says, Hey, that's me.
7:22
I've never posted a photo of myself naked, and this is, under California law, defamatory.
7:30
It's revenge porn and on and on.
7:33
Do you think the law of defamation is really going to have applicability to the output of these models?
7:41
You've identified what are some pretty obvious problems.
7:45
We think we know defamation is an intentional tort, but LLMs don't
7:51
have intentions so far as we know.
7:54
People who put in prompts maybe do.
7:56
Yeah, that's where I think the dividing line will get drawn, that the AI platforms
8:04
are less likely to end up having liability versus individuals who prompt AI tools, get
8:11
some statement and then republish it, carelessly or otherwise, and maybe they should have liability.
8:19
And by the way, lurking in all this, yet to be addressed in any case, is Section 230
8:23
of the Communications Decency Act, which provides a safe harbor for Internet platforms
8:31
against liability for material that's posted by users.
8:35
Well, the question is, does Section 230 apply in this context? And I realize maybe I'm jumping
8:40
ahead here, but it's an interesting question, because normally that gets applied if a user posts
8:48
a defamatory statement about somebody and then the subject of that post files a defamation
8:54
claim against the speaker and the platform for republishing it or publishing it in the
9:00
first instance and the platform says, sorry, no, I didn't put that there.
9:05
Well, in this case, it's not clear. Certainly the user prompted the platform to generate the
9:11
output, but there was some interaction, maybe non-volitional, but nonetheless some conduct on
9:18
the part of the platform, and courts have yet to deal with that. So that's another issue lurking
9:24
here that we'll have to get resolved.
9:26
All right. So other than the intent element, I guess there's also an issue about who is the speaker?
9:34
Yes. Is the LLM the speaker? Is the data scientist behind the LLM the speaker? Is the
9:40
person who entered the prompt the speaker? And the platform would presumably take the position
9:48
that they're not, actually, and this gets into sort of the esoterica of how these large language
9:55
models work. They're based on algorithms that look for probabilistic similarities or patterns
10:03
in human language. And based on that, they articulate or come up with responses.
10:11
So they're not necessarily thinking, oh, Sally Jones is a bad person or whatever the
10:18
defamatory statement would be. They're just trying to produce language in response to
10:25
prompts. And are they really the speaker or are they even speaking at all, even though we recognize
10:31
it as language that we can comprehend. The model doesn't think of it that way. And are they
10:37
the speaker, when all they were doing was reacting to prompts by
10:42
a third party who wanted to hear something or get something about the subject of the prompt?
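To make that concrete: at their core, these models sample the next token from a probability distribution learned from patterns in text; they don't look facts up. Here is a minimal toy sketch in Python, illustrative only, using a tiny bigram model rather than anything resembling a production LLM:

```python
import random
from collections import defaultdict

# Toy next-token sampler: a bigram model over a tiny corpus.
# Real LLMs use neural networks trained on enormous corpora, but the
# core loop is the same idea: sample the next token from a learned
# probability distribution, append it, repeat.
corpus = ("the court granted summary judgment because "
          "the court found no defamatory meaning").split()

# Record which words follow which in the corpus.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=8):
    word, out = start, [start]
    for _ in range(length):
        options = follows.get(word)
        if not options:
            break
        word = random.choice(options)  # sample from the observed distribution
        out.append(word)
    return " ".join(out)

print(generate("the"))
# e.g. "the court granted summary judgment because the court found"
```

The toy model never decides that anything is true or false; it only continues a statistical pattern, which is the point the speakers are making about intent.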
10:50
Obviously, if there is an output, which is defamatory, and somebody then takes that and publishes
10:56
the defamatory output, that could be a traditional defamation claim. I agree. Yeah, there's nothing
11:05
unusual about that. That's just republication. The only reason we're talking about this is
11:10
people are bringing claims against OpenAI and Claude and the like. It raises these issues
11:16
about intent and who's the speaker and the like. And when you're dealing with republication,
11:21
the standard is, and this has arisen a lot in the social media context, not involving
11:28
large language models, but just traditional social media or any other form of media.
11:33
The question the courts ask is, was it reasonable for the speaker to believe that when
11:40
he, she, or it said whatever they said to this other person or these other persons, that these
11:46
other persons would republish it to members of the public. And if the answer is yes,
11:54
then they can have liability. But if I were representing an AI platform, I would say, of course,
12:00
we didn't have that expectation. We warned our users that our models are capable of hallucinating
12:08
and that they should use care. And it would not be very hard if it's not already there to bake
12:14
something in the terms of service or the end user license agreement that provides some
12:21
language that would disclaim any intent or expectation and give some insulation against republication.
12:29
What should somebody do if they feel that they've been the victim of AI defamation?
12:35
What actions can you take? Most of the platforms, I think all of the platforms have systems,
12:43
monitors, complaint procedures where you can bring to their attention an issue or a problem.
12:49
And especially if it's baked into a social media platform, which is often the case,
12:57
so that if somebody has posted something and even if it had nothing to do with an AI generated
13:03
output, but if somebody's unhappy with something somebody posts, there are mechanisms.
13:08
They're not very effective. They're not overseen, if you will, by some statutory rubric like
13:14
the Digital Millennium Copyright Act is for copyright infringement. But you have that recourse
13:21
and you can contact the platform, you can contact the user.
13:26
I think that, given the volume of this activity, it's impossible for
13:31
platforms to be able to meaningfully respond to it, take things down. Whatever they
13:37
realistically can do, they will. But I don't think you should assume it. And the other problem
13:42
with protecting your rights here is usually you're just confronted with some user handle
13:48
that could be in a private mode. In other words, you may not be able to contact that person.
13:54
You may not be able to even sue them other than through the pseudonym. You might have to sue
14:01
the entity to compel them to tell you who the user is, so that you can then actually
14:06
file a lawsuit against them as a real human being. It's very hard to enforce these rights for defamation.
14:12
It sounds to me like this intersection of the law of defamation and AI, and these
14:20
cases that are coming up, are interesting developments, but it doesn't sound to me like it has legs,
14:26
unless we recognize fundamental changes in the law of defamation.
14:31
Marie, do you agree with that? I definitely agree with that. Defamation is usually described as
14:37
an intentional tort, but that's a little bit misleading because you don't have to intend to hurt
14:42
someone. People defame others by accident all the time. What you need is to have meant
14:48
to publish the statement and AI starts to break that framework because in these cases, nobody really
14:56
meant to publish the defamatory statement. The user asked an innocent question. The company built
15:02
a product and warned it may make mistakes. The model just predicted the next word. That's how
15:08
these things work at their core. It's not retrieving facts like a search engine. It's assembling
15:15
language based on patterns. So the harm can be real, and the causal chain can be clear,
15:23
but nobody fits neatly into the traditional definition of a publisher. And in some sense,
15:28
everyone did something reasonable, and someone still got hurt, and fault-based tort law doesn't have
15:34
a great answer for that, which raises a bigger question. Do we eventually need a different framework
15:40
entirely? And one possibility people have been talking about is products liability. So you don't
15:47
go forward by proving they intended to hurt you. You show the product was
15:53
defective and caused harm. And applying that idea to AI-generated speech would be a major shift,
16:01
but these are commercial products deployed at massive scale and the harms are foreseeable.
16:06
But no court has gone there yet. It's just what we see people writing about.
16:12
That's an interesting thought. Thank you both. Bobby Schwartz, Marie Hyra Petian. Thank you
16:19
for joining us to talk about AI and defamation. This is John Quinn. This has been Law Disrupted.
16:29
Thank you for listening to Law Disrupted with me, John Quinn. If you enjoyed the show,
16:34
please subscribe and leave a rating and review on your chosen podcast app.
16:39
To stay up to date with the latest episodes, you can sign up for email alerts at our website
16:45
law-disrupted.fm, or follow me on X at JBQ Law or at Quinn Emanuel. Thank you for tuning in.