‘So to Speak’ podcast transcript: Artificial intelligence: Is it protected by the First Amendment?

Note: This is an unedited rush transcript. Please check any quotations against the audio recording.

Nico Perrino: You’re listening to So to Speak, the free speech podcast, brought to you by FIRE, the Foundation for Individual Rights and Expression. Welcome back to So to Speak, the free speech podcast, hosted by me, Nico Perrino, where, every other week, we dive into the world of free expression through personal stories and candid conversations.

Today, we’re focusing on the rapid advancement of artificial intelligence and what it means for the future of free speech and the First Amendment. Joining us as we navigate the complex and ever-evolving landscape of free expression in the digital age are our guests, Eugene Volokh, a First Amendment scholar and law professor at UCLA, David Greene, the senior staff attorney and civil liberties director at the Electronic Frontier Foundation, and Alison Schary, a partner at the law firm of Davis Wright Tremaine. Eugene, David, and Alison, welcome onto the show.

Alison Schary: Thanks, Nico.

Eugene Volokh: Thanks for having me.

David Greene: Yes, glad to be here.

Nico: I should say that introduction was not written by me, but rather by OpenAI’s popular artificial intelligence chatbot, ChatGPT, with just, I think, one or two tweaks from me there on the back end. I think it’s fair to say artificial intelligence technologies have advanced quite a bit here in the past few months, or even past few weeks, and taken many people by surprise. It’s not just ChatGPT, of course, which you can ask questions to and it can churn out surprisingly cogent responses, but there’s also DALL-E, which creates AI images, VALL-E, which turns text into surprisingly human-sounding speech, and then there are tools like QuickVid and Make-A-Video from Meta, which produce AI video.

So, to start this conversation, I wanna begin by asking the basics. Should AI be granted the same constitutional rights as human beings when it comes to freedom of speech and expression, and if so, how would you define the criteria for AI to be considered as having First Amendment rights? And I should ask – that question itself was generated by AI, and I know, Eugene, over email, you had a quick take on this one, so I’ll let you kick it off if you’re willing.

Eugene: Yeah. So, rights are secured to persons – let’s not say people necessarily, but persons.

Nico: Well, what’s the difference?

Eugene: Right. So, they reflect our sense that certain entities ought to have rights because they have desires, they have beliefs, they have thoughts – that’s why humans have rights. You could even imagine, if somebody were to show that orangutans, let’s say, or whales, or whatever else are sufficiently mentally sophisticated, that we could say they have rights. Corporations have rights because the humans who make up those corporations have rights, but computer software doesn’t have rights as such.

So, the real question would be whether, at some point, we conclude that some AIs are like enough to us or thinking enough that they have desires, that things matter to them, and then, of course, we’d need to ask what rights they should have, which is often very much based on what matters to us. So, for example, humans generally have parental rights because we know how important parenting is to us. I don’t know what would be important to AIs. Perhaps it’s different for different AIs. I think it’s way premature to try to figure it out.

Now, there’s a separate question, which is whether the First Amendment should protect AI-generated content because of the rights of the creators of the AI or the rights of users of the AI. I’ll just give you an example. Dead people don’t have rights, not in any meaningful sense, but if somebody were to ban publication of, say, Mein Kampf, it wouldn’t be because we think that the dead Adolf Hitler has rights, it’s because all of us need to be able to – excuse me, it would be unconstitutional, I think, because all of us need to be able to read this very important, although very harmful book.

So, I think there’s a separate question as to whether AI-generated speech should have rights, but at this point, it’s not because AIs are rights-bearing entities. Maybe one day, we’ll conclude that, but I think it’s way too early to make that decision now.

Nico: Well, as a basic constitutional question – maybe, David, you can chime in here – what does the First Amendment protect? We talk about the freedom of speech. Is it the speech itself, or is it because the speech is produced by a human that it therefore needs protection, or is there just value in words strung together, for example?

David: Well, I think the First Amendment and freedom of expression as a concept more broadly protects the rights of people to express themselves, or persons, or collections of people that form entities. I agree with Eugene that I think AI is best looked at as a tool that people – persons who have rights – use to express themselves, and so, when we’re talking about the rights framework around AI, we’re talking about the rights of the users, of those who want to use it to create expression and to disseminate expression, and those who want to receive it. And, I do think in every other context of the First Amendment, that’s what we’re looking at. We’re looking at the value in expressing oneself to others.

Nico: But is there a reason we’ve historically looked at it that way – it’s because it’s only ever sentient people that have produced the expression, right? It can be produced no other way. Now, you’re creating a technology that can take on a life of its own and produce expression that was never anticipated by the so-called creators of that artificial intelligence, and if speech and expression has an informational value and we protect it because of the informational value it provides to, say, democracy or knowledge, is there an argument that that should be protected as well?

Alison: Well, I’m gonna come at this, I think, as a litigator. Just to be totally practical, we can hypothesize about the constitutional, theoretical underpinnings, but when push comes to shove, the way the law gets made is in lawsuits, and you have to have a person to sue. So, if somebody is gonna bring this case, they’re going to sue the person who distributed the speech, or the person or entity that created, takes ownership of, or develops the AI system. That’s what’s happening with current cases.

So, I think as a practical matter, when these First Amendment arguments get made, they’re inevitably going to be made by people, and it’s either going to be the person distributing saying, “This is my speech,” or it’s going to be the developers of the algorithm saying, “This is my speech in the way it’s put together and it’s also the speech of people who give the prompts to it, etc.,” but I just have trouble thinking of – other than a monkey selfie kind of situation, which got squashed by – that’s kind of the most on-point precedent –

Nico: What is that case, for our listeners who aren’t familiar with it?

Alison: Oh, sorry. So, the monkey selfie is when a monkey – a macaque, I think – took a nature photographer’s camera and took a selfie, and then the photographer was distributing it, and I think it was PETA that sued as the next friend of the monkey, arguing that the monkey had the copyright because the monkey was the one who pressed the button, and they lost – there was no standing for a monkey to assert copyright because it doesn’t have the rights that are contemplated by the Copyright Act. So, I have trouble seeing where it’s gonna get in the door to even adjudicate a First Amendment right for an AI in and of itself.

Eugene: Right. I like the litigation frame, I think it’s very helpful, so let’s imagine a couple of things. Let’s imagine somebody sues ChatGPT, or essentially the owners of that – sue the company that runs it – and they say, “You are liable because your product said things that are defamatory,” let’s say. One thing it could do is it could raise its own First Amendment rights – “That’s our speech” – but it could also raise third-party First Amendment rights.

So, it could say, “It’s not our speech, but it’s the AI’s speech, and we are here to protect that,” and there are quite a few cases where, for example, a book publisher could raise the rights of its authors, let’s say, or, in fact, a speaker can raise the rights of its listeners. There are cases along those lines. I don’t think that that’s gonna get very far, because again, courts should say, “Huh, why should we think it has rights? Do we even know that it’s the kind of thing that has rights?”

So, I think instead, the publisher will say – excuse me, the company that’s being sued will say, “We’re going to assert the rights of listeners, that if it’s publishing these things, it’s gonna be valuable to our readers, and it’s true, the readers aren’t the ones being sued here, but if you shut us down, if you hold us liable, readers will be denied this information, so we’re gonna assert their rights.”

And again, courts are quite open to that sort of thing, and I think the answer will be yes, readers have the right to read material generated by AIs regardless of whether AIs have the right to convey it. The same thing would apply if, for example, Congress were to pass a law restricting the use of AI software – either prohibiting it altogether from answering questions, or maybe imagine a rule that says AI software can’t make statements about medical care, because people would be asking it, “What should I do about these symptoms?”, and we don’t want it to do that.

Alison: It would have to be disclosed.

Eugene: Right, you could have a disclosure requirement, or the easiest example would be a prohibition. Again, I think the AI software manufacturers would say not so much “Oh, the AI has a right to play doctor.” Rather, it’s that listeners have a right to gather information, for whatever it’s worth with whatever disclaimers are there, and the statute interferes with listeners’ rights. I think that’s the way it would actually play out in court.

David: And I do think, Nico, that your question sort of assumes an inevitability of sentience from AI. I don’t know how close we are to that or whether we will ever get there, but we are certainly not there right now, and I hate to be one of those tech lawyers who tries to make analogies to the Stone Age and things like that, but there have always been tools that speakers have used to help them compose and create their speech, and I do think that AI is best thought of in that way now. Maybe there will be some tipping point of sentience, or we’ll have to think about whether there’s no remedy for a speech harm because there’s no one to defend the speech, and maybe we would get there, but I don’t think we’re there yet. And I think it’s actually a bit dangerous, from a human rights perspective, to give AI decision-making that’s independent of those who do the inputs into it and those who give the prompts.

It leads us to a sort of magical thinking about AI – that it’s inherently objective or that it’s always making the correct decision – and I don’t think it’s great, when we’re in the harms framework, to disengage it from the responsibilities of those who are actually inputting data into it and giving it prompts. There’s really still a very significant human role in what AI spits out.

Nico: Yeah, I asked that question because as I was preparing for this conversation, I talked with some of my colleagues and said, “Hey, what would you wanna ask David, Alison, and Eugene?”, and as we know from popular culture, movies about artificial intelligence often involve an AI that reaches sentience and passes the Turing test, with an intelligence that is indistinguishable from human intelligence. There’s this popular horror flick out right now called M3GAN, starring an AI-powered doll that gains sentience and murders a bunch of people while doing TikTok dances, and so, they were like, “Well, what are M3GAN’s free speech rights?”, putting the murders aside.

And then, of course, there’s The Positronic Man written by Isaac Asimov a while back, which became The Bicentennial Man, featuring Robin Williams, when it was made into a movie. That was essentially the story of a lifelike robot petitioning the state for equal rights. I never like to close my mind to the possibility that technology will do miraculous things in the coming years and decades.

I think if you would ask someone 150 years ago about virtual reality, they just wouldn’t have even been able to conceive it, and with the advancement of AI in the past three months, the sort of images that tools like DALL-E are turning out in some cases just blow my mind, images that look like someone drew a portrait that you would have paid thousands of dollars to see. But I want to get back to this question about liability. So, the classic example for artificial intelligence, or the classic worry from those who are very worried about artificial intelligence is, okay, you’ll ask AI to do something, and you won’t be able to anticipate its solution to that problem.

So, you ask AI to eliminate cancer in humans, and it decides the best option is to kill all humans, right? That’ll eliminate the cancer for sure. So, when AI takes a turn, is it the programmer who is responsible, for example, if they defame someone, or if the artificial intelligence incites violence, or is it the person who takes the generative product and publishes it to the world? How should we think about that?

Alison: I think that what David was saying about thinking of AI as a tool is the right thing here. If you just let a robot come up with how are we gonna solve cancer, and then just go with it, and have no kind of humans in the chain checking whether this is a good idea, that seems pretty negligent to me.

We have claims that can account for that, all the way up to all kinds of torts. But having an algorithm that can run the numbers and come up with solutions – doing the computation – and then having a human look at those and cull through them, I think that’s a tool, and so, if a human then makes the decision to publish something or to act on something, you have a person to hold liable, because it’s the person who took that recommendation and went with it, which is the same thing regardless of who’s making that recommendation.

Eugene: So, one of my favorite poems is Kipling’s Hymn of Breaking Strain. It’s got some deeper things going on in it which, actually, I don’t much care for – those don’t work well for me – but it is, in some measure, a poem about engineers, or at least it starts out that way. Here are the opening lines.

“The careful text-books measure/(Let all who build beware!)/The load, the shock, the pressure/Material can bear./So, when the buckled girder/Lets down the grinding span,/The blame of loss, or murder,/Is laid upon the man./Not on the Stuff – the Man!” I used it for my torts class when I used to teach torts: the bridge isn’t negligent. The creator of the bridge may be negligent, maybe the owner of the bridge is negligent in not maintaining it, maybe the user of the bridge is negligent in driving a truck over it that exceeds the posted limits.

Now, to be sure, note there’s one difference. The careful textbooks do not exactly measure what AIs are going to be able to do. In fact, one of the things we think about modern AIs is precisely that they have these emergent properties that are not anticipatable by the people who create them. But it is the job of the creators to anticipate them at least to some extent, and if they are careless – this is a negligence standard, generally speaking, for these kinds of things – if they are negligent in their design – if, for example, they design an AI that can actually do things, that can actually inject people with things, and they’re careless in the failsafes they put in, or careless about what the AI could inject people with – then, in that case, the creators will be liable, or perhaps the users, if the carelessness comes on the part of the user.

Alison: But the user’s gonna sign a release no matter – you’re not gonna do that in the real world without somebody signing away every possible right.

Eugene: Well, my understanding – and I’m sure it varies sharply from jurisdiction to jurisdiction, but at least in my own state of California, there are limits to releases as to, for example, bodily injury. They’re not always enforceable. In fact, in many situations, they’re not enforceable. So, for example, a hospital can’t say, “As a condition of coming to this hospital, you waive malpractice liability.” You can’t do that.

So, again, it may vary from jurisdiction to jurisdiction, and what if the AI is not even in the U.S.? What if the AI is in Slovenia, and who knows what Slovenian law is on this kind of thing? Maybe it’s in a place which specifically, deliberately has law that is relatively producer-friendly rather than relatively consumer-friendly. But the important thing is, generally speaking, the creator is going to be subject to a negligent standard, or, again, it doesn’t have to be the creator, it could be the user, it could be whoever it is who contributes to this.

Now, one difficulty, of course, is often in trying to figure out what is negligent. What if the AI does have some capacity to manipulate things, and experts come to the stand, and they say, “Well, they did as good a job as they could have, we think.” Will the jury believe that it wasn’t negligent, or will they say, “No, no, surely you must have been careless in not anticipating this particular harm.” Interesting question.

There’s also the question of what if the AI only provides recommendations? Does the First Amendment provide some sort of defense against a negligence cause of action in the absence of a knowing or reckless falsehood – the libel actual-malice standard and such? So, those are interesting questions, but in principle, again, I think we need to look to the people behind the AI – whether, again, in its creation, or its adaptation, or its use – and not to the AI itself.

David: Yeah, I agree. I do think the answer here is the nerdy lawyer answer, that it is going to depend on the mens rea of whatever the tort claim is and whether that’s going to be a negligence claim, or, as we often have in free expression cases, a higher, more demanding mens rea standard, a subjective intent standard.

And then, to what extent any act is going to be a negligent act is really going to depend on that particular AI, that tool at that moment in time, and what the known risks are, and all the context about what the user knew about the tool and its propensity to give wrong answers or say harmful things. I do think it will end up playing out that way. We’ll be looking at this just as a standard mens rea problem.

Nico: I wanna ask about fraud and misrepresentation. I’ve seen some futurists posit online that we’ll be able to eliminate a lot of our email inbox by just training artificial intelligence on how we typically respond to emails and having it go through and respond for you. Do you think there are any concerns about fraud or misrepresentation there?

Another example protected under the First Amendment is the petition of government for a redress of grievances. I’m just thinking here about activists at organizations, not unlike FIRE, who might train artificial intelligence to make it seem like there are more activists in support of them who write and call their congressman or -woman with unique emails generated by AI, or even unique voicemails that are left at the congressional office generated by AI, but it’s really just one organization or one person trying to – it’s kind of like the bot problem that you have on social media.

Alison: Yeah, I feel like this exists. There are nominally –

David: Yeah, SBC, I think, had leveled some accusations about it.

Alison: Yeah. I think this exists, this is just a more efficient version of “Here, we have a bunch of letters, just sign your name here and we’ll send them all out.” It’s slightly different because there is a human attached to each of them, but in terms of being organized by a central organizing force, I think, is not like a totally new issue, it’s just probably the volume.

David: And I think it’s totally possible under the law for someone to commit fraud through the use of an AI tool. There’s nothing in the law that I can think of that would bar liability because the fraud was committed through the use of an AI tool as opposed to any other tool, so I think it’s certainly possible, and there’s probably lots of examples, but I don’t see any obstacle to that.

Alison: I don’t think I would trust AI to respond to my email, certainly not at this point, certainly not as a lawyer.

Eugene: So, all that sounds right to me, but let me point out a couple of things that I think are implicit, Nico, in your question. One is what if we’re not after what would normally be actionable fraud or misrepresentation, like somebody signing “Eugene Volokh” and it’s not actually me, it’s actually an AI. That might be – in some situations – fraud. But what if it’s an unsigned letter and it looks like it’s a human, but it doesn’t say that, and maybe it’s not reasonable to just assume that it’s a human who’s sending it.

So, what about disclosure mandates? What about a law that says any email sent by an AI has to be flagged “sent by an AI,” which, again, means that any email that a human authorizes to be sent by an AI has to have this disclaimer? Is that impermissible speech compulsion – again, impermissible violation of the rights of the human who is using AI to create this – or is this a permissible way of trying to prevent people from being misled?

A second related question relates to the fact that there is a right to petition the government, but there is no obligation as a constitutional matter on the government’s part to respond to the petitions. So, for example, if you were a government agency, you might say, “We’re not gonna prosecute you for sending us AI comments on some rule or some such. If you wanna do it, that’s fine. You have every right to clog our inboxes.

That’s not enough of a harm to justify punishing that – at least, unless it’s the equivalent of a denial-of-service attack – but we will ignore, we will just not pay any attention to, anything that doesn’t say at the bottom, ‘I certify as a human being that this was written by me, signed, the name of the person.’” And then, if I send that certification through an AI, then I am possibly committing, for example, the crime of making a false statement to the government on a matter within its jurisdiction. That’s 18 USC Section 1001, perhaps.

So, it may well be that the government and others will have to set up similar such rules to say, “Look, I’m only going to respond to messages that aren’t from AIs.” More broadly, you can imagine email facilities that actually do say, “Look, at least with things sent by people whom you don’t know, one feature we will offer our users is the option of saying ‘block all material that isn’t certified as being from a human’ because the last thing you want is your email box clogged by all this bot mail.” And if that’s so, then, again, somebody bypassing that by false certification would be committing fraud.

Alison: I think related to this is how people might solve the problem, because the problem with all of this is the generation of junk – the creation of junk mail, the drowning out of real people in the cacophony of speech created by nonhuman means – and I think what’s going to happen, potentially, is systems that place a premium on verification. Not necessarily somebody clicking a box, but maybe you’re holding more town meetings, you’re holding more hearings in person, in a way that can’t be gamed as much.

It can also mean that, if you’re making policy, maybe you’re not reading the comments, and you’re really talking to the stakeholders whom you know, and that’s kind of how a lot of laws have been made for a long time. Maybe that’s not so different from what’s already going on. I’m not sure how diligently every random person’s letter to the agency is being read, as opposed to the briefs – the papers that are submitted by people they know, who have connections and have the ability to go in and push for their position.

So, I think it’s going to exacerbate a problem that already exists, and what we might lose, potentially, is some of the democratic access that comes with being able to petition the government or show up as somebody who doesn’t already have a way to get in the door, because you might be drowned out in the unverified mass.

David: And you wonder whether the big problem ends up being that the government doesn’t believe that there’s popular support or popular opposition to something because they’re making some assumption that it’s some bot that’s just spitting these things out. “That was a beautiful letter in opposition, probably written by AI, so I can just ignore it.” I get more concerned about lawmakers having some excuse to ignore really, really valuable input because they’ve been convinced that it’s not the work of real humans.

Alison: Well, they’re not convinced, but they understand that they can dismiss it in that manner, not to be cynical. I think I’m the cynical voice on this podcast.

Nico: I have family that work and/or worked in congressional offices, and when constituents or anyone calls into the office, one of the first questions they ask is “Are you a constituent? Are you in this district?” If the answer is no, then they don’t really continue the conversation, but if the answer is yes, they hear the complaints and they log them. And then, for emails, they log all those emails, too. It actually surprises me how many of these offices log every constituent concern or complaint, but the problem, of course, with AI is whether they’re really a constituent.

When you’re talking about text-to-audio, they might sound like a constituent or say they’re a constituent, but that would be – speaking to Eugene’s point – a misrepresentation or fraud that’s already accounted for by the law. I think a big concern there would mostly be the denial-of-service-type thinking that Eugene was talking about. You only have so many people that can answer the phones and so many hours in a day, and if you keep bombarding them with AI –

Alison: Unless you use AI to sort through it. It’s turtles all the way down.

David: Exactly! Although I wonder whether it might be good to think a little bit about human-AI partnerships. So, my sense is that there are a lot of people who might say, “I have some thoughts about this, but I know I’m not the most articulate person. I’m not sure I have the best arguments for this, but I’m going to use AI to create a better thing than I would have myself done, but I will endorse it, or maybe I’ll edit it a little bit, or I’ll review it and endorse it.”

Or there may even be people who will say, “I need to write up a letter about something, so I’m just gonna let it do the first draft,” the way I think a lot of translators use translation software. They realize translation software’s far from perfect, but provides a good first cut, and then that cuts down the total translation time. So, one interesting question is what should we think of that?

Alison: I think it’s good. You have only so much mental capacity in a day, in a week, and saving it for the tasks that are its highest and best use can be good. Within my job, I’m a really good editor, but it takes me forever to write that first draft. Or maybe it’s a way to get a bunch of words on paper – I love when someone else writes a first draft.

David: Right, but let’s think about this specifically with an eye towards the submission of comments to, say, an administrative agency or whatever else. So, on one hand, we could say it’s actually fairer to people who may not be as articulate or may not be as experienced with doing this to use the comment-writing bot, and then they have to review it and then sign it.

On the other hand, if it turns out that a lot of people, as a practical matter, don’t really engage with it – advocacy groups submit little prompts saying, “All of our fans, why don’t you use an AI running this prompt, and then, of course, edit the results as you think is necessary?”, but as a practical matter, people just submit it as is – then you get all the same problems. It’s true, you do have somebody’s at least formal statement, “I am a human and I endorse this message,” but there may not be, practically speaking, that much human judgment there.

The other problem is to the extent we do use AIs to detect AI-written stuff, which, in fact, we academics have been thinking about – whether we can do that to deal with AI-based plagiarism. What if somebody submits a paper to us? How do we know whether it’s written by an AI? Well, we may run it through an AI-based AI detector.

Part of the problem, though, is that presumably, it would detect material that was drafted by an AI and then only slightly modified as AI-generated. If you think such material should be accepted so long as a human endorses it, then you wouldn’t really have the option of using an AI to sort through all of the spam AI-submitted things, because the human-endorsed version looks just like the one that is simply submitted by a bot.

Alison: Yeah. I think we’re not gonna out-computer the computer. It’s always gonna be a race. It’s like encryption. Somebody’s gonna come up with something better, and then it’s gonna get – it’s just gonna be kind of like this – sorry, this is a podcast – this is me saying “one after the other.”

Nico: There is a video component of it too, Alison.

Alison: Oh, good.

David: So, it’s turtles all the way down – you mean elephants all the way up.

Alison: Elephants all the way up, exactly. But there was an op-ed – I think it was in the Times – recently that I thought was pretty compelling. It was taking the opposite view: rather than “Let’s try to stamp out plagiarism,” it said, “People are going to use this. This is a new tool. It’s important to have literacy. Let’s use it – when we give a prompt of ‘write X in the style of Y,’ what is it drawing on? Why is it in the style of Y? What elements of it are reminiscent? What things is it getting wrong?” Rather than this fear of technology, which is old, very old – the radio, the TV, phones, everything, we’re always afraid it’s gonna be the end of the world – maybe we learn what to do with it and what its best use is.

I think trying to trip it up is not gonna necessarily be a productive or reliable thing, as we can’t always know that we have the best algorithm to – and it may be wrong, and so, maybe just assume people might do it and have that not be the point. Learn how to incorporate it even affirmatively into a lesson.

David: Yeah, I think that’s right, and my understanding is that journalists have been experimenting with using ChatGPT to do first drafts of articles, just to see – and I think we have these tools, and we should learn to become comfortable with them, and those of us who are just naturally beautiful writers and have had this advantage over everyone else because we can effortlessly spit out beautiful text – this might be an equalizer. We’re going to have to find our advantage someplace else, right? But I agree, we can’t be running away from these things or trying to impose equities on them that really aren’t necessary.

Eugene: So, as a general matter, I think it’s right that the stuff is coming, it’ll be here to stay, it’ll be getting better and better. We have to figure out how to adapt to it. At the same time, if my goal as a professor is to measure a student’s mastery of particular knowledge, I can’t accomplish that goal now through an essay that they write, especially at home, if ChatGPT can write comparably good essays.

Alison: Or you could do an oral exam still.

Nico: Yeah, that’s how they do it in Europe, right?

Eugene: Right, so we may need to change things to do that. So, in a sense, we’re not running away from the technology, but we are essentially saying that this technology threatens the accomplishment of a particular function. I don’t think the solution is to ban the technology generally. The solution may have to be, though, to change the function so it can use the technology.

So, as to oral exams, I appreciate the value of them. That’s not as good a solution, I think, in many ways, partly because it’s more time-consuming for the graders, and partly because my sense is that oral exams are further away from what lawyers do. Most of lawyers’ work is written, not oral, so oral exams measure things a little bit less well, and there are also lots of opportunities for bias in oral exams that are, in considerable measure, mitigated in written exams. There are other opportunities for bias in written exams, but those are mitigated to a considerable degree.

And I’m not even just talking about race and sex, it’s just appearance and manner. Everybody likes the good-looking and the fun-seeming, and in writing, thankfully, I can’t tell what somebody looks like or how fun they are, I just look to see what they’re actually saying. So, in any event, I do think that the AI stuff is going to be potentially quite harmful to the way we do academic evaluation – again, I agree we shouldn’t ban it, but we shouldn’t also ignore the fact that it should lead us to think hard about how to prevent this kind of cheating.

David: No, I agree, the cheating thing is something we have to deal with. There was the calculator conundrum. When I was a student, when calculators became mass-available, there was this big question about whether to allow their use in class. Ultimately, they exist, and it’s better to actually assess people on their facility with the tool than to pretend the tool doesn’t exist and to require that people have this capability, and then we did the same thing with search.

I remember when I first started teaching, there were questions about whether people could use Wikipedia because it was too easy. One of the things I think we have to do as educators is get over this idea that there’s some nobility in having people do it the difficult way, and one of the things we can do is teach them how to use available tools to do excellent work, right? I understand the assessment has to change – we have to change what we’re assessing, and maybe we’re assessing the output with the use of the tool instead of the output without the use of the tool.

Nico: I was watching an Instagram reel – or maybe it was a TikTok video – the other day of an attorney – maybe a real estate attorney – in New York City who asked ChatGPT to essentially draw up a real estate contract with specific terms, like standard New York language with this force majeure clause and this jurisdiction, and it spit it out in four minutes, and then he split-screened it and went through each term, clause by clause, and he’s like, “This is pretty good. This would save me hours of work, and I’d just go in here and tweak around the edges.” So, he was viewing it as kind of an augment to his work.

And I will say, as I was asking ChatGPT to write up the introduction to this show introducing these guests, saying “every other week, this is the tagline,” it did a pretty good job, but I wanted to add my own kind of language around the edges because it sounded a little bit stilted, and you’ll hear that as I’m asking some of the questions during the show. I asked ChatGPT to write the questions, but I needed to tweak them a little bit so it sounded more authentic.

Alison: Well, and it’s based on what’s been asked before, so it’s gonna sound a little – it’s not necessarily creating a new thing. I wanna also add, maybe as an optimist, just to allay the fears about academic cheating: my husband is a professor and actually wrote what I thought was a pretty interesting article about this in The Atlantic at the end of last year.

He was making a point that these are all free tools right now, and it’s a sandbox, and it’s a playground, and everyone can kind of go and make their college essay on it, but it’s extraordinarily expensive to run these tools, and they’re not going to stay free, and eventually, they’re going to be incorporated into something where there is money, there’s a use case for it, and that’s where they’re going to be used, or you’re going to have to pay for it, and that’s going to also make it easier to tell who is using it. So, I’m not sure that it’s always gonna be the case that anyone can just hop on and use the best ChatGPT generator to generate their – it might be a now problem, but I’m not sure if it’s a forever problem.

Nico: Well, that is actually interesting. I saw an exchange on Twitter about this – because a lot of us see this, and it’s free, and we assume it costs nothing to produce, just like news websites before paywalls – where a smart tech person asked, “Well, what are the server costs associated with each use of this?”, and I thought that was a smart question because it shows that there are limits to how free this technology is gonna be.

Alison: It is apparently extraordinarily expensive to run this, to do it free right now, especially with – but it’s getting a lot of buzz, and people are learning about it, and there’s –

Eugene: And the costs always decline. Remember how, when CD players first came around, I think people would say, “Oh, well, this is just for the rich.”

Alison: Yeah, but it’s the computing cost of it that is very high, so maybe it comes down over time, but right now, it’s expensive, and I’m not sure how long it’s going to just be anyone can screw around with DALL-E.

Nico: So, I have two more topics that I wanna get to because I know we’ve got a hard stop in 10 minutes, and I think David has to hop off here in five minutes, which is okay because he said he didn’t wanna talk about the IP stuff or didn’t have much to add on the IP stuff. We’ll cover that on the last question here. But I wanna ask about deepfakes, and I wanna start by playing some audio for you all.

Video: I am not Morgan Freeman, and what you see is not real – well, at least in contemporary terms, it is not. What if I were to tell you that I’m not even a human being? Would you believe me? What is your perception of reality? Is it the ability to capture, process, and make sense of the information our senses receive? If you can see, hear, taste, or smell something, does that make it real, or is it simply the ability to feel? I would like to welcome you to the era of synthetic reality. Now, what do you see?

Nico: So, that’s actually a video that I saw on social media, and we’ll cut that into the video version of the podcast, but it’s someone standing in front of a camera, and that someone is Morgan Freeman, but it wasn’t actually Morgan Freeman saying those words, and I was talking to one of my colleagues about this, and he says right now, there’s AI-based deepfake-detecting technology that has kept up with deepfake production, so it’s pretty easy – if you have the technology – to determine what is a deepfake, but this looked exactly like – to my untrained eye – Morgan Freeman, and it sounded exactly like Morgan Freeman.

With all new technologies, of course, there’s scaremongering, but we could have a real War of the Worlds-type panic happening as a result of deepfakes, and I imagine none of you would say that this sort of thing would be protected speech, or maybe I’m wrong. It could be fraud or misrepresentation, depending on how it’s used. In that case, the AI-generated Morgan Freeman said, “I am not Morgan Freeman, full disclosure,” but you could imagine a world where they do that with then-President Barack Obama, and people think it is actually him. What are your thoughts on deepfakes?

Eugene: So, this is an important subject. It’s also, like so much, not really a new subject. My understanding is that when writing was much less common, people were much more likely to look at what seems to be a formal written document and just presume that it must be accurate because, after all, it’s written down, maybe it was filed in a court, and so on and so forth, but then, of course, as it became more common – I think we are all familiar with the possibility of forgeries.

It’s true that we kind of grant most documents an informal presumption of validity if they look serious, but if somebody says, “Wait a minute, how do you know it’s not forged?”, I think it’s pretty easy, or very likely, that people will react, “Oh, yeah, right, we need to have some proof, we need to have some sort of authentication.”

Often, in a testimony, someone says, “Yeah, I’m the one who wrote it” or “Here are the mechanisms we have for detecting forgeries and the like.” So, I do think that if somebody puts up a video that purports to be some government official doing something bad, and it turns out it’s a deepfake, I think the person can say, “Look, we all know about deepfakes. This is one of them. I never did that.”

Just like if you were to post a letter that I supposedly wrote, the answer is I didn’t write it. I believe in the late 1800s, there was some forged letter, I want to say purportedly by then-candidate James Garfield, that played a big role in the election campaign, and it turned out it was a forgery, and I think it was denounced promptly as a forgery. One interesting question is to what extent people who really are guilty of what’s depicted in a video will say, “Oh, no, no, it’s all deepfake, I didn’t do that. Why do you believe this nonsense? It’s obviously fake.”

Alison: It’s kind of like the “I’ve been hacked” defense.

Eugene: Right, exactly, the “I’ve been hacked” defense. So, one possible problem isn’t so much that people will believe too much – although it may be that, at some visceral level, even when we know something is fake, the fact that we’ve seen it means we’ll still kind of absorb it, it will still color our perception of the person. The bigger part of the problem may be that people will become even more skeptical for fear of becoming too credulous.

They’ll become too skeptical, and as a result, people will become very hesitant to believe even really important and genuine allegations about misconduct by people, or by governments, or by others. So, I do think it’s gonna be a serious problem. I do think it’s important to realize that this is just a special case of the broader problem of forgery, and if you think of deepfakes as basically video and audio forgery, then I think you see the connection more than if you just sort of have a completely new label for it. In fact, I just came up with this. I think I’ll blog it later today.

David: I agree, and going back, one of the reasons that libel was a more serious offense than slander was the inherent reliability of the written word, and fortunately, I guess, the common law has had a thousand years of creating a series of legal remedies based on falsity, whether that’s damage to reputation, or emotional distress, or whatever, and I do think in terms of legal frameworks, we look to see whether those remain sufficient for this new type of false statement. It’ll be interesting, but I agree. I think, societally, the idea that maybe we just don’t know what to believe anymore is going to be the much more difficult thing to get used to than the tort law.

Nico: Well, that’s one of the things you see in societies where there’s – I read Peter Pomerantsev’s book Nothing is True and Everything is Possible, which is about the state of modern Russia, where they just flood the zone with shit and nobody knows what to believe anymore, and so, as a result, they just become cynical about everything. You could see that sort of situation happening, where people –

Eugene: To be fair, Russians – we’ve been cynical about everything for a very long time.

Alison: Not to be a media lawyer, but let me put in a plug here. One way out of this is excellent journalism, because I think media literacy is important, and not just believing things because you see a picture of them is not necessarily a bad thing. You can authenticate; you can say, “Here’s this thing, here’s what we did. We talked to this source, we examined XYZ.”

Showing and explaining why you feel comfortable reporting something and why you think it’s authentic, or why you’re not sure – I think that’s helpful, and good journalists can do that. It’s okay to have some healthy skepticism about audiovisual sources like this. I don’t think that’s necessarily a bad thing, and I think explaining and showing people how to evaluate them is good media literacy and good journalism.

Eugene: So, I think that’s right, although one problem is this issue has come up at a time when my sense is people are much more distrustful of the mainstream media than ever before, with good reason. I think we would need to regain a notion of an ideologically impartial media to do that, not to say that the First Amendment should only protect ideologically impartial media.

I think ideologically partial media are fundamentally important – that is, ideologically one-side-or-the-other media are a fundamentally important part of public debate, but when you’re getting to questions about basic fact, like is this real or this not, and people are afraid, “Oh, well, maybe the reason that they’re not investigating this is because they have some agenda, some social justice agenda, let’s say, or some traditional values agenda that’s keeping it from doing it,” or when they say it’s fake, is it being colored by their preferences?

Part of the problem is that those are really serious concerns – bye, David – and today, I think they’re much more serious than ever, at a time when we need impartial media more than ever.

Nico: David, we appreciate you joining us. I know you have to run.

David: Yeah, I’m sorry I have to run. I have my canned answer, also, for the IP stuff if you want me to say it so you have the recording.

Nico: No, that’s okay. I think we’ll probably cover what you were gonna say anyway. The question as to IP – and David, if you need to hop off, I’ll let you – is, and this is an AI-generated question right here: artificial intelligence has the ability to generate original work, such as music, art, and writing, and some have raised concerns that this could potentially lead to violations of intellectual property laws.

So, what are your thoughts on that? You say, “We want this written in the style of so-and-so,” or there’s this thing going around social media where DALL-E generated images of Kermit the Frog or Mickey Mouse, or there’s this Al-Qaeda-inspired Muppet that has been going around and is kind of burned into some of my colleagues’ brains. How do we think about that? I’m assuming what you’ll say is we already have a legal framework for addressing that – fair use –

Alison: Or substantial – or copyright – to me, it doesn’t so much matter how you came up with the thing that looks exactly like the copyrighted work. If you are distributing it and doing one of the things that is covered by the Copyright Act, then I don’t think it necessarily matters if you used a brush to make it or if you used a computer to make it. We have a framework for that.

David: Yeah, I think that’s right, and I’ll give you my last bit before I hop off. In terms of outputs from AI, sure, AI could spit out potentially infringing materials the same way that any other tool could. The more difficult question – or, I don’t think it’s difficult, but it’s an interesting question – is about the training of AI tools, using copyrighted images for training. I certainly think that using copyrighted images as inputs for training purposes – using them to train an AI tool – is a fair use of those images, but then, the output certainly could be infringing, and you would have to look at each individual output to determine whether it was or wasn’t.

Nico: That’s an interesting question there, David. I hadn’t even thought of that.

David: Now I have to go.

Nico: Well, we’ll let you go.

Alison: I’ve gotta go in five.

Nico: Yeah, Alison has a hard stop in five minutes. Okay, so, you’re using a copyrighted work to produce a commercial product, right? I think of when I’m going to USAA, my insurer, and we’re talking about what I need to insure with my home, and they say, “I can’t look at your home on Google Street View because we haven’t created a license with Google to be able to look at your home through that product.” Eugene’s looking skeptical.

Eugene: That’s a strange thing for them to say, I think, although who knows?

Nico: That’s what they told me. It sounded strange to me.

Eugene: People say all sorts of things.

Nico: I said, “I have a split-level home. You can go on Google Street View and look at it.” They’re like, “No, we can’t, because we haven’t licensed that technology to use in our insurance underwriting business.” I’m like, “Okay.”

Alison: Maybe that’s a liability issue.

Eugene: Maybe there’s some terms of use that we don’t pay attention to in Google Street View, but I will say – so, while I agree with what people have said generally, I do think there’s gonna be some important legal questions that are different, so let me give you an example.

So, the Supreme Court, in the Sony v. Universal case, held that VCR manufacturers couldn’t be held liable for copyright infringement done using the VCRs because the VCR is just a tool, so you could say, well, likewise, AI developers shouldn’t be held liable for copyright infringement done using their tools – if, for example, you run the AI and then use the output in an ad or some such.

Alison: As long as there’s a substantial non-infringing use.

Eugene: Right, right. But it’s possible that the analysis might be different for AIs. You might say, well, first, we can expect more copyright infringement detection, as it were, from AIs than we could from just a VCR. Another thing is the VCR manufacturer had done nothing at all in developing its VCR that used anybody else’s copyrighted work, so it was only the use that might be infringement.

Maybe you might say – I’m not sure this is right, but you might say if you are using other people’s copyrighted work in developing, essentially, and training your AI, then that is a fair use, but only if you then also try to make sure that you’re preventing it from being used by your users to infringe. Of course, there’s also the complication that it may be that a lot of users’ stuff will be just kind of for fun, and maybe it will look exactly – it is Mickey Mouse, and it’s just for home use, for noncommercial use, maybe that’s a fair use, whereas you put it in an ad, it’s not a fair use, and the AI may not know what the person will ultimately use it for.

So, those are interesting questions, but at the very least, I think we can’t assume that all the existing doctrines, such as contributory liability, will play out in quite the same way. One other factor is copyright law, unlike patent law, does provide that independent creation is not infringement. So, if they create something that happens to look just like a Picasso – just happens to look just like a Picasso – that’s not an infringement, but of course, you might say if the training data included the Picasso, maybe that was fair use at the outset in the training, but now you can no longer say it’s independent creation because, after all, it’s not independent. It’s very much dependent.

Then, what happens if you deliberately exclude Picasso from there, but you end up using all sorts of other artists who were influenced by Picasso, maybe even including that they had some sort of infringing elements, but that nobody sued over? In any event, I do think this will raise interesting and complicated questions because the existing doctrines have been developed in a particular environment that’s now shifting.

Alison: I’m also gonna throw in one more thing, and then I do have to drop – to take the practical-lawyer angle on this, one thing that I see as affecting the way this may play out is that the Copyright Office – at least so far – has refused to register copyright in works that were solely generated by AI, unless there was substantial human involvement, and that’s gonna affect what you can do with these kinds of works. Rather than using an artist whom you can hire and license the work from, or do it as a work-for-hire, if you use an AI, you have no ability to copyright the output, and then it’s not necessarily gonna be of valuable commercial use if you want to be able to protect, in a commercial sense, what you’re using the AI to create on the other end.

Nico: Yeah, that was kind of the flip side, right? Who gets to copyright works produced by AI?

Alison: Yeah, it sounds like nobody right now.

Nico: Well, I know Alison has to drop off, so if she needs to drop –

Alison: This was really fun.

Nico: Yeah, that was fun, Alison. And then there was one, Eugene. I’ll let you finish up and give your thoughts.

Eugene: Sure. So, I’m not terribly worried about people not being able to copyright AI-generated works – that is to say, users not being able to copyright them. It’s an interesting question whether they could, based on their creativity in creating the prompt, but let’s say they can’t.

The whole point of copyright protection is not that copyright is valuable in the abstract, it’s that it makes it possible for people to invest a lot of time, effort, and money in making a new movie, or writing a novel, or whatever else. If indeed it’s very easy for you to create a work, we don’t really need to stimulate the creation of that work through copyright protection – very easy for you, that is, the user. It may be very difficult for OpenAI to do it. That’s a separate matter.

So, to be sure, copyright law does indeed protect even things that are easy to create, like I can write down an email that’ll take me half a minute and no real creative effort. That email is protected by copyright law, but that’s a side effect of copyright law generally protecting textual works that people write, which is motivated by a desire to have an incentive to create. If indeed a picture is easy to create with just the relatively modest effort required to select a prompt and then sort through the results, not such a big problem, I think, if that’s not copyrightable.

Now, for commercial purposes, it may be important that the result could be used as a trademark, essentially – I oversimplify here, but basically, if I create a logo using OpenAI, I should be able to stop other people from selling products using a very similar logo, but I think trademark law would already do that. Trademark law already protects things that are not protected by copyright.

Nico: Do you think – Alison said, as someone who works in the copyright space, that the government isn’t copyrighting anything produced by AI. Do you think eventually, we’ll get to a place where it will?

Eugene: I’m sorry, that the law does not provide for this protection? You said “the government.”

Nico: Yeah, what is it? It’s not the patent… What government agency issues copyrights? And I should know this because I have some.

Eugene: There is no government agency that issues copyrights. A work is copyrighted when you write it down, when you fix it in a tangible medium. You can write an email, and it’s protected by copyright the moment you write it, at least under modern American law. Now, before you sue, you have to register it, but that’s just a condition of filing a lawsuit; it’s also a condition of getting some remedies.

So, the question isn’t so much if somebody is registering these copyrights, the question is whether the law offers this kind of protection. Do we say that an AI-generated image is protected? And there, the question is to what extent does that reflect the expression provided by the supposed author? So, if I just say, “Show me a red fish and a blue fish,” at most, what I’ve provided is the idea of having a red fish and a blue fish. That’s not enough to be expression.

On the other hand, if I were to give enough details, then it may be that it’s protected, at least as against a literal copy that includes all of the details that I’ve asked for; a copy that doesn’t include them might not be infringing. So, I do think there’s gonna be some degree of protection if the prompt is sufficiently creative.

Nico: Well, I think, Eugene, we should leave it there. It’s just left to the two of us. A lot of…interesting thoughts to chew on, and I imagine we’ll have to return to the subject in the next couple of years because there will be litigation surrounding artificial intelligence and the First Amendment, but thanks for spending the time today, and I hope to talk with you again soon.

Eugene: Likewise, likewise. Very much my pleasure.

Nico: That was Eugene Volokh, a First Amendment scholar and law professor at UCLA, David Greene, the senior staff attorney and civil liberties director at the Electronic Frontier Foundation, and Alison Schary, a partner at the law firm of Davis Wright Tremaine. This podcast is hosted by me, Nico Perrino, produced by me and my colleague, Carrie Robison, and edited by my colleagues Aaron Reese and Ella Ross.

To learn more about So to Speak, you can follow us on Twitter or Instagram by searching for the handle “free speech talk,” you can like us on Facebook at Facebook.com/SoToSpeakPodcast. We also have video versions of this podcast available on So to Speak’s YouTube channel and clips available on FIRE’s YouTube channel. If you have feedback, you can email us at sotospeak@thefire.org, and you can leave a review. Reviews help us attract new listeners to the show, so please do, and you can leave those reviews on any podcast app where you listen to this show, and until next time, I thank you all again for listening.
