Table of Contents
‘So to Speak’ podcast transcript: ‘Robotica: Speech Rights & Artificial Intelligence’
Note: This is an unedited rush transcript. Please check any quotations against the audio recording.
Nico Perrino: Welcome back to “So to Speak: the Free Speech Podcast.” I am your host, Nico Perrino. And today I’m joined by two exciting guests in FIRE’s D.C. Office. They got through the gauntlet that is the security in the front lobby.
Listeners will know our first guest here, Ron Collins. He’s a distinguished scholar at the University of Washington School of Law. Welcome back, Ron.
Ron Collins: Glad to be here.
Nico: What is this, your like third or fourth or fifth podcast?
Ron: Listen, as long as you have me, I’m always happy to be here, so –
Nico: Well I don’t think I’ve ever put you through security like that before.
Ron: I know. We just came from Davis Wright Tremaine, and they put us through quite a lot, but it was exciting.
Nico: We’re also joined by David Skover. He is a professor of Constitutional Law at Seattle University School of Law. It’s good to finally meet you, David.
David Skover: Oh, thank you very much, Nico. The same here.
Nico: So we have you both here today because you are the authors of a new book called Robotica: Speech Rights and Artificial Intelligence. Now am I right in counting this as your seventh book together?
David: I think it is actually our ninth book together, because we are in the process of writing our tenth, so –
Ron: Our tenth book, on Lawrence Ferlinghetti and the prosecution of the Howl poem, comes out in March from Rowman and Littlefield. Available on Amazon and fine bookstores throughout the country.
David: That was a little bit of commercial speech, in case you didn’t get that, Nico.
Nico: Well how did this collaboration between you two start?
David: Oh, a very long time ago, in the early ‘80s. My school was doing a symposium on state constitutional law. And the Law Review asked me, “Who should we get? As the – to give the foreword to our printed symposium – ”
Nico: And you can’t tell me they had recommended Ron.
David: I recommended Ron because he was the only really well established scholar in the area at the time. And I said, “I don’t know if you can get Ron Collins. I don’t know him personally, but try. Because he’s the guy. He’s the one who should do the keynote.” And they asked and he agreed.
And so that’s how we met. He came to my school in Seattle, and then he came back the year after to teach with me for a year.
So that is how we at first established our friendship. We decided to write together. We wrote a piece called “The Future of Liberal Legal Scholarship” and that was our first article together. And it’s been a very, very productive and prolific writing relationship ever since.
Nico: Yeah, that’s to say the least!
How does – how do you two approach the co-authoring process? I’m always very curious about just sort of the functional approach to doing this sort of thing.
Ron: Well we do it differently, I think, than most folks. First of all, we do an inordinate amount of research at the outset. We read a lot. I mean, virtually everything on the subject. So we have our research assistants and our librarians help us out in terms of doing the research.
And then after we read everything, typically David will prepare what we call “nuggets,” you know, important thoughts in various articles. And then it’s all in a binder. We’ll then read through the binder. Then we’ll make an outline. And then I’ll either visit David or he’ll visit me and, literally, this is how it works; David is behind the computer. I’m usually standing, walking around. I may start a sentence, he can finish it. Or he’ll start it and I finish it.
And so literally word-for-word, paragraph-for-paragraph, page-for-page – I must say, though, there are times when I get a little worried when I’m spouting a sentence and he’s not typing.
David: Yeah, Ron will say, “David, I don’t hear any keys clicking.” My response is, “Ron, I hear nothing worthy of clicking.”
Ron: So it may not be the most efficient system, but it certainly works for us.
Nico: Well it’s a closer collaboration than I imagine most co-authors have.
David: Oh, by far.
Nico: You’re in the same room together.
David: Yeah, by far. Most people will break a book up in chapters and “You do this chapter, I do that chapter.” But the – and our very, very first project was done that way, but –
Ron: It wasn’t efficient.
David: It – well, when we got together, we realized it had two different voices and it did not read as a piece. We had to do the entire thing over again.
We were at his parents’ home in L.A. and we wrote – had to write the entire thing over again.
And from that point on we decided, “No, no. We need to be in the same room.” And eventually our minds, you know –
Ron: Meld –
But you know, it really works – as I said, one of us can begin a sentence, the other can finish it. And that’s really how much we are in sync –
Nico: But on the Lenny Bruce book, that’s like a – over a thousand-page book.
Ron: Well, yeah so – but just a couple things before we get to Lenny Bruce.
But, so I think more often than not we’re in sync. Rarely have we got stuck where we just can’t – you know, we have to take a break and go out for a walk or something.
But we do check each other. So sometimes we’ll be in the middle of an argument and – or discussion, I should say – and one of us will say, “You know, there’s a problem with this.” And we’ll talk it out and stop typing, what have you, then come back.
And it works for us. So we are really offering each other a critical perspective as we’re writing the piece.
And then we – once it’s done in draft form – we come back to it, again word-for-word. Usually David will read, start reading, and then I will; maybe after he reads for ten minutes I will start reading. So we not only want to know that what we’re writing is good for the eye, but also good for the ear. And so that, we think, improves our game.
David: And frank –
Nico: And the Lenny Bruce book?
David: Oh, every book was done this way. So it doesn’t matter how long it is. All you’re asking, really, is how long did it take us? There are some books like Lenny Bruce and Mania that took years. Multiple visits. I came out – when we were writing Mania, I spent three weeks –
Nico: Well what’s Mania about?
David: Oh, Mania was the story of the great Beat writers from – well, really starting at the point at which they were doing their most creative and dynamic write –
Ron: Allen Ginsberg, Jack Kerouac, you know, those folks –
David: Just when Jack Kerouac was writing On the Road. So we don’t go back and talk about Jack Kerouac’s early life.
So it was – it went from that to the Howl trial. The trial in – at which Lawrence Ferlinghetti was accused of obscenity for publishing the great Ginsberg poem Howl. And of course that was a great First Amendment victory.
So the book is pitched towards the obscenity story, but there had to be a lot of background given.
Now for that, I spent three weeks in D.C. with him. I’d go back to Seattle for two weeks just to rest –
Ron: And in our youth we were able to write 17 hours a day –
David: A day. And then I’d go back for three weeks and –
Ron: Now if it were – now that we’re older, nine hours or ten hours is a good day.
Nico: So let’s talk about this book now, Robotica: Speech Rights and Artificial Intelligence. It’s a short one, by your guys’s standards –
Ron: But a lot of thought went in to it.
Nico: Yeah, 63 pages, I think –
Ron: Sometimes the hardest books are the ones that you really – there’s only – Well first of all, this is the first book that’s been written on the subject of robotics and free speech, so we’re entering the field where it’s not really crowded. So a lot of original thought has to go into it. And we try not to be about volume; we try to be more about exhausting an idea. And we were just earlier today at Davis Wright Tremaine with a group of lawyers, kind of going back and forth about a number of discussions.
And when you write a book like this, particularly if you’re the first out of the gate, you have to understand that it’s really a work-in-progress. I mean, this book, a lot of it was written a year, year-and-a-half ago, which in technological terms is a long time.
But – so – really a lot of these arguments were just crafted out of whole cloth. They really didn’t depend on an enormous amount of research. It required us to familiarize ourselves with the technology, and I think that was really the big part –
David: And when it comes to the First Amendment argument, that I know you’ll want us to talk about, it also required us to think very, very hard about whether or not there was any existing First Amendment doctrine that supported our relatively novel theory.
So, in that respect, there was research. But Ron is quite right; this book is filled, every page is jam packed with ideas. So the length of the book is not indicative of its significance. It is so thickly contemplated and discussed that it's a relatively slow read. And our audience should be grateful that we didn’t have a much, much longer book for them, because this is not the easiest of reads.
Ron: And besides we wanted to keep it pretty much within the same parameters of Milton – John Milton’s Areopagitica or John Stuart Mill’s On Liberty –
David: Which were very short.
Ron: And the other thing, too, is although we’re dealing with a difficult subject, we try in every one of our books to write with a certain degree of clarity. To use narrative whenever we can. And to really make the sort of arguments that we’re making – as difficult as they may be – as accessible to a wide range of people as possible. So we don’t feel any compulsion to write in turgid ways in order to speak to the poseurs of the academy. So that you’ll not see.
Nico: Well the book couldn’t have come at a better time, with the debate over 3D guns, right now. Now your book doesn’t just talk about artificial intelligence, it also talks about robotics and how artificial intelligence plays into there. So what’s your take on the 3D guns debate that’s occurring right now?
Ron: Well for one thing, the 3D guns, so – in other words, you make available, on the internet, code as to how to produce a plastic gun that can get through –
Nico: Through these 3D printers.
Ron: Through the 3D printers. They can get through a lot, if not all security systems and what-have-you.
What’s fascinating about this controversy, and it just illustrates an important point that’s in Robotica; and that is, really what this case is all about, first and foremost, is the relationship between this technology, all right, that allows these guns to be made, and the Doctrine of Prior Restraint.
The Doctrine of Prior Restraint comes into play with the question: Can the government prevent this individual from distributing these codes and information as to how to do that? And although, in the short run, the First Amendment claim has lost, the federal judge hearing the case has said that there are some significant First Amendment arguments that come into play.
Again, I think the big takeaway point is; when we have new technologies, how do those technologies change our vision of the world? Change our vision of harm. Change our vision of law. Change our vision of our culture. It may be for the better; it may be for the worse. But when you have these new technologies, this is part of what they do.
Nico: Well this isn’t so new. I watched a documentary on Netflix recently about the man who wrote The Anarchist Cookbook, which is a book about how to make pipe-bombs and Molotov cocktails. And that’s been out, available on the internet since the internet was born, because the book was written in the ‘60s or ‘70s, I think.
Ron: The idea might not be new, but the execution is, alright? So there were always a limited number of people that could do that. Now there are millions of people that could have access to this information and run with it.
It’s not that these ideas didn’t exist before, but it’s now that they’ve been, if you will, energized by the availability of the internet; energized by the availability of machines that can produce these sorts of things.
Nico: Well I think there have been some think-pieces going on, on the internet, saying, “Well, if the government wanted to narrowly tailor its regulation of these 3D guns, then it should regulate the printing of the gun, not the distribution of the code.”
David: Precisely. And that, I think, is going to be the way eventually this is going to be taken care of. Because – I mean, there are many famous cases where dangerous activity was being described and the government tried to go and censor the speech, rather than to go towards the dangerous act that the speech may have been enhancing or enabling.
A more recent case is the Hitman case. In that instance, the publisher of the book Hitman, which was a manual for how to become a hitman, how to perform hits, right? The publisher wanted to push the First Amendment argument as far as he could. And with the support of many book publishers behind him who said, “We will finance your penalty if you are found guilty.”
Nico: And they do that often. They did that with Satanic Verses as well.
David: That’s correct.
Nico: They came together.
David: So they came together in this consortium to defend him.
Because he had that, he was enabled to make the argument that there was scienter.
Now let me explain. He was saying, “I published this for – with knowledge and intents, to enable these crimes.” Most publishers would never admit to this. And essentially he was saying, “I was an accessory to the crime. I was handing this person – wanting this person to use this information in order to kill.”
Ron: Just twist the –
Nico: So that was the intent –
David: That was the intent. And that’s what got him. That’s what got him.
Ron: So two important points: 1.) It was not a prior restraint case and 2.) It was a damages case. By the way, a prominent First Amendment persona represented the plaintiffs in that case. And that’s professor, now Dean, Rodney Smolla. But –
David: So that’s an unusual situation, right? And in the context of something like The Cookbook you were talking about, there was no such intention or admission. And there would be much, much more First Amendment protection in a context like that.
Nico: So let’s get into the types of artificial intelligence before we talk about how we should think about those types of artificial intelligence.
So you describe two sorts of intelligence; first order and second order. Others have called them strong or weak intelligence, or general or narrow intelligence.
The first one is sort of a functional intelligence, first order intelligence. It’s the idea that you write code to tell this computer or this robot to do something and then it executes your wishes.
The other one is a more general, second order intelligence in which you write the code, and then the intelligence sort of takes on a mind of its own, and can, more or less, think. Not thinking as a human would, but – it could potentially get there – but generate new ideas, new evaluations –
David: I would say that, in the first order robotics, the robot is really seen as a direct agent of the human creator. In the second order robotics you have computers that are – or robots that are self-learning and self-correcting. And the foreseeability of what the robot will be doing will become more distended.
Nico: So what are the arguments for why, if at all, AI should be regulated? And I ask that by trying to understand what you’re responding to, in writing this book. Was there any impetus for it? Did you hear arguments for censorship? You spend the first part of the book going through the whole history of communications technology, more or less. So was the idea being, “This is a new technology. The regulators are coming for it. We’re only at the beginning stages of it. We should start thinking about these things now.”
Ron: If you go back to antiquity, when we start, at the time of Socrates, any tech – any new communications technology that has great utility, inevitably will create some corresponding harms. And inevitably there will be calls for censorship –
Nico: For instance?
Ron: Well, for example, let’s take the invention of the Gutenberg press. The Gutenberg press was a great thing if you were Protestant, but if you were Papal Catholic it wasn’t. The whole idea of starting to regulate the press and having to license printing came precisely because the press was seen – the press, that is, I mean the technology – as a clear and present danger to certain values.
Now originally those values related to religion, but inevitably they came to relate to politics as well. I mean the reason why the First Amendment protects the technology, and I emphasize technology, not an institution – Eugene Volokh has written an incredible article in the University of Pennsylvania Law Review pointing out that if you’re an originalist what was really being protected was the technology, the press. And there was a reason for that.
And so basically what David and I try to do, going back to Socrates talking about scribality and how, you know, he was – he opposed the idea of the invention of writing, because he said “Writing gets rid of face-to-face, one-on-one communication. It’s a bad thing,” right? And he makes a strong argument for that.
I mean, one of the flies in the ointment there is that, had Plato not written it, we’d have never known.
So, if you just start with the premise that whenever you have a significant new communications technology that has great utility, it’s only a matter of time before the censorial hand appears.
Nico: Because it upsets the established order.
Ron: Or some established order, right.
David: But your question went to the issue of are there – because we’re at the beginning, and there’s not a lot of litigation – are there really people out there who are arguing that the First Amendment shouldn’t cover robotic expression at all –
Nico: Yes, we had the code and speech debate in the ‘90s and early 2000s –
David: Right. And the answer to that is yes. Three fine examples of academics, legal academics, who are arguing that there should be no First Amendment coverage whatsoever for robotic expression because robots are not human and they don’t have human intentions; I would name, first and foremost, Tim Wu, who is a Columbia University Law School professor. But along with him are Oren Bracha who is at the University of Texas Law School, and Frank Pasquale at the University of Maryland Law School.
All three have written articles, newspaper op-ed pieces, and advised the government as well, as to their unlimited power to regulate robotic expression, because robots aren’t humans and they do not get First Amendment protection for their speech.
So these arguments are out there. They – and I believe they’re going to be used. There’s no question about it. But you’re right. We are at the beginning of this field of robotic agents who are expressing speech that may be considered threatening. And whether or not the government decides to regulate it in the future in any significant way has yet to be seen.
Nico: So let’s make this tangible. I believe there was a court decision where the judge said – and this might have been a Supreme Court decision, I can’t remember – where the ruling said, “A recipe doesn’t lose its protection because an oven is required to make the good – the baked good. And music doesn’t lose its protection – music on a sheet of paper doesn’t lose its protection because a guitar is required to bring that music to life.” Would Tim Wu and the others get on board with that, do you think?
Ron: Well let me put this –
Nico: Because I have a hard time, in my mind, drawing a distinction between that and saying, “I wrote this code, but because a computer is required to bring it to life, it therefore shouldn’t have the protection.”
David: But you have to admit that there is – that the cookbook can’t really change its recipe. There’s no independent agent, there’s no independent process by which the cookbook alters itself.
Nico: Well not – same with artificial intelligence, at least first order artificial intelligence –
David: First order artificial intelligence is much more limited, but second order – it’ll be another day, right? So this would be, second order would be, an electronic cookbook realizing, “Oh my God, this really should be only one teaspoon of sugar and not two,” and then changing itself, okay? That’s the second order.
Nico: Well I don’t want to get too far ahead of ourselves, but let’s say that the AI decides that this shouldn’t just be one teaspoon of sugar, it should also be, like, one teaspoon of sugar plus one of – name your date-rape drug. Then who would be liable there?
Ron: So first of all let’s just keep in mind that utility is a really important aspect – focal point of our discussion. So let’s just – so we’re clear for your audience; the first question is, when you talk about robotic expression, is it even covered under the First Amendment, alright? Not is it protected, but is it covered? Is it speech within the meaning of the First Amendment?
And the main portion of our argument, speaking to the critics that David has mentioned, is to establish a case that it is covered by the First Amendment. That really is where a lot of the heavy lifting is.
The next question is; if it’s covered, how do we determine if it’s protected or not, alright? And in that regard utility is extremely important. And given – go ahead.
Nico: But before we get there, when you say covered you mean, do you need to bring a First Amendment analysis to bear on the question. That’s what I want to know.
Ron: So for example, if I were to punch somebody in the face, that’s not even speech within the meaning of the First Amendment. If I was to run somebody over with my car, that’s not even speech within the mean – alright? We don’t get to the question of whether or not it’s protected because it doesn’t even come within the ambit of the First Amendment to begin with.
Nico: Got you, okay.
Ron: So, having established, as best we can, the argument for that – and more to be said about that in a moment – then the next question is; how do we determine whether something is protected? And, at least for many decades, both the courts and academics have developed various theories to determine when speech is protected. Really high-minded theories. Self-Realization, Self-Governance, Democratic Rule –
David: Marketplace of Ideas.
Ron: Marketplace of Ideas, the Checking Function, what have you. But what we’ve witnessed in First Amendment law over the decades is that these various doctrines, when they’re applied to modern situations, have taken on a very extended and attenuated, if you will, portrayal. And that in many respects what’s going on is a form of hypocrisy.
It may be that pornography does many things. But to say that it adds to democratic government is a bit of a stretch.
So when we wrote this book we didn’t want to take some high-minded ideal and stretch it to the breaking point in order to reach our arguments, in order to address our arguments. We thought it better to take a realist approach. And that is to say, “What is really driving the idea that we would even protect this to begin with?” And that is utility.
And in very short form; if you think about the printing press, if you think about telephone, if you think about television, if you think about the internet, if you think about cellphones, and what have you, the utility is so great that often that utility changes our notion of harm. Changes our notion of value. I mean, just think about how Generation Z people think about privacy when it comes to social media, and how people over 50 think of it. It’s radically different.
So we think that, at least for a starting point, the norm is utility, alright? That – so what we do is, you balance the utility of the value received against the harm it creates. The greater the utility, the greater the harm is gonna be. The greater the harm, the greater the utility is gonna be.
And so often that’s very contextual, but that’s just a very general approach. David?
David: Yeah, I mean in many respects – and Ron is right, to get to our argument of First Amendment coverage, we have to talk about something entirely different from utility. Because utility is only a norm that we use –
Ron: For protection.
David: For protection. For the question of protection. But so that your audience understands, there is no need to go to the question of protection if the government can regulate something because the First Amendment doesn’t cover it at all.
And so for us, we – our first and foremost objective in writing this book was to contest the arguments of people like Wu and Pasquale. Which are, “Well, you are unconstrained in your regulation of robotic expression, Government, because robots aren’t human. Robots don’t have human intentions. Therefore the First Amendment doesn’t apply at all.” So our main focus is to address that argument.
Nico: Well James Gimmel – Grimmelmann, in his criticism of you said your argument, more or less, would result in speech eating the world.
David: And we say, “Not at all.” Not at all.
Nico: And the example he provides is, he says, “Well, turning on a – there’s turning on a lightbulb and then there’s Morse code. Both requires a message to be sent from the human to the piece of robotic or machinery. But one has an expressive intent, the other one is purely functional.” And your response is?
David: Well, would it make one hoot of difference to Grimmelmann’s argument if a robot had been standing in Old North Church sending signals about the British coming? Would we have cared? No, not at all.
Nico: But that question rests on whether it’s a human or a robot sending the message.
David: Well that’s right, but his point about a lightbulb and light being used as a Morse code is something we fully accept. A lightbulb by itself is not considered speech. But when I use the lightbulb to send the signal, or in this case it was – in the case of Paul Revere or whatever it was, right – it was the lantern. When I use the lantern to send the signal that the British are coming, that instrument is a medium of communication. And no one would say it’s not.
So light can –
Nico: Can be used as a medium –
David: Can be used as a communicative device.
Nico: So you would grant him this; that the functional, just turning on a light, is not even covered by the First Amendment, much less protected.
David: Right. Because most of us would never in a million years think of that as expression.
I mean, what is being expressed?
Ron: But you know, the other thing is, what David’s comment causes us to think about is, think how long it took the First Amendment to even get to symbolic speech. I mean, that’s essentially what you’re talking about. In other words, we communicate by ways other than mere words, alright? And that’s what the whole evolution of symbolic speech is about.
David: That a medium, the light, the flashing light, is used instead of the human voice screaming that the British are coming.
Nico: So, if I want to start my car, for example, and there’s communication between the button I press on my new fancy hi-tech car and the engine that lets it go, that’s not protected. That’s not even covered, because there’s no message being communicated.
Ron: So yes and no.
So no, that wouldn’t be speech, but if you’re talking to Siri in your car – Siri meaning the Apple robotic lady that communicates with us – that well could be covered under the First Amendment.
Nico: But you’re not willing to say definitively that it would be protected.
Ron: No –
David: No, no, no [inaudible] [00:30:28] covered –
Ron: Those are different – those are different questions. Something may well be covered, but not protected.
Nico: See, I’m having a hard time distinguishing this, because if I tell Siri to turn on a light, versus flicking it with my finger myself, the only difference is that I used my voice to do it.
Ron: Let me give you an example. Let’s turn the clock back a hundred years, alright? So let’s say I’m a person of means and I have a butler. And I say, “Would you go over and light that candle?” Question; when I asked the butler to do that, is that speech within the meaning of the First Amendment? Not is it protected, but is it speech? So I asked the butler –
Nico: In its purest sense, yes. Of course.
Ron: Yes, so –
David: What if your butler, today, is a robot, and you say, “Please turn on – or light that candle.” If the robot fulfills your desire, then the robot is your butler. I mean –
Nico: And this is why I think your book is so important, because it forces us to ask why we have free speech protection under the First Amendment in the first place. Is it for self-actualization? Is it to – because it’s useful, there’s a utility to it? Is it because it produces knowledge? Is it because it supercharges the democratic process?
David: Well I think the answer is yes, yes, yes, yes, yes. Because all those could come into play. We are never saying that utility is a norm that exists –
Ron: That exists isolated.
David: alone, isolated from others. It’s just that many of the traditional First Amendment norms are essentially too highly elevated to really cover the vast majority of what we would consider functional expression.
So we have to recognize that when you’re talking to Siri and asking her a question and she gives something back; this isn’t for purposes of Self-Governance. This isn’t necessarily for purposes of the Marketplace of Ideas. It’s functional. What you needed to know was “How do I – what’s the address of FIRE’s offices in Washington D.C.?”
Nico: It’s funny you say this, because we had Paul Sherman, who’s an attorney at the Institute for Justice, on a podcast when we discussed Brett Kavanaugh’s nomination and some of the Supreme Court cases last term, and he said, “You know, the question before the Court right now isn’t really between – about offensive speech, like it might have been, or political speech like it might have been decades ago. It’s now, really, how covered is effective speech?” And he’s talking about commercial-speech cases. He’s talking about campaign finance cases. He’s talking about occupational speech cases. When speech is effective, can it achieve First Amendment protection?
Ron: Well it –
Nico: And his argument is, “Of course!”
Ron: Well it depends on what it’s effective at. If it’s effective at perpetuating fraud, or perpetuating other types of crimes, no.
Nico: Good point. Yeah.
Ron: I think what’s implicit in your question – and, by the way, this book is just day one. I mean, there’s so much on the table –
Nico: That it’s going to take us forever to get through the questions we’re asking.
Ron: But one of the things, as we communicate – and I think this was implicit in Professor Grimmelmann’s questions, unfortunately he didn’t really tease them out enough – but I think what robotics does, is it gets us to revisit the speech-conduct distinction. As we communicate more and more with data, with our voice, with various bot agents – and we do communicate to them, right? – then that dichotomy – that traditional dichotomy, if you will – the difference between me turning on a light and me asking my butler to turn it on, or me asking a bot to turn it on, that becomes a little fuzzier.
And this is what the new technologies do. They take our old paradigms, and they get – in a sense they turn them inside-out and get us to think anew about them.
David: Let me just jumpstart, I think, what is implicit in most of what we’ve talked about, but not really expressed so far. And that is; what is one of the major purposes of this book?
One of the major purposes of this book, in addressing the naysayers like Tim Wu and Frank Pasquale, is a theory of the First Amendment that does not require us to give human agency, in any way, to the robot, or to believe that the robot has any human intentions.
Because what they are doing, what the naysayers are doing, is saying, “First Amendment coverage is limited to humans. And unless you are a human, your expression doesn’t even come within the aegis of the First Amendment.”
Our response is “No.” We give what we call an Intentionless Free Speech Theory. And there are several important points to understanding what this Intentionless Free Speech Theory is about.
First of all, it is designed particularly to avoid these questions. The fact that the robot is not human is irrelevant. The fact that the robot has no human intentions is irrelevant.
Then, what we really are doing is resting, or situating the meaning of speech and the First Amendment, in the experience of the receiver. Now –
Nico: And you rest on reader-response criticism.
David: From literary criticism back in the ‘60s and ‘70s, that’s right. First Amendment people – except for us – have never really reflected on this great debate that was happening among literary critics back then. They were debating this very idea.
Some of them were saying, “No, meaning is in the text. It lies in the text. And you have to unlock the text and then ‘Boom!’ the meaning will pop out.”
Nico: And that’s where the value lies.
David: That’s where the value lies, in the text.
The reader-response people were saying, “No, that’s ridiculous. Meaning is situated in the mind of the receiver. It is the reader, it is the listener of music, the reader of books, who, in his or her mind, is making meaning. And so the significance is in the reader, or the receiver’s response to the stimulus of the text.”
Now we adopt this latter view. And part of the reason we do is because we’ve seen that – although the Supreme Court has never explicitly made this connection, not explicitly – much of modern First Amendment doctrine is really based on reader-response value.
Let me just quickly mention three areas, starting with obscenity doctrine. If you look at the definition of obscenity in Miller versus California, every single one of the prongs is about the response of the reader or the viewer of the obscenity particularly. I mean, consider that for pornography to be protected, it must have some serious political, artistic, literary, or scientific value. Well, value to whom? They’re not talking about value to the pornographer; they’re talking about value to the observer, to the reader, to the viewer of pornography. So that criterion is entirely reader-response.
Nico: Of course it has value to the creator. Otherwise he wouldn’t do it.
David: Monetary. Monetary value –
Nico: If nothing else.
David: If nothing else. But that’s not what they’re resting on. They’re looking at the value to the receiver.
Nico: Okay, obscenity.
David: Commercial speech. In the 1976 case of Virginia Pharmacy, the Supreme Court explicitly found that the reason for protecting commercial advertising was that the information would be valuable to the consumer, who needs to make wise economic decisions in the marketplace. Again, reader-response, or a receiver theory of value.
The most recent, I think, and very telling case is Brown versus Entertainment Merchants Association, which involved California’s attempt to proscribe violent video games. When the Supreme Court upheld the First Amendment defense of Entertainment Merchants, its argument was that the entertainment value to the gamer is what was being protected. And in that particular case they got the closest to actually acknowledging this literary criticism link, because in writing his opinion for the majority, Scalia cites to a decision by Posner who talks about literary experience –
Nico: Of course he does.
Ron: That was the Kendrick Case.
David: Yeah, the Kendrick Case. And he’s talking about the literary experience, and the imagination of the reader creating meaning and speech value in his mind.
Well, I mean, so he’s associating that with what the gamer does when the gamer’s playing these violent video games.
So our point is that this theory – although we are the first to really propound it explicitly this way, as a theory for First Amendment coverage; I don’t know anyone else who’s done this before us – is still well grounded in existing First Amendment doctrine.
Nico: So how would you approach, then, a case like Morse v. Frederick, the Bong Hits for Jesus Case, where the Justice – who was it, Scalia? – asked what the purpose was in sending this message. And I think the decision, more or less, relied on that question and on the response that they really didn’t have a purpose in holding up the bong hits for Jesus sign.
Ron: No, that Bong Hits for Jesus Case is a good example of the Court taking liberties. From all we know from the record – and I think the defense was right on base – these kids were just being comical, nonsensical, what have you. To say that they were somehow aiding and abetting in, you know, urging people to use pot is a bit of a stretch. And this is what you see in a lot of First Amendment jurisprudence, either to deny a right or to affirm a right.
Nico: The question of what is the purpose of this expression.
Ron: Yes. And I think what we’re trying to do is – look, no theory is perfect. You know, there are always some difficulties at – on the border. And as you begin to learn more and more about the technologies your view of the law changes.
And that’s why I said – I can’t emphasize enough – this is the first day, not the last day, in terms of how we come to think about these issues.
But if you say at the outset that algorithmically produced stories for the Associated Press, having to do with financial matters and with sports, are not covered under the First Amendment; if you say that robotic music – jazz music or classical music – at the outset isn’t even covered by the First Amendment; if at the outset you say that robotically produced art – all of which, by the way, has already happened in our day – is just categorically not even within the meaning of the First Amendment, I think you lose a lot.
And what we’re trying to do is come up with some way of thinking anew as to why such forms of expression and others might, first of all, be within the ambit of the First Amendment, and then second of all, why and under what circumstances might they be protected.
Nico: So, first order artificial intelligence – I think you could even tie Tim Wu’s theory into believing that that is protected, because it is a human who is pushing the first domino.
Ron: Well I think for them the agency is too attenuated. And the thing is that, even if you go back to Socrates, Socrates was saying that writing, alright, is not speech. Because it’s not alive; it’s not human-to-human; it’s not person-to-person; it’s not face-to-face. This is an old argument, if you will, with a new face.
So, the thing is, of course, at some point – and this brings us from coverage to protection – at some point there has to be some agency or, better said, some utility. Because if there’s no utility, there’s no need to create this in the first place.
Nico: There’s no value for which the right is protecting. For –
David: Well there would be no incentive for creation. I mean, who is going to produce a robot that is of no use to anyone. I mean, why –
Ron: I mean, Mandan –
David: We wouldn’t want –
Nico: Well you might create a robot that walks back and forth just to test the technology, for example –
Ron: Well sure, but there’s a purpose –
David: But, but, that’s not gonna be regulated. The government’s not gonna regulate it, because it’s not gonna harm anyone.
Nico: It’s not gonna do anything, right?
David: It’s not gonna do anything. The –
You have to remember that it’s utility that drives usage. When a new creation is shared by five people, you’re not gonna get governmental –
Nico: It’s also the utility that oftentimes demonstrates the futility of regulation. I think about Uber. Uber came into these markets, which had heavily regulated taxis for decades, and blew them wide open by demonstrating its utility. And by the time the government started to catch up to this new technology, it was too late. People had fallen in love with it. They couldn’t regulate it without intense blowback.
Ron: And by the way –
Nico: You talk about this with people talking on their cellphones, and now you have wireless technologies.
Ron: And the thing is, what’s also interesting about the new technologies – which really kind of changes the function of censorship, or even the need for censorship – is the technological fix. If a new technology does indeed create harms, such that it wouldn’t be protected, the question is: is there an alternative technological fix? And if there is, then the problem is cured, right? The need for censorship – at least censorship of any lasting moment – is no longer present.
Nico: And like I said, you talk about the problem with people talking on their cellphones, or looking at their cellphones in cars, and it creates a higher risk for accidents. But you said, as a result of that, you got hands-free technologies, for example.
David: That’s right. I – and that was far more effective than any law that was passed –
Ron: Police officers watching cars come by –
David: Right. I mean there were many articles written on the fact that drivers were blatantly violating no-cellphone laws while driving. So the real fix was a technological one, not governmental regulation.
Nico: I want to talk about general or second order intelligence right now.
Ron, how much time do you have?
Ron: I think we have 15 minutes.
Nico: This is where –
Ron: Oh, excuse me, ten.
Nico: This is where I think the real problems come up. So let me posit a few things –
David: But also the real benefits.
Nico: Yeah. And all the interesting-ness for us who care about the First Amendment.
So, we talked a little bit about creating an artificial intelligence that might tweak your recipe to make it better. Or might tweak the recipe because it has generated an intelligence that resulted in it hating you, for example, and then it poisons you.
So you have that one possibility, where if you tell an intelligence to do something, it might come up with really weird ways to implement it. For example I listen to a podcast with Sam Harris where he talks about how, if you tell an artificial intelligence to cure cancer, one thing it might be – might do – is kill all the humans, because that’s one way to kill the cancer. You can’t figure out all the various permutations for how the goal might be met.
And then you have the issue with Microsoft’s Twitter experiment with Tay, which I’m sure you’re familiar with –
David: Yes, certainly.
Nico: Ron ran off to the restroom here, he’ll be back in a moment.
They created this artificial intelligence on Twitter, which was supposed to respond to and become like other users on Twitter. And within – I think within 24 hours – it started denying the Holocaust and making racist statements, and they had to take it down.
So how do we deal with technologies like that and the communicative consequences of it?
David: Well part of the problem with all of these examples is the fact that we’re at a very, very early stage of robotic development still. And certainly the creators of robotic programs have a lot to learn from these kinds of experiments.
But I –
Nico: But the risks of messing up are huge. Can be huge.
David: The risks, yes. And frankly –
Nico: And so waiting to regulate it can be too late, I think your critics would argue.
David: Well, except the problem is – by that measure the television could have been shut down. Because there were arguments about the vast wasteland that was created by the television and how reading wouldn’t be promoted when people were viewing television. And –
Nico: But none of those could result in the deaths of the human race, for example –
David: No. Well except people thought that the minds of our youth would be polluted by the television.
There were arguments made by religious people that Elvis Presley music should be crushed because it was satanic and creating sexual urges in its listeners.
I mean, you can see these kinds of censorial arguments go back, as Ron said earlier, to the very beginning of the creation of the new communications technologies. So we’re always going to have these censorship arguments made.
The real question is: if we can accept the receiver-based theory that we’re giving – the Intentionless Free Speech theory for coverage – then how are we going to determine, on an ad hoc basis, because it’s going to be contextualized, when the government is empowered to regulate a technological use? It’s usually not the technology itself at that point, but the fact that the technology has been used to do something that the government considers to be illegal.
Look, if you have a robot –
Nico: What if your intention is to not do something illegal, but the intelligence takes on a mind of its own, as I said, by you telling it to cure cancer –
David: Well certain –
Nico: and it kills all humans because that cures cancer.
David: Look, just like a human being is not entitled to defame; or a human being is not entitled to –
Ron: To defraud –
David: to commit fraud; robots are not going to be entitled to do it either.
So if you are talking about a category –
Nico: Well the robots won’t be responsible for it. I’m assuming it’s the human who created the code that made way for the artificial intelligence –
Ron: Yeah, and the question is; could they have availed themselves of some technological fix?
But, as we talk about these things, Nico, just keep this in mind; that if you have a technology that increases a million-fold the number of people you can reach; if you have a technology that decreases a thousand-times the length of time it would take to deliver a message or –
David: But the cost. The cost [inaudible] [00:51:48] –
Ron: Oh, the cost.
So speed is accelerated. Volume is accelerated. Costs are diminished, alright? These are the sorts of things new technologies do. That’s their utility.
But of course there’s a possibility for a corresponding harm, alright? So if you just start with that, it may be that it takes a while to ferret out what the harms may be. The harms may be bots sending all sorts of fake messages to Facebook – that could be the sort of thing they do.
But what – we’re not denying that when you have media that accelerates the number of people that receive the message, and that accelerates the time that it takes to produce the message, it also accelerates the possibility for harm. We don’t deny that. But that’s where our utility argument starts to come in –
Nico: But you talk about, in your book, how the Supreme Court hasn’t countenanced the harm argument very much in recent years.
Ron: Not explicitly – we think it’s implicit in virtually everything they do, but not explicit. Nor have they embraced –
Nico: Where content discrimination is an issue.
Ron: Nor have they embraced our utility argument as a norm. We say, “Oh, the high values are great, and they may work in tandem, but what is really driving this is a certain realism.” And what we try to do is bring to First Amendment jurisprudence the kind of realism that was brought in the ‘30s and ‘40s to commercial law by the legal realists, who said that you have to look at the world in its context, not at abstract doctrines.
Nico: So if you’re removing the intention from the analysis, how then do you account for unprotected speech? Speech created by an intelligence that results in a true threat, or defamation, or incitement. Who is at fault there, if there is – if the person who created the thing, if their intent doesn’t really matter?
David: Well, but –
Ron: But their intent doesn’t matter for purposes –
David: of the First Amendment.
You have to make a distinction between First Amendment defenses, which is what we’re talking about, and the establishment of a crime. Intention is always going to matter for determining whether there is the existence of a crime. Intention is always going to matter for determining whether there is the existence of a tort, right?
Now will these things – will these doctrines, these legal categories – change over time, in order to accommodate the second order robotics? That’s what your question is.
First order, we don’t have any issue because –
Nico: Yeah, I don’t have any issue.
David: No. Because the agent, a principal –
Nico: It’s closer to the principal –
David: Right. And the principal is always held responsible for the agent’s intents, so there’s not really a question of who’s going to be liable in those situations.
But if you get a more attenuated relationship between the creator and the robot, it very well may be that no longer will negligence or intention matter for the creation of a tort. It could be that if the robot creates some tortious speech –
Ron: It’s inherently dangerous –
David: That’s inherently dangerous, that strict liability will be applied and there will be an insurance scheme that will be legislatively required for the purchase and implementation of any robot. So one could see that happening.
Nico: This is a very expansive view of the First Amendment, would you admit that? And what the entire thing covers.
Ron: Yeah, just as the technology is a very –
David: Very expansive –
Ron: Now you have an idea what it would have been like to live in the age of the Gutenberg press.
Nico: Well this is what I have a hard time coming to terms with, because part of me wants to say this time is different, but I’m sure everyone in every era has said this time is different.
The Catholic Church said, we have to shut these printing presses down, because either they’re going to feed the Protestant view that people should be able to communicate directly with God and not through a priest; or these printing presses are going to give rise to more French pornographic novels; or they will be seditious –
Ron: And remember it wasn’t until the 1950s – the 1950s – that movies were considered within the ambit of the First Amendment.
David: Ambit of the First Amendment.
Ron: That’s what I was gonna say.
David: But, I mean the printing press was considered an evil by the establishment because they saw the possibility for seditious speech, right?
Nico: Well, I know – do we have two more minutes?
Ron: I can give you one.
Nico: You get outflanked by Jane Bambauer in the criticisms that she has. She says you guys don’t go far enough. What do you make of that?
Ron: You know, like I said, it’s still early in the day. Our minds are still open.
You know, it may be that a year-and-a-half from now, when we revisit this, we have some new takes on things and that Jane’s ideas inform that. We’re open. We’re not categorical – canonical – in that regard.
David: By the way, I will say this: when our theory seems so extreme to the naysayers, hers is, like, you know, impossible –
Nico: Well, hers rests on the theory that you also need to protect thinking and discovery, which – you didn’t go there, and we don’t have time to go over that in the rest of this podcast.
I want to thank you both so much for coming to FIRE’s D.C. offices today –
Ron: Thanks so much for having us.
David: Thank you. It was a delight.
Nico: That was Ron Collins and David Skover and their book is Robotica: Speech Rights and Artificial Intelligence. It’s available wherever fine books are sold.
This podcast is hosted, produced, and recorded by me, Nico Perrino, and edited by Aaron Reese. To learn more about So to Speak, you can follow us on Twitter at Twitter.com/freespeechtalk or like us on Facebook at facebook.com/sotospeakpodcast. You can also email us feedback at firstname.lastname@example.org, and if you enjoyed this episode, leave us a review on iTunes or wherever else you get your podcasts. As I say every week, reviews help us attract new listeners to the show.
And until next time, thanks for listening.