The Daily Signal - How AI Is Testing the ‘Bounds of the First Amendment’

Episode Date: October 18, 2024

Artificial intelligence technology is making its way into more areas of daily life. But there are still many unknowns about AI, including major legal questions about the ways the technology should be governed, and which AI-generated speech is, or is not, protected under the First Amendment.  Generative AI, in its most basic form, is “trained on vast amounts of data,” according to Ryan Bangert, senior vice president of strategic initiatives at Alliance Defending Freedom. “It ingests petabytes of information in order to learn how human language works, in order to understand how it is that human syntax, grammar, is structured, and then it predicts what comes next.” Generative AI is “not a mind, it's not a consciousness, it's not a human being,” Bangert says. “It's a piece of software, a very complex piece of software, that's fulfilling an algorithmic function.” Therefore, he adds, generative AI is “not a First Amendment rights-bearing entity.” In their new paper, “The Ghost in the Machine: How Generative AI Will Test the Bounds of the First Amendment,” Bangert and Jeremy Tedesco, senior vice president of corporate engagement at Alliance Defending Freedom, parse the relationship between AI and the First Amendment.  Bangert and Tedesco join “The Daily Signal Podcast” to discuss the fight to protect free speech amid rapidly changing AI technology use.

Transcript
Starting point is 00:00:05 This is the Daily Signal podcast for Friday, October 18th. I'm Virginia Allen. AI technology is making its way into more and more areas of our lives, but there's still a lot that is unknown about AI. And there are also some major legal questions surrounding the use of AI, and questions about how AI and the First Amendment interact. Jeremy Tedesco is a senior vice president of corporate engagement at Alliance Defending Freedom, and Ryan Bangert is a senior vice president of strategic initiatives and special counsel to the president at Alliance Defending Freedom. And both Jeremy and Ryan recently published a paper titled The Ghost in the Machine, how generative AI will test
Starting point is 00:00:52 the bounds of the First Amendment. Well, they join the show in just a moment to discuss that. How exactly AI will indeed test the bounds of the First Amendment. Stay tuned for our conversation after this. Pro-life, pro-women, conservative, and feminist. We're problematic women. The radical left has no box for strong independent women who believe in traditional values and love America. So you might say we're problematic to the left narrative of what a woman should be. Here on Problematic Women, we sort through the news to find the stories you care about.
Starting point is 00:01:28 Join Problematic Women on The Daily Signal's YouTube and Rumble live every Wednesday, or listen on your favorite podcast app. Well, Alliance Defending Freedom's Jeremy Tedesco and Ryan Bangert join me now. Gentlemen, thank you so much for being here to talk about an all, really an all-encompassing topic, the topic of AI, that so many Americans are watching, still trying to fully figure out, but really appreciate y'all's time today. Thank you. Absolutely. Happy to have you here. So, Ryan, let's start with this big picture of what exactly do we mean when we say generative AI? Thoughts of ChatGPT come to mind, and deepfakes.
Starting point is 00:02:11 But what exactly is generative AI? Well, Virginia, thanks for the question. I think if we had a four-hour podcast, we might be able to make a dent in this a little bit. But basically, I think if you've been reading the news at all, you're getting the impression that AI is going to eat the world. And we wanted to ask a very simple and narrow question, which is, what are the implications for the First Amendment of this new technology? This is a technology that can replicate human images, human voices. It can write things that didn't exist before. It can create new media.
Starting point is 00:02:44 What does that mean for the First Amendment? And we encountered this argument as we were looking at AI that robots have First Amendment rights. That somehow an AI bot would have a First Amendment right to speak. And that just struck us as implausible. So we did a little bit of research. We did some digging. And ultimately it turned into a 42-page paper analyzing just that question. And we had to start with, well, what is generative AI? What is it? What is this thing?
Starting point is 00:03:11 It's really just an algorithmic prediction engine. It's software, very complicated and very sophisticated technology that ultimately does one thing. It predicts what comes next. So if you ask it a question, if you enter a query, it will simply use complex mathematical calculations to predict what should come next. What's the next word in a sentence? What's the next sentence in a paragraph? What's the next thing? And that's all it really is. Very complicated, but ultimately a very simple concept. And it's trained on vast amounts of data.
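Bangert's "prediction engine" description can be sketched in a deliberately tiny form. The sketch below is purely illustrative, not how any production model works: it counts word pairs (bigrams) in a made-up three-sentence corpus, and the corpus, the `predict_next` name, and the counting approach are all assumptions for the illustration. A real LLM replaces these raw counts with billions of learned parameters, but the core move is the same: "what comes next" is whatever the training data makes statistically most likely.

```python
from collections import Counter, defaultdict

# Toy stand-in for the "vast amounts of data" a real model ingests.
corpus = (
    "the court held that the statute was unconstitutional . "
    "the court held that the law was overbroad . "
    "the court ruled that the ban was invalid ."
).split()

# For each word, count which words follow it and how often.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word after `word`."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))    # -> court ("court" follows "the" most often here)
print(predict_next("court"))  # -> held
```

Even at this scale, the model has no understanding of law or language; it is doing exactly what Bangert describes, predicting the next token from patterns in its training data.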
Starting point is 00:03:45 It ingests petabytes of information in order to learn how human language works, in order to understand how it is that human syntax, grammar, is structured. And then it predicts what comes next. So that's all it really is. And ultimately, it's not a mind. It's not a consciousness. It's not a human being. It's a piece of software, a very complex piece of software that's fulfilling an algorithmic
Starting point is 00:04:11 function. And our thesis is simply this. That is not a First Amendment rights-bearing entity. A computer is just like a cat in that respect. In the case law, there was a case that actually was decided by a court years ago that asked the question, could a talking cat enjoy First Amendment rights? And the answer was unequivocally no. Why?
Starting point is 00:04:32 Because a cat is not a human being. Just like a cat is not a human being, an algorithm, a bot, is not a human being. And because an algorithm, a bot, is not a human being, it doesn't have First Amendment rights. Okay. So, Jeremy, I want to come to you and ask, what were some of the big questions, kind of those overarching questions, that both you and Ryan were asking as you looked into this topic of whether AI has First Amendment rights, which, as Ryan just so articulately stated, ultimately the conclusion is no. But as you all were navigating this topic, what were the questions you all were asking each other, yourselves, looking at what does the law say about this? What does history tell us?
Starting point is 00:05:20 Well, I mean, the one thing Ryan left out is that as, you know, children of the '80s, our biggest concern is whether Terminator was right. Is Skynet a thing? And if it is, does it have First Amendment rights? So happily, our paper concluded that Skynet does not have First Amendment rights, if anybody ever manages to design it. But, I mean, that to some extent is a serious question. People who are in the AI field are theorizing, you know, that kind of autonomous, almost human-like capability from these kinds of creations. But that's not what the paper's about. I'm being a little bit funny and facetious.
Starting point is 00:05:57 You know, I think the big question for all of us from a First Amendment standpoint is, when is the First Amendment implicated? When is one of these companies able to articulate a credible First Amendment right in the output of a, you know, a generative AI tool like ChatGPT or something like that? And so, you know, one part of the paper that I'll talk a little bit about is just what does the First Amendment case law say right now? And the reality is it doesn't say a lot. It's also new.
Starting point is 00:06:28 This is all very new, but there is a lot of litigation going on over these issues. So we're going to have rulings, and lower court judges are wrestling with these things right now without a lot of guidance. So the NetChoice case that came down from the Supreme Court just this last term, in June, gives just a little bit of guidance. That case, of course, involved the Texas and Florida laws that were attempting to impose essentially a viewpoint neutrality requirement on social media companies and the content that they host on their platforms. And so the question in that case is whether those laws were permissible under the First Amendment or not. And the court was very divided. It was a fractured decision. But in the end, five justices did say, in a narrow ruling, that the algorithms that Facebook and YouTube use for their news feeds and for their home pages are protected
Starting point is 00:07:26 speech. They are essentially the way in which those platforms implement their preferred speech standards, which are communicated or represented in their community standards. So the court said, in a very limited sense, that there is some First Amendment protection for the way in which those algorithms, some of which are based on AI technology, moderate speech on those platforms. But there's a lot more to this question. At least four justices disagreed with that, and they also articulated, I think, one of the principles that Ryan and I really promoted in the paper, and that is judicial modesty from the judges that are facing these issues. We don't want to rush to confer First Amendment protection on technology we don't understand, because that could
Starting point is 00:08:16 have bad implications. But we also don't want to hold back First Amendment protection from situations where clearly there are First Amendment rights implicated. And I think the touchstone Ryan already talked about is that it needs to be human-initiated communication. Whatever the output is, it has to be a product of human will. It can't be the product of some machine process that even the people who created the machine are surprised by, you know, what the machine said. The other thing about the NetChoice case is that Justice Barrett's concurrence, I think,
Starting point is 00:08:50 gives us an inkling on where things could go. And she talks about the concept of attenuation. And essentially what she's saying through that concept is there is some point in the process where the humans who created the generative AI machine and the output that it produces, there's a break. There's a break in the link between human will or the human intent to communicate and what's actually outputted by the computer. Now, she didn't give us a hard and fast test, a bright-line test for how to make those determinations.
Starting point is 00:09:24 But I think she's right, and she's hitting on this principle that our paper talks about, which is that free speech exists and our Constitution protects free speech because free speech is a natural right. Natural rights are rights that humans have. We have a right to communicate. We have a liberty to express the thoughts and things that are in our heads verbally, through publications and words, in other ways. When computers are producing some kind of communication, that doesn't necessarily mean it's protected by the First Amendment unless you can create a clear link between the humans who
Starting point is 00:10:01 created that product and the output, and those humans intended the output that people are receiving on the other end. It does seem like a really thin line, right, to have that distinction of, okay, this is something that was, you know, fully kind of under the control of a human, with a human kind of directly telling AI essentially what to say or produce. And then it seems like a very fine line to cross to know, okay, now AI is essentially thinking for itself, speaking, if you will, of its own will. How do you go about distinguishing that, especially for a technology that so many people, including myself, are still trying to fully
Starting point is 00:10:44 understand? Yeah, that's a great question, Virginia. I think there's a couple of examples that we can give you that help define that line a little bit better. And one of the things we talk about in the paper, which I thought was very interesting and didn't appreciate fully, is that generative AI is what they call indeterminate. It's stochastic. In other words, it is so sophisticated. There are so many parameters, and parameters are simply these variables that are used to decide how to generate a completion in response to a query. There are billions and billions of decisions that are being made each time a query is run through an LLM. And it is so complex and so sophisticated that an engineer who designed the LLM cannot tell you with any certainty exactly what the output is going to be in response to any query.
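That indeterminacy, the same query yielding outputs that vary while staying within a range, comes from the model sampling its next token from a probability distribution rather than looking it up in a fixed table. Here is a minimal sketch of that one step; the token list and its probabilities are made up for the illustration, standing in for what a real model would compute from its parameters:

```python
import random

# Illustrative next-token probabilities for a single prompt. The numbers
# are invented for this sketch; a real LLM derives such a distribution
# from billions of learned parameters at every step.
next_token_probs = {
    "unconstitutional": 0.5,
    "overbroad": 0.3,
    "invalid": 0.2,
}

def sample_token(probs: dict, rng: random.Random) -> str:
    """Draw one token at random, weighted by probability, not a fixed lookup."""
    tokens = list(probs)
    return rng.choices(tokens, weights=[probs[t] for t in tokens], k=1)[0]

rng = random.Random()  # unseeded, so separate runs can differ
samples = [sample_token(next_token_probs, rng) for _ in range(10)]
print(samples)  # varies run to run, but always stays within the distribution
```

Because each draw is weighted but random, ten runs of the same "query" can produce different completions, yet every completion falls within the range the distribution allows, which is exactly the "within a certain range, but never identical" behavior described here.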
Starting point is 00:11:33 Now, it will generally be within a certain range, but it will never be identical. And that's because it's indeterminate. And so how does that relate to what Jeremy just said about the First Amendment? Well, let's look at some real practical examples. For instance, many of your listeners may be familiar with the Google auto-complete fiasco from this past summer when in response to queries such as assassination of TR, attempted assassination of TR, Google was responding with things like attempted assassination of Truman, not Trump. I tried that multiple times, tried to get Trump to come up, and it wouldn't. It wouldn't come up. And why was that? Well, was it because it was a
Starting point is 00:12:13 bunch of Google engineers back there sort of blocking all these queries? No, it's because they had an AI program running in the background. And it was directed to avoid inciting violence. In the words of a Google lawyer who responded to an inquiry from Jim Jordan, Google had programmed its algorithm to avoid searches related to political violence. And those rules, in his words, were, quote, out of date. Those rules were out of date
Starting point is 00:12:53 or another one said, President Donald, and some of the responses were Duck, or Donald Reagan. Were those responses intended by Google? Well, not really. Google looked at those responses and said, oh, no, our algorithm is misbehaving. It's responding to these queries in a way that's out of date. That's not really intended by any human engineer at Google.
Starting point is 00:13:14 That's an example of an algorithm responding to outdated rules that aren't encompassing new information. What's another example that we can point to? Well, there was a very interesting case, a very tragic case, called Anderson, involving a TikTok video called the Blackout Challenge. And it was a challenge. It was something that was challenging young adolescents, especially females, to attempt to black out. They were putting ropes around their necks or engaging in activities that caused them to black out. So this young adolescent girl sees this TikTok video. She's going down the rabbit hole on TikTok. This video pops up. She decides to take up the challenge, and she goes and hangs herself in her mother's
Starting point is 00:13:59 closet and dies. And her mother finds her. It's a tragic story. But this is an example of TikTok's algorithm pushing these videos to adolescents based on information in their profile. Now, is anyone at TikTok going to say, yes, we intended to send the Blackout Challenge to this young woman at this time to create this result? Absolutely not. Is that their speech? Well, this case ended up with a divided panel. Two judges said, yes, this is TikTok's speech, which means they can't shelter under CDA Section 230. But another judge said, I'm not entirely convinced this is their speech. Did they intend this result? Maybe they can't enjoy First Amendment protections. So this is an example where AI technology,
Starting point is 00:14:45 because it's not the intentional, volitional act of a human being, may not constitute speech. So these are two examples of where the rubber really meets the road when it comes to these arguments. Wow. So then, Jeremy, how do we keep companies really focused on upholding the First Amendment in this rapidly changing world where now, whether it's TikTok, Google, all these companies employ AI on a daily basis, and they're running so many things behind the scenes? Yeah, I think we have to put pressure on them from multiple different avenues to continue to make free expression and free speech at least one of the North Stars, if not the North Star, to the extent that they're involved in providing the primary public forum in which people
Starting point is 00:15:34 gain access to information and communicate their ideas on a daily basis. And I think a lot of these companies have really neglected that. So one of the things we do is a viewpoint diversity index, which rates companies on their respect for free speech and religious freedom. And that looks at a lot of the policies and practices of these companies. Do they have speech-friendly policies? Are they programming, through their policy decisions, censorship right into the heart of their tools?
Starting point is 00:16:02 And in many instances, the answer to that is yes. They definitively are. All the big tech companies, if you put their scores up on our index, they're the lowest-scoring companies, because they have the kinds of policies that, as any First Amendment novice would understand, would clearly violate the First Amendment right out of the gate if you were regulating a free speech forum. There'd be no question. Now, yeah, these are private companies. They can do what they want without worrying about the First Amendment, except we know, one, what I said before: they are regulating
Starting point is 00:16:40 the primary public forum of today. So it's important that they learn lessons from the First Amendment. Second, the government is coercing, pressuring, you know, exercising persuasive measures to get these companies to censor at their behest. And so the more these companies have policies that allow this kind of censorship, the more paths it gives to that kind of government abuse as well. So, you know, we score these companies. We talk to them. We have meetings with their executives. We help shareholders raise shareholder resolutions to hold them accountable when they have bad policies and practices. And, you know, I think this is a long-term game, and there are a lot of different pressure points that we have to put on the companies to hold them accountable.
Starting point is 00:17:21 Yeah. Well, we saw that kind of interaction between social media platforms and what governments, state governments, are doing, Ryan, specifically in the state of California, and especially during the election. And there was a fake campaign ad that was satirical, that someone created, that was intended to act as if it was a Kamala Harris ad, but very clearly was not, and made her look quite ridiculous. And then there was a lot of blowback, obviously, from the Harris campaign and from California Governor Newsom saying, well, you can't create this kind of thing, because it was very professionally created. And to someone from the outside, you know, who knew nothing about the candidates in the campaign, they'd say, well, that looks like a professional campaign ad. Talk a little bit, if you would, about that situation and how Alliance Defending Freedom is involved in these videos that are often referred to as deepfakes,
Starting point is 00:18:20 where you're using someone's voice to get a message across that is not actually anything that they have said before, or is so spliced together that it might actually be taking their words, but splicing them together to make sentences that they never said. That's a great question, Virginia. This is the flip side of what we just talked about earlier with the Google auto-complete fiasco and the TikTok challenge, the TikTok Blackout Challenge. This is an example of where artists, people, in this case, it was a handle on X called Mr. Reagan USA. And the Babylon Bee also has an X account. Both of them were creating satire and parody videos using AI to make these candidates look ridiculous. And it was obvious satire, it was obvious parody. No one would have mistaken this for the real thing. And they were doing it to comment on political campaigns and to comment on issues of the day.
Starting point is 00:19:12 And Gavin Newsom saw this and was just outraged and said, we've got to stop AI from being used to emulate candidate voices and images in a way that makes them look ridiculous and may, quote, harm their electoral prospects, which is sort of the whole point of a campaign, right? So they passed these laws, and they prohibited the use of AI to create materially false images of candidates that would harm their electoral campaigns or otherwise make them look ridiculous. And both Mr. Reagan USA, Chris Kohls,
Starting point is 00:19:41 and the Babylon Bee filed lawsuits. And the court evaluated this. And what the court found was very interesting. It said, you know, this is an example of core political speech. Throughout American history, commentators, journalists, activists, citizens have used satire and parody to critique those in power. And this is no different than, in ages gone by, the use of the written word or political cartoons.
Starting point is 00:20:08 It's just the use of new technology to critique these candidates. And what the law that Gavin Newsom signed did was, it carved out satire and parody, but then it required that a disclaimer be written across the entire face of the video in the same font size as the largest font size otherwise used, which in most cases blocks out the entire video.
Starting point is 00:20:46 So it totally eradicates the effectiveness of the satire. And it more or less was a death knell for satire and parody using AI technology. And the court said, this is core protected political speech. And even though it's not accurate, it's not true that the candidate said that, there's no carve-out in the First Amendment for satire or parody. In fact, that's protected by the First Amendment. The only thing that governments can really do with respect to elections is prevent speech that compromises the integrity of elections. And there was absolutely no way that the law that Gavin Newsom signed was, in the words of the law, narrowly tailored to advance the compelling government interest
Starting point is 00:21:10 of protecting election integrity. Why? Because the response to satire and parody is counterspeech. All Kamala Harris had to do was say, that wasn't me, and that's ridiculous. I'm not the ultimate diversity candidate, blah, blah, blah. And so that's what the court said. It said, you absolutely can respond to this in a way
Starting point is 00:21:27 that's effective. You don't need a law to step on core protected political speech. And why is that important? Because in our thesis, those parodies, those satires, are the intended message that was to be communicated by the user of the AI. In this case, the Babylon Bee and Chris Kohls used AI to send a volitional, intentional message through parody and satire to critique a political candidate. That's an example of how AI can be used in a way that is fully protected by the First Amendment. I want to take a minute here as we close to get final thoughts from both of you on kind of where we're headed in this space, what we're likely going to see on a legal front. Like you all said, this isn't going away.
Starting point is 00:22:12 There are a lot of questions that are still being worked out by the courts, and the American people are still grappling with this. So what do you all think we're going to see in the coming months, years? And how should Americans be thinking about this topic of AI and the First Amendment and how they interact? Jeremy, I'll start with your thoughts. So I think, you know, maybe what I'll focus on is the potential for AI to be a real engine of escalating the censorship problem online. So I'll stay on kind of the foreboding side of your question. AI magnifies the ability of companies to censor speech, and of governments to censor speech.
Starting point is 00:22:59 We know the government is funding projects to come up with AI tools to scrape the Internet and determine who's engaging in misinformation, hate speech, disinformation, who is a risk from a financial perspective because of their reputation. These are tools that will be used, if we're not careful, by large corporations, tech companies, financial companies, to determine whether you can have access to services, the ability to communicate your message, be platformed. You know, these are essential services that we must have to live our lives and for our voices to be heard and to live freely in a society like ours. And so, you know, I think we should be paying close attention to the way in which this technology is being developed, how it's being deployed by those who are, you know, dreaming up its applications, you know, and ultimately doing everything we can to hold corporations and government accountable for the way in which AI can be abused to harm core freedoms like free speech and religious freedom. Yeah.
Starting point is 00:24:09 Ryan, what would you say we're going to see in the coming years? Yeah, no, I think that Jeremy's exactly right. Artificial generative intelligence is what many experts call an omni-use technology. It's a technology that cuts across many different use cases. Many of the use cases for artificial generative intelligence have nothing to do with free speech. It can be embedded in any number of applications. And so I think increasingly you're going to see AI being used in a number of different ways. But some of those applications, because of the ability of AI to emulate human language, will necessarily touch on the First Amendment. And I think we're going to see a need for courts to very carefully parse whether or not that content, that speech,
Starting point is 00:24:52 that language being generated by AI is actually expressive of human intent and volition. I think that you're going to increasingly see instances where AI bots are being used to defraud people. We're already seeing that today. The FCC recently said that AI-generated voice calls, robocalls, are covered by the Telephone Consumer Protection Act. And why is that? Because you can generate, you can emulate, realistic human voices and use those to generate robocalls at scale and en masse. That's an example of a use of AI to generate spoken content that can be regulated by the government. You also had a case very recently in New Hampshire where a political consultant sent a robocall emulating Joe Biden's voice into the state,
Starting point is 00:25:42 urging Joe Biden voters during the primary to stay home and not throw away their vote in the primary, save it for the general election. If you think about it, that is sort of bizarre. He's being prosecuted right now for impersonating a candidate in New Hampshire. So we're going to have some of these really hard-edged questions that arise. And as Jeremy said earlier, the courts need to take a step-by-step approach to this and ask those hard questions. Is this a message? Is AI being used to communicate a message on behalf of an individual? Is it communicating on behalf of a human, or is this simply a bot gone wild?
Starting point is 00:26:19 Yeah. Ryan Bangert and Jeremy Tedesco with Alliance Defending Freedom, gentlemen, thank you both so much for your time and breaking down what is an ever-changing and very complex issue. Really appreciate it. Thank you. Absolutely. Well, and for all of our listeners, if you want to read that paper from both Jeremy and Ryan, you can find it on the Alliance Defending Freedom website.
Starting point is 00:26:43 Again, the title of the paper is The Ghost in the Machine: How Generative AI Will Test the Bounds of the First Amendment. But we are going to leave it there for today. Thank you to all of our listeners for joining us today. Don't forget to hit that subscribe button so you never miss out on brand-new shows from The Daily Signal. And if you would, take a minute to leave us a five-star rating and review. We love hearing your feedback.
Starting point is 00:27:03 We'll be back with you around 5 p.m. this afternoon for our top news edition. The Daily Signal podcast is made possible because of listeners like you. Executive producers are Rob Lewy and Katrina Trinko. Hosts are Virginia Allen, Brian Gottstein, Tyler O'Neill, and Elizabeth Mitchell. Sound designed by Lauren Evans, Mark Geinney, John Pop, and Joseph Von Spakovsky. To learn more or support our work, please visit DailySignal.com. Thank you.
