Call Me Back - with Dan Senor - A techno-skeptic on the A.I. revolution - with Christine Rosen

Episode Date: July 17, 2023

Dr. Christine Rosen is skeptical of all the techno-optimism around the coming era of artificial intelligence. In this episode, she responds to our recent guest, Tyler Cowen (episode #120). Christine Rosen is a senior fellow at the American Enterprise Institute, where she focuses on American history, culture, technology and feminism. Concurrently she is a columnist for Commentary magazine and one of the cohosts of The Commentary Magazine Podcast. She is also a fellow at the University of Virginia’s Institute for Advanced Studies in Culture and a senior editor in an advisory position at The New Atlantis. Previously, she was a distinguished visiting scholar at the Library of Congress. Christine is the author or coauthor of many books. Her next book is called The Extinction of Experience. She's also a prolific opinion writer – not only on the pages of Commentary, but also the Los Angeles Times, National Affairs, The New Atlantis, the New York Times, MIT Technology Review, Politico, Slate, the Wall Street Journal, the Washington Post, and the New England Journal of Medicine.

Transcript
Starting point is 00:00:00 We still don't have an industry standard for social media platforms. We can't even get our act together on something that simple. We need to do that with AI. A moratorium isn't necessarily a good idea, but I'm not sure how we do that in a country that can't even decide what it stands for as a country anymore. What is our position as a global leader? We're in a state of anxious identity right now as a country. And I think the AI stuff has created more anxiety in part because we are
Starting point is 00:00:25 feeling a little bit uncertain coming out of the pandemic, coming out of a political system and a sort of polarized political culture that doesn't let us have debates very reasonably anymore. On this podcast, our guests usually offer up a healthy dose of doom and gloom on everything from geopolitics to economics and popular culture. But a recent guest, Tyler Cowen, who I thoroughly enjoyed, was surprisingly upbeat about the world. He thinks America, even its major cities, and he applies this to Europe as well, can rebound. And a big factor in his optimism is the revolution in artificial intelligence that we're about to live through. That conversation generated a lot of heat from some of our listeners. I mean, some found it interesting and illuminating, but others, not so much.
Starting point is 00:01:25 And foremost in the not so much category was Dr. Christine Rosen. So I invited Christine on to give us her, let's call it her techno pessimist's view of the AI revolution. Christine Rosen is a senior fellow at the American Enterprise Institute, where she focuses on American history, culture, technology, and feminism. She's also a columnist for Commentary Magazine and one of the co-hosts of the Commentary Magazine podcast. Christine is also a fellow at the University of Virginia's Institute for Advanced Studies in Culture and a senior editor in an advisory role at the New Atlantis. We've had guests on associated with the New Atlantis in the past, including Eric Cohn, one of its founders. Previously, Christine was a distinguished visiting scholar at the Library of
Starting point is 00:02:10 Congress. She's the author of numerous books and the co-author of many books. Her next book, coming out soon, is called The Extinction of Experience, and she's a prolific opinion writer, not only on the pages of Comment commentary, but also the LA Times, National Affairs. I mentioned the New Atlantis, also the New York Times, MIT Technology Review, Politico, Slate, the Wall Street Journal, the Washington Post, and the New England Journal of Medicine. Christine Rosen on the coming era of artificial intelligence. This is Call Me Back. And I'm pleased to welcome to this podcast, Christine Rosen of the American Enterprise Institute
Starting point is 00:02:50 and Commentary Magazine, and most importantly, a regular co-host of the critically acclaimed Commentary Magazine podcast. I always say critically acclaimed whenever I have pot hordes on, and not once has someone questioned, where's it been reviewed?
Starting point is 00:03:04 I was just about to ask. Our Apple reviewers, some of them would disagree with critically acclaimed. Not once has anyone ever said critically acclaimed, so I'm just going to keep doing it. Christine, thanks for being here. Thanks so much for having me. So the reason I called you and said, let's have a conversation is because a couple weeks ago i had tyler cowan on this podcast where we talked mostly about ai and then you all on the on the critically acclaimed commentary podcast daily podcast you guys reacted to the conversation with tyler and i would say a couple of you on the podcast matt and john were somewhat upbeat but then you brought like the doom to the conversation you were you were you were the doomer you're the doer and basically
Starting point is 00:03:53 saying tyler's too much of a techno optimist and all these people are too much of techno optimists and um we should be worried so i we're gonna we're gonna in a moment, get to why we should be worried. I want to give a fair hearing to kind of counter Tyler's take. Before we do, I just want to just get a little bit of background about you, because what our listeners may not realize is, first of all, you've written about this issue extensively, including know more than one uh large essay for commentary magazine on the topic of social media and tech and um so you come by your shall i say dare i say techno pessimism uh honestly this is not like a fresh a fresh hot take you also taught a course at tickfa actually that my son was in was one of your students um on social media. So you've been trying to get young people to
Starting point is 00:04:46 understand the good, the bad, and the ugly of social media and how to think about it responsibly. So before we get to AI and the issues that Tucker and I got into, can you just, I mean, sorry, the issues that Tyler and I got into, can you just talk a little bit about how you've gotten interested in the issue of technology and its role in society and humanity and sort of the liberal arts and how we learn and how we think? How did you get into this particular area? Sure. Well, I'm actually trained as a historian. So I got a PhD in history, and I studied history of science, studied the eugenics movement in the United States, the people who wanted to improve the human race through better breeding. So a lot of the work I do now has grown out of my research
Starting point is 00:05:36 into what happens when the best and the brightest, the elite, the technocrats, the people who know everything and really want to solve all the big problems, do that without thinking of two things. One, what people actually want and how they behave and human nature. Those two forces throughout history have governed a lot of how we are able to manage problem solving. And so what I found with my historical research is that when you make these broad, often progressive, optimistic efforts to completely transform human behavior, to make us better, faster, stronger, all these things, they're more productive, more efficient, all of these things. All of these things are good, by the way. I'm not judging these as bad goals. The means matter to get to those ends. And often,
Starting point is 00:06:22 because of human nature, the means adopted can also be can often be quite repressive, they can be anti democratic. And they can all be invoked in the in service of a larger goal that has as its side effects people's actual lives. So in the case of the eugenics movement, you had the most progressive people in this country, arguing that forced sterilization is a progressive measure to prevent the wrong sorts of people from having kids because we wanted a healthier society. Now, we can look back at that now and say, well, that was just terrible. We would never do that. But of course, at the time, that's precisely what was considered,
Starting point is 00:07:00 you know, the enlightened way of looking at solving a human problem. So I come out of that historical background, spent a lot of time in archives. I have a lot of respect for the slow and arduous process of trying to figure out these problems from a historian's perspective versus, say, a political scientist's perspective or an economist's perspective. So I bring that into a lot of these debates. And then a couple of friends and I founded the New Atlantis 20 years ago. Eric Cohn. Yes, Eric Cohn of TICFA, Yuval Levin. Who's a close friend and both Yuval and Eric have been on this podcast. Yuval and Adam Kuyper and Eric Brown and I founded this little journal. And luckily, when you start your own publication, as you know, when you start your own podcast,
Starting point is 00:07:41 you can do whatever you want. So those guys just let me go off on crazy tangents. I started writing about a history of the remote control, which became an essay about how we're kind of habituated to more on-demand content. And I ended up writing about friendship, which became a longer essay about early social media companies, MySpace. I was writing about MySpace. I tell that to kids nowadays. They're like, what is that? I'm like, go back into the mists of the past and you will learn about MySpace. I tell that to kids nowadays. They're like, what is that? I'm like, go back into the mists of the past and you will learn about MySpace. So I was able to look at our use of personal technology and to ask questions about what motivates us to embrace these tools. How does it change our behavior? If it does, what does it improve? What unintended consequences often emerge from these sorts of tools?
Starting point is 00:08:25 So when you launched the new Atlantis, which was primarily focused on bioethics, and it was sort of the Leon Kass era at the Bush White House, George W. Bush White House. So when did you make the leap from that to tech and social media? When did you dial into, wait a minute, so bioethics is one category, but social media is like a whole other world that we should be worried about? When the first social media company started, MySpace, early Facebook, I had a lot of friends who were early adopters. I have a sister out in Silicon Valley. And the conversations around how people's behavior was changing rapidly to suit the platform, whether it was MySpace or Facebook, really fascinated me because it struck me as
Starting point is 00:09:11 something that was happening very quickly. Whereas I know human nature is very slow moving. We have evolved over a very long time to react to certain things in certain ways. And to think about ways of altering our behavior quickly is not easy. So I started to wonder about that in the collection of friends and the way that even the word friend was changing as a result of these platforms. And then I started looking at early online dating platforms as well. I wrote an essay about romance in the information age, again, very early, pre-Tinder, pre-smartphone even. And just the way that people were both so enthusiastic about these tools, but also a little bit naive about the unintended effects it had on
Starting point is 00:09:53 their own thinking, their own way of perceiving others. The idea that ranking your friends and ranking your dates and ranking all these things was just, of course, that's how you do it. It's more efficient. But how that can also undermine serendipity, how it can also undermine the patience and tolerance that's required to really get to know another person before you judge whether they're worthy of your time. But could you not say that, or could you not reason that every new technology, ones that have transformed our lives for the better. Like I presume you think everything from Gutenberg's printing press to Google search have augmented our abilities in ways that are incredibly productive. And yet, you know, the Gutenberg's printing press led to incredible dissemination and proliferation of bad information. And, you know, produced, you know, we were able to have the Bible reach far and
Starting point is 00:10:46 wide and also Mein Kampf reach far and wide. So you could have made the Google search, you know, enabled us to augment our skills and producing and writing and thinking and researching. And yet, again, incredible spread of misinformation. Like, couldn't you, these concerns you have, you could apply to every innovation in history. And so again, before we get to AI, just take those topics you were hitting just now, like social media, couldn't you make the same argument about, yes, yes, it's unfortunate that people rank friends. And that's weird, like really, really weird. On the other hand, if you look at, you know, the Arab Spring in 2011 and how technology enabled the Arab Spring in a way that created all these citizen activists. We'll get to your recent essay in commentary about the future of cable
Starting point is 00:11:39 news, but I don't jump into it now. But even you cite that in 2004, Dan Rather was toppled by a blogger who was able to use technology and sort use to some degree, or the tools you use, come with it some downsides. And that's kind of normal with every innovation. Absolutely. And look, the real difference here is the pace and scale of the change. So the printing press, we had a couple centuries to really acclimate ourselves to it. If you look at the telegraph, if you look at the wired telephone, the adoption of these technologies took time. And with the time, we also had a shift in behavioral norms, in social norms. We were able to adjust at a slightly slower pace. We do not have the luxury of that time now. Changes happen very rapidly. We can even go through with AI. I jotted down some
Starting point is 00:12:47 dates just to show us how rapidly it's developing. It does change more quickly, and I think we're not necessarily wired to adapt as quickly as change is happening. So that poses one challenge. The secondary challenge, and I would say even with all the benefits of a lot of social media platforms, a lot of destruction of the gatekeepers in media, for example, which I do think was a necessary thing, is that it's very easy to tear things down and it's harder to rebuild. So we're in this new process, I think, both with social media platforms and with the new media, where we're trying to figure out how to build a new thing that functions under new conditions. And that's going to be a lot of trial and error. That's going to be a lot of trial and error. That's going to be a lot of upheaval.
Starting point is 00:13:26 It's going to be a lot of misinformation that people are going to come across as a result of this. But I worry, the danger I always feel that's in the back of my mind, both with these platforms, with AI, with any of these new tools, is how are we, what kinds of behaviors are we habituating ourselves to? Are we trying to become more like the machines? Are we trying to make the machines more worthy of us? Trying to make the machines function in a way that is an extension of man, as old theorists used to say, of technology? Or are we becoming
Starting point is 00:13:55 more like the machine? How many of us have had the experience once a week of having to prove you're not a bot, prove you're not a machine? You got to go on and click all the pictures. We're being trained in certain platforms and in certain spaces to be more efficient, to be more machine-like. And that is fine with certain tasks. But when it comes to developing deeply rooted communities and human relationships, we do not need to be more like machines. We need to be more human. We need to actually be more patient, more tolerant in ways that I don't think we can design an algorithm to do for us. So bringing it to this AI moment, I put people's, I mean, this is going to be sort of a crude categorization, but I put people's reactions to it in like one of four or five categories.'s the oh my god ai is the terminator and
Starting point is 00:14:45 it's gonna sky net exactly exactly and and we're training these these machines that are gonna like kill us all and ignore our prompts and do what they want to do there's so there that's like the truly dystopian fear i guess another slightly less dystopian but still pretty dark view is just it's going to just dramatically heighten inequality or exacerbate inequality. And it's going to lead to like socioeconomic strife to, you know, in ways that we couldn't even, you know, could never possibly imagine. That's sort of another category. Then there's the sort of what I would call like the Derek Thompson category from The Atlantic, which is, you know, AI may be good, AI may be bad. The problem is it's just not that the qualities. It's a gimmick now. It's a gimmick.
Starting point is 00:15:32 Like that's kind of his beef with with AIs. It's just people are having fun with it, with chat GPT. And but they're not like really using it to solve real problems. It's a gimmick. It's fun. It's akin to the iPhone being released at first, which was fun and neat, but clunky and the app store didn't exist. And it just didn't have all these tools that we've become dependent on. So it was a fun thing to have, but it wasn't as transformative, at least at the time. Then there's the Tyler Cowan, Mark Andreessen,
Starting point is 00:16:03 techno-optimism. And so you don't fall into any of those categories. You're not in the Terminator category. You're not in the, I don't think you're in the dystopian kind of, right. You're mostly concerned with the impact it has just on human beings and human interactions. I want to quote here from an essay that you and I were talking about offline by Mark Andreessen, where he kind of goes, his essay is titled, Why AI Will Save the World. Very subtle title. Subtle. Tell us how you really feel, Mark.
Starting point is 00:16:35 Well, you know, he was famous for saying software will eat the world. So this was his upbeat sort of twist on that, I guess. And so I guess he sent us an email. He blasted us an email. It's like a rant. He's explained it. It was his version of, I'm mad as hell. I'm not going to take it anymore.
Starting point is 00:16:54 He was tired of hearing from people like you, I guess, who were saying that there was huge problems with it. So he decided to just put finger to keyboard and lay out why we should be more optimistic. And I'm not going to obviously quote extensively from it, but he says here, and I'm quoting this part, he says, perhaps the most underestimated quality of AI is how humanizing it can be, how humanizing it can be. AI art gives people who otherwise lack technical skills the freedom to create and share their artistic ideas. Talking to an empathetic AI friend really
Starting point is 00:17:26 does improve their ability to handle adversity. And AI medical chatbots are already more empathetic than their human counterparts. Rather than making the world harsher and more mechanistic, infinitely patient and sympathetic AI will make the world warmer and nicer. So Christine, why are you against a warmer and nicer world? Yeah, so this piece drove me absolutely bonkers. By the way, we'll post the piece in the show notes. It's a, look, I appreciate his optimism. I share his optimism when it comes to what artificial intelligence is already doing
Starting point is 00:18:05 in not artificial generative intelligence, but strict old fashioned AI and what it's doing in the biomedical fields, the way like you can synthesize proteins in a split second. You can do all of these. Yeah, amazing, amazing things. And where there's a tool that you can clearly see down the line, true benefits for humanity, humanity saving lives that's a real benefit so on on that score i'm quite we got a version of it during the pandemic right mrna vaccine exactly exactly which they literally had code on on the on the virus like in january that and they and the um you know the heads of some of the drug companies said the code they used to create their mnr vaccine was the code that landed in their inboxes in like in January. Exactly. Of 2020.
Starting point is 00:18:51 Well, and think of like cancer screening, you know, the scans, having radiologists work with an AI tool that can pinpoint and find patterns in a moment that would take a radiologist a lifetime to be adept at finding. Now that you don't want to take the human out of that loop ever, in my opinion. But there are these hugely powerful, positive things that are going to come out of these tools. Where I depart from the Andreessens of the world is this idea that we should replace human interactions with AI chatbots, for example. So here's the whole thing that humans have a struggle with. We can't sit in a room by ourselves alone doing nothing and be happy. We need other people. We need communities. We need a sense of purpose. And we had an experiment. We had a global experiment in that during the pandemic. Exactly correct. Exactly. The pandemic really drove these lessons home.
Starting point is 00:19:37 It drove home the lesson of the need for face-to-face communication, the way that mediated interaction. Although brilliant, I'm talking to you by looking at you over a wonderful screen so that we can see each other's expressions during our conversation. I'd still rather talk to you over coffee in person, that you're going to get more from people when you're in person. The connection is better. So I think what worries me about Andreessen is saying he sees the AI chatbot as a perfect replacement of the human. It's like, look, humans are impatient. They get tired. They're not always empathetic. Wouldn't it be great if you always had something at the push
Starting point is 00:20:08 of a button that was those things for you? I would argue, no, that's not good for us. It is not good for us to do that. So in that sense, this idea that you can replace these deeply human needs, which give us a sense of meaning and purpose and belonging and ground us in communities that also, by the way, remind us that we are embodied physical creatures with frailties and that our struggle to overcome those frailties and to deal with them emotionally and physically and psychologically and spiritually, that's what makes us human. So no, I don't want a perfectly empathetic, tireless AI chatbot replacing my friends who are sometimes cranky and impatient with me because their human reaction forces a fellow bonding with me that says, you know what?
Starting point is 00:20:53 She's having a really bad day. What can I do to help her? I'm not going to feel that for a chatbot. I'm going to say, come on, why aren't you giving me what I want? We are already a pretty entitled species. I don't think we need tools that make us more narcissistic and more entitled. Okay. So I want to quote further because this is exactly what Andreessen gets into. So I'm quoting here. In our new era of AI, every child will have an AI tutor that is infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful.
Starting point is 00:21:23 The AI tutor will be by each child's side every step of their development helping them maximize their potential with the machine version of infinite love i'm sure you love that phrase the machine version of infinite love every person will have an ai assistant coach slash mentor slash trainer slash advisor slash therapist that is infinitely patient infinitely compassionable compassion infinitely knowledgeable he goes on and on. The AI system will be present through all of life's opportunities and challenges, maximizing every person's outcomes.
Starting point is 00:21:52 Every scientist will have an AI assistant collaborator partner that will greatly expand their scope of scientific research and development. Every artist, every engineer, every business person, every doctor, every caregiver will have the same in their world. Now, some of this you don't, you're not horrified by. I assume the scientist having a collaborator is. As long as it's not a hackable collaborator. Yes. Yeah. But, but it gives them more horsepower, makes them more productive, can synthesize, you know, 50 research papers in one email. Right. But this is the stuff that that that you know the machine version of infinite love you're basically saying children should not have to learn i mean sorry children should have to learn
Starting point is 00:22:32 what it's like to have to work with people in the world who don't give you infinite love oh yeah well we've both had i have i have uh kids you have kids yeah and think of the tyranny of the three-year-old the natural built-in tyranny of a three-year-old who's like, I want it. I want it now. Imagine a world where that three-year-old, and this would be the three-year-olds in the developed, you know, sort of well-off West to begin with, could turn and say, get me this. And it would appear, get me that. We already have studies, by the way, of how children use smart speakers, how they can summon information and they'll walk into rooms and demand something. This is not a skill we want young children to have. When I had kids, we had stairs and they would have to sit on the step when they sort of acted out. It's like,
Starting point is 00:23:15 you sit on the step two minutes, you set a timer. And watching them struggle with their sense of self-possession in two minutes, that is necessary. That's how humans learn self-control. It's how we learn to understand our own emotions. Like, why did I blow up? I'm sitting on the step wondering why I demanded that cookie. That's the process of becoming human. And if you outsource demands to something that always gives you what you want, you will never have to learn those lessons and you will become an adult that expects the world to instantly gratify every need you have. Yeah, I've got to be careful not using names here. My son, a couple of years ago, had a math tutor who was both indispensable to his, and fantastic teacher,
Starting point is 00:23:59 indispensable to his kind of turnaround in math, which had been a challenge for him. This is not Eli, it's my other son. And at the same time, every appointment, in-person appointment, all his sessions with her were in person. He was often late for. And finally, one day she fired him. But life lesson, right? Right, right.
Starting point is 00:24:19 That's my point. She fired him as a student because he was always late. And he was horrified because he was like, I can't I won't be able to make it in math if I if I don't can't see it. I said, dude, get on the phone and make your case and like come up with your parole plan, like figure out how you're how you're going to fix this. And like, I'm not saying my son is going to be a mathematician, but I do think that kind of experience is the opposite of what Mark is envisioning with the ever accommodating, ever patient, ever I'm here to serve you AI tutor. And can we also acknowledge that he's describing a one-way relationship? Look at what this thing will give to you. The problem that we know from human-computer relationship studies,
Starting point is 00:25:10 we have decades and decades of social science and computer science research that shows we will impart motives and feelings and behavior to the AI that it doesn't even necessarily have. And we do that because as human beings, that's how we learn to connect. And this started with Eliza, the chatbot, which people instantly thought had feelings
Starting point is 00:25:31 because it was talking to them. And the sophistication and the level of mimicry that we're going to be able to do with AI and already can do to some extent with ChatG GPT is going to exacerbate that likelihood. And again, for adults, it's bad enough, but for a child to emotionally invest in something that has absolutely zero emotional investment in the human being in return, it's simply programming. Okay. So let me ask you, did you feel about a couple of random examples?
Starting point is 00:26:03 Do you think that, I mean, you're basically saying it's de-skilling us, if you will, as humans, de-skilling us of human skills. So did you think that autocorrect de-skilled us or digital maps in a way de-skilled us? Yes, they all do. Yeah. And again, the trade-off we make, so take GPS. GPS is indispensable to people, most people these days. I use it all the time when I travel. But it does change our perception of things. First of all, I grew up learning how to navigate on paper maps. I took a road trip in grad school with, you know, a car with, it didn't even have power
Starting point is 00:26:41 steering standard. You know, we drove across the country. We had paper maps. We got lost across the country. We had paper maps. We got lost all the time. We had to stop. We had to put the map on the hood of the car and figure out where we were going. Ask people for directions. Ask human beings, where are we?
Starting point is 00:26:52 Where do we go? Where can we stay? Where can we eat? Which kind of taught us about the places we were. It was, I will never forget that trip. It was a wonderful, wonderful experience. And incredibly scary at times where you're like staying in a dodgy hotel and you're like, we don't even know where we are on the map. Those are good experiences.
Starting point is 00:27:08 GPS is much safer, more efficient, puts you right at the center of the map, but you don't know where you're going or how. And I think we've made a tradeoff there where we've decided that the efficiency and the ease is worth it. And so that's good. I think for the most part, all those little anecdotal stories we heard at the beginning of the GPS era where people drove off cliffs or into lakes, we hear fewer and fewer of those. But we do lose that navigation skill. I make my kids learn how to read a paper map because that is something that most kids don't have to do. Yes. You do that? Yeah, I do. We put out the map and I'm like, okay, find where we're going because we take a drive up to Maine every summer. And I want them, if for some reason the GPS went out'm like, okay, find where we're going because we take a drive up to Maine every summer.
Starting point is 00:27:50 And I want them, if for some reason the GPS went out, to be able to find their way if they had to. It's a skill. So we can choose to keep those skills going. We can also decide, you know what, the tradeoff is worth it. I think GPS, the tradeoff was worth it, even as it has led to a decline of certain skills, but the human skills, the emotional skills, the reading each other's feelings, the understanding, the true empathy skills, I really am concerned about the de-skilling that's already taken place even before we've gotten into the world of perfect AI chatbots. So I'm sympathetic to a lot of what you're saying. And then I'm just sort of, it's kind of coming up against the kind of the hard rocks of reality of not just technology progress, depending on how you look at it, but just the world extent that there's some kind of unified vision for the what ai should be in china it is quite daunting uh and in terms of i mean you already look at what they've done with 5g and the influence
Starting point is 00:28:55 and the way they use tiktok and the way they use sort of digital authoritarianism to control their population i mean they have a vision talk about dystop, they have a vision. Talk about dystopian. They have a vision for how to use AI and they want to be an AI superpower globally. And they have reach. If you look at their, you know, their one belt, one road strategy and their kind of death trap debt. What is it? What was it? Whatever the debt, death trap diplomacy around the world and gaining all this leverage around the world, AI could take that to a whole other level. That's the reality. That's what we're up against, the United States. So at a practical level, do you think there's any way to put the
Starting point is 00:29:37 brakes on this without putting us at a massive disadvantage with adversaries like China? So I know that this is often the way it's posed. If we don't do it, if we don't do this well, i.e., if we don't have any brakes and just go ahead and get stuff to market as quickly as possible, then bad guys in China will. And I know that that's likely true. I don't think that's an unlikely scenario at all. However, I do think we need to think through previous revolutionary
Starting point is 00:30:07 moments and how we dealt with them. So, a lot of people point to nuclear, but I think this is much more like recombinant DNA research, where we could permanently alter what it means to be human. We have the skills now, with CRISPR; we can do these things. There is a global consensus that was reached pretty quickly that that's a really bad idea, because down the line, we really have no idea what that would generate in terms of humanity's future. So I think that we need to treat AI in the same way. Now, we got China sort of on board with some of this. I mean, they have punished scientists who've meddled and used CRISPR to do actual germline engineering. There are ways to stop this. I will say my big concern with China, quite frankly, is the ghost worker economy that allows AI to function. The 20 million workers
Starting point is 00:30:57 who do this, largely in the Global South, in Africa, in Nairobi in particular, who do the actual human labor of training these systems so that they can identify things so quickly. There are human beings behind the curtain here that no one ever talks about when a glamorous CEO testifies before Congress. Human beings paid piecemeal, sometimes pennies, for piecework. It's digital piecework. China is much more of a power in the regions where those human workers who create the AI are functioning. So the influence in those regions, and our need as a global power to counter that, I think is important. We're not doing that now. I don't think, however, as the United States,
Starting point is 00:31:39 which is a beacon of values, virtues, I like to use the word virtues. Nobody uses it anymore. I find values kind of tepid in this context. We are supposed to stand for something. And we are also still the leader in a lot of this technology. So standing up and saying, you know what, we're doing this, but here's how we're doing it: we have an industry standard that won't go beyond this point; we have oversight boards that do X, Y, and Z. My concern is that the industry itself in the U.S. has no interest in that. We still don't have an industry standard for social media platforms, because the companies can't agree. They will not sit down around a table. Now, they all kind of claim to want it, but we can't even get our act together on something that simple, that we know to have some impact on people's lives. We need to do that with AI. A moratorium isn't necessarily a good idea; I think, quite frankly, strategically, for the people who are a little behind, it lets them catch up. But we do need standards that still allow for new entrants into the market.
Starting point is 00:32:34 I'm not sure how we do that in a country that can't even decide what it stands for as a country anymore. This is where I think the AI argument, we're talking about values, and what are our values anymore? What is our position as a global leader? We're in a state of anxious identity right now as a country. And I think the AI stuff has created more anxiety in part because we are feeling a little bit uncertain coming out of the pandemic, coming out of a political system and a sort
Starting point is 00:33:04 of polarized political culture that doesn't let us have debates very reasonably anymore. But you said earlier, and I think you have some dates there, about the speed with which AI has developed. So can you just... Yeah. So I decided to, I was like, well, maybe I'm exaggerating. I wanted to look back. So 1997, the infamous Deep Blue chess match, where Deep Blue defeats Kasparov. Then flash forward to 2016: DeepMind's AlphaGo wins at Go. Now, Go is a very complicated game.
Starting point is 00:33:40 There's like tens of thousands of possible moves in Go versus chess, which is a fraction of that. So much more complicated than chess. It solves it, but it does it by being trained against human matches. It learned by seeing how humans played Go. That's in 2016. Flash forward to the AlphaGo of now. It's called AlphaGo Zero. It trains itself on games it plays.
Starting point is 00:34:02 The human is out of that loop, and it beats humans all the time. So, AlphaGo Zero. So again, that's just a short time ahead. 2019, Pluribus. This is one that actually shocked me. A program called Pluribus defeats human players in no-limit Texas Hold'em poker. Poker is a game where, as human beings, we think, well, you're really good if you can read people's tells or you can bluff.
Starting point is 00:34:25 The computer beat us at that. And then finally, you've got Cicero, developed by Meta, which was the one that was really intriguing to me. It was playing a game called Diplomacy. Again, something that involves strategy-making, understanding motive. And it was very skilled at winning using deception. It figured out how to be deceptive and the humans didn't know. And that, to me, just shows, that's from 1997 to 2023. That's a very short span of time in which the tools that we've developed in many ways have figured out how to
Starting point is 00:34:57 outsmart us, slow biological creatures that we are, and we can't always explain how they figured that out. That's the black box part that I think the Marc Andreessens and the Tyler Cowens of the world just kind of sail right by. But I've talked to AI researchers who work on a lot of biomedical issues, which is why I'm very optimistic about what's happening in that field. A lot of them will be honest with you and say, we figured out how to do this, and when we went back and tried to figure out how it did it, we couldn't explain how it did it. Now, that doesn't concern them, because they're dealing with very limited sorts of goals, with a lot of safety and guardrails up that they put in the
Starting point is 00:35:36 beginning. But not all AI researchers are going to have those narrow goals and guardrails. And so if you can't explain how something learned to do something, how are you going to reverse engineer it if something catastrophic gets out of control? That's where I think the catastrophism is not entirely crazy, because you've got to know how to go back and reverse engineer. So Cowen used this thing where he did a percentage, like, oh, well, if the plane was 90% likely to land safely, wouldn't you still get on the plane? Well, AI researchers themselves have described their work as building the plane while it's taking off. So I'm not getting on that plane. So I think we've got to think about the likely risk of some of
Starting point is 00:36:15 this stuff. So November of '22 is when ChatGPT was released, right? So, I mean, was it on your radar before then? A little bit, but only in that it was mentioned here and there in articles I would read, and in pretty tech-specific journals where people chat about that stuff, but yeah. Okay. So you were focused on social media before that, and social media was kind of building, developing, progressing, you know, permeating so many parts of our lives for about a decade and a half. The computing power here and the intensity of it is at a whole other scale.
Starting point is 00:37:02 So sure, we should have these protocols. Sure. I mean, even if I were to agree with you that we needed these protocols, and that we needed to develop some kind of universal theory in the U.S. about what parts of, because you're not saying shut it all down, you're not saying halt it all, what areas of AI should be left to kind of do their own thing, and where we need to be a little more thoughtful and less accelerated. But at a practical level, again, okay, fine. It just strikes me, the speed with which this is moving. I mean, our government doesn't even, I mean. Yeah. Well, that's the concern. I mean, everybody knows whenever a tech CEO testifies before
Starting point is 00:37:44 Congress, it's always very amusing to see the congressmen. Like, they're still back in 1992. No, no, they behave like they're calling a help desk. The members of Congress, they're like, so wait a minute, you're telling me what now? Yeah. No, but this is the moment we're in. And that's why I actually think that neither the total doomsaying, shut it all down, and there's a contingent of those folks, nor the it's all fine, just let it run wild. Both of those are extremes
Starting point is 00:38:10 we cannot really accept, or should not accept. There is a middle ground. The problem is, I think, for one thing, that for the leaders in that middle ground, again, I've talked to people who run these companies, there is zero incentive for them coming from either the risk of regulation or legislation, or, you know, fear of being held accountable later on if something does run amok. They have only one incentive: get what we're doing to market first, so that we can be the first ones in this space using AI in this way. And I understand that. I am a free market person. I think this is a generally healthy impulse. The challenge I have is with all the mechanisms we have in place, whether it's regulation, whether it's lawsuits. For example, litigation is a pretty good tool,
Starting point is 00:38:55 if something goes wild and harms people, to deal with it. But where's responsibility here? When you talk to the people who do this research, they can't even pinpoint it. Well, the algorithm did it. Well, that's not really legitimate. If you're being sued, the whole company is going to get sued. The way responsibility is dispersed in a lot of these companies with regard to AI, the outsourcing of a lot of the sort of data input that goes in to generate these models, it's just very complicated. And if the people who are using it can't explain it in a clear-eyed way, how are any human beings, just the average Joes like me, going to be able to demand that they be responsible for what they produce down the line if it harms people? You are a teacher. You
Starting point is 00:39:37 know, in this conversation Tyler and I had on my podcast, I said, does this mean the end of homework? He says, homework's already over. It's over. Like, that is over. He says whatever you can get students to do in class will be where the real learning happens. But homework is also how you learn to develop other skills. Homework, the way you would think of homework, which is thinking, reasoning, writing, researching: unless it's done in class, kids are not going to learn how to do it. So this strikes me as something that's not... There's a net gain and a net loss here. The net gain: I love it that you probably grew up in the era where you had to handwrite your exams in a
Starting point is 00:40:19 blue book, right? Remember when people actually taught handwriting? That was eons ago. But there are these skills that we will, in a weird way, have to bring back because— By the way, I still learn something in a better way, and the longevity of my memory for it is higher when I actually have to write rather than just scroll or type or whatever. There is cognitive research that backs that up. Students who take notes by hand in a lecture retain more information than those who use keyboards because they have to summarize in their head while they're writing. They cannot just do a transcript word for word. So the way our brains are designed is to write, to slow down the thought, compress it, summarize it, put it on paper. So I think that those skills, if we end up weirdly having to bring those back, that's a net positive. The net negative is, and I deal with this with students I teach college level,
Starting point is 00:41:15 I teach everywhere from junior high to high school to college level. The idea that every bit of the world's knowledge is available to them with a Google search, or online, or in a Wikipedia article, is a notion I really try to dissuade them of. So much of our knowledge is still embedded in undigitized archives, in old books, in places that they might never come across, because they assume everything's on the screen in front of them. And the concern I have with these AI-generated chatbots, ChatGPT, for example, is that they will fake things. So you can say, write a scientific article with citations about X subject, and it'll create it and it'll look perfect. Some of the footnotes will be faked. Some of the research will be faked. It will seem absolutely plausible, but it is not
Starting point is 00:42:05 true. And so the difference between plausibility and truth is, I think, the distinction that, as teachers, we're going to have to make with our students going forward all the time. You can say, yeah, that seems plausible. Is it true? We do this already with misinformation and disinformation, but the scale at which we're going to have to do this, and the number of fields in which this is going to become important, particularly scientific research, is a challenge. I mean, we already have a replicability crisis in social science research. That's been going on for a while. Imagine a world... Which is what? Can you explain that? So you have a lot of these, like, gee, wow, that's totally nifty social science research
Starting point is 00:42:40 experiments. They get written up in the papers. People are like, that's incredible. Look what they learn about human behavior. Two years later, someone tries to replicate that experiment to see if the results actually are legitimate. No, not replicable. A lot of the times, these are like one-offs, probably badly designed studies that don't end up telling us anything about ourselves. And this is how we learn about what it means to be human, by studying our own behavior and doing it in a systematic way. My concern with a lot of the AI-fueled research developments is that these summaries, these quick turnarounds, they can be useful if the limitations are understood by the researcher at the get-go.
Starting point is 00:43:15 Where they become harmful is when a high school AP biology student gets a ChatGPT summary of something that's incorrect or misleading, and then that's a building block for them down the line when they're in medical school. You can see a long-term effect of this kind of vaguely wrong but plausible information. We need citizens in this country to actually have some faith and trust in the integrity of the information that they're learning. And that's the role of parents and teachers, to make sure the information their kids are getting has integrity. Two brief topics, sort of related but unrelated.
Starting point is 00:43:52 We touched on it, alluded to it earlier on, and I know you've written about it and thought a lot about it. The long-term implications of the COVID lockdowns on young people: even with the studies coming out about how certain students are behind in certain academic subjects, and increasing rates of teen depression and loneliness and even teen suicide, even with all of this information that is now available to us that sort of captures and chronicles this period of mass school lockdowns that we went through during the pandemic.
Starting point is 00:44:32 You believe we still haven't fully, we still don't fully appreciate how bad it was for young people, and the long-term implications. That's right. And I really am concerned that we're not holding responsible the people who brought this on our children. I mean, I joke, but I'm only half joking. If I were, you know, half the age I am now and a young lawyer, I would try to sue every teachers' union in every state that kept public schools closed. My kids are public school students.
Starting point is 00:45:00 They were out of school for a full year. They were high schoolers. They had the benefit of, you know, me being able to really supplement what they did. And I did do that. So they were okay. They still lost a lot of learning. I mean, there are still huge gaps that they found, you know, academically and in social learning both. Their entire cohort, they're about to be seniors in high school, is fascinating to watch socially, because they really are about a year and a half behind. In ninth grade, that first year of high school, they really were all separated. And as much as they've tried to make it up, they are kind of emotionally not at the level that I think they would have been without those lockdowns. And so my concern also is the long-term effects of this. It's particularly acute for young kids, kids in elementary school who are learning
Starting point is 00:45:48 the building blocks of reading and writing and social, emotional ways of understanding each other. They lost, some of them lost a year and a half. That will have long-term echo effects 10, 20, 30 years down the line. Their anxiety around leaving the house even. Their anxiety around what is actually... Yeah, I mean, you hear these stories, and I actually have spoken to parents,
Starting point is 00:46:12 many parents who have young kids who, because of all the safety precautions that they saw adults in their world creating for them, and these are not parents who were crazy about any of this stuff, they just followed what was advised. You know, anytime someone gets sick, the kids are like, well, we have to test. There are all these protocols that they just grew up assuming were normal that aren't normal. They were excessive. And so the parents are having to kind
Starting point is 00:46:33 of teach them, well, that was just pandemic-era stuff. We don't have to do that; it's just a cold, we don't have to test for that. It's just the flu; you're just going to rest. But you know, the fear and anxiety, that stuff comes out down the line. And we see it, I think, in some of the mental health crisis. It was building before the pandemic. What exacerbated it, particularly for young people, was the sense of isolation. They would spend a lot of time online and they would be chatting with their friends, but they still didn't feel connected to them in some way. It didn't help them.
Starting point is 00:47:02 For some kids, it was fine. But for kids who were already trying to deal with mental health problems, it made it worse. So that kind of stuff is where I think it was a wake-up call, but actually I fear we will not hold the public officials, the public health establishment, and the teachers' unions accountable for what they did to an entire generation of children. It's a tragedy what they did to those kids. Yeah, I agree. We've got to, I guess, find those lawyers who can take this on. Come on, you young lawyers, get out there. I'm too old. Okay, I want to just wrap with your piece, in, I think, the most recent issue of Commentary. It is. You titled it The End of Cable News, which you pivoted off of both
Starting point is 00:47:47 Tucker Carlson leaving Fox News and Chris Licht stepping down from CNN. And I'm just going to quote from you here. You say, the overall effect for consumers is that the news is digital and atmospheric rather than coming from a particular voice. This has resulted in declining audience loyalty to individual news gathering institutions and greater engagement with the social media platforms that serve up information like a hyperactive Associated Press, a 21st-century wire service with memes. I like that. But the part of this quote that actually got me the most was that news is atmospheric. That is exactly how I feel when I give talks or lectures or, you know,
Starting point is 00:48:25 I'm often asked, well, tell us about your media diet. I get that, a version of that. Like, what do you read? What newspapers do you read? Do I stack up, like, the FT and the Wall Street Journal and the New York Times every day and just go through it all? And I try to explain, like, where do I get my news? From everywhere. You know what I mean?
Starting point is 00:48:42 And so the atmospheric is exactly right. So, and I encourage people to read this piece, which I think is excellent, and we'll post it, but what is your big point? It's not just about the end of cable news, what you're writing about; it's about the future of news. Right. So, like you, I was actually quite an enthusiast of the decline of the gatekeepers. When the internet came along and smart people could fact-check in real time the misleading
Starting point is 00:49:09 statements of the New York Times or a CBS News anchor, this was good. This was actually very democratic. It was good populism, I would say; you now have to distinguish between good and bad populism. But it was a bottom-up movement to hold accountable people who otherwise had not been held accountable for their misdeeds when it came to facts in particular, and to political content, particularly ideological content, all along the spectrum. Again, all for the good, if you like it. If you like your news left-leaning, you watch MSNBC. If you like it right-leaning, you watch Fox News. That's fine. They compete, healthy, go for it. They fact-check each other; there's some sort of weird balance there. But social media really upended all of that, because we're just getting the information in little micro-doses. It doesn't really matter where it comes from, and we don't really check where the original source is.
Starting point is 00:50:09 So you read the tweet. Do you ever click on the story and read it? Sometimes. Many people do not. Or they just get a feed, and so everything is given the same priority. So it's like: here are pictures from my vacation, there was a volcanic eruption, oh, the election was stolen. And it's all scrolling along, given the same prominence. And there's not an encouragement of depth-seeking, right? So I think the danger with social media: Tucker is a perfect example. He leaves... well, he's technically still with Fox News, but he's broadcasting his own show on Twitter. He has no gatekeeper.
Starting point is 00:50:42 It's just him in a cabin with a microphone and an audience. I like the comparison to the Unabomber. And I wrote this before Ted Kaczynski's death, so I was like, oh, well, that was weirdly timed. But he has a feeling of, like, now I'm going to tell you the truth, I'm going to really cut loose. So when he cuts loose, what you see is a man who's sharing deeply anti-Semitic stereotypes about global leaders, who is just conspiracy-mongering and making wild theories, kind of homophobic rants against senators. I mean, just kind of what your crazy uncle used to do when Facebook first came out, right? And he'd be like,
Starting point is 00:51:17 I think this is happening. And it is very QAnon-ish. It got a lot of viewers. So my question then is, if that's actually the place, that is the future. The future is a lot of viewers. So my question then is, where if that's actually the place that is the future, the future is a lot of people with big personalities and crazy theories and no fact checking going on Twitter and doing this now to I will say one caveat to that. The the community notes function on Twitter is brilliant. And I love it. It doesn't always work perfectly. But that's an example of the gatekeeping effect still in effect on one of the people who blew open the gatekeeping. So I like community notes. It actually holds two accounts pretty even handed politically too. Like people, people love to fact check others and tell them they're wrong. And that community
Starting point is 00:51:58 notes function is useful. For a lot of younger readers in particular, everything is atmospheric news for them. The problem becomes when we have very tribal politics, a very polarized culture, and then you add into that a lack of integrity in some of the information. Well, he's just telling us the truth. They all lie to us. Okay. Post-COVID, there's a lot of truth to the idea that the officials in charge often tell you the noble lie so that you don't freak out. There's evidence for that. There's also evidence that he's just kind of gone a little crazy. So how do you get back to what democracy used to always do best, which is everybody arguing and then coming to some compromise, some sense of agreement, so we can move beyond that? I feel like we're all really
Starting point is 00:52:48 trapped in the same constant battles. Social media drives them. It's not caused entirely by social media; it's caused by the way we behave as human beings. So that is my concern. The future: cable news can't save us. It's kind of dead. Social media is going to make this problem worse. How do we come out? What's next after this that's going to get us away from that? The one hopeful opportunity is, I do think you will see increased investment in highly personalized, in a healthy way, personality-driven news. So for example, not to plug your critically acclaimed podcast, but the Commentary podcast has a personality to it. It has a real personality to it.
Starting point is 00:53:27 I know from when I'm on it, they talk about you guys like you're siblings. We squabble like siblings. They squabble and they analyze. It's like the Kremlinology of like, oh, John and Christine argued about this. They got a little tense, but then Matt came in and Abe was a little quiet,
Starting point is 00:53:44 but he made that key point. AI can't do that. By the way, I see this with other news sources, not just podcasts, but even print. Some of these sub stacks I read, some of the Jewish Insider, which is this daily news feed I read, it has a real personality to it geared towards people with a real interest. It has has values it has values and principles exactly right and and a voice and a personality that i don't think these large language models are going to be able to replicate at least anytime soon so i do think you will see a doubling and tripling down investment in the kind of more um thoughtful um personality driven news some of it will be crazy yeah you know so some of it will be crazy.
Starting point is 00:54:26 Some of it will be unthoughtful. But it's a free country. We're free speech people. You've got to take the crazy to give everybody a voice. That's the deal. All right, Christine, thank you for doing this. Thank you. It's a pleasure.
Starting point is 00:54:41 I hope to have you back on. We will post a couple of these pieces we discussed in the show notes. And, you know, thanks for sort of bringing a douse of, of crushing morosity to the call me back podcast. Happy to, you got your, your,
Starting point is 00:54:54 your on brand and that we, we shouldn't get, we shouldn't let Tyler get away with his, you know, uncrushing hyper optimism. So, so the counter has been laid out. Great.
Starting point is 00:55:04 Thank you so much. That's our show for today. To keep up with Christine Rosen's work, you can track her down at the American Enterprise Institute, AEI.org, or at Commentary Magazine, Commentary.org. And her very good piece on the end of cable news that we wound up talking about we'll post that in the show notes as well but you can find it at commentary.org call me back it's produced by alan benatar until next time i'm your host dan senor
