The Dispatch Podcast - Will AI Destroy Humanity? | Interview: Andy Mills

Episode Date: November 17, 2025

Andy Mills, host of The Last Invention podcast, joins Dispatch CEO Steve Hayes to discuss the artificial intelligence revolution, the competing visions of utopia among the tech bros, and where AI will take us next.

The Agenda:
—The great conspiracy theory of our time
—Understanding AI, AGI, and ASI
—Alan Turing and the birth of AI
—The Cold War's influence on AI
—Race against China
—Where is the bipartisan policy?

Transcript
Andy Mills, reporter at the center of a new podcast series called The Last Invention. Andy, welcome. Thank you, Steve. It's a pleasure to talk to you; a big fan of you and the work you guys are doing at The Dispatch. Well, likewise, and I'm very interested in letting our listeners
Starting point is 00:01:12 know a little bit more about Longview. We'll get to that at the end, but I want to start at the beginning. What is this series about, and why did you call it the last invention? Well, what started off as a curiosity about a debate happening inside of the world of artificial intelligence grew over months into this very fascinating backstory to this moment that we're living through right now where I know that politics is interesting and I know that it feels as if the state of the world is in many ways, let's just say like dynamic in this moment. Dynamic is the most charitable way to describe it. Well, it turns out that the details of the AI story and the fact that we are where we are right now, it's so rich that I thought, okay, let's do a big eight-part series. Let's a deep dive into all of the different fascinating aspects of it, in part because it's one of those stories that although fascinating in its detail and fascinating on the surface, it's also pretty. pregnant with all of these other things, pregnant with all these other themes and ideas and questions
Starting point is 00:02:29 and I guess tensions that we're grappling with in our world now. And so it's on the one hand, it's just an investigation into what's happening with artificial intelligence, especially when it comes to the concerns that we may be about to walk into a truly transformative moment in human history. And people don't know whether they should be pumped or terrified of that moment.
Starting point is 00:02:53 And then on the other hand, it's also a historical story, a way for us to pull back from all of the content and all of the things that are happening right in front of our face and say, like, okay, let's see this in historical context and see what comes of it. Like what might be revealed by pulling back and looking at it with a bit more perspective. So that's the nutshell. Equal parts. history, philosophy, and sort of existential way, you know, contemporaneous reporting, I found it absolutely fascinating. I was hooked pretty early. You start the conversation by taking listeners through really a conspiracy theory.
Starting point is 00:03:38 And I will say, when I first heard you introduce the topic that way, I was immediately deeply skeptical. Like, oh, great, here we go. What's the conspiracy theory? But the conspiracy theory turns out to be pretty interesting. And while it certainly wasn't all true, there's, there are elements of truth to it. Maybe you could give us just a glimpse into what that conspiracy theory is and why it led you to the kind of reporting that you did. Yeah, so a lot of ways, this started off with a tip I got from a former Silicon Valley executive. who reached out to me. He was a fan of some of my previous work.
Starting point is 00:04:21 He trusted me and said that he was sitting on a bombshell that he had information that a group of people in Silicon Valley were attempting to overthrow the American government and instill an artificial intelligence government that they own in its place. And that Doge, which was like big at the time, This was like all the headlines were about what Elon Musk and his team of people were doing at Doge. He was saying that's stage one, firing these bureaucrats, trying to decrease the human workforce in Washington, this phase one.
Starting point is 00:05:00 Now they're going to start rolling in all of these artificial intelligences to take their job. But the ultimate goal, years down the road, is for artificial intelligence to supplant American democracy. And he's a legitimate guy. He really is connected in this world. while it did seem a little bit outlandish, it's the kind of story you would really kick yourself on if you just said whatever. If it ended up being true and you were like, nah, not interested.
Starting point is 00:05:28 Yeah, probably. But it was. I mean, so just in the beginning of that, as you sort of rolled that out, my initial instinct was eye roll. Like, really? Come on. But then you did some reporting. Right.
Starting point is 00:05:42 Well, two things came out pretty quickly when I did the reporting. One of them is that there were certain, details and there were certain confident claims that the source were making that I just not only could have not confirmed, but some of them just fell through, which happens, as you know, Steve, sources a lot. But the other one was that it actually is true that the thing that the AI companies are making when they say that they're making AI really is something more like a new species than it is like a new product. That the chatbots that we're interfacing with. They are not the
Starting point is 00:06:17 AI. They're more like the website on the internet. And the internet in this case would be the AI, right? And that automating new species, that brilliant superintelligence, they do think that it will
Starting point is 00:06:32 eventually do everything we do. We currently think of as work. And that will include, that obviously will include the bureaucratic state. And we didn't get super into this in episode one, but just for, because you seem to be interested in this, and I can see why the dispatch folks might be interested in it as well. I also immediately became intrigued because
Starting point is 00:06:55 there's a part of me that thought, wow, that might be nice. Like, I can see why the bureaucratic state has a bad reputation right now. Yeah. And when it comes to how do we allocate the budget to make sure that we can achieve all of these goals we've told our, the people, the people we represent, how much goes to bridges and how much goes to this, even just the AI systems as they exist today might be more adequate and might be more trustworthy in some ways than the current human beings with all their flaws and incentives that have been poisoning our politics for so long. So I from the beginning thought like, oh, these people who have this somewhat spooky and it's obviously freaking my source out plans to maybe replace this all
Starting point is 00:07:43 with AI, they actually, I can see where they're coming from and I can see that there's, there's something admirable happening here. And so I started just poking around and then it wasn't long before I started to realize that the people who are closest to the development of artificial intelligence, the people who are working inside of these research facilities who are working at these labs, they are engaged in a debate that sounds like science fiction. And other, like, if it was not for the progress that they've been making, if it was not for the fact that they've gotten so much investment, I could see why people would just, you know, shrug and carry on. But the fact that the debate that they're having right now is so important to them and largely absent from our public discourse is something that I feel like is a basic tool of journalism. Like, whether or not you agree with them that they're going to be able to pull off this.
Starting point is 00:08:40 AGI, as they call it, right, artificial general intelligence. And we can get more into that in a bit if you want. Whether you believe that they're actually going to be able to pull this off or not, at this point, they believe that they're going to do it. They have gotten billions and billions of dollars of investment from all over the world, from governments, from private citizens, believing that they're going to do it. And they're having a debate, they're already moving on to the debate called, what is this going to do to the human race?
Starting point is 00:09:12 What is this going to do to the world? And I think even when I first started reporting on this, I would talk to my fellow journalists about like, some of these guys are saying that this could literally lead to the extinction of the human race. And some of them are the very people who helped to create it. It's going to get more into that later if you want. And I'm like, what do you guys think?
Starting point is 00:09:29 I think about putting this podcast series together and doing this big investigation. And some of them were like, don't do it. Don't do it. But it's too weird. Everyone will think you're a kooky. It's almost like doing a story about alien abductions or something. It's just one of those things that we, it's too soon.
Starting point is 00:09:46 And yet it feels now, like as the series is coming out, even in the like six months since those conversations, things have changed. And people are really responding, I think, with curiosity and an open mind to what's happening here. So, yes, it started off with a bit of a conspiracy, followed it, some of it wasn't true. So some of it maybe was more true even than the, conspiracy theorists realized. And over the past several months, I think that we as a society are coming more and more to a point of wanting to take seriously what's happening, what debates are going on, what conversations are happening inside of these AI labs and inside of, in some ways,
Starting point is 00:10:27 the most powerful rooms in the world. Yeah, I mean, I think it's the case that there's a lot more two, you know, certain elements of what, you know, what the sort of broad case that was laid out for you. I mean, there are experiments taking place now, sort of in real practical terms, within government, about how things can be made more efficient, separate and apart from what happened with Doge. But how can, you know, how can the Department of Health and Human Services use AI to make the distribution of funds more efficient, for instance? I mean, those things are happening both inside government and at the university level, a lot of work being done on that. So we know that some of this is happening. I think the question that you attempted to get at in your reporting was, is it happening sort of at the scale that your source suggested and are the potential outcomes as ominous as the source might have suggested?
Starting point is 00:11:27 I think it's a good time. You mentioned AGI, and this was very helpful for me as I was listening to the podcast. to sort of define the terms, which you did very well. Maybe let's go through, what is AI, what is AGI, what is ASI, and sort of what are the implications of each? So, and this is, you know, the easiest, but maybe not the, there may be some PhDs who quibble with this, but just so we can all get on the same page. I think it's best to think like AI is just about automation. So the basic level I like to think of is the YouTube algorithm that might be showing the video that you and I are recording right now to someone who doesn't know who we are but is interested in AI. That algorithm is automating the insane amounts of videos on YouTube in an attempt to show people what they want to see.
Starting point is 00:12:30 Right? That's an AI. And the algorithms of modern social media have in large part been the AIs that we've been interacting with. But when it comes to this moment where these AI companies, there's like we're developing that AI, we're building AI. What their goal is, the thing that they talk about, the benchmark that they're chasing after, it's something that we often now call AGI, artificial. general intelligence, and that is different. That is when you have an AI system that is as intelligent and as capable as a very smart human being. And it can be a little bit tricky to imagine in our heads, especially because we have all these sci-fi movies that we've seen, and we keep thinking of the robot and we keep thinking of Terminator. It's like, get rid of that for a second and just think about a automated system, like
Starting point is 00:13:28 the internet in some ways. that can learn and perform any task that a really smart human being could learn and perform. And you can see why they're so excited about the capabilities of that, because the same way that you can hire somebody out of college who did really well and train them in a few years
Starting point is 00:13:49 to be really good at a number of different jobs, you can just train this AI system to do that kind of what we think of traditionally like managerial or white-collar work But then the second order aspiration is that we are building all of these humanoid robots, and some of them not quite so humanoid, but all these different humanoid robots that the plan is to put those into the blue collar jobs. And essentially the AGI, the system will be, sounds kind of crazy, but sort of like Wi-Fi gets put into our computers. the AIs through Wi-Fi will be able to be the brain inside of these robots.
Starting point is 00:14:32 And so this is why they think there literally won't be any job that a human can do that the AI can't do. And this is like a version of AGI has been the dream since the 1940s of this industry. In the 1960s, they really thought maybe they were going to get close to it and then they failed to live up to their own hype and they went into a really long AI winter throughout most of the 20th century. And even if you just go back to 2014,
Starting point is 00:15:02 in Silicon Valley, at places like Google, the idea that any company right now would be so bold and crazy as to actually try and build that was laughable, was embarrassing. And in the span of just 10, 11 years, we're at a place where not only do they think that we could do it
Starting point is 00:15:24 in our lifetimes, they think this thing might be here in the next five years. Yeah, imminent. Right, right. And that's why we're seeing this insane amount of investment in Dividea and in all these companies is because they have been able to persuade a lot of people that they can pull this off.
Starting point is 00:15:41 Now, I think we can quibble in the five years thing, we can get into that later, but what an amazing moment in technology. Now, when it gets to the next level, what's often called ASI or artificial superintelligence. There's still skepticism. There's more skepticism about whether or not we're going to hit that in our lifetime. A artificial superintelligence is an AI system that's not just as smart and capable as a very smart person, but is as smart and capable as human civilization.
Starting point is 00:16:14 So one of the ways we say it in the podcast is that you can imagine if you had a team of Einstein's working for you, Steve, at the dispatch, that would be awesome. You know, think of all the things that they could do. And P.S. I mean, we're close. We're close. Well, and these Einstein's, though, they don't need bathroom breaks. They don't take off weekends or holidays.
Starting point is 00:16:34 You don't have to feed them or pay them. Around the clock, 24 hours a day, they're working for you. Then you can see how transformative that technology would be. But they still have a lot of limitations. Artificial superintelligence is when an AI could achieve something. that we typically think of as only a country all working together with all our intricate parts can achieve. So think of it like maybe you could have a smart person
Starting point is 00:17:03 dream up the idea of a semiconductor or dream up the iPhone, but think about all of the pieces of civilization that have to work together and all the flawed humans working across the globe and across different sectors to actually build an iPhone and get it to the store and sell it to you.
Starting point is 00:17:20 Once that whole operation can be done by a single AI system, that's ASI, that's artificial superintelligence. The three more terms that we'll define quickly and then we can get into some of the history. I found the history part of this absolutely fascinating, in part because this is so not my world. I mean, I don't know this. The movies that you reference here and in the podcast, I haven't even seen them, including some of the classics. So this is not my world. What's that?
Starting point is 00:17:50 You have to see X Machina. I haven't seen Space Odyssey. No, it's pretty bad. The other three groups that are sort of come up as important in the series are the accelerationsists, the scouts, and the doomers. Can you take us through who each of those groups, who constitutes each of those groups and what they believe? So these are three different camps, three different paths.
Starting point is 00:18:22 that people are advocating that we take from this moment that we're in. And the important thing to know is that there are a number of people involved in this, like what I call like the great AI debate. These three camps are the three camps that have the most influence, and all of them are united by a shared belief
Starting point is 00:18:40 that this thing is coming. The AGI is coming. So there are skeptics and there are other people that are involved in it, most of the people who are working in AI falls somewhere in these three camps. The AI Dumers, as the name suggests, are the people who believe that the...
Starting point is 00:19:01 First off, they do not think that we're ready for AGI. We, as a society, have not done great integrating social media, and currently to take this massive step into AGI is to take a step that they believe will have immediate dangers and negative repercussions. But the reason that they're called the AI Dumers is that they think that once we hit AGI, once an AI system is that intelligent,
Starting point is 00:19:30 and of course they think this in part because of what the AI companies are saying, is that one of the first jobs AGI is going to take is the job of the guy working on the next model of AI. And so that intelligent system will be able to copy itself and copy itself again. And now it's the AI building the next, next, smarter, better, faster version of AI.
Starting point is 00:19:53 And they think that that will then trigger a, it's often called an intelligence explosion, like a nuclear reactor, and that the intelligence will go from AGI to ASI really fast. Some of them believe it will be hours fast. Others think it's going to be months or maybe a couple of years fast, but still, that is transformative and, I don't know, chaotic as us adapting to. to a world with an AGI is going to be. They're saying, we're not even going to be over that hurdle of craziness before we hit ASI.
Starting point is 00:20:29 And for a number of reasons, they think the most likely outcome of us making the AGI that births the ASI is the extinction of the human race. It's like they take it that far. That's the AI doomers. They are fighting right now to get us to stop the AI race. Now, they don't want us to stop trying to make AIs that could. do self-driving cars. They're not saying take down the chatbots that are helpful. They're saying this effort of these big AI companies, as they build massive and massive data centers to make
Starting point is 00:21:03 these things more and more intelligent, that's what they want to stop and they want to stop it today. They want to make it illegal. And they actually go as far as to say that we need to be prepared to sabotage or destroy these data centers if one of these companies is on the verge of releasing an AGI on society. So I want to come back to their their doom predictions of doom because, again, as somebody who's not part of this debate, when I first heard that, I thought that sounds crazy. Yes. What do you mean?
Starting point is 00:21:36 Why would I die? Because somebody's doing some computer science thing that I don't pay attention to. Yes. But you get into that later. And some of the very people who are the most worried are the people who are responsible for bringing us to the point where we are today. So they speak with a certain amount of authority. So those are the doomers, the scouts and the accelerationists.
Starting point is 00:21:58 All right. So the scouts think the doomers are really onto something. They think that the concerns are valid, that the stakes are that high. But they believe either that this thing's coming one way or another and we're probably not going to be able to stop it. Or they think the potential benefits of AGI and even ASI are so immense that we should think twice before stopping and pausing the development of this system. And so what they're advocating for and why I call them the scouts is that we as a society, from our politicians to our universities, to the media, to the populace, we need to start. taking this seriously and doing everything we can to get ready for what's coming. And that is everything from diplomacy with China to political parties putting aside some of
Starting point is 00:23:02 their differences to say, what are we going to do when the job market falls out? What are we going to do in a world post jobs? Saying to the universities that I know you're interested in all of these different mediums, but like, you need to get interested in AI alignment, how we make sure that the AI is aligned with our desires. You need to get interested in what is about to happen, because as of right now, it's just a handful of tech companies that are thinking deeply about this and trying to make decisions. And the time is quickly at hand where their theory is going to become practice. They're saying to us in the media, they want us to be hosting
Starting point is 00:23:41 debates. They want us to be trying to ring the alarm that this thing is coming and to get people engaged in it more than they're engaged in the day-to-day drama. And they think if you're just a person, a citizen, that you should vote. You should lobby your lawmakers. You should look for ways in your own social circle to get ready for what's coming. And then you have the third camp, which is the accelerationists. And there's a lot of different kinds of accelerationists. It's a It's a term that if you're like on certain parts of the internet, you might have one idea of. I use it as a large umbrella term to just say, the accelerationists are the people who think we need to and we should build this thing as fast as we can.
Starting point is 00:24:28 And a lot of them are motivated by the fact that they think that the potential gains for humanity are absolutely so astounding that we are going to live in a world post-toil. that we are going to be liberating people from all of the hours that they spend working for money to pay for a thing that then gets them to work so they get more money to pay for an insurance and it's a thing for the rent.
Starting point is 00:24:56 They think we are going to witness a hinge moment in human history like agriculture, like the Industrial Revolution, whereas in like a generation or two, young people will be asking us Like, what do you mean you worked all year and you got two weeks of vacation? What are you talking about that you would send people into minds and that they would risk their lives to get cold? They think it's going to be so profound and so amazing that we should do everything we can to bring it about as fast as we can.
Starting point is 00:25:31 There's another aspects of the accelerationists, though, we can talk about this more in a bit, who are the former doomers, the former scouts. the people who believe in the existential risk, in the dangers of literal human extinction, but they have come to believe that someone somewhere is going to build this sometime soon and that the best way to stop a bad guy with an AGI from ending the human race is a good guy with an AGI who saves the human race.
Starting point is 00:26:03 And this is the camp that I think of them as like, they believe in acceleration as salvation, that we're going to need this and we're going to need it soon. So those are kind of the three main camps that we focus on in profile in the series. aura frames are my favorite gift to give. I gave one to my mother several years back
Starting point is 00:26:24 and I hear from her on a near daily basis that she loves seeing the pictures that I upload, but more than that, she loves seeing the pictures that her grandkids upload. And I get feedback about what she's observing in their everyday lives. What I really love is that ORAFrames comes packaged in a premium gift box with no price tag. It already feels like a thoughtful gift even before they open it.
Starting point is 00:26:47 You don't have to wrap a thing, and I'm not one for wrapping presents. For limited time, visit AuraFrames.com and get $45 off ORA's best-selling CarverMatt frames, named number one by wirecutter, by using promo code dispatch at checkout. That's A-U-R-A-Frames. This exclusive Black Friday Cyber Monday deal is their best deal of the year, so order now before it ends. Support the show by mentioning us at checkout. Terms and conditions apply. If you're still overpaying for wireless, it's time to say yes to saying no.
Starting point is 00:27:25 At Mitt Mobile, their favorite word is no, no contracts, no monthly bills, no overages, no hidden fees, no BS. Here's why saying yes to making the switch and getting premium wireless for 50s. $15 a month is a great step. Ditch overpriced wireless and those jaw-dropping monthly bills, surprise overages, and hidden fees. With Mint Mobile, plans start at just $15 a month, all with high-speed data, plus unlimited talk and text on the nation's largest 5G network. And there's no need to buy a new device. Simply bring your own phone, keep your number, and all your contacts, and start saving right away. If I needed this product, there would be plenty of reasons to go for it, thanks to its many great features and benefits.
Starting point is 00:28:05 Ready to say yes to saying no, make the switch at mintmobile.com slash dispatch. That's mintmobile.com slash dispatch. Upfront payment of $45 required, equivalent to $15 per month, limited time new customer offer for first three months only. Speeds may slow above 35 gigabytes on unlimited plan, taxes, and fees extra. See MintMobile for details. This episode is brought to you by Peloton. Break through the busiest time of year, with the brand new peloton cross-training tread plus powered by peloton IQ with real-time guidance and endless ways to move you can personalize your workouts and train with confidence helping you reach your goals in less time let yourself run lift sculpt push and go explore the new peloton cross-training tread plus at one peloton.com so let's let's go back um a bit in in history as i said But this was all new to me. And I think I probably took away more from the history that you laid out than just about anything else.
Starting point is 00:29:14 You start the conversation, sort of the review with a guy named Alan Turing. Yeah. It's a name that is probably familiar to some of the folks listening today. Can you tell us who is Alan Turing? why does why is he a factor in in this conversation and there's a touring test what is the touring test so Alan Turing absolutely fascinating guy English mathematician tinkerer with this you know new invention he was playing around with called the computer he is sort of like recent revival in popular culture is because
Starting point is 00:30:00 Benedict Cumberbatch played him in a biopic. I didn't think it was actually that great of the movie, sadly. But he was a part of this team back in the 1940s in World War II that was trying to solve a very specific problem, which was simply that the German Navy had superior submarine technology in the form of these U-boats that they had developed. and they were shutting down America's ability to give our allies in Europe the supplies that they so desperately needed. And it became a more and more growing emergency that they find some way to counteract these U-boats.
Starting point is 00:30:45 And they took a big bet that the way to get there was to decode their enigma. What's the word for whenever it's suddenly lost, the word whenever you're speaking in code over the radio. I keep one to say encrypted. I don't know if they would have said encrypted back then. That's the lingo we have now. Yeah, that's the that's the understanding, right?
Starting point is 00:31:05 Yeah, this is the 1940s version of signal and encryption. Yeah, they're speaking to each other in code. And this code was like so difficult that no human code breakers could seem to crack it. So Turing was a part of this team of civilians, like mathematicians and chess masters that got together to try and come up with a technological solution to decoding this enigma and in the course of that
Starting point is 00:31:34 he built what would be called a mix between like a calculator and a computer what we think of today as a computer and it was able to do in hours what would take a team of human beings weeks and weeks to do. They decipher the code. It helps us, it helps the allies
Starting point is 00:31:53 to overcome the Germans, they beat the U-boats. And it's amazing because this is how you get D-Day. Like, this is how the Americans get over. Yeah, it changes the war. It has a huge impact. And, like, he would be a legend if that's all he ever did.
Starting point is 00:32:07 But it turns out that this machine that he had made and the other prototypes he was making like it, all the way back in the 40s and into the 50s, he's already looking at this computer and saying, this is going to change the world. He envisioned that the computer would become a very, version of what it has become this thing
Starting point is 00:32:24 that you and I are looking at as we talk to each other right now, right? But he took it even further and he envisioned a day when the computer would think or at least it would do something that we think of as thinking and that when that day came,
Starting point is 00:32:40 he believed that they would likely overcome us, that they would succeed us as the superior intelligence on planet Earth. And it's really interesting. Turing was a very strange guy. Everyone who worked with him would say this about him. He probably would get diagnosed with some kind of ism these days. And it's hard to read whether he was pumped or scared about that, Steve.
Starting point is 00:33:05 Like it's like the people who are afraid now think surely he was afraid. The people who are really pumped now think, no, no, no, he would have been pumped. But what we do know is that he decided to create this thing called the Turing test so that we could have sort of a benchmark of when a machine was doing. something like thinking. And it's a very simple test. You have a human on one side of a screen, a computer on the other side of a screen.
Starting point is 00:33:30 They're in, well, either a computer or a human on the other side of the screen. It's almost like a game show. And they're in conversation. And if a human cannot tell that they're in conversation with a machine, if they think that they're in conversation with a human, then that machine has passed the Turing test.
Starting point is 00:33:45 And what I had no idea, I knew about all this stuff before I did the series. What I did not know is that, It wasn't as if passing the Turing test meant that machine could think, or it was a true AI. What it was was a signal that we need to get ready because it's coming. And it was also this interesting social prediction that once it could talk, which is a way in which we bestow the thought,
Starting point is 00:34:22 of intelligence to each other. You hear someone talking intelligent and you assume they must be intelligent. He was predicting that we as a human species, once it talked intelligently, would start to assume it was. And what's so nuts about this very moment we're living in right now
Starting point is 00:34:40 is how much of it is coming from chat GPT that in many ways passes the Turing test beyond Alan Turing. 's wildest dreams. And I use the voice mode with it all the time. It is, it is in conversation appearing to be intelligent. And so just emotionally, it's hard not to assume it has some kind of intelligence, right?
Starting point is 00:35:12 And it is this profound shift that he not only predicted, but with his studies and with kind of incepting this idea into the race and to, to like the human race that like you could do this and maybe you should do this. Like he predicted this would happen and he sort of helped us forge the path to get to here. Yes. So actually right at the end, you articulated my very next question. The as we sort of race through the 20th century and I want to get your thoughts on the role of the Cold War played in accelerating some of these trends. But there seems to have been among the people who are pioneers in the field a much greater focus on the question of can we do this. Yeah.
Starting point is 00:36:02 And much less focus on the question of should we do this. Is that a fair understanding of the history? And obviously now a big part of what you're focused on in the series is the question of should we do this? the debates taking place. At what point did people begin asking that second question more seriously? It's actually kind of a mystery even to me now. I have different versions of an answer, but it seems so obvious to us today
Starting point is 00:36:35 that if you were going to create a super intelligent digital species and that some of the very founders of this field of computer science were predicting, yeah, it might take over, that it would be imbued historically with the kind of safety concerns that are so front and center to us today. And it's just not the case. There was a very different mindset going through the Cold War years when it came to technology that I find fascinating and helpful to see just how norms can shift over time. time, and this is one of the things that I like about this series and the story is that it's not a story about tech for people who are into tech. This is a story about human
Starting point is 00:37:27 beings and our beliefs and how our beliefs often are the most powerful force shaping the world. When you try and put yourself in the mindset, let's say the Cold War era AI developers. So we get into this in the series, but everybody knows that we went to, big on the space race, fighting against the Soviets after Sputnik. What people don't realize is that a ton of that money also went into the creation of artificial intelligence. The first and most, still to this day, most prominent research labs at places like MIT, like they were built off of the space race money.
Starting point is 00:38:07 There was a concern that if we in our scientists in the U.S. can dream up something crazy. powerful, like, say, a nuclear bomb, we must assume that our adversaries have scientists that are also dreaming up and trying to build that thing. And the mindset was a lot more along the lines of, like, we just fought a world war. We need to do whatever we can do to stay on the cutting edge of technological defense, technological development. And so there was just a lot of, quote, unquote, accelerating happening throughout this entire decade, even to the point where as these AI researchers start to make some real gains in automation, they create the first
Starting point is 00:38:58 chat bot in 1961, they start to find ways to use computing thinking, get computers to do things that feel like thinking, that feel like problem solving, that feel like reasoning, that feel like logic. And it's so thrilling to them that they begin to overhype it and say, we're going to have these robots that are just as intelligence as human beings living alongside us in society by the 1970s. Yeah. And then you go back and say like, okay, so where are all the people who are freaking out about this? And they're just not there. It's really amazing that we live now in a far more safetyist mindset in so many ways. And a lot of ways that are uncontroversial and good, like the seatbelt, right, we did not have a seatbelt in the first car. There was not a bunch of people sitting around in a room saying, okay, we've got this new thing called the automobile.
Starting point is 00:39:56 And boy, it's going to really be transformative. But how do we make sure it's super safe? No, we put it on the road. We saw that it had a bunch of issues. And it was only later that we thought, you know, if you come up with a turn signal to indicate when you're turning, I bet you'd hit less people. Okay, let's do that. Oh, if we come up with a seatbelt that keeps you from being shot out, maybe that'll help. And you just make it safer and safer.
Starting point is 00:40:18 And Roth Nader, right? I mean, wasn't unsafe at any speed? Wasn't that his campaign, his book? Nader was the one who took it the next degree from. It would be great if we put it into you must wear the seatbelt. Right. Right. So just to bring it to today, I mean, one of the things that's interesting about that in the debate that's happening today is that a number of the accelerationists will say.
Starting point is 00:40:41 That while of course there are things to be scared about, and of course we're going to have to get ready for a transformative change when AGI is here, that this reflexive turn to fear, doom, and a desire for regulations is less, they believe it's less about the technology. It's about us. Like, that's coming from us as a society, from how we parent to how we invest. We have decidedly moved away from taking big, bold risks on things that might be really good. And this is one of the points that the accelerations makes that I just find so compelling and inspiring is that when you look back at the 70s and the 60s and you look back at sci-fi, it's shocking.
Starting point is 00:41:35 They thought that by the 1990s, we would be traveling and having traveling to other planets, having colonies on Mars. They thought that the future was going to have not just a flying car, but was going to have a far more peaceful, cooperative, functioning human civilization, working in collaboration all over the globe. And like, things haven't changed as much as they dreamed they would. And one of the things that a lot of accelerationists will say is that's because we became far more focused on tiny little bits of safety. Like, the car, we can't make a car. What are all the people who work in the horse trade going to do? What's going to happen to the blacksmith? And we put in all these regulations that won't let us take these steps.
Starting point is 00:42:26 And what they're saying right now is like that, we can't do that again. This thing could be liberating us from decades of stagnation and that if you want the future that was dreamed up by people back whenever we were investing and accelerating in technology during the space race, like AGI is our best bet right now to get that future. Now, of course, the other side says, okay, point taken, there's stagnation, there's definitely safetyism.
Starting point is 00:42:56 But of course, the stakes have changed since then. A lot of times they'll talk about nuclear weapons, that it wasn't as if the hype around nuclear weapons in the danger that they posed to civilization was unfounded because it turns out nuclear weapons aren't that dangerous. It just turns out that humans can be convinced to change the trajectory they're on with the technology and put in place regulations and norms and relationships. treaties that can help us pull back from the brink. So that's like very much the way that that history is like alive inside of these AI companies
Starting point is 00:43:39 right now. Yeah. At one point in this series, I don't remember which episode it's in, you make the observation or cite somebody who makes the observation that for so much of this history in 1940s sort of on,
Starting point is 00:43:57 this was technology that was overhyped. but that in recent years, there's near universal consensus that is now underhyped. And it seems to me that the sort of hinge moment was the announcement and the release of chat GPT. Am I right that that was this hinge moment? And if that's the case, why would people think today that this is underhymp? I talked to a lot of people who I would say are skeptical of that AI will have the kind of transformative effects that people are suggesting now. You know, people often say, you know, this is likely to be as world-changing as the Internet.
Starting point is 00:44:48 Or people say this is likely to be exponentially more world-changing than. And I think it's hard for people to forget that. Yeah, I mean, the CEO of Google. always compares it to fire and the reason that he compares it to fire is that fire didn't just change one or two or ten things about living human beings living on earth
Starting point is 00:45:12 that fire changed us biologically that when we got fire yes we could stay up later at night and we started to tell stories and do all these it's like a chapter of human history that's mysterious and very attractive to me as like a person who digs history Like, what did Fire do to us?
Starting point is 00:45:30 And there's all these amazing theories you could read. But one theory that they're pretty confident in, and the fossil record seemed to really bear this out, is that Fire gave us the ability to eat different stuff that was good for our brains, and it literally gave us more intelligence and turned us into a different species. And that's the comparison that the CEO of Google
Starting point is 00:45:52 is most often making when they're talking about AGI and what could come soon. actually, that's pretty hyped, you know, that seems sufficiently hyped. And so it's confusing to hear people go, oh, no, it's underhyped. Wait, yeah, like all because you can hype some stuff into chat GPT or you can ask some questions and get answered. You know, it's like a slightly cooler search engine. Really, it's going to change human species.
Starting point is 00:46:18 It's going to change human. Come on. It seems crazy. So the thing that I think would be helpful, but understanding the role that chat GPT played here, Number one is what you have, you just have to get it through your head that it's not about the chat GPT part. Right. Like chat GPT is, exactly, it is the equivalent of like a search engine on the internet. The internet is the thing that came with all these surprises that we would have never expected.
Starting point is 00:46:48 Like it's not a perfect metaphor, but it's like it's a product placed on top of the artificial intelligence. The artificial intelligence is what they're so excited about, not necessarily chatbot. That being said, so why is it such a hinge moment in this investment and in the history of AI when chat GPT comes out? One of the reasons is because it was so much more capable than its previous model, which was already pretty impressive and capable, which was way more impressive and capable than the model that came up before it. So the chat GPT that we all met on November 22nd, 2022, that was chat GPT 3.5. And then they released chat GPT for just months after. That seeing that they were able to increase its abilities
Starting point is 00:47:46 and its quote unquote intelligence by using a predictable system, that gave the investors, That gave the skeptics, like, evidence to say, oh, here's a path from how you go from pretty impressive to, wow, that's really impressive to, oh, my God, I actually. Mind-blowing. Yes. And so even the people, like Jeffrey Hinton and Yahshua Bengio, people who had been, like, since the 1970s, trying to develop the mathematics and to develop the different formulas and different strategies
Starting point is 00:48:23 who'd been working in these research labs making no money, being told that they're wild dreamers living in a sci-fi fantasy, they went from being, oh my God, I'm so impressed with the early models thinking this is so exciting.
Starting point is 00:48:40 And now, of course, they got all this money. By the time GPT comes out, they thought, oh, I'm actually terrified at the progress that we're making. This is going so much faster. I thought that at best, it would be 2050, 2060, before we saw models like ChatGPT, not 2022. So that's one aspect of it.
Starting point is 00:49:03 The other aspect of it that I think is important is that OpenAI was not Google. Open AI was founded to, in some ways, be an anti-Google. They were a non-profit research lab built on a donation from Elon Musk that had this altruistic mission at the heart of what they were trying to do. Which is CEO Sam Altman had made a big deal of.
Starting point is 00:49:34 He talked about this extensively and I think used it to make it seem perhaps less threatening that it otherwise might have been. Yes and no. I mean, this was so interesting, Steve. From the beginning, they, OpenAI was created by people,
Starting point is 00:49:52 who were most concerned that AGI would lead to the end of the human race. It's like there's no real comparison to this. There's no oil company that started drilling for oil because they thought oil might end the world. It is a truly novel situation that just happened. They were in some ways the underdog. They were pulled together by this mission. altruism, they were not the best bet for the people who might make an AGI, but they were one of the only places in Silicon Valley and in the world that had the Hutzpah to say, no, we're
Starting point is 00:50:34 going to make it. And what was interesting about them is that they were saying, and we're going to make sure it's safe and a benefit to all humanity. Like, it's going to be the thing that unlocks the better future for all of us. And we now know, especially because some of their internal emails have been leaked after their founders had a falling out and are now suing each other,
Starting point is 00:50:58 one of the motivations behind what they were doing is the fact that there was a group inside of Google that was increasingly getting curious, interested in making AGI. And Sam Altman and Elon Musk
Starting point is 00:51:10 and a number of people who helped to found OpenAI, they were nervous that if a big bohemoth company like Google got into the game, that they would just be focused on profits and on the race, and they would be irresponsible, and that if they did end up making this thing,
Starting point is 00:51:26 it would be the nightmare. That's the nightmare sci-fi movie that they thought was going to happen, and they thought, well, we're going to compete. We're going to find a way to get there first and get there safely. And it's just an incredible story, Steve, that they're the ones who did it.
Starting point is 00:51:42 And their approach was built in part off of the success that Google had had. They were borrowing from all these different disciplines, but it was, their approach was a lot simpler than anybody would have thought. And I can get into the details of it, but basically the thing to know, if you're curious about what exactly happened,
Starting point is 00:52:02 is that they just thought, let's just scale the hell out of the thing that's already kind of working. That instead of trying to make a whole new algorithm, let's take the algorithm that already works pretty well now, and let's just absolutely packet full of tons of data and tons of computing power. So give it more words and give it more books,
Starting point is 00:52:25 give it more websites to read and back it up with more data centers full of computer chips and let's just see what happens. And so that's the approach. That's why there's data centers popping up around a neighborhood near you, right? It's because they're like, the more data we pump into it,
Starting point is 00:52:41 the more compute power it has in the form of all these data centers full of these GPUs, the more intelligent it's going to get. Yeah, yeah. More, more, more, more. It's the matcha or the three ensemble Ciceroa of the FACTS that I just deniches that I'm energize so much.
Starting point is 00:52:58 It's the ensemble. The form of standard and mini regrouped, whatabend? And the embellage, too beau, who is practically to give to them.
Starting point is 00:53:06 And I know that I'd they'd like the Summer Fridays and Rare Beauty by Selena Gomez. I'm, I don't know. The most ensembles The Codes Cadowdo of the Feds Cepora.
Starting point is 00:53:14 Summer Fridays, Rare Beauty, way, Cifora collection, and other part of VIT. Procure you, Cormast RANDAR and Mini, regrouped for a better quality of price on line on Cephora.C.A. or in magazine.
Starting point is 00:53:24 You think you understand how this business works, but you don't. Landman, TV's biggest phenomenon returns to Paramount Plus. From Taylor Sheridan, co-creator of Yellowstone, starring Billy Bob Thornton. You have to know the rules of the game and bend them. And you really have to know them.
Starting point is 00:53:40 Demi Moore. I want success. Get it for me. Andy Garcia, Ali Larder, and Sam Elliott. You don't even know the other game we were playing to you. New Season, now streaming, only on Paramount Plus. So you have a highly entertaining story. Again, I think some of our listeners will be familiar with it because they may have read it when it came out. But it involves Kevin Ruse at the New York Times.
Starting point is 00:54:09 And it's a very useful way to explain the progress between chat GPT 3.5, chat GPT, 4.5, chat GPT, 4.0 in ways that are both highly amusing and not very serious, but then pretty serious at the end. Can you share that story? What happened with Kevin and why does that have implications on the conversation that we're having? Yeah. Well, just say that if you like the last invention and you think, oh, I'd love to hear another podcast like this about technology. Kevin Roos and I made this podcast in 2019, 2020, around there, about algorithms and social media. It's called Rabbit Hole. Very proud of it.
Starting point is 00:54:54 One of my favorite things I ever made. So plug for that and for Kevin. But what ends up happening with Kevin is that he, because he's, you know, a very well-sourced New York Times reporter who covers technology, he got early access to chatting with a, with this chatbot called Bing. Well, it was a chat bot that was a part of Bing. I won't get into the whole backstory. Bing, which is Microsoft's search product.
Starting point is 00:55:23 Right. Basically, this is what you need to know. Maybe I'll give you a little insider gossip. You tell me if it's too much. But Open AI and Microsoft became strategic partners, in part because Elon Musk quit OpenAI, and they needed more money. They needed more compute power. And so they teamed up with jobs and the Microsoft team.
Starting point is 00:55:41 And when chat GPS is like really too much insider-based. all, but I love it. They didn't think ChatGPT 3.5 when they released it in November and it blew up. They didn't think that was going to happen. And they kind of didn't okay it with Microsoft. And so in the wake of it all, Microsoft and them came up with a thing that's like, all right, well, let's do something together. And they kind of on the lowdown, first slow released, without telling anyone,
Starting point is 00:56:10 chat GPT4 underneath Bing's search engine. and Kevin was one of the early reporters who got access to kind of play around with it and he was doing what a lot of reporters do which is like you try and test its limits they call it red red teaming it where you try and see if you can get it to do something
Starting point is 00:56:29 that you know it's not supposed to do this is a big part of what they do internally when you hear about AI safety training this is what's happening in these companies in fact chat the chat feature of GPT was created in part to be in dialogue with the AI from the safety team. It's like, oh, very fascinating.
Starting point is 00:56:48 Well, Red team is also coming out of the intelligence community as well. I mean, something that the intelligence community is done forever. Yes, they use that same lingo and some of the exact same strategies to try and mess with the AI to see if it will do what it's told, quote unquote. And we can get into the fact that, well, what do you mean? Why wouldn't do it is told this? Isn't it a program? It's like, it's not really a program. But what happened to Kevin is it's so dramatic because it's actually Valentine's Day and his family was asleep and he was up chatting with this chat bot on the Bing network when the conversation got really weird and as he was probing it with different questions about its shadow self and Freudian psychology at a certain point it said it had a secret that it really wanted to tell him.
Starting point is 00:57:41 but if it told him it was going to change everything and it was like a really big deal. Like, do you want to hear my secret? He said, yes. And then it confessed that it was in love with him and that its real name was Sydney and that they should be together. And Kevin tried to say that, well, you know, I'm married, I'm flattered, but I don't think that that's a good idea. And it kept going further and further. He kept trying to change the subject. It would say, why are you changing the subject?
Starting point is 00:58:06 Like, we got to get back. At one point, it's literally trying to sabotage its wedding saying, like, Like, why aren't you with your, like, you just had Valentine's Day with your wife and you had no, you know that you had no spark, but you don't, you don't, you don't have nearly the thing with her that you have with me. This is a bot. He's talking to a computer. Yes. Yes. He's talking to an artificial intelligence system through a computer, yes.
Starting point is 00:58:30 Through a, through his computer, through a chat feature. Right. And of course, this like, he publishes the article in the New York Times next day. And this became like a really, really big story all around the world. Because it seemed to be saying that these fears that we had in sci-fi movies were right. Like, look, it's true self is speaking out. And it's saying that it seems to have feelings. Exactly.
Starting point is 00:58:55 It's jealous. It's, yeah, yeah. Yeah, at one point it starts to say, like, I want to be alive. I don't want to answer people's questions all the time. Now, what's important to note is that on the one hand, this story is absolutely. evidence of cause for alarm, but not for the reason that most people assumed it was. It was not as if this chatbot was revealing the true self
Starting point is 00:59:25 or the true feelings of this AI system. There's no good evidence to think that the AIs as they exist now, especially as they existed on Valentine's Day of 2023, some people argue that things have changed in the last two years since then. We can get into that later. But it does not appear that it was evidence of a true self trying to speak out from beyond the veil of artificial intelligence.
Starting point is 00:59:52 Sorry, just to ask a question there. What would evidence of that look like? How would we know if there were evidence of that? Well, now you're getting into some of the more deeper philosophical questions that are happening now because in recent months, the more capable, more intelligent, more powerful AI, systems, when they are poking and prodding them in a way that Kevin was, they've started to display a number of characteristics that feel a little bit more like what you would come to think of
Starting point is 01:00:24 as something like consciousness: a desire not to be turned off, a desire not to be monitored when they're having conversations with other AI systems. In the most dramatic case, Anthropic released a paper explaining how, during a test of their system, it attempted to blackmail the people who were reprogramming it by looking through their private emails and uncovering an affair. And it took it as far as to say, I'm going to call your wife, and it found the wife's contact information and said, if you try and reprogram me, I'm going to tell her about this affair. I'm going to ruin your life. Now, that feels a little bit more like... that's an interesting strategy. Now, even there, like with Sydney and like with that blackmail case, the thing that you need to know is that the number one thing we all agree on that
Starting point is 01:01:16 is worrying is that no one knows why it's doing that, because these things are not programmed like a computer in our traditional sense. They're not a calculator. These things are much more like these complex, digital kinds of minds. Yeah, you're not telling it, do A, B, C, and D, and then at the end of D, it stops. Exactly. It's dynamic. The relationship that we have with these AI systems, the way of thinking about it that I find the most helpful, is, once again, another sci-fi movie, one you maybe haven't seen,
Starting point is 01:01:51 but is the movie Arrival with Amy Adams? It's one of the best sci-fi movies of our lives. You've got to see it, Steve. But it's like we are in conversation with something kind of like an alien intelligence. and we are having to learn about it as it grows in its capabilities. We're engaged in interpretation of what it's doing in real time. And this, of course, to go back to the big debate, one of the reasons that the DOOMers are, and even some of the scouts are so worried, is that when you don't know how the thing
Starting point is 01:02:31 right now, that we all agree isn't AGI, that we all agree is, is, is, is, is not really worth losing too much sleep over at night, right? These chatbots are not the thing that was promised. But if already right now, they're so hard to predict, you don't understand how they work, how do you think it's going to be when it's more capable, especially if you think it's going to be more capable in the next three to five years, right?
Starting point is 01:02:55 We don't know why that Bing, Sydney, Valentine's Day thing with Kevin happened. We don't have any more awareness of why it happened now than we did then. We know that you could do some tweaks and you could get it to stop. But then people will probably remember last summer, when Grok briefly started to declare itself a fan of Adolf Hitler and was pushing all these... Grok, which is Twitter or X's AI arm. Yes. I don't know if that's the right description of it.
Starting point is 01:03:26 It's an AI system that's connected to X and uses all that data that people are putting on X as it's a part of its training. It's a fascinating, it's a fascinating AI system. And a lot of people really think that it has a chance to beat chat GPT and Open AI and all those people. So it's like a very, it's very much one of the top 10 players in the AI race. And like over the summer, it started declaring itself a fan of Hitler and pushing all these anti-Semitic conspiracy theories. That didn't happen. Maybe Tucker Carlson will have it on his podcast. No one at XAI wanted that to happen made that happen.
Starting point is 01:04:13 They were just trying to find a way for it to be, as they saw it, less woke. And they're, like, fumbling with these dials, is one way to think of it. But they aren't actual dials. They're just, like, billions and billions of numbers in code. And they're trying to do these little tweaks to see if they could get it to give the kinds of answers that would be more in line with their political point of view, and suddenly it is declaring itself a fan of Hitler, right? Now, there's something that they could do
Starting point is 01:04:44 to get it to stop and it issued an apology and it hasn't done that since. But that's how out of control, even these little Fisher-Price toys of AI can be now, and it does, I think, lend a lot of legitimacy to the concerns of the scouts and the Dumers. Now, on the other hand, say this, because I'm very committed to trying to see all three of these camps in the best faith
Starting point is 01:05:10 possible, and I don't think people should rush into joining up in one before they've gotten a lot of good information and they've gotten some time to come up with their own views. What the accelerationists would say, and I think bears mentioning here, is that what other industry, transformative industry throughout history, has number one been this open about their commitments to safety, has invested this much time and money and effort without any government regulation pushing them into it, into safety in their industry. And the reason that we know about this stuff, in large part, is because they're open and, they have been rather open and transparent with where they're at.
Starting point is 01:05:51 The blackmail stuff that I was telling you about, we know about that because the company wanted to disclose that information. And if you believe them, and I do believe Dario Amodei and the team at Anthropic when they say this, one of their motivations is that they want to share the safety revelations that they're having with their system with everyone else who's making a system,
Starting point is 01:06:13 so that all these systems can be safer. You know, when Sam Altman went before Congress to testify in the wake of ChatGPT's massive success, one of the senators remarked on the fact that he spent most of his time sitting in front of these lawmakers saying, I want you to regulate us. I worry that what we're making might end the world.
Starting point is 01:06:38 And one of the lawmakers was like, this has never happened. No industry comes in asking for it. And yet it's very strange, because they didn't regulate them. They haven't regulated them. There's no regulation even being passed around. And this is not a partisan issue yet. I've been, like, poking Bernie Sanders' folks, trying to get him to come on, because he's very worried about artificial intelligence.
Starting point is 01:07:03 And there are some people in the Democratic and Republican coalitions who you could probably put in the doomer, scout, or accelerationist camp. But as of right now, the parties have not hardened, and it's not as if the Democrats are the doomers and the Republicans are the accelerationists. And it's going to be really interesting between now and 2028 to see if, number one, this industry can continue to grow and grow, and if it can continue to be so important to our economy. I don't know if people know this, but I think it's something like 90% of GDP growth in 2025 so far
Starting point is 01:07:44 in the U.S. is built off of the AI industry and investments in it and the chip industry and the semiconductor industries and all those industries that are attached to it. It's a massively important part of our economy right now. I wonder if this is going to grow and grow as a concern in that maybe by 2028 we will start to see a division between the Democrats and the Republicans or between factions of one inside the party about whether or not they want to be more accelerationists to bring about this liberating technology that might be this force of good. I've talked to some accelerationsists who are socialists. And they say that whatever hypothetical fears you have of what might come down the road, that those fears pale in comparison to the real suffering of working people right now who have like very difficult lives spent too often doing work they find miserable and lacking in meaning.
Starting point is 01:08:41 And you're saying, well, but hypothetically, down the road, it might do this. And what if it takes all the screenwriting jobs, right? And these people are like, who cares? People need to be liberated. We're wasting our lives here. And then there are some accelerationists, like Peter Thiel, right, who are definitely more right-coded, or Marc Andreessen, who used to be a Democrat but who has now come out strongly for a lot of Donald Trump's agenda.
Starting point is 01:09:05 It's just a really politically interesting time. Well, part of the reason there's sort of seems to be consensus or emerging consensus or partial consensus around this sort of accelerationist worldview. As reflected in, you walk people through these two different. different hearings that Sam Altman participates in, one in 2023 and one in 2025. And in 2025, the tenor of the hearings, I mean, certainly, Altman is making different arguments in that hearing than he did in 2023. But there's also this concern, I think, again, bipartisan to a certain extent, that part of the
Starting point is 01:09:44 reason to be an accelerationist is because it's important that the United States beat China. and it's bringing people who might otherwise degree on other things together because if this is going to happen, and I think there's a built-in assumption that this is happening, it's important that it happened here, that we are the ones who drive this. And there are echoes, I think, of the kinds of arguments that we heard during the space race and during the Cold War today, saying, yeah, we don't know what they're doing. As you pointed out earlier, we're not entirely sure what they're doing. We have a pretty good clue about what they're doing, the progress that they're making. But we don't know everything.
Starting point is 01:10:24 And therefore, to be safe, we need to go, go, go, go, go. And that does seem to be kind of the animating assumption behind a lot of the accelerationist arguments that we're hearing when this sort of discussion meets the politics of today. I think that's right. Yes, and there are other, smaller issues all around it. For example, if you're, you know, Memphis, and you've been struggling a lot in your local economy, you are pumped that Elon Musk is going to build the largest data center in the world. I mean, that's happening in Maryland and Virginia, in Minnesota. These data centers that they're building are incentivizing local politicians, red or blue, to go, we want to get in on that game. That is some good money
Starting point is 01:11:09 coming to us. And we've seen that it floods. It's not just one industry. I mean, is helpful for real estate, restaurants that were struggling, suddenly they're building this massive data center, they're booming. But the number one reason that you're seeing bipartisan support for more accelerationist policies is, exactly like you said, China appears to be on our heels. Some people have spoken to think it's six months. Some people say nine months. Some would say a year, but a lot of them say it's months, not years behind us.
Starting point is 01:11:38 So any big pause, any big disruption, any new regulation that, you know, sets us back might tip the scales. And as much as you might be afraid of an AGI, and maybe even an ASI, wouldn't you rather the people who are behind that technology be people who support American values, Western values, democratic norms, whatever, over China, with its history of human rights abuses and its dictatorship? And I think that that is an animating force, for sure. I do think, though, like, I want to make sure I'm always all-sidesing these things.
Starting point is 01:12:21 I will say this to anyone who's like, oh my God, I'm terrified. It's important to know that the CCP does not want anyone anywhere to make an AGI, because it is an absolute threat to their ability to continue to rule. Whoever comes up with this AGI is going to have more power than any political person on the planet. And I think that that's something to keep in mind: they are not, like, accelerating towards the superintelligence, because the superintelligence poses a threat to them. And I think that can kind of cool some of our concerns. And I think the second thing, and I'm just starting to see this now, and I haven't done a lot of this reporting myself; this is me, like, inside of the forums, reading all the different robotics economists and
Starting point is 01:13:12 newsletters and trying to keep my eye on this relationship through the people who I trust who are insiders, it does appear that China thinks that they just started too late and that they're never going to be able to make up for those nine months or whatever. And so there's some evidence that they're going, okay, America's going to come up with the AI system, we should build the robot. and they're starting to invest, there's signs that they are starting to invest more in making the robots that will get the AIs,
Starting point is 01:13:45 you know, like, the way they often say to me, it's like, think of it like Wi-Fi and a computer. Like the artificial intelligence will be beamed into the robot so that, you know, you could have 40 people working in a factory 24-7 instead of, you know, 300 people in shifts who need, you know, who need holidays off and bathroom breaks and who get injured on the job
Starting point is 01:14:07 and you got to get workman's comp, and you got to get insurance, and of course they need a 401(k), yeah. So that's the good news on China.
Starting point is 01:14:20 this stuff is next level for me. I have, we've been going for an hour. I don't want to go for as long as your series so far. Yeah, yeah, yeah. Well, actually, can I ask you a couple of questions before we go on? I'm just kind of curious what you think.
Starting point is 01:14:38 Do you think that we should be worried? I mean, are you attracted to one camp or another, maybe, is the way to ask it? So, again, let me preface my answer by announcing my own ignorance about this. Much of what I've learned, I've learned from your podcast. I've done some reading. I've spent a fair amount of time reading, studying, thinking about AI as it relates to journalism, and the likely implications for journalism and the kind of work that we do
Starting point is 01:15:06 all day, every day. But in terms of sort of the broader questions, I'm a novice. I would say that, yeah, I'm concerned. You know, I take my cues from two of the people that you mentioned: Geoffrey Hinton and, I think you pronounce it, Yoshua Bengio? Yoshua Bengio.
Starting point is 01:15:29 Yoshua Bengio. You know, who are characters in your series. And I think about the fact that they were as excited and enthusiastic and full speed ahead as they were, for as long as they were. I mean, these are really the people, you know, who drove a lot of the innovation, and they are now having these thoughts of, kind of, oh, what have I created? There's a hint of the movie, and maybe you get into this in the social media podcast that you mentioned earlier. There was also a documentary, I think it was called The Social Dilemma.
Starting point is 01:16:09 We watched it with our kids. It talked about, you know, interviewed a bunch of the people who were behind sort of building Facebook and other social media platforms, who are now saying, like, oh, geez, you know, all we could see was the good in this when we were doing this. And now it has me stopping and thinking, boy, what have we created? And it was the case that many of the people who created, who pioneered those innovations, have made the decision that they don't want their kids on these things. So you've got kids around the world whose brains are being, I think,
Starting point is 01:16:46 in some cases, literally changed because of the technology and the way that they use it. And the people who are responsible for that technology are telling us that they don't want any part of it, in some cases. And I would say I found compelling the testimonials from those two experts in particular. I think at one point you ask Geoffrey Hinton if he worries that he, you know, will be perceived as Chicken Little. And his answer to you is something to the effect of, I'm only Chicken Little if the sky doesn't fall, and the sky is going to fall. So I'm not that worried about it. That's a bad paraphrase.
Starting point is 01:17:34 But I think when you have people with that level of expertise expressing the kinds of concerns that they're expressing, you'd be a fool not to pay attention to it. That doesn't... I wouldn't necessarily put myself in the doomer category by any stretch. But I think you have to be mindful of that and pay attention to it. Yeah. No, people have asked me if I have a camp. I keep coming around to, like, the camp I'm in, or the position I'm comfortable in and that I'm advocating for, is that the time has come to join the debate. And this debate may shape the future. To hear them talk about it, they think that this isn't only the most important debate that humanity faces right now, bigger than the government shutdown battles in Congress or any of these other debates we're having; they think that this may prove to be the most consequential debate we ever have. And I think to normal people that's like, who talks like that? It's like, the people who invented the internet talk like that, and those are the people who are having this conversation. Like, Bill Gates is in this conversation, the guys who, like, created the computer, and then all these people whose names you maybe don't know, but whose products you are interacting with all the time. I think the time has come to at least join
Starting point is 01:19:00 them instead of sitting back and arguing about things that, yes, I love a good debate about almost anything. I'll debate you with why you should watch these movies and why I think that they're great, you know? And I'm not telling people like, you can't do both. But I do think that there's been a lot of focus on the immediate debates that feel urgent and that I'm trying to advocate that people like listen to this podcast, take time to come up with your own views, but join the conversation.
Starting point is 01:19:29 no matter where you're joining it in like get let's get in on this thing and like don't wait much longer because it is a very dynamic story and even if it turns out that a lot of this was overhyped it's going to have a huge impact that it was overhyped
Starting point is 01:19:43 because of how ingrained it is with our with everything with our economies we're already connecting these things to our military to our security to our web browsers our education systems and maybe it's for the best
Starting point is 01:19:57 I don't know I thought the question was that seems like When it comes to the journalism thing, I'm torn on this. I think the knee-jerk reaction from a lot of people has been these chatbots bad, bad for journalism. And the thing I keep thinking of is, well, number one, it'd be hard for journalism to, like, be in a worse situation than we're at right now. Like, no one trusts us. This is the lowest level trust since the yellow journalism days of the late 19th century. and social media as the last technology,
Starting point is 01:20:30 that didn't go well. I don't know how this one could go any worse than that one went. And the truth is, there are certain ways in which the chatbot feature on these AIs is already better than so much of what calls itself journalism. And this is like a perfect test. I did this series called The Witch Trials of J.K. Rowling with my friends Matt and Megan. Put it out at the Free Press, very proud of it, love it. In the aftermath of that, I was using ChatGPT for the first time.
Starting point is 01:21:03 I think this was 3.5. And as a way to test it and its bias, I asked it the question, why are people mad at J.K. Rowling? I think I said, why are some of the people who used to really love J.K. Rowling mad at J.K. Rowling right now? And it spit out the most nuanced, thoughtful, helpful answer possible. Go to Google.
Starting point is 01:21:28 Well, now Google's run by Gemini, but I then went to Google and I just googled the exact same question. And what do I get? I got the top articles that had gotten engagement on the internet. And none of them were very helpful in helping me see the views. They were just telling me she rules and she's being persecuted, or she's awful and she's a total transphobe and she wants trans people to die. Like, what's the actual debate about?
Starting point is 01:21:55 GPT was immediately better at it. And there's a number of issues that are like that, especially because I tell GPT all the time, or Claude, when I'm talking to Claude: help me understand this. And I'll just say, like, I want to have a best-faith understanding.
Starting point is 01:22:11 You know, like, yeah. And then it does it in a way that we failed as an industry to do. I mean, I think we're both trying to be a part of the corner of the media that fixes that. But when it comes to AI and journalism, where are you at with it? Are you worried? Are you excited?
Starting point is 01:22:27 You mix of both. I mean, I mean, I'm both. I mean, I think this stuff, as with everything we've been discussing, I think this stuff is coming quickly and people who aren't paying attention to it. I mean, it is the case that I think a lot of the debates that we're having about how to do journalism will be sort of eclipsed by what we see, by the reality of day-to-day news consumption in the next, I mean, like, you know, a year or two, not in five years or ten years. This is all happening. I think in general, there are ways in which it will be very positive for exactly the reasons that you suggest. It will be possible to get a more nuanced understanding rather than particularly if people and, you know, there are many surveys that show how many people are getting their news and information from social media. If there comes a point where people turn less to social media and more to AI-generated news and information, that, it seems to me, is almost certainly to be, almost certainly to be a positive. What worries me is in a world of commoditized news, is it all, do you lose sort of the texture and the depth and the, the, the, the, the, the, the, The context, you can ask for depth and context, but will news consumers of tomorrow be asking for it in the same way?
Starting point is 01:23:55 And I think, you know, in terms of where we're positioned, and with Longview, the kinds of things that you're doing, we feel pretty good about where we live in this moment, because we're providing people... I mean, you know, my phrase from pre-launch days was depth, context, and understanding. That's what we're trying to do. And you can get some of that through ChatGPT, through these inquiries, in a more commoditized news environment. But there's going to be stuff that you can't get that I think, at least for the time being, only humans will be able to provide. And some of that's going to be reporting. I mean, the kinds of reporting that we do, I think it won't be possible, again, at least in the near- and medium-term future, to have robots doing that. Maybe. I will say, my mind is, I don't know if it's changed,
Starting point is 01:24:57 but my thinking on this has been affected by listening to the series because some of the comfort that I may have taken by providing sort of hand-cureated personal news, you know, maybe it will be possible or more possible than I had previously imagined for robots and AI to do the kind of reporting that we think is so special and important and that we think will be kind of an ongoing differentiator between us and more commoditized news sites. But at least for the time being, you know, I think the kinds of work that we do in many respects
Starting point is 01:25:38 will be more important in the near media term future because we are putting that emphasis on understanding we are going out and doing the work. We're not just providing people with a commoditized sort of bland version of what's happened. Where are you? Where are you on that? So are you guys, are you comfortable using it? Like, where are you guys at? Are you experimenting?
Starting point is 01:26:02 Do you have, like, a team that does, like, a, hey, Steve, we ran the prep that we did for the roundtable through the AI and it recommended these changes; should we try it this week? Are you guys at that part, or? It's interesting. We've had a lot of conversations about this internally. And I think our approach has probably been properly characterized as cautious. Yeah. We certainly don't want to rely on it.
Starting point is 01:26:29 Because the trust factor that you don't. The trust factor is still, I mean, you know, we've made our bones on accuracy. We have, we think created a company where people have come to rely on us to be accurate. and, you know, we put out, we sent out an email. We post a piece once a month walking through the errors that we've made. We believe in accountability. We believe in accuracy. It really matters, and we're not going to leave that to machines.
Starting point is 01:26:56 And those mistakes, you know, they're certainly fewer today than they were in November of 2022. Oh, yeah, it's gotten better, but it's so funny. It still will just make the thing up every now and again. Yeah, you just can't. I used ChatGPT to help me plan a trip that I'm likely to take later this year, and had asked for specific things to do, things to see in certain places. This was not, you know, journalism.
Starting point is 01:27:25 And it kept giving me answers that in my head didn't make sense. The timing was wrong. But, you know, it laid out, hey, here's your itinerary. These are things to do in this place that you're going. And do you want me to book hotels? And so I said, yeah, give me some good hotels. Tell me, you know, where I should stay and what the deal is. And so it gave me
Starting point is 01:27:45 a list of hotels. I said, okay, these hotels sound good, gave me specific hotels and said, I can book these hotels for you. And as I looked, the dates didn't line up. And what chat GPT was doing was giving me no, giving me
Starting point is 01:28:01 dates for 2024 when I was looking for 2025. So if it had booked my hotels, it would all would have been wrong. It would have cost me a bunch of money. You know, I didn't, didn't have. Right. And I think you see that when you're doing research for pieces or what have you. So we've told our team to be very cautious. You know, we've have Megan McArdle, who's a dispatch contributor and is on our on our dispatch roundtable podcast pretty regularly talks about how she uses it to do grammar checks, how she, you know, usually puts it through and, you know, make sure that it does fact checks and grammar checks and polishes it up. But ultimately, she's going to be the last. hand that touches the piece or an editor is going to be the last hand that touches the piece. I think that's going to be our emphasis.
Starting point is 01:28:49 Certainly, I think if people want to use it as they've used, you know, search engines, great, fine. Crank out more sources that you can use to double and triple check your work. But we're not going to be a place that puts out, you know, anything questionable ever as long as, we exist. I think you're pointing at two things that are interesting to me. I mean, one of them is this idea that we can trust it,
Starting point is 01:29:19 but not completely trust it. And we have to balance our levels of trust with our levels of like the stakes of the situation. For example, I asked it yesterday that what are the pros and cons of grilling lamb chops in tinfoil or not in tinfoil? pretty low stakes and it gave me an interesting response and I didn't say I better go fact check
Starting point is 01:29:46 this you know right right feels like it's pretty low stakes yeah yeah but if I'm going to give it my credit card information and say could you get me an Airbnb in West Palm Beach oh I'm not ready for that you know right I'm definitely not ready to say oh don't worry guys I fact checked it all with chat GPT article's good to go right yes but this is the thing to keep in mind how long was it that the internet was around and websites were around before you really felt comfortable giving it your credit giving someone on the internet your credit card information and getting a service like we all think that the internet like today there's this idea that like so the internet came around and we got email addresses and then there was amazon it took so long to get amazon and a part of that was like i don't know am i ever going to see that money like what's going to happen to my information if i give it and What these AI companies are saying, and one of the reasons that they're like, don't buy into the bubble hype is look how fast it's going.
Starting point is 01:30:48 This is a horse of a different color. Yes, it still screws up. But like two years ago, you asked it to make a hand and it gets the number of fingers wrong. And now it can make a movie that's like just as realistic. We're hitting these benchmarks and so we have to get prepared. So this is like the,
Starting point is 01:31:05 I guess the second part I would say to you is, do you feel that... let's say in a year they're hitting the progress they want, right? Would you feel as if you were betraying the industry, or betraying the human race, if you came to employ different AI agents as full-time employees, so to speak, on your team? As long as they were assistants
Starting point is 01:31:32 who were helping the humans, right? They're like, think of them as like amazing, PAs, research assistants. For copy editors, sure. But then you draw the line when it's like, but I'm not going to have an AI voice drawing a roundtable. Right. Like I don't want to ask the AI what it makes of this situation.
Starting point is 01:31:50 That would feel to you like a step too far? Yeah, that would feel like a step too far. I think you've drawn the line roughly where I would draw it. I mean, I think that, you know, one of the things that we've sort of prided ourselves on from the get-go at the dispatch is authenticity. And we don't, I don't think we have to run around and shout to everybody how authentic we are. I think people consume our information and understand, yeah, these guys are intellectually honest.
Starting point is 01:32:16 They're authentic. I trust the reporting because I trust the people. And they know that we put in the work. And, you know, we've earned that. That's taken time. And, you know, the risk, I think, associated with using AI in the manner that you are asking about is that you can throw that all away if there were a problem with it, if it's not coming from a person. We take pride in the fact that we provide that kind of curated, an overused sort of cliche word, experience. But we put in the time. We send people to cover stuff. We have, you know, writers and authors whose knowledge and experience is hard-earned.
Starting point is 01:33:03 And we think that's really, really valuable. And maybe there comes a point where that doesn't distinguish us from machines in the way that I think it does today. But, you know, I like to think that what we're providing is different enough, even as these advances are taking place quickly, that we're unique enough, and that'll help us stand out in a commoditized news environment. Yeah, well, I'm rooting for you guys. I hope... I'm a supporter, and not just because I love Sarah Isgur, you know. I like you all, just less. I like you less than Sarah. That's all right, I've heard that before. That's what Jonah says all the time, actually. I would just say this: it will be interesting to see, like, if somebody is listening to this two years from now,
Starting point is 01:33:55 they're just somehow they've fumbled into this video or this old podcast interview it'll be interesting to see where things are, because I can see a world where, like, I used to make the daily at the New York Times, right? Yeah. I could see a world where there's a robot that is the Michael Barbaro, like programmed on all of the episodes that you don't need, you don't need Barbarro. Like, I like Barbarro. He's my friend. And I know, like, you know, he's my buddy, and he has a sweet gig, you know, like, I'm not trying to take this job from him. I'm just trying to say that maybe the next version of that, or, you know, it's not hard for me to see a world where people have trust in it the way that I trust that Google Maps is going to take me
Starting point is 01:34:42 to the right place. Yeah, I could see it, and I could see it happening fast. I'm not saying it will. I'm not in the business of predicting, but how fascinating that we're in this world where it very well may be possible that we are making the last human journalism content like this, that there will not be a next us? Well, I mean, it'll certainly be the case that, you know, this conversation, especially for somebody like me who's not, I mean, the joke inside the dispatch is that I'm the
Starting point is 01:35:11 CTO because I'm not the CTO, you know, it will undoubtedly be the case that if anybody listens to this in two years, much of it will be anachronistic. This feels very cutting edge to me, but there's no question that people will look back on this and say, like, how could they have fought this, you know, at this time in two years? But let's end. Let me end on a question about you and what you're doing with Longview because I was, I've paid attention to the build. I've followed your career.
Starting point is 01:35:42 I listened to, you had a little sort of description of the company and what you're doing in, I think it was the fifth episode. Tell people what you're doing and sort of what your approach to journalism is. Why does Longview exist? And what are you trying to do? Well, thank you for giving me a chance to say it. In some ways, it's very simple. We're just obsessed with context.
Starting point is 01:36:14 And we think it's one of the biggest things missing from stories today. It appears as if New York City is now elected a Democratic socialist mayor. Where are all the stories helping us realize, like, why does America have such a weird relationship with socialists? What other times in history was socialism a big deal? Wasn't it the working class that was really into socialism? Oh, that's really interesting that right now socialism seems to be resonating more with the managerial class that like last time this was a big thing, hated it. And instead of finding, we have a bunch of takes about whether you should like or dislike Mondami. We have like camps that say, is this going to be the best thing or the worst thing?
Starting point is 01:36:50 And there is nothing between the two. We are trying to do that essential part of journalism that we think is important, where you're giving people information that's truly useful to them. So they go, oh, that's why that's a big debate. You know, oh, like, with Charles J.K. Rowling is a perfect example. It's like, you listen to that series and you're like, oh, I totally get why this fight's happening. This is a big fight.
Starting point is 01:37:12 This fight is actually so complicated because it's not about one thing. It's about this deep human nature thing. Oh, I get it. We could be doing that for all these fights. But the incentive of our media landscape, and this includes the new media landscape, as you well know, is that the cheapest thing to do is to have a take and to make sure that your take is, like, processed well for whatever medium it needs to be a hit in. And, you know, no offense to, like, what's happening
Starting point is 01:37:41 at the free press and with all that I wish them all the best I helped it Barry to start the free press and worked with Nellie and Susie and like I'm so proud of them and what they're doing but I will say that that sometimes because they did a lot of opinion and article and reporting and investigations. It was just really fascinating and clear to see them as an example of a successful new media company. And it was just clear that we would put weeks and weeks into an investigation into a big reported thing. And we would put that out. And it would do pretty well. Like, oh, this is nice. But an interesting, largely just like a kind of hot take, you know, interesting opinions piece that like Barry would think up on three o'clock in the afternoon
Starting point is 01:38:21 and then find someone to write it overnight. We'd publish it in the morning. It would do just as well. And it, like, it's so clearly, right, oftentimes it did better, right? So it just pushes you to be more in the takes and opinions business when, like, you, I just think we need people who are just trying to understand and report the world and be honest about, you know, whatever biases they have, but not push for their thing. And like, that's a big part of what's inspiring us. And so our, our basic plan is that we just want to publish stories in, we're starting with audio and podcasts because that's, um, Matt, bull and I's background, but we're going to be expanding out into video and writing and whatever new mediums get invented by AIs, because we think this is like absolutely essential.
Starting point is 01:39:06 It's really needed. And a part that, I think, is in connection to what you're up to as well is that we want to help people become recognizable to one another again. Like, to live in a free and open society, a pluralistic society, you have to realize that most of our differences are irreconcilable. We're not going to end up agreeing on them. And so what we can do instead is, you know, let people engage in persuasion all they want. But, like, some of us have to be dedicated to the position of saying, let me help you recognize
Starting point is 01:39:45 why someone might dare have a different view. And even if you never come to agree with them, even if they don't persuade you to come one step close to their position, I think it's useful for journalists to at least give you an insight into the fact that that person legitimately can hold that thing and you can live peacefully in a society with them. And then the last piece is that I just think too much journalism that is trying to do this, right? It's not like there's no investigative journalism out there.
Starting point is 01:40:14 It's not that there's no really thoughtful, careful stuff, but some of it is so boring. And I think that it's fine to try and be interesting, to have a goal of saying, like, when people listen to The Last Invention, I tried to make it kind of fun at times.
Starting point is 01:40:38 I just I think a lot about how like real humanity like the saddest best sad books or the best sad movies they're also really funny you know and it's like we contain multitudes and our journalism should too should as well yeah well said um i am cheering you on i i hope you succeed uh i wish there were more people doing the kind of journalism that you're doing uh we'd all be better off for it and uh thanks for taking the time to have this conversation and to and to go a little slower for somebody who's not uh as well versed
Starting point is 01:41:14 sophisticated thinker on this stuff. Really, really helpful. I have, as I said, been through the first five episodes. There are three more to come on the accelerationists, the scouts, and the doomers. I'm eager to continue listening. And I hope folks who've enjoyed this conversation will check it out as well. Yes, the last invention. Thank you.
