The Weekly Show with Jon Stewart - Silicon Valley Goes to War

Episode Date: March 11, 2026

As reports emerge of AI-powered weapons systems deployed in strikes on Iran, we're joined by Dr. Sarah Shoker, Senior Research Scholar at UC Berkeley, and Paul Scharre, Executive Vice President of the Center for a New American Security. Together, they examine how autonomous weapons and artificial intelligence are being integrated into military operations, investigate the relationships between Silicon Valley AI companies and the Pentagon, and explore whether regulation is possible amid an accelerating arms race.

This episode is brought to you by:
MINT MOBILE - Plans start at $15/month at https://mintmobile.com/tws
BILT - Join the loyalty program for renters at https://joinbilt.com/tws
SHOPIFY - Sign up for your $1 per month trial and start selling today at https://shopify.com/TWS

Follow The Weekly Show with Jon Stewart on social media for more:
> YouTube: https://www.youtube.com/@weeklyshowpodcast
> Instagram: https://www.instagram.com/weeklyshowpodcast
> TikTok: https://tiktok.com/@weeklyshowpodcast
> X: https://x.com/weeklyshowpod
> BlueSky: https://bsky.app/profile/theweeklyshowpodcast.com

Host/Executive Producer – Jon Stewart
Executive Producer – James Dixon
Executive Producer – Chris McShane
Executive Producer – Caity Gray
Lead Producer – Lauren Walker
Producer – Brittany Mehmedovic
Producer – Gillian Spear
Video Editor & Engineer – Rob Vitolo
Audio Editor & Engineer – Nicole Boyce
Music by Hansdle Hsu

Learn more about your ad choices. Visit podcastchoices.com/adchoices

Transcript
Starting point is 00:00:07 Hello, everybody. My name is Jon Stewart. Welcome to The Weekly Show Podcast. We got a banger for you today. As you know, the world is hurtling in no small measure towards its utter and complete destruction. And there's a new wrinkle in the destruction of our world. And that is that a lot of the weaponry that we seem to be deploying at the various places around the world is being controlled by, not necessarily autonomous systems, but large language model AI: Anthropic, OpenAI, the same people that bring you Claude and ChatGPT and help you break up with your boyfriend or girlfriend using a rhyming scheme that Drake would use. That's also being used to target and destroy our enemies. And it is incredibly chilling. And just recently, a huge controversy broke out into the open when one AI company, Anthropic, drew a line and said, we shall not. We shall not allow our product to be used in this way.
Starting point is 00:01:11 And then another AI company there, what do you call them there? The OpenAI went, we will. That's cool with us. But it's a lot more nuanced than that. It turns out there may not be heroes and villains in this story. But we are going to discuss it all today in an episode entitled, How Are We All Going to Die? And When Exactly Is It Going to Be Happening?
Starting point is 00:01:37 And we have two experts in the field of AI and how it is utilized, and especially within a military context. We have with us Dr. Sarah Shoker and Paul Scharre, and let's just get to that. Ladies and gentlemen, we are delighted to welcome today on our continuing episode of,
Starting point is 00:02:03 I think we're all going to die. Our guests today are experts in the field of how we are probably all going to die. Dr. Sarah Shoker, who is a senior research scholar at the University of California at Berkeley, and Paul Scharre, Executive Vice President for the Center for a New American Security and author of Four Battlegrounds: Power in the Age of Artificial Intelligence. Thank you both for joining us here today. Thanks for being here. Sarah, let's start with you. You worked in AI. Explain just very briefly your area of expertise as we move forward.
Starting point is 00:02:42 Yeah, sure. So I used to be the lead of the geopolitics team at OpenAI. That was our research team. And we focused on a portfolio of topics relating to AI and international stability. And currently, in my role at Berkeley, I focus on new testing and evaluation methods for generative AI models and their potential impact on warfare and military AI integration. Very, very apropos for today. And Paul, for you as well, where do you stand on studying AI, on military and AI? What's your background with that? Sure. So I've got about 25 years of experience in the national security field. I was an Army Ranger, did a couple tours in Iraq and Afghanistan, and then I worked for a while in the Pentagon as a civilian policy analyst. I actually led the group that drafted the Pentagon's policy on autonomous weapons, which is still in effect today. And then for the last 12 years or so, I've been at the Center for a New American Security,
Starting point is 00:03:42 researching, writing on this topic, trying to understand how is AI changing warfare? And how do we avoid some of the bad scenarios you're talking about? So this is perfect, because I think it brings in the perspective of, you know, Paul, you've been in the military, you worked in the Pentagon, you understand the ins and outs. Sarah, you've been at the companies that are developing these products. So let's just start with the basics. And I'm going to say this for my audience. Obviously, I understand how AI is used in the military.
Starting point is 00:04:14 Paul, very briefly, how does the military utilize AI? And how is that different from their general practices? So, I mean, it's not really. The military is using it like any new technology: they're going to try to find ways to be more effective, more efficient, much as they use computers and computer software and computer networks today. So the military doesn't necessarily see this as something special or different, but really a productivity tool, just like I think a lot of people might use a large language model.
Starting point is 00:04:46 An optimizer. An optimizer. Right? You're just optimizing, if there's something a little bit different. Yes, slightly. But, Sarah, so as you were working at OpenAI, when they talk about optimizing, are they developing at these companies, are they particularly developing for the military, or is the technology that they're using just being
Starting point is 00:05:09 utilized by the military? Yeah, so generative AI models are both dual use and also general purpose. They're dual use in the sense that they can be used for both civilian and military purposes, for good and bad, but they're also general purpose in that they apply to a variety of domains. And so these are models that can be used in legal applications, for software engineering tasks, as therapy bots, as we now know some people use them. So they're not trained for particular use in the military.
Starting point is 00:05:45 But, you know, nevertheless, the military, I think, has been a keen adopter in the last year. I think it'd also be remiss if I didn't add that even though most consumers now primarily interact with AI probably through these generative AI chatbots, AI is, in fact, a toolbox of methods. It is not exclusive to large language models or generative AI. And the military uses a variety of different AI techniques, such as, for example, machine vision, which is responsible for object recognition, facial recognition. So this is not just in the way I might use it, where I would go on and go, I'm thinking of visiting the Jersey Shore,
Starting point is 00:06:27 recommend five different things. And then the AI will say, boy, that sounds like a great trip, because my AI is relentlessly positive, much to my chagrin. And then it'll list me a few other things. They're not just using it in that regard. They're using the other tools of AI, which I guess would be optimizing for anything from targeting to maybe supply chain or any of that. Is that correct, Paul? Yes, you can think about maybe three different types of AI. One is something that's been around for decades, that's really like handcrafted software written by humans. A good example of this would be a commercial airline autopilot. We kind of don't think of that as AI anymore, but once upon a time it certainly was. The military has a lot of things like that in radars and sensors and, you know,
Starting point is 00:07:20 fighter aircraft, that kind of thing. So there are already autonomous workings for some of their machinery? Maybe bounded autonomy, I would say. Like a missile, there's lots of missiles that once you let that thing go, it's not coming back. But the autonomy is pretty bounded in what it can do. Then you've got machine learning systems that might have a narrow application. So they're doing computer vision, as Sarah was talking about. The military uses these to analyze satellite images, analyze drone video feeds. The military is collecting more intelligence than it can possibly put human eyeballs on.
Starting point is 00:07:53 There just aren't enough human analysts. But the AI can help you then, you know, look through these images and find targets and identify, you know, things of interest. And then there are large language models, which are these sort of like much more general purpose text kind of machines, where you can feed in lots of data. You can have it analyze things. You can combine text and images and other types of data. And that's newer. And the military is also starting to use that as well. Now, this is so in the public's eye, because I want to see if I can fill in the gap between what the public may view this as and what the reality is. In the public's eye, it is Skynet.
Starting point is 00:08:30 It is, you know, robots, titanium robots that can regenerate themselves that are walking autonomously over crushed human skulls and just firing what appear to be phasers at all kinds of different things. And you're saying, actually, it's the same shit that we're all using like at the office for the most part? I mean, for the most part, somewhat different applications, but I mean, it's the same types of things. And look, a lot of what the military does, to be fair,
Starting point is 00:09:02 are back-end functions, right? It's logistics, it's personnel management. Administrative and bureaucratic. It's administrative. That's like 95% of what the military does. Now, there's a different component that is actually battlefield capabilities, but a lot of the military use cases
Starting point is 00:09:18 are kind of mundane. So the battlefield, so let's get to that, because that's really where it appears this new controversy is, which is the battlefield. The controversy appears to be, and this began when Anthropic drew two red lines: one being that there can be no purely autonomous kill chains, a person has to be in the kill chain, and the other that the AI cannot be used for general surveillance on the American public, or gross surveillance on the American public. Sarah, is that understanding of the controversy correct? Are those the two lines that are drawn?
Starting point is 00:09:57 So I'd make a slight adjustment there, which is that they specified autonomous weapon systems, not kill chains in particular. Okay. What's the difference there? Tell me the difference there. Yeah. So an autonomous weapon system, according to the U.S. definition, and it's important that I'm noting that it is, in fact, the U.S. definition, because different governments define autonomous weapon systems differently, is a weapon that can select and engage a target without human intervention. A human can be in the loop, but it's not required. These weapon systems can function without human supervision. The language that's used in DoD Directive 3000.09 is appropriate levels of human judgment. And Anthropic's position was that they don't believe the models are sufficiently reliable, I agree, and that for autonomous weapon systems, they need a human in the loop, which is essentially already U.S. policy.
Starting point is 00:11:04 So the U.S. policy is the human is in the loop, meaning, so let's walk through a scenario just to understand a little bit of what we're talking about. Let's say the AI is used to analyze satellite imagery and different targets. A human will then get the results. A human wrote the program, I'm assuming, to analyze it. A human will then get the results of this data that has been analyzed, make their selections, and then give an OK to launch certain weapons that may, in and of themselves, be autonomous, meaning they'll guide themselves to wherever that target is. And is that a
Starting point is 00:11:58 minimalist description of how this might all go, Paul? Yeah, I mean, I think that's right. I think conceptually the idea would be, who's choosing the targets? If a human chooses the targets, then you'd say the human is in the loop, the human's making that decision. If the AI is choosing it, or, you know, the AI is sort of recommending and the humans aren't really paying any attention, then you'd say, well, the machine is doing that, right? And so one way to look at this would be, after the fact, something gets blown up. And you say, well, who said it was a good idea to blow this thing up? If the answer is all the humans are like, I don't know, I didn't do it, well, right, that's not a great outcome. I assume that will generally be the
Starting point is 00:12:39 answer, right? But like, right now, I think we're probably in the case, I certainly have no reason to think otherwise, where the humans are the ones making those decisions. Now, the AI might be helping to process information, helping to even maybe prioritize targets for people. But the debate between the Pentagon and Anthropic is sort of a potential debate about where things might go in the future. I don't think actually it's a debate at the moment about using a large language model to, like, autonomously make these life and death decisions on the battlefield and then people aren't paying any attention. Sarah, does that sound, you know, is it that we're nervous that the computer will just decide on its own, or that it will be wrong when it targets?
Starting point is 00:13:24 So let's talk about Iran for a second. Describe how a situation like that goes wrong and where the checks and balances are for that. So, uh, Claude in the Maven Smart... Okay, let me back you up real quick. You said Claude in the Maven Smart System. I can define my terms. I love the fact that it's named after something you could name your cat. Hey, Claude.
Starting point is 00:13:49 All right. So Claude is what? So Claude is the name that Anthropic gives to its flagship models, which is then used in the Maven Smart System. This is an AI-enabled decision support system that does a variety of things, including some of the tasks that Paul mentioned, like helping speed up efficiencies in logistics, but it has also been responsible for targeting in Iran. We now have confirmation there as well. And if public reporting is anything to go by, in Bloomberg and the Wall Street Journal and others,
Starting point is 00:14:28 the first-day production of 1,000 targets in Iran has largely been credited to the MSS, the Maven Smart System. Now, who makes the Maven Smart System? Palantir does. I just, whoa, did you guys just feel the room get colder? Oh, the hair is. All right. So Claude, who is made by Anthropic,
Starting point is 00:14:53 and that is more of an interface that we are accustomed to using, what is its role in feeding information to the Maven Smart System, which is, I believe, a system we are less accustomed to using and is maybe a little less transparent. So tell us, tell us how that operates. Yeah, so the Maven Smart System has been
Starting point is 00:15:16 in use for several years now. The integration of Claude is, I think, relatively recent, I believe in the last year, because Anthropic was able to gain access, go through the certifications to gain access to the government's classified networks. As far as we can tell, Claude right now has been used in targeting. And again, according to public reporting, it seems that it has been used for target selection and then also target prioritization. The Maven Smart System itself is designed to pull in different data sources, so from sensors, satellites, and such, and Claude then makes those disparate data more readable to the human analyst. So it boosts efficiency in that way, but it does also, reading between the lines a little bit,
Starting point is 00:16:09 it does also seem to offload a little bit of human autonomy and decision making as well when it comes to that target selection and prioritization process. Quite frankly, you know, when you brought up a thousand targets, because I have no context, I don't know what I don't know. So I don't know if that's an unrealistic amount of targets. I don't know if that's, you know, I'm understanding that there are target-rich environments, there are target-poor ones. Is a thousand in a day, you know, I don't know how they count it, is that an unusual figure? Oh, yes. I believe CENTCOM said that it was 2x the number of targets in the 2003 shock and awe campaign in Iraq, just to have a historical comparison. So 500 targets in a day was shock and awe, and this was a thousand. Now, I think we have to also
Starting point is 00:17:07 take into account Trump math, which generally is like, this is the biggest crowd ever to see an inauguration, and it wasn't. So how much of that do you think is Trump math, and how much of that is an astonishingly high figure? I mean, it's being reported by Bloomberg, the Wall Street Journal, and the Washington Post, all without an asterisk. Rags. Rags. I mean, they're all taking it at face value and acting as though it is seemingly plausible. So there is no indication yet at this point that it's not accurate. Stop paying too much for wireless just because, I don't know, that's just what I do. It's how it's always been.
Starting point is 00:17:58 That's just my company. Mint exists purely to fix that. Same coverage, same speed, just without the inflated price tag. You can change your coverage, people. Mint is the premium wireless you expect. You know, your unlimited talk, your unlimited text, your data, but at a fraction of what others charge. And for a limited time, you can get 50% off three-month, six-month,
Starting point is 00:18:22 and 12-month plans of unlimited premium wireless. The only thing keeping you from doing it is inertia, laziness. Don't be the beanbag chair. Be the... Trying to think of something energetic. Pogo stick? Be the pogo stick? That's probably not right.
Starting point is 00:18:37 Bring your own phone number. Activate with eSIM at Mint and start saving immediately. No long-term contracts, no hassle, with a seven-day money-back guarantee and customer satisfaction ratings in the mid-90s. Mint makes it easy to try it and see why people don't go back. Ready to stop paying more than you have to? New customers can make the switch today,
Starting point is 00:18:55 and for a limited time, get unlimited premium wireless for just $15 a month. Switch now at mintmobile.com slash TWS. That's mintmobile.com slash TWS. Limited time offer. Upfront payments: $45 for three months, $90 for six months, or $180 for 12 months, plan required, $15 per month equivalent.
Starting point is 00:19:18 Taxes and fees extra, initial plan term only. Over 35 gigabytes, speeds may slow when the network is busy. Capable device required; availability, speed, and coverage vary. See mintmobile.com. So you might say, Paul, let's say I'm working in the military. You work there and you've been researching this. Hey, Claude, I'm looking to
Starting point is 00:19:46 take out all the radar installations in Iran. What would be, you know, where would that, where would I do that, and how quickly could I get it done? And then Claude would interface with Maven, which has all the data that it's gathered from, I'm assuming, satellites. And then it's translating that data that they understand through whatever intel they've gotten, and they're going to place it into real-world menus of what you could target. Would that be accurate? Yeah. So let me, like, explain what we know and then, like, what we could speculate reasonably about. Because it is a little opaque. What we know and what we do
Starting point is 00:20:29 not know. We do know. But what do you think? Take a stab at it. So, all right. We know that Anthropic's AI tool Claude is deployed on U.S. military classified networks. It's integrated through the Maven Smart System, which collects intelligence from different sources. And it's been used by the U.S. military in real-world operations, including the operation against Venezuelan President Maduro and operations in Iran. And there's been some public reporting that it's been used, as Sarah was talking about, in target generation and prioritization. Like, exactly how, we don't know.
Starting point is 00:21:00 So now I'm going to speculate about what might that look like. Speculation alert for Paul. Yeah. So that could look like, just as you're talking to an AI tool, help me plan this vacation to the Jersey Shore, there's somebody who's an intel analyst or a targeting analyst who's going to these tools, and instead of having to manually go through all of this data that we have of, where are the radars and what is the imagery of them,
Starting point is 00:21:26 queries it in natural language: hey, develop me, for example, a prioritization of all of the radars that have already been hit and what the current battle damage assessment is of them. How much have they been destroyed? Do we need to hit them again for a follow-on strike? How much of them have not been hit yet? And let's put all that in a list, put it in a database, let's prioritize it, and then let's match it to the weapons that would be needed to take out these radars. Different types of radars might need different weapons.
Starting point is 00:21:58 And then let's match that to available aircraft to help build a strike package, so that eventually an aircraft gets a set of targets and weapons that are assigned to that target. And so, like, the technology is sort of being used throughout that chain to make it just easier for people to access and process this information. So we would be doing that anyway. It would just take longer. That's right. That's right.
Starting point is 00:22:23 Now we're talking about basically replacing the things that humans are doing with machines, speeding it up, making it a lot faster. The U.S. military set thousands of targets in Iran; having the ability to process that information at machine speed is very valuable for the military. And then, because it's Claude, you could say, and now give it to me like you're Ernest Hemingway. And then it would give you the targets in short, taciturn prose. It would just be very terse and go from there. So, Sarah, are we kidding ourselves then that there is a line? What is the controversy, and how does it break down? What is Anthropic's argument here?
Starting point is 00:23:01 Paul was saying earlier, it's really about the future. As it stands right now, what is the controversy? So I think the controversy in itself is a little mystifying, because it sounds like the contract negotiations went south due to some, shall we say, strong personality clashes. If you look at the contracts of OpenAI and Anthropic, they're actually relatively similar, if not the same. Both companies have essentially agreed to both red lines. The contract that they have with the DoD or with Palantir?
Starting point is 00:23:42 Ah, so that's actually, we don't actually know about that yet. So stay tuned. All right. Let's speculate some more, people. Yes, stay tuned. It's not clear what model Palantir might use now, or if they'll have an array of different models that they can choose from. So who makes the contract?
Starting point is 00:24:03 Is it, does Palantir subcontract to Anthropic or OpenAI? Or does the DoD? Who has the leading role in integrating these companies together? So it's not unheard of, in fact, it's pretty common, for companies to come together and actually combine resources to create a product, especially for defense purposes. So, you know, for instance, the DIU trial and the DAWG trial, that's the Defense Innovation Unit and also the Defense Autonomous Warfare Group, have a call for building essentially attritable drones. And they've issued that call to industry, and companies have in fact responded to that call by combining resources and submitting joint proposals. So it's not unheard of for companies to come into contract with one another
Starting point is 00:25:02 and then to approach the Pentagon. So they'll do that together. Palantir and Anthropic, or Palantir and OpenAI, will get together and say, we've developed this package: you know, our product makes it more readable for humans, your product makes it more... And so they'll bring it to the DoD. And now, so the $200 million contract that Anthropic had,
Starting point is 00:25:24 Paul, do you know what that, they had a contract with the DoD? What was that for, and for how long? Yeah, I think so. This is where some of the details we don't really know. We know that they have an ongoing contract with the DoD to deploy their AI tools on classified networks. We know they're being used through the Maven Smart System.
Starting point is 00:25:44 But a lot of these details we don't normally get when defense contractors are working with the government. In fact, the, like, silver lining to this whole thing is the only reason a lot of these details are coming out is because this whole relationship blew up between Anthropic and the Pentagon. Otherwise, normally, like, they would have some deal about what the tools could and couldn't do, and we would never know. And so that's, like, you know, I think it's unfortunate, actually, that this sort of feud has spilled over between Anthropic and the Pentagon.
Starting point is 00:26:14 But it is really the only reason that we have this kind of insight, which is even still pretty limited, on exactly what the terms of use of these contracts are. How opaque are these military contracts? I know the DoD is, you know, the only government agency that's never passed an internal audit. But how opaque are these? And the $200 million that they use, is that over a five-year period just to use their products on their classified networks? Yeah, I'm not sure that we know. Oh. Unless Sarah is seeing more details than I have. You know, even as an employee, I do not have access to contract details. It's very compartmentalized in a lot of these companies and on a need-to-know basis. Now, $200 million, I mean, to me, that's an enormous figure.
Starting point is 00:27:05 You know, you're talking about the Pentagon budget, which in total obviously dwarfs that, one trillion now as they're pushing forward, but still, it's an enormous amount of money. Do they have that with different companies? It's not, I mean, it's a lot of money for, like, a normal person. It's not a lot of money for either the Pentagon or for these AI companies. They're all dealing in billions and billions of dollars. This is just walking-around money. Just a little walking-around money to Anthropic, OpenAI.
Starting point is 00:27:37 It's not quite couch-cushion money, but, like, it's not a massive amount of money. And the direct cost to Anthropic of losing this contract is not substantial to them relative to, like, the scale of AI investment that's happening right now in the AI sector. How much of the contracts for, like, OpenAI and Anthropic are consumer based? In other words, I pay $11.95 to get your latest model. And how much of it is corporate based and defense based? Do you guys have a sense of that? Yeah, I mean, I think OpenAI is right now for 2026 projected to
Starting point is 00:28:16 generate about $25 billion in annualized revenue. The majority of that is coming from subscriptions to its models. I think Anthropic is in a similar ballpark, where they're on track to generate, I think, about $19 billion in annualized revenue. Anthropic is a little different from OpenAI in that it has prioritized enterprise contracts earlier on. But OpenAI's strategy, and this is public, has been targeted towards generating more enterprise contracts in the future. But I do think that the majority are still coming from, you know, individual consumers, developers. Right. So the reason I bring that up is it does mean, because we're talking about how they're opaque and compartmentalized,
Starting point is 00:29:09 but it does mean that the consumer has some influence here, in that the government is not their sole benefactor. It really is individuals. Anthropic says, I'm drawing a moral line. Whether that moral line is an actual line, or it's already been traversed, who knows if it's a real moral line or not. And OpenAI says, I agree with Anthropic, and we are drawing the moral line here: autonomous
Starting point is 00:29:39 weaponry and mass surveillance. Anthropic loses the $200 million contract, and that same night, OpenAI announces, hey, we just signed a big deal with the DoD. How real is that moral line that Anthropic drew? And how real is the backlash
Starting point is 00:30:06 against OpenAI for suddenly appearing to have turned around and said, oh, they won't do it? Okay, we'll do it. Yeah. I mean, look, the backlash is real, and it's happened from some AI scientists. Anthropic vaulted after this controversy right to the top of the charts in terms of downloads in the App Store. So I think that's happening. The dollar amounts for both these companies are relatively marginal compared to all of the other non-defense investment.
Starting point is 00:30:37 The bigger risk for Anthropic is going to be actions that the government is already taking against the company, labeling them a supply chain risk and going after them in that way, which would mean designating to other defense contractors that they can't use Anthropic's AI tools in the furtherance of their defense contracts. And then other steps the U.S. government might take to retaliate against the company. They talked about using the Defense Production Act to seize control of their AI models, for example.
Starting point is 00:31:06 So those are probably, like, the bigger risks. It's not so much the dollar amount of the contract. And Sarah, was it a real line? It appeared to an outside observer that OpenAI immediately reversed their moral position, given what you guys are both saying is a very small amount of money comparatively for their bottom line. I'm not sure if there is an actual reversal. I do think that the military usage policies that are often designed by these companies are meant to preserve optionality for their leadership. There was a lot of backlash. You know, I saw it in real time; a lot of the AI community still congregates on Twitter. And OpenAI hosted an Ask Me Anything on Twitter in response to that backlash, which I think illustrates, you know, the fact that the public can act as a pressure point on these companies. But what we ended up seeing as a result of that AMA was not necessarily an alteration to their previous policy, but adding more language
Starting point is 00:32:15 to explain their already existing position, which in practice, again, doesn't seem to be all that different from Anthropic's, but I think the communication strategy is maybe a little different. Right. I mean, I don't know if it's the cultural fascination with the so-called Great Men of History, but I really would resist any kind of narrative that tries to identify a hero and a villain
Starting point is 00:32:41 in this story. I'm not necessarily sure that those are appropriate roles for either Anthropic or OpenAI. But to Paul's point, I think part of the sympathy that's been directed at Anthropic is because they have been the target of government overreach. And so I think it's possible to hold two ideas in one hand here: that Anthropic has been unfairly targeted, but at the same time, these two red lines that have been identified by both companies are probably inadequate. And the public does not actually have to accept those two red lines as the threshold, you know, the threshold of risk. Imagine if you had some kind of, like, a rewards program, you know what I'm talking about, like a miles program, et cetera.
Starting point is 00:33:32 But it's a rewards program that you pay rent through and then earn points for travel, dining, shopping, et cetera. It's 2026. If you're still paying rent without Bilt, come on, brother. It's a loyalty program for renters that rewards you for your biggest monthly expense, which is rent. With Bilt, every rent payment earns you points. You can redeem them for flights, hotels, Lyft rides, Amazon purchases, so much more. It's a loyalty program you join when you pay rent that helps you get out of the place you're renting for, like, a week and go have fun. And by the way, Bilt members can earn on mortgage payments. Whether you've got a house, whether you've got an apartment,
Starting point is 00:34:12 whether you're sharing it with three of your knucklehead friends. I don't know what your life is. I don't know what you're doing. You can even redeem Bilt points towards your next rent credit or even a down payment on a home. It's simple. Paying rent is better with Bilt. You're paying rent anyway.
Starting point is 00:34:27 Get something for it. So join the loyalty program for renters at joinbilt.com slash T-W-S. That's J-O-I-N-B-I-L-T dot com slash TWS. Make sure to use our URL so they know we sent you. But are we kidding ourselves, Paul, in that, you know, look, if we think through history, there's no human advancement that hasn't almost immediately been sought by the military for advantage, whether that advancement is sonic or chemical or biological.
Starting point is 00:35:04 You know, Sarah mentioned two departments over at Defense that I would pretty much assume nobody who's listening to this has ever heard of. You know, I think we've all heard of DARPA. But there are development groups, I'm assuming. You know, they said when they went into Venezuela, they used, you know, the Havana-syndrome-like new weapon that, like, you point at people and their insides melt. Like, throughout history, any advancement that a human can think of, their military wing is immediately going to try and utilize for some advantage, no?
Starting point is 00:35:43 I mean, yeah, but look, two of the examples you gave, they're chemical and biological, we do have regulations on how they're used. We have conventions banning chemical and biological weapons. Right. But people still use them. People still use them, right? But not everyone. And they've been sort of by many states, they've been treated as unacceptable weapons.
Starting point is 00:36:03 Now, you get some pariahs, you get some outliers, you get people like Saddam Hussein and Bashar al-Assad, who are going to use them still. But most states have given up those kinds of weapons, and I think it's better that they have. So the question with AI is not actually, are we going to use AI in the military. None of these companies are saying don't use AI in the military. The question is, should there be any rules? And if so, who sets those rules? Because, like, the sort of crazy thing about the dispute about autonomous weapons is, as near as I can tell,
Starting point is 00:36:32 No one is actually saying we're going to use a large language model as an autonomous weapon today. That'd be crazy. If you have a large language model writing email for you, you better fact check that email, right? Because they do weird things sometimes. The question is, who gets to set the rules? And the Pentagon's answer is, we get to set the rules. We don't want these companies dictating to us.
Starting point is 00:36:52 And these companies, and many of the scientists working there, they have a lot of discomfort about how the technology might be used going forward in the military. Sarah, you know, when it comes to who sets the rules, is it the company or is it the military? There's also, and I've read about this group, they're called Congress. We don't hear much from them. It's this group of generally older white men who, once they're past retirement age, enter into the legislative house. Is Congress, are they utterly rudderless here?
Starting point is 00:37:30 Are they just overmatched? Do they have any role to play? What can we expect and what should we expect from them? Hmm. From that wasn't optimistic? You know, let's start small. Start asking questions. I, you know, I am somewhat sympathetic to this idea that, you know, private AI companies cannot be setting the rules in foreign policy. But one of the issues that I see today is, and I think this does track with a role potentially for Congress as well, is that AI companies are, in fact, influencing foreign policy. It may not always be through the back end and through their contracts through the Pentagon, but they're certainly donating significant sums to lobbying efforts
Starting point is 00:38:21 and tying those donations to, you know, U.S.-China tech competition, and arguing that a low- or no-regulation environment is a requirement to, you know, quote unquote beat China. And they're supporting, you know, political campaigns that agree with that perspective. And so this conversation is in fact coming for Congress. And they probably better be equipped, at the very least. And, you know, I actually think Paul may even be a better person to speak on this in particular, since he is, in fact, in D.C., and I would be curious to hear from him what the general reaction has been from Congress on this issue. But I can say that AI researchers typically are very keen to discuss their work.
Starting point is 00:39:12 And I've in fact never met a keener bunch of people who are willing to talk about, you know, the risks and opportunities related to AI models. So they are, you know, you can always send them an email. Yeah, I think they're pretty eager to have those conversations. Paul, so what say you down in, down in Washington? Yeah, I mean, look, I'm here in Washington now. I can see the White House out of my office window here. I'm not going to pretend things are super functional in Washington, but, you know, I think we have seen government engagement on some of these issues. And there are a lot of tools that Congress can use to have oversight of the military and intelligence communities.
Starting point is 00:39:50 One is passing legislation, which may or may not be the right answer in some cases. On the domestic mass surveillance stuff, maybe. On the autonomous weapons, maybe not. We might want to maintain some flexibility there. But there's other things. Congress could hold hearings. Congress can get people from the executive. Yes, they could. They could, right?
Starting point is 00:40:09 That is correct. They get people from the executive branch to come in and brief them. Say, hey, well, what are you doing with AI? And if you want to keep it classified, Congress can do classified briefings to educate them about what's going on inside of the military. Congress can use tools like procurement and acquisitions. Congress has the money. They are the ones that are allocating money to
Starting point is 00:40:28 the military and intelligence community. And so that is a tool that Congress absolutely does use already to fund some projects and not fund others. And so, like, there's a variety of tools that Congress has potentially to influence these things. And I think the model of who should be setting the rules, maybe it's our democratically elected representatives. That is probably the right approach. Well, that's what I was saying. But to Sarah's point, you know, look, these guys have more money than anyone. Right now the money is in AI. Now, obviously, they're using a lot of those billions to build data centers
Starting point is 00:41:04 that we have sort of no idea where those are all going but 25 million here 25 million there Elon Musk puts 350 million into political campaigns the amount of money that's flowing from these from the tech sector is is like nothing we've ever seen before do you think that's had the effect that maybe the AI companies want, which is to regulate us would be they've portrayed it as national security risk, they portrayed it as it would cause us to lose to China. Has that been effective? Or is it that they're overwhelmed by not really understanding the nuts and bolts of AI? You mean Congress, not understanding the nuts and bolts of the AI? Congress. That's right. Yeah. I mean, I think there's a lot of, I've actually been super impressed when I
Starting point is 00:41:57 I speak with, I mean, you can always find video clips online of some Congress member not understanding something. I would use them on the show. Yeah, you know, I mean, like, okay. But I think, like, I've been impressed when I speak with members of Congress and their staffs, how knowledgeable many of them are about the technology and what it can do and its limitations. So I think there's always work to be done in terms of improving tech literacy in Washington. But I think some of the bigger challenge is just sort of getting over the hurdles in passing legislation and getting agreement, whether that's around federal regulation of AI or data privacy or social media or other types of tech. That's actually really hard for Washington to do, to pass legislation on these kinds of issues.
Starting point is 00:42:40 Sarah, you know, you spoke of this earlier, this great-man idea. Here's why I'm very nervous. I've met a couple of these folks, and they do not seem particularly enamored with humans. I don't want to say outright misanthropic. But, you know, Peter Thiel was asked famously in a conversation, you know, should humans continue? And, you know, he paused, I think, for a pretty considerable amount of time before he went, like, well, you know, and transhumanism.
Starting point is 00:43:12 I once asked Sam Altman about the disruption that AI is going to cause to our workforce and the small amount of time in which it's going to cause it. And his response was just, he literally just looked at me. The question was five minutes long, and he just went, we'll be okay. You know, how concerned are you with these great men and how great they actually are? And what is their connection to, do they understand the damage that they also can do? Or are they megalomaniacs? Well, I mean, I can't look into anyone's, you know, heart and mind. But I would say that if they're able to cause harm, it's only because they are powered by immense wealth and the high valuations of these companies and also by institutions that allow for
Starting point is 00:44:10 corporate donations and excessive individual donations as well. So they're essentially enabled by our current institutional structures. In terms of whether these companies discuss the downsides, I mean, I joined in 2021, I left in 2025, and there was a period where I think that was the, you know, dominant topic of discussion, right? Are these tools actually going to increase productivity? Are they going to replace tasks? Are they going to replace workers? Can they enable the proliferation of potentially weapons of mass destruction? And there were testing and evaluations that began to try and answer those questions. So I think certainly the researchers at these companies have tried to make a concerted effort.
Starting point is 00:45:00 But these companies are also complex organizations, and there are always factions that are butting heads, right? Some people do prefer a low-to-no regulatory approach. They don't want to see state legislation. They prefer everything at the federal level. And then there are some who are at these companies who are actually quite supportive of state-level legislation. So it really depends.
Starting point is 00:45:23 I mean, I think of, you know, OpenAI and Anthropic and, frankly, other companies as often going through eras where certain factions win out over others, and that's what ends up setting the cultural mood of the company. Do they understand the weight of what they're making? You know, I can't help but go back to Oppenheimer. And, you know, when you have something that looks like it could be an extermination-level type technology, positive and negative. I mean, if we split the atom one way, we get energy that can power the world. If we split it this way, you can blow it up. And we all
Starting point is 00:46:02 know which one we tried first. And it felt like the people who were making that weapon did it under the crucible of the Nazis. And so they developed it with this idea that, well, if the Germans get it, we're all done for. But it was clear that they at least felt the burden of that. Paul, in your experience, are they feeling the burden of this? Because what Sarah's talking about is, well, they did go through all that testing. We don't really know what the results of it were. And they seem to have gotten past that reservation. I mean, the AI scientists and engineers that I speak with, particularly those in the frontier labs, are very concerned about AI risk. They, I think, understand better than anybody, actually, the downsides of the technology, the way that it could
Starting point is 00:46:51 be abused, the way that it could just do sort of strange things that might be surprising. I think one of the challenges here is there are incentives for the companies to move fast, to ship their products, because there's the sort of perception of a winner-take-all dynamic in the marketplace that we have seen in other tech industries, in operating systems, handsets. Well, yeah, I mean, in a way, right, a sort of commercial race to dominate the marketplace. And that does drive incentives.
Starting point is 00:47:21 And these companies need a lot of money to build the data centers to train the AI. So I do think the individuals take it seriously. And I think some of the companies, I mean, if you look at what Anthropic just did, I mean, they sort of stuck to their guns on this decision in a way that is going to be costly for the company. How costly I think we just don't know, but they decided to do that. So I do think the companies take these issues pretty seriously. And if I can also add, I mean, just at the risk of potentially misspeaking, the testing and evaluations that were done and continue to be done at these companies, they are often released publicly.
Starting point is 00:48:00 But, you know, of course, in certain areas like, you know, CBRN, so that's chemical, biological, radiological, and nuclear testing, and then also cyber, there are greater restrictions placed around what can be shared with the public, but there are even reports, summary reports, about what that testing looks like. And then a lot of the benchmarks that are used by the AI industry are, in fact, publicly available. It just so happens that testing and evaluation of these large language models is still in a relatively nascent phase. And it's not always clear what the best way to test these models is, if what we're trying to do is use them as proxies for social impact or risk. And is that, you know, the famous one is now, you know, if you remember the movie War Games, it was, you know, the first sort of dystopian look at what would happen when computers
Starting point is 00:48:53 take over was the Matthew Broderick movie from when I was a kid. And it was about a nuclear war game gone wrong. And the computer just started launching, you know, nuclear weapons at all the different countries. And at the very end, the computer said, the only way to win is not to play. With AI, apparently, it was more apt to launch nuclear war than humans or standard computers. What do you know about that testing? And is that apocryphal, or did that really happen? I mean, it did really happen. I think a variety of researchers at academic institutions have now managed to replicate the findings.
Starting point is 00:49:39 the models have a tendency to escalate more aggressively than humans would. And it's not really clear why the models do that. One theory is that in the training data, aka the intranet, political scientists have a tendency to study wartime escalation rather than de-escalation. So that may influence how the models respond to these war game-type simulations. But, I mean, that in itself is, of course, a cautionary tale around using, these models for approving the use of force or for decision-making or frankly even for war gaming and simulations. Is it possible, Paul, that AI, because of how adept it is at creating
Starting point is 00:50:25 these targets and all these other things, that it actually made going into Iran more appealing that before the age of AI, we might have been more circumspect about the top of the time of the time. type of attack that we launched, are we seeing barriers to military action fall because of how quickly these models can they bring a sense of false confidence? I mean, I don't think today that's true. Like, I don't think AI was a factor in President Trump making this decision. I think it was based in large part on the U.S. strike against Iran last summer, against the enrichment program, being very successful and limited.
Starting point is 00:51:06 and then the rate against to grab Maduro being very successful and limited. And this sort of like, okay, having a couple perception of having a couple wins under his belt, I'm using the military, it seems to be effective. No downside. Sure. Right. So I think those are probably bigger factors. I think what you're describing could be a risk going forward, right?
Starting point is 00:51:24 So one way in this could be a risk is some of the things that militaries count and try to calculate when they measure military power are things that you could see and you can count, you can count how many tanks somebody has, how many airplanes. how many ships. Then there are some things that matter a lot that are hard to count. We see this unfolding in the war in Ukraine, the morale of the troops on the battlefield. The Ukrainians are fighting for their homeland. The Russians are conscripts. They don't want to be there. The leadership, the quality of the unit cohesion. Those things matter a lot, but they're really hard to measure. So one possibility going forward is you could see a world where as more and more military power
Starting point is 00:52:02 gets embedded into software and data and AI. It's kind of hard to measure that. It's like, well, we have this AI and it's amazing and it's wonderful and ours must be great. And there's this, it becomes harder for militaries and countries to sort of gauge what their relative level of power is. And you might see more miscalculation. You might see countries sort of assuming, well, we have this wonderful technology and we can win and the world will be over quickly and we'll all be home. And as it feels not to be true, countries have made this mistake before. That's what happened in World War I. Right? It's so like, we've made it quite a few times. Humans have done this. So that's not a, that's, I think that is a possibility that could happen, but we're not there today.
Starting point is 00:52:41 Sarah, has anybody studied the confidence, you know, there's a certain thing in bars, like there's a beer courage. You get a couple of shots and you get a couple of beers and you're like, you know, it turns out, I'm a tremendous MMA fighter. And I think I'm going to, you know, you get a weird confidence from alcohol. I find you get a weird confidence when you use AI. When you use those models, you tend to be much more assured in your decision-making because you feel like you have this kind of infallible being behind you. Has anybody studied AI confidence in decision-making? Because I feel it when I use it for the mundane tasks that I do.
Starting point is 00:53:30 You know, I'm not sure if I've seen anything like that. That's a really interesting point. I mean, I think what you're referring to, I've heard some people talk about chatbots or, frankly, any type of statistical analysis that's used in decision-making as applying this mathematical veneer, right? It makes us feel better because it's therefore objective. And it removes the human qualitative or subjective element to it. You know, the issue that I just keep going back to is, of course, that these models are not always going to be reliable because they are, in fact, statistical prediction machines. I mean, they're useful, don't get me wrong, but they are inevitably going to output something
Starting point is 00:54:16 that is incorrect. And so being able to keep appropriate human judgment and to create a system in such a way that people do not abandon their critical thinking skills is a very very very important. It's a very very important facet, I think, to any type of human machine teaming that we're seeing today in military AI integration. Is that something the military is concerned with Paul? Because, you know, in looking at it from, like, let's say from an educational standpoint, there's been a lot of studies that show that when kids start using this, their ability to do that, to think critically, to reason and all that falls, that it becomes this crutch that when utilized, you no longer develop those kinds of skills, and ways of thinking,
Starting point is 00:55:03 does this become a crutch for the military to use? And the second part of that question is, are we ignoring this whole other area, which is, hey, Claude or hey, Maven or whatever it is, design me five nerve agents that the world has never seen before. You know, is that another usage that we're not, so far we're only talking about chain of command. is there a whole other area we're not even really thinking about?
Starting point is 00:55:32 Yeah, well, that is certainly a risk, the potential for AI to enable biological weapons and to maybe even lower the barrier to countries, to non-state groups, to terrorists, to do so. Maybe not today, but that's a concern down the road. I think in terms of military, you say, the military is actually pretty keenly aware of, for people in uniform, they understand the responsibility that they have. okay, if they're going to launch this missile, they own where that missile goes. And I think there's a couple of concerns. One would be making sure that they really understand this AI system, like, what is it going to do?
Starting point is 00:56:08 Is it going to do something strange? Is it going to fail? How's that going to work? Ensuring that there's human responsibility and accountability, I think, is actually quite important to the military; that's, like, sort of part of the military ethos. But it's challenging for a lot of these AI systems because it's not like a traditional computer program where, okay, there's an accident, you go back, and you say, oh, this is
Starting point is 00:56:28 the line of code that caused the problem. Now the answer is embedded in this massive neural network with billions of connections. And you're like, why did it do that? I don't know. And so it gets into these issues of trying to evaluate the model's performance. What are some conditions in which it might be biased in certain ways? They tend towards sycophancy, towards basically telling you the answer they think you want to hear. Well, that could really be a problem in some national security applications. You're an intel analyst, and you're, like, asking some questions. And it's like, well, you know, this is what I think you want to hear, right? So that was Napoleon's whole issue.
Starting point is 00:57:04 They were like, sure, boss. Waterloo. What a great idea. You should go there. Yeah. Now, this is going to sound ridiculous, but does it do, like, what it does with us, which is: would you like me to give you a 10-day bombing plan? Would you like me to add in other targets that may seem ancillary but might have military value? Like, is it, is it that
Starting point is 00:57:28 casual when it's describing, you know, what it wants to do next, and how quickly does it do that? I have never used the Maven Smart System, and so I don't actually know what, you know, what the personality of the chatbot is. Or is that what they use Claude for? Yeah, I mean, you bring up an interesting point, though, right, in that these models can be fine-tuned with different personalities to be either, you know, more acquiescing, less acquiescing. We know that users, of course, like to be fawned over a little bit. But it's possible that it's not presenting information in the most neutral way out there. We just don't know publicly, I don't think.
Starting point is 00:58:12 Right. Do you know, Paul? No, I don't know. It's an interesting question. I think one way to think about these models is they're sort of role playing. They're playing a role that's in their training data. And then that can be fine-tuned by additional training that they get from the companies. And so that's why you get this sort of personality, different personalities among the different models.
Starting point is 00:58:31 So it's an interesting question of like the ones that the military is using or the intelligence community. What are they sort of trained on? And are there hidden biases that might be kind of subtle that are hard to detect? I mean, that's, I think, a difficult problem. Or not so hard. And I just got a chilling feeling that they're training it on the head set. And so they plug something in and the model just pops back.
Starting point is 00:58:56 Hell yeah! Let's do this! I told you about my invention. The crumple, the crumple. It is a topographical blanket for dogs, but not the same topography at each time. Every time you throw it on the ground, it changes its topography. It is an amusement park for your dog
Starting point is 00:59:22 to find a place of comfort and warmth, but also with interest. It's not the same old, oh right, this is where I put my right paw and this is where I curl my butt. No, it changes every time. It gives them. It's like visiting.
Starting point is 00:59:38 It's like Epcot. It's an Epcot Center blanket for the dog. And have I started this business yet? I have not. That's right. To the great dismay and disappointment of our audience. And maybe humanity writ large, I have not started my crumple business.
Starting point is 00:59:55 And I'm going to tell you why. It's too hard. It's nerve-wracking. I don't know how to do this. It's daunting. But you know, you got Shopify here, makes it easy for people.
Starting point is 01:00:08 You can get started with, they got a design studio, hundreds of templates, help you build an online store. They can match your style. You can do this smoother. They also have 24-hour customer service support, world-class expertise, and everything.
Starting point is 01:00:21 It's the commerce platform behind millions of businesses around the world and 10% of all e-commerce in the United States. It's time to turn those what-ifs into reality with Shopify today. Sign up for your $1 per month trial today at Shopify.com slash TWS. Go to shopify.com slash TWS. That's Shopify.com slash TWS.
Starting point is 01:00:48 So these are, like, some, I think, really difficult problems with the technology that we've got to find ways to work through to use it in ways that are safe and effective. And I don't think there are easy answers. I think the technology has some strange and new challenges associated with it. Sarah, you strike me as having a really balanced but also nuanced view of this. What keeps you up at night? Is there something about this that you think about as particularly challenging?
Starting point is 01:01:20 Yeah, well, there are many challenges. Let me see if I can narrow them down. All right, or throw them all out there and we'll go through them one by one. Right. I mean, I think about the challenge related to global governance. I mean, for over a decade now, 90-plus member states have been meeting at the United Nations to discuss regulating, or the possibility of regulating, or even introducing a treaty instrument that would regulate lethal autonomous weapon systems. But because of the nature of the forum at which these discussions are taking place, it's a consensus-based body, it's at the Convention on Certain Conventional Weapons, it's very unlikely that a treaty-based instrument is even possible in this space. I mean, you can think about how hard it is to, you know, pick a restaurant with you and your
Starting point is 01:02:09 five friends. Now imagine that you have 90 plus governments trying to decide on regulating. Or pick a restaurant that could kill all of us. But now, how have they been able to do it? Why can't they use the model that they used for atomic weapons? Oh, I see. Well, so there, I mean, I guess there are a few reasons for that. So the convention on certain conventional weapons, it's really in the name. It is talking about conventional weapons. And autonomous weapons, the conversation around them has really focused on trying to preserve meaningful human control to discuss whether that's even possible, whether they can actually discriminate between combatants and civilians. And if they can, in fact, discriminate between combatants and civilians to an extent, then they technically could be legal
Starting point is 01:02:59 under international humanitarian law, but militaries would still need to abide by the existing international legal order and international humanitarian legal principles. And the good thing about this particular forum is that though, you know, regulation with teeth is probably off the agenda, most states have been able to have consented and reaffirmed the norms around international humanitarian law as applying to autonomous weapon. So that's, I think, also a silver lining as well. Has anybody kind of gotten it right? And Paul, I'll ask you because, you know, maybe you see ways through this from being in Washington.
Starting point is 01:03:41 But, you know, has the European Union done a better job with this? Has any governing body, has any international body? Is there any pathway here that you see that could help establish at least the beginning of guardrails? I think that actually the best avenue we have is. starting at the level of AI hardware and then sort of building guardrails domestically, eventually globally, kind of from the ground up. Explain the difference between hardware and the software. Right.
Starting point is 01:04:11 So the thing about these AI systems that it's kind of amazing is they require massive amounts of computing power to train the most capable models and to deploy them at scale. Now, you can make smaller models that you can deploy on a laptop, for example, or some other kind of edge device, smartphones, but they're not as capable. But the most advanced ones are going to be really big. They're going to have to run in the cloud. They're going to need really advanced chips. And to deploy them at scale as a society, you're going to need a lot of these really advanced chips. Well, these chips are made in one place on earth. Taiwan. Taiwan. Now, that does not on the face of it seem great that it's an island 100 miles off the coast of China
Starting point is 01:04:52 that China has pledged to absorb by force, if necessary. But it is a, it is a, Considered a drawback, I think. This is not the best geographic position. However, these fabs that TSM has in Taiwan, where the most advanced chips are made, depend on technology from three countries in the world, Japan, the Netherlands, and the United States. And without that technology, they cannot make these advanced chips. And so that sort of starting at the hardware level, that actually is like a really narrow chokepoint to begin to then control the technology.
Starting point is 01:05:23 So the deal we just made with UAE to give them the chips, the previous concern had been that they would then sell the chips to China, did that just blow a hole in the net? Well, I mean, the bigger question is like, what is the global diffusion of this hardware look like? At the tail end of the Biden administration, literally the last week when they were in office, they dropped this very complicated rule called a diffusion rule. that basically would take U.S. expert controls on the most advanced chips to China, which we've had for several years now, started on the first Trump administration, and expand that globally. And it's kind of tiered system where depending on which country you could get so many chips. It was a little bit complicated.
Starting point is 01:06:10 The Trump administration threw that all out the window. But I do think that, like, the chips themselves are a way that we can begin to shape who gets access to the hardware, who can build the data centers, because they need these chips to do it. And that's a hook for guardrails, right? So you could say, oh, you want to buy all these advanced chips? I want to see your domestic regulation surrounding making sure that people aren't going to use these chips to make a biological weapon. Like we did it with enriching uranium and the things that you would need to be able to do that. That's actually like not a bad analogy here, right?
Starting point is 01:06:42 And we're, okay, you can get uranium for peaceful civilian nuclear purposes not to make a bomb. And we found ways to separate those two. not to enrich it to that level. Right. Right. So like the idea would be the same thing. You can use these chips for peaceful uses, basically most everything, but you can't use it to make like an offensive cyber weapon, for example,
Starting point is 01:07:04 and put some guardrails on how the technology is used. Right. And inspection. Sarah, is there any fear that, like, by the time we figure this all out, quantum computing is the new standard? So by the time we figure out, okay, these three chips are crucial to any ability to do that, then somebody else comes in and says, actually, that's not state of the art anymore.
Starting point is 01:07:30 Are we moving so quickly that suddenly quantum computing is the power that's necessary to drive these? And that's a whole different can of worms. I think you're now learning in real time that AI researchers aren't necessarily experts in quantum computing. And I am the worst person to answer that question. The reason why I bring it up is I just read an article about it and I have
Starting point is 01:08:00 no idea what it is. They were, someone was describing that actually quantum computing is going to be wildly preferable to large language models. And I was unable to understand the difference.
Starting point is 01:08:17 Is there, knowing that you're not experts in this, is there a, sort of remedial version of what the difference might be. Paul, do you have any idea about this? Yeah, I think so. So we are seeing some progress in quantum computing. I don't think it's going to change this picture and AI for a couple reasons. Okay. One, the type quantum computing will become valuable over time for like some very niche kinds of computation, but not necessarily everything. And I don't think what large language models or other large, large, no, network. works are doing today.
Starting point is 01:08:53 It's also like the case that we're just, we're not seeing in quantum computing this kind of really rapid exponential growth that we're seeing in AI. So right now, the price performance, the performance per dollar of AI chips, is doubling about every two years. It's like really grown very, very quickly. That's not true. That's the productivity of it. That's like the efficiency of it.
Starting point is 01:09:16 Okay. Right. So that's really powerful. That's what's allowing this massive growth in it. That's one of the factors. Data and better algorithms are factor two. We're not seeing that kind of exponential growth in quantum computing. It's really hard science.
Starting point is 01:09:30 It's like difficult physics. It's much more traditional science. People are making incremental gains. I think we're going to continue to see progress. But I'm a skeptic that we're going to see this like transformative leap ahead in quantum computing in, say, the next five, ten years, the way that we're seeing with AI right now. So in summation, the drama that we're seeing between Anthropic and OpenAI,
Starting point is 01:09:54 that's really the soap opera story. And there's not necessarily a lot of there there. It's the general competition between these companies that are going to try and establish primacy in the realm of AI models. Military application is just one element of the revenue streams that they're pulling in there. The real thing, where you guys are really looking, is that interface between who we are going to end up trusting more: the humans that are developing the AI models, the humans that are running and integrating the AI models, or the models themselves.
Starting point is 01:10:40 Would that be kind of where the real tension is going to play out? I mean, I think it's fair, but I would just add that it's not going to be only one, you know, it's not only just going to be safety through the technical stack or only safety through the law or safety through regulation policy, right? It is truly going to be an all-of-society effort. And in part because AI, again, is general purpose, and it can be used across a variety of applications. So a one-size-fits-all approach to safety is probably not going to work. Is it akin to the battle against climate change? And if so, given that we haven't done a great job there, does that give us a pathway not to follow?
Starting point is 01:11:21 I mean, I think any pathway towards AI governance is going to be through cooperation. And I don't want to be overly cynical here. And so I'll try and draw on a positive, a positive example. No, go full, go full cynical. I'm going to give you one positive example, just one. I think I've been, there's plenty of cynical. Come on, Sarah. So under the previous administration, they launched the Political Declaration on Military Use of AI and Autonomy. And that was a voluntary declaration with principles and norms, and around 60 countries signed on to it. And in that declaration, it really centered international humanitarian law and also civilian protection. Those conversations can resume.
Starting point is 01:12:19 Those diplomatic conversations can resume. Really, what's stopping it right now is political will. And that process can, in fact, happen alongside the existing UN processes as well. So there isn't really a way out of this that doesn't involve talking a lot to other people. But there is something there to build on. Is the cynical version of that that international norms and rules seem to be in disfavor with the current sort of, I guess what you would call, large-power politics that seem to be playing out? Would that have been, is that your downside?
Starting point is 01:13:01 Yeah, I mean, I think that's probably fair. We're dancing around lots of things. It is. But, you know, at the same time, people can continue to demand this through Congress. We mentioned Congress earlier. I see a role here potentially, right? If they want, if they want to do something. I'm counting on your students, doctor. I'm counting on your students at Berkeley to be able to come up with a, yeah, a way through it. Paul, what keeps you up at night, and give us a nice balance between cynicism and optimism on the way forward that you see? Yeah, look, I think the reality is
Starting point is 01:13:41 this technology is going to bring to us a lot of challenges. How is it used by them? military? What are some of the risks in cybersecurity? We'd like a little bit about the risks of AI empowering biological weapons. There's a lot of risks of the technology. And that's just in like the sort of national security space, not to mention things like job dislocation. I think my takeaway from this fight between Anthropic and the Pentagon is that these decisions are too important to be left up to any one of these entities on their own, right? For-profit companies or the government deciding on a zone. I think like, we all have a stake in this world that we're living in, not just on some
Starting point is 01:14:21 of the civilian uses, but even military ones. All right. So, okay, we're not the ones building the killer robots, but if people build them, we're going to live in that world. You know, we do have a stake in what that looks like. And so there's, there's, you know, democratically elected representatives, all of us, your listeners, you know, have a role to play in weighing in on this debate. And if there's a silver lining of sort of this controversy we've seen in the last couple weeks, it's that what would have been a private conversation is now happening publicly, kind of messy. A lot of personalities involved on all sides. But it's airing this issue.
Starting point is 01:14:54 And then we're all sort of debating. What should the red lines be here? Hold on a second. That's a good conversation to have. And I'm encouraged that we're having that discussion. Fantastic. Guys, thank you so much for joining us on this. Thank you for having me in.
Starting point is 01:15:09 Thank you. Thanks for the discussion. It's been great. Should I, did I take the wrong, should I not be calmer? Yeah, my hair is still on fire. Still on fire. Sorry to say. It did not calm me. Did it help at all that, because they were still putting it through a process, that they still wanted to filter the problem of AI through international cooperation or legislative process or government incentives for that, rather than saying, look, we're at one second to doomsday,
Starting point is 01:15:54 somebody's got to step in. I think I was kind of calmed by the idea that, like, we have these models for, like, other sort of disarmament that have worked, like what you said about, like, nuclear weapons, like the nuclear arms deals, but also, like, you know, the Iran deal where we started. The one he used was biological weapons and chemical weapons. Yeah, yeah, exactly. Like I think that that was encouraging to think like we have these frameworks that we could look at as models and like this isn't totally uncharted territory. And then I think I'm just reminded that we're not doing that. So that's where the nerves come back in. I also think freaking
Starting point is 01:16:33 people out too much is not conducive to getting them to act as we've seen with climate change. I think it's really hampered people's ability to organize. So I did appreciate. that. I also really appreciated, this is just a personal thing, but over the weekend I did notice a lot of people framing Anthropic as the good guys, which I thought was really odd considering all the reporting coming out about these neuron strikes, about the Maduro capture. That's already been used. Yeah. And I really appreciated just that we had someone who's worked at one of these companies like breaking down that it's not a binary, that there's so many considerations for these people to make. And as you've said, they're not, you know, perfect actors. Everyone makes mistakes. The technology
Starting point is 01:17:12 itself makes mistakes. So I just appreciated that nuance. I also like that what they talked about was, you know, in terms of the usage, it really is in some ways a kind of cousin of the way that we use it, in that it's there, it's just collating data more quickly and spitting out those pleasantly formatted, you know. Yeah. That did not make me feel better. Here's five great places you could bomb. But Jon, how do you use AI? Oh, like, I'll go in to AI and be like, okay, I want to find the best, like, who's got the best pizza in blah, blah, blah. Like, generally I use it for those types of recreational things, like, I want to try this sport, you know, what's the stuff I might need, how hard would it be to get into it, like that sort of shit. And it's effective, you know. Here's five places you could go to get started with, you know, paddle tennis, that kind of thing. And then the government asks what's the best pizza and then bombs those places. And I don't know if that makes me feel better, you know.
Starting point is 01:18:19 No, but here's, so here's why, though, here's what I'm going to say. So in the same way that I look at autonomous cars as like dystopian, almost everything I've read about it is that it would make it safer. That human error is actually at a higher fraction than the other. Now, obviously, letting it just make decisions on its own without any kind of interaction makes me uncomfortable. But I guess the point is like, how great are we actually? Not driving. Not good. Because we bomb shit randomly before computers ever happened.
Starting point is 01:18:58 Like, what was our track record on bombing? Like, not so fucking great. Like, we dropped two atomic weapons on Japan with the computer. do worse than that? Like that's my only point is like, are we elevating humanity to a higher status than we've earned? I think that the issue is that it makes doing these things so much faster. So maybe it would have dropped five atomic bombs on Japan. I don't know.
Starting point is 01:19:27 You know, like, but if we were to look at the charts, that seems to be the way that it would go. Right. Also in the Waymo case, there was reporting recently that people in the Philippines were intervening. you know, like we're just not there yet. Oh, really? Okay. Yeah, I didn't know that. Yeah, I'm assuming that. I guess what I was saying is sometimes in the battle between man and machine, we tend to look at man a little bit more favorably than maybe man has earned. But I absolutely get that. And again, to that, to that point, one of my biggest fears about AI continues to be what appear to be the pathological
Starting point is 01:20:08 personalities of the people that run those companies. Same. Yeah. I was thinking about that in terms of the attitudes and the personalities of these chatbots when you were talking about that in the conversation. And just remembering, like six months ago, though, Grok, or whatever company, you know, is above Grok, for Elon. Elon.
Starting point is 01:20:30 Yeah, yeah. Yeah. Made a contract with the government for like 42 cents for like a year and a half. they could integrate GROC into government. And apparently there's like posters around DOD with HECSeth's, you know, AI generated mug saying we want you to use AI. Like they really want to get government hooked on, you know, their product. And I just imagine like someone in government being like to Jillian's point a little bit,
Starting point is 01:20:58 like, okay, there's flooding in Texas. What do we do? And they're like, well, Hitler is the best person to deal with this, you know? You're thinking that they contracted with MechaHitler as opposed to just normal Grok. No, you're right. And those guys manipulate algorithms and they are ideologues. A lot of them are transhumanists. They are leading us down a path that is not favorable, I think.
Starting point is 01:21:24 Yeah, when Sarah said, I don't know the personality of Maven. Like, my stomach dropped. I was like, oh, my God. We can't be talking about the personality of weapons. That's so dark. Before when she talked about Sam Altman's heart and mind, I was like, does he have either of those things? Right. Hearts or minds. But it is like, I don't know the personality of the Palantir-generated war autonomy system. Some wild shit, man, and not going away. But I loved how measured they were and I loved how they sort of helped us through there. Brittany, what do the people have for us this week? Sure. John, we're still going to get Greenland, right?
Starting point is 01:22:03 I think we already have it. We've already won. Like everything else in the Trump administration, we've already, not only it's like with the Iran war, we've won and we're doing more. It's we are Schrodinger's country. We exist in all different. We have Greenland and don't have it at the same time, but they respect our unique and, you know, unparalleled power. And so absolutely we have it and don't have it. and could do whatever we want with it and won't because of our largesse and I don't know. It's like how the Iran war is almost complete, but also could go on for as long as it takes. We live in this middle space, yeah.
Starting point is 01:22:48 Almost complete and never done. Yeah. And we are going to only stop at unconditional surrender, and we've already stopped. We are Schrodinger's country, and it is only the beholder that determines where we are, are on the existence plane. We obliterated the nuclear program, but they're one day away from it, you know. Hard to keep up, guys. Very hard to keep up. Is that it for them? One more. One more. John, why does everyone ask you where to get pizza? Because I am considered one of the world's leading, and this is recognized around the world, any of the larger pizza conglomerates, the pizzas,
Starting point is 01:23:29 They recognize. You know what? I think because of that rant I did on deep dish pizza in Chicago, I think that's the only, oh, and we did something on Trump, eating it with a knife and fork. And so those two things, you know, there is no real accreditation other than the port noise rating system for pizza. So oftentimes non-experts are elevated to that position.
Starting point is 01:23:56 I mean, little do they know. You're just asking the AI. I was about to say you reveals earlier. Can I tell you the truth? Like, my world there is so small. I go to Joe's on Carmines if I want to slice, and I go to Johns on Bleaker if I want a pie. And that's kind of my, like, as you guys know me,
Starting point is 01:24:15 my world is small. I am not. I am not a man who is out there. It's the same clothes. I have eaten, I shouldn't even be telling you guys this. I eat the same lunch every day when I, go in to work at The Daily Show, and I've done it since I've been back. The exact same lunch.
Starting point is 01:24:34 Well, what is it? I'm embarrassed to say. Oh, no, come on. You have to. What is it like girl lunch? What's a girl lunch? It's a girl dinner. Just like little bits of everything.
Starting point is 01:24:47 I'm just very curious, trying to prompt you. Yeah. We are not ending the podcast until you tell us. That's what a girl lunch is, is little bits of everything. All right. Yeah, you don't have to really cook. Yeah, no, I order, I don't make it. Let me just be very clear when I go to work at the daily show. I don't cook. I call out and I get a bean and cheese toastata. Okay. We have to stop talking about lunch during these recordings.
Starting point is 01:25:18 With all that setup, I know that that was a bit of a letdown in terms of, I should probably be more particular. Like, I get every day the same thing. thing, a quarter of a lime spritz lightly on steamed cod. It's a bean and cheese tastata. And the only difference is it comes with jalapeno, and I generally say no jalapeno. And I've done it every time for three years. It's very Jennifer Aniston of you. Is it really? Does she get a test, she's a tistada lady? Well, not tisada. Does she look like a tistada lady? Just look like a carb lady.
Starting point is 01:25:57 We're doing friends. She would eat this same lunch every day. Oh, is that true? Now, what did she get? It was like a chef salad. I actually know exactly what it is, and I'm not going to repeat it because I don't want to look a crazy place. I am the Jennifer Aniston of Late Night.
Starting point is 01:26:10 I think people have always. But you guys know that about it. On my 50th birthday, the Daily Show bought me. We had, you know, one of those staff, you know, all hands meetings down in the studio. And they had a box sitting on a table. And I opened the box. And I pulled out. it was a T-shirt, a long John's shirt, khaki pants, hiking boots, and it was exactly what I was
Starting point is 01:26:34 wearing that day. And I was flattered and humiliated all in the same moment. But I am a creature of very lame habits. But I hope, man, what information they got today. Very, very, very nice. A lovely program. thrilling and chilling and nerve-wracking and all those different things. Brittany, how do they keep in touch with us?
Starting point is 01:27:01 Twitter, we are a weekly show pod, Instagram, threads, TikTok, Blue Sky. We are weekly show podcast. And you can like, subscribe and comment on our YouTube channel, the weekly show with John Stewart. Beautiful. As always, guys, thank you guys so much for the incredible preparation you did on this episode. Lead producer Lauren Walker, producer Brittany Mehmedevic, producer Jillian Spear, video editor and engineer Rob Vatolo, audio editor and engineer Nicole Boyce
Starting point is 01:27:23 and our executive producers, Chris McShane and Katie Gray. We will see you next time. The weekly show with John Stewart is a Comedy Central podcast. It's produced by Paramount Audio and Bus Boy Productions.
