Tech Won't Save Us - Why Iran is Attacking Data Centers w/ Sam Biddle

Episode Date: April 2, 2026

Paris Marx is joined by Sam Biddle to discuss what it means for data centers to become targets in a war, and how Silicon Valley is aiding the US war against Iran. Sam Biddle is a technology journalist at The Intercept. Tech Won’t Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Support the show on Patreon. Take your personal data back with Incogni! Use code SAVEUS at the link below and get 60% off an annual plan: https://incogni.com/saveus The podcast is made in partnership with The Nation. Production is by Kyla Hewson. Send your questions to mailbag [at] techwontsave [dot] us! Also mentioned in this episode: Sam wrote about Iran’s attacks on data centers and their legality. Here is Sam’s most recent piece about Palantir and NYC public hospitals. The Intercept is also covering the role of social media in the US-Iranian war.

Transcript
Starting point is 00:00:00 And so to switch from ChatGPT to Claude, I think, accomplished nothing and signaled very little, but it gave people the, at least the approximation of some sort of political act. But, I mean, it's like switching your business from Lockheed to Raytheon, right? Like, I don't, there's not that much difference at the end of the day. Hello and welcome to Tech Won't Save Us made in partnership with The Nation magazine. I'm your host, Paris Marx, and I have a great guest for you today, but first, there's something else we need to talk about. In April, this month, Tech Won't Save Us turns six years old.
Starting point is 00:00:54 It's so hard to believe that I've been doing the show since April of 2020, back in the early days of the pandemic, and now, you know, the show has reached, I don't know, tens of thousands of listeners every single week. Who knows how many that is cumulatively? It's in the millions. I should have looked up the numbers, I guess. I've done more than 320 episodes. That means so many interviews with fantastic, insightful people to help inform you about the tech industry, the impacts of this industry and its products on society.
Starting point is 00:01:27 And how we might think about doing something better, right? How we might think about challenging this and trying to imagine what a better world could look like and what better technology could look like as well. I feel like Tech Won't Save Us has achieved a lot over the years. And I have had so many positive messages from people about the show and about what it's meant to them and about how it has allowed them to share these episodes with people to try to get more people to think about technology in the way that we do. And so thank you so much for listening for all of this time. It's still hard to believe that it has been six years of making this show. But it wouldn't be possible without all of you, the fantastic listeners. And I know that this year, you know, the past year I feel like has been a bit slower for Tech Won't Save Us than it usually would be.
Starting point is 00:02:15 We weren't able to do a special dedicated series in the fall, which was my plan, because I just ended up being so busy working on my next book, which comes out in October. Stay tuned. There will be more information on that soon. But luckily, the work on the book is finally winding down. And that means more time, you know, more attention for Tech Won't Save Us. And there is a lot I want to do with the show in 2026, some things that we've already started to do, you know, some of the interviews that I've had already this year to give an indication of the type of things that I want to do with the show this year. But there's more to that as well. And I need your support to do it. So with the show turning six this month, I'm hoping that we can get 100 new supporters over on patreon.com slash tech won't save us. If you are not a supporter of the show already and you really enjoy these interviews, you feel like you learn a lot from them, I would ask you to consider, you know, putting the
Starting point is 00:03:05 show on pause and heading over to become a supporter now to help support this work so I can continue to do it and so that we can increase our ambition and our scope heading into our sixth year of doing the show. I am working on a new series now that I'm tentatively calling the internet deception, but maybe I'll change that by the time it actually comes out. And we're aiming to have that out in May. And this will really be a series about the privatization of the internet, you know, how it set us on the path that we have arrived at now, what we misunderstood about it at the time, you know, with some of the narratives around what the internet was and what it could be.
Starting point is 00:03:41 And how, you know, recognizing that there was a problem with how we misunderstood what happened with the internet back then and how it has developed since, how that should force us to change how we see the internet and how we approach the internet in the present, right? Because if the history is wrong, right, if the foundations of our understandings and our narratives around the internet were wrong, then that forces us to change some other things as well. And that means approaching policy through a different lens too. And so I hope that is going to be very informative
Starting point is 00:04:12 for people. As I said, we're planning to get that out in May. And people who are Patreon supporters will get full access to the series before anybody else. And then after the series finishes airing, they will also get access to all of the interviews that I did for the series as premium episodes. So if you want to get access to that, it's a great time to become a supporter to, you know, support us to make the series, but then to get access to that additional content that is going to come out around it. I will also be recording a mailbag episode soon, which I know I said I was going to record like a couple months ago, but then edits on the book just became a bit too much. And unfortunately, that had to get pushed off. So if you do want to, you know, contribute a question before I record it, you can certainly send one to mailbag at techwontsave dot us, and I will get that.
Starting point is 00:05:01 We'll put it in the show notes if you want to find the email address there. And so then you can ask a question and I will answer it. And we're certainly hoping to do more kind of premium content through this year as well. So stay tuned for more information about that. Of course, the show has also gone video. You know, if you didn't notice last week, we are now doing video episodes on YouTube and Spotify. I believe Apple is going to offer that at some point as well. You know, I wasn't super enthused about video, but that
Starting point is 00:05:28 is the way that things are going. And so Tech Won't Save Us is doing it as well. So if you really want to see me interviewing someone else, you can do that through the video platforms. But if you like audio, you can keep listening on audio too. So that is a lot. I know. But the key points are thank you for supporting the show, for listening to the show. And if you're able to do so, I would ask you to go over to patreon.com slash tech won't save us, become a supporter today. So you can support the great work that we continue to do, and that you hopefully enjoy every single week, and so that we can continue to do the types of things that we want to do this year and into the following years with your support, because listener support is essential to what I do with Tech Won't Save Us and to give you this quality of critical analysis of the tech industry. And of course, if you do become a supporter, you will get access to our backlog of premium and ad-free episodes.
Starting point is 00:06:19 You will be able to join our Discord and join in the discussions that happen there. You'll be able to get stickers if you support at a certain level or above. So thanks again and go over to patreon.com slash tech won't save us to help us hit our goal of 100 new supporters this month to celebrate the sixth anniversary of Tech Won't Save Us. Now with that said, on to this week's episode. My guest is Sam Biddle. He's a technology journalist at The Intercept. And if you listened to last week's episode with Spencer Ackerman, where we talked a lot about kind of the history of what got us to this point where the United States and Israel are at war with Iran, this is really a follow-up to that conversation where we dig much more into the consequences of this, right, with a particular focus on the bombing of Amazon data centers in Bahrain and the United Arab Emirates. And what that really means, right? What it means now that data centers can be targets in a war, recognizing that, you know, obviously AI tools and the cloud are used for targeting and other military applications. So it seems pretty natural
Starting point is 00:07:22 that things would move in that direction. But even still, it seems pretty shocking, right, to imagine a data center actually getting bombed in a war and what that means for the future of these infrastructures, you know, how we see them, but also what it means for the collaboration between Silicon Valley and the military that we have been seeing escalate for quite some time now, but now has reached this point where it's quite undeniable with the Iran War and, you know, with the other wars that we have been seeing in recent years, not to mention the genocide in Gaza, but really, I think that this is a fantastic conversation with Sam to get into the wider repercussions of this, the wider ramifications of what this all means. And Sam is just the
Starting point is 00:08:02 perfect, most insightful guest to basically discuss this with us. So I think you're really going to like this episode. I think it's a great way to start off this month of the show's sixth anniversary. And so I think you're really going to like it. If you do, again, head over to patreon.com slash tech won't save us, become a supporter this month to help us hit our goal of 100 new supporters. I would really appreciate it. And it helps me to continue doing these great interviews every week that provide you the critical perspective that you want and that people need about the tech industry. So thanks so much and enjoy this week's conversation. Sam, welcome back to Tech Won't Save Us. Thanks so much. It's great to be here. Absolutely. It's always great to talk to you.
Starting point is 00:08:42 I'm always reading your stuff at The Intercept. And it's always, you know, fantastic to talk to you about what you have been working on. And, you know, of course, we're all very focused on this war between the U.S. and Israel and Iran at the moment. You know, obviously it's in the news every day. It's hard to know exactly what's happening. And just to be clear, we're recording this on March 25th. So there will surely be changes and developments by the time that this gets published. But, you know, obviously now this war has been going on for a number of weeks.
Starting point is 00:09:14 And before we dig into, you know, like the core topic that I wanted to discuss with you in this episode, I was just wondering if there's, you know, anything that really stands out to you as you are watching this war progress, you know, whether that's specifically on the use of tech or what we've seen technologically or, you know, just more generally about what this says about kind of like geopolitics in the world that we're living in right now. So yeah, what stands out to you about what we're seeing? I mean, I think what's been most striking to me is, as you mentioned, how little we know about a war that is so thoroughly documented, both by news media and people on the ground. And it's been weeks, as you
Starting point is 00:09:52 also noted. And, you know, the answer to the question of how is the war going for the U.S. and Israel is, I mean, completely obscure. I mean, certainly not well in that it's still going. I think it's pretty clear this was not the, well, I don't want to say the plan because I don't think there really was much of a plan, but this was not the hope of the administration. It was certainly not a Venezuela 2.0 situation. No, no. They didn't, you know, sweep in with shock and awe and then get right back out and, you know, the world, most of the world sort of moves on as was the case with Venezuela. I mean, to point out the dishonesty of the Trump administration, I mean, what's even the point at this point? But the effect of the president and
Starting point is 00:10:39 commander in chief, ostensibly the person in charge of the entire thing being, you know, probably literally the least credible source on that war. I mean, not that a president is ever going to tell hard truths about a war, but it's just a complete unreality. You know, it does a number on you, I think. And, you know, again, not, you know, to say nothing of the people on the ground in Iran who are suffering the brunt of all of this violence and destruction. But it's very, very hard to be an informed citizen right now. I mean, and that's, it's in a way that I think, at least I have not seen before. I mean, I was, I'm old enough to remember very well post-9-11 wars in the Middle East. And as much as those went down the rabbit hole of, you know, dishonesty and propaganda and so forth,
Starting point is 00:11:34 I don't recall it being quite this muddled. Yeah, it's also wild, you know, speaking of, like, the information environment, to see the way that generative AI has become, like, a key part of the information war, you know, when it comes to these like modern wars and modern military activities that the United States and these other countries are involved in. Obviously, you have, you know, the White House itself kind of producing these, you know, AI-generated, you know, basically propaganda, as you say, in order to justify what it's doing and make it look like it's doing things very well. But then you also see Iran producing its own kind of, like, you know, AI-generated counter-propaganda. It's, I don't know. It's terrible. I mean,
Starting point is 00:12:17 I've also been struck by just how, like, low-effort a lot of the American propaganda is. I mean, there's someone at the Pentagon who clearly, or maybe Hegseth, I don't know, loves these just, like, highlight reel kind of videos of, like, basically explosions set to, like, shitty music, you know, with scenes from, like, Halo and stuff spliced in. Like, it's the kind of thing that, like, I would have probably thought was really cool when I was, you know, like a ninth grader,
Starting point is 00:12:54 you know, like a child. But this, you know, sort of like celebration of violence, it isn't even really making an argument. It's just like, I mean, the takeaway or, you know, the ethos is just like, fuck yeah, you know, like, it's not even really propagandizing. It's just, it's like a marketing clip. But I think the most effective propaganda has been the not-AI-generated, you know, footage of Iranian missiles streaming over the sky in Tel Aviv or, you know, elsewhere. I mean, I think that you cannot manufacture or fake or generate something that's as
Starting point is 00:13:27 compelling as, you know, real footage filmed by some person of, you know, this country we were told would fold very quickly, continuing to fight and drag this war out. I mean, I think that's probably up there in terms of effective messaging. Yeah, absolutely. I think you're completely right, you know, and it has been, you know, kind of shocking to see the degree to which this has been able to continue playing out, you know, and how resilient the Iranians have been, which, you know, maybe on the one hand shouldn't be a surprise. But then when you consider, especially, the way that the United States and Israel kind of present themselves and, you know, how they often kind of, you know, are able to bomb these places and get them to, you know, kind of get in line with what they want and then kind of move on, or at least that's the doctrine that we have seen them want to present in recent years. Yeah, it is kind of wild to see the way that they seem to have really miscalculated in the case of Iran and really not understood the degree to which, you know, the state could really, you know, keep active and, you know, keep them at bay in a sense. Yeah, well, you know, when Trump, you know, you have the president of the United States saying,
Starting point is 00:14:39 I think, on multiple occasions, including right after the war started, that the war is over, that it's been won, that we've crushed the regime, et cetera. And, you know, that's obviously not true. But I think the most effective counterargument to that lie is, again, footage of a ballistic missile, you know, entering the atmosphere over a city. I mean, that is as, I think, effective a refutation as you could get of any kind of claim that this war is, has been, is even in the neighborhood of over. Yeah. You know, at least he didn't appear on an aircraft carrier with a big Mission Accomplished banner. Not yet.
Starting point is 00:15:19 Maybe we'll get there. Maybe we'll get there. But you mentioned the drones and the bombs and everything like that, right? And there are many different aspects of this war that we could talk about, you know, and that are very much related to your reporting. But there was one element of this that obviously really stood out to me as someone who is very interested in the tech dimension, but, you know, has also been following data centers and that kind of story very closely recently.
Starting point is 00:15:44 And that was that, early on in this war, Iran actually targeted Amazon data centers in the UAE and Bahrain. Can you tell us a bit about what happened there? Yeah, so it was in the first day or two. I believe the first strike was March 1st. I believe three, if I'm recalling correctly, three Amazon AWS cloud facilities were hit by Iranian drones in Bahrain and the UAE, and the damage was sufficient to majorly disrupt these cloud regions. And to the extent that Amazon actually told customers, relocate your cloud workloads to another region elsewhere in the world, which is, you know, the whole point of the cloud
Starting point is 00:16:28 is that it's redundant and resilient. And you don't have to think about things like geography, right? It's supposed to just work. Yeah, I mean, Amazon said that there were fires at the facilities. You know, there's obviously fire control mechanisms built in that then flooded, you know, these things with water. If you're talking about tightly, tightly packed computers in a giant warehouse, fire and water are a pretty major problem.
Starting point is 00:16:56 And so, yeah, I mean, it disrupted in a pretty major way one of the flagship technologies of one of the flagship companies of the American economy. Whether or not there was a military objective accomplished there, you know, you want to talk about propaganda, being able to even temporarily shut down Amazon Cloud and stuff is a pretty big deal. No, definitely, right? And especially where data centers have very much been in the public focus, especially in the past few years, right? You know, as the generative AI boom has resulted in so many more of these infrastructures being built, and we're seeing this growing, you know, public backlash to, you know, the construction of these facilities in so many communities around
Starting point is 00:17:39 the world and certainly across the United States as well, seeing these targeted in this way was really, I don't know, again, maybe it shouldn't be surprising, but it was in a sense, because for so long you didn't think about the infrastructure that powered the cloud. Has Iran talked about what its motivation was for targeting these data centers specifically? So there was, there was messaging out of an Iranian sort of state-affiliated media outlet that, you know, I think you can sort of take as, you know, not the words of the state itself, but reflecting that rationale. And they just said they wanted to highlight the usage of American cloud providers and data centers as military infrastructure. I think, I mean, this is going to be speculating a little bit,
Starting point is 00:18:27 but I think maybe it was as much to draw attention to the fact that data centers have military use as it was to actually disrupt that use. I mean, I very much doubt that either the U.S. or Israeli militaries were directly using the data centers in question, but they are certainly using other Amazon data centers elsewhere and, in fact, very close by within Israel. Yeah, absolutely. And there has been a lot of attention even on that piece in recent years as well, especially as we've been seeing what Israel has been doing in Gaza,
Starting point is 00:19:08 the way it's been using technology there, and the way the cloud has been part of that. Can you talk to us a bit about how these cloud computing centers and AI more generally are used for military purposes and how we're seeing them being used in Iran specifically or in wars throughout the Middle East? Sure. I mean, I think that the most relevant and probably at this point
Starting point is 00:19:32 best documented, in terms of public reporting, use case is target generation. So we've seen in Gaza over the past few years, to devastating effect, the very fast pace and indiscriminate nature of the bombing there. And if you are a military like the IDF that has a, you know, basically insatiable appetite for things to destroy in the Gaza Strip, you don't just want to carpet bomb it from top to bottom, for, I mean, essentially, PR reasons. I think that generative AI allows you to very rapidly create a list of people and places that you can attack that at least have some bureaucratic, you know, plausibility according to the computer, right? I mean, not according to anyone's
Starting point is 00:20:28 actual judgment. But it gives you an official-looking list of things to blow up. And I think that if you want to blow shit up, what you really want is a computer that provides the sheen of intelligence and rationality, right? In, you know, in Gaza, it's been, I think, again, very well reported by outlets like The Guardian and 972 Magazine in particular, you know, the use of generative AI tools to, you know, suggest for bombing the homes of alleged militants, whatever that means, however many degrees removed from any actual militancy. And I think that that is maybe one of the most concerning and quite literally dangerous use
Starting point is 00:21:14 cases for AI, just generate a list. And you know what happens to the people and places on that list. Yeah. And I feel like you're getting at the two different angles of this, right? On the one hand, it is the use of the technology to generate these targets that can then be bombed, right? And you can generate a really large list in a very short amount of time. And, you know, whether a human checks that or not, I think it's fair to say
Starting point is 00:21:42 it's often not being checked. But it, you know, it gives you a long list of places that you can target that, you know, the computer thinks are associated with, you know, the enemy or whatnot, right? But then the other piece is like, okay, so there's the generation of all these targets. But then because the computer has generated them, it's like it starts to reduce the accountability for the humans that are then actually, you know, pulling the trigger on where to drop these bombs, right? And, yeah. I mean, not that there's very much accountability when it comes to air, you know, air war generally. I mean, you know, pre-AI, we were certainly still, I mean, you know, the U.S., where I live, was certainly killing civilians based on targeting that had been done by humans. There's scant accountability when it comes to that.
Starting point is 00:22:34 But yeah, I mean, even on paper, where you're supposed to have a system of legal accountability or congressional oversight of these things, the computer gives you a list of, you know, 100 things to blow up and, you know, 20 of them were wrong. Whose fault is it? I mean, the computer doesn't have personhood. It doesn't have moral agency. It doesn't have anything. I mean, it's just a computer, right? But then, you know, who you're supposed to blame, I think, becomes an open question, right?
Starting point is 00:23:04 I mean, is it whoever trained the model? Is it whoever queried the model? Is it whoever signed off on it? I mean, in a world where there is very, very little accountability, I think this further suppresses the possibility of holding anyone accountable. I mean, you know what I think about a lot is like, one use case that I could see being potentially useful in my own work when it comes to an LLM is like summarizing documents, right, which I think is a major use case in the military and intelligence worlds as well, right?
Starting point is 00:23:36 Like you have a huge pile of paperwork, dossiers, old satellite maps, you know, and you want to feed it all into the machine and get a summary and then, you know, plan operations based on that. The thing that's always stopped me from using it in my own life is, if there's even like a 5% chance that, you know, like 5% of the summary is going to be wrong, right? And I'm pulling that number out of nowhere, but we know based on how these things work that there's a good chance that some of it will be incorrect or hallucinated or whatever, then if I don't want to embarrass myself professionally,
Starting point is 00:24:12 I got to go through and read the whole thing anyway to check what was wrong. So I might as well just read it myself, right? Like it's not actually saving any time. And I think about that a lot in terms of, I mean, you brought up the question of like, is anyone signing off on the stuff or going through it to verify it? I can imagine that in the military and intelligence world, you have the same issue where like, if we're going back through to double-check the computer's work, it sort of defeats the purpose of using the computers. So I would be very suspicious of how much they are checking
Starting point is 00:24:43 Claude's homework on this kind of thing. Yeah, I know in the case of like the reporting that 972 Magazine was doing on what was happening in Israel, there was a suggestion that not very much of this was being checked by humans after it was being generated, right? And just to pull out another example based on what you're saying, obviously one of the first major stories to come out of the U.S.-Israeli war on Iran was, you know, when they bombed this girls' school, right? You know, in Iran, and more than 170 people were killed, most of them schoolgirls. And we had a number of stories after that seeking to justify or understand why that happened. And one of those stories was that it's possible that the AI target generation system, you know, misread a document or that this
Starting point is 00:25:30 facility was previously associated with the IRGC or something like that and has since not been associated, and it thought it was still. And it's like, you know, it just adds to this like complexity. And again, like the lack of accountability around it. Right. It's like, oh, well, maybe the computer just got it wrong and we're so sorry that this happened, like blah, blah, blah, right? Yeah. So I have not seen anything confirmed. All the talk I've seen around the use of AI in that, in that specific strike on the school has been speculative, right? I have not seen any kind of confirmation. I'm not saying any of that is concrete. Of course. And yeah, it's completely plausible, right?
Starting point is 00:26:04 I do worry that maybe the, like, jump to speculate, basically the jump to blame AI for something like that, obscures the fact that humans are perfectly capable of bombing schools on their own, or weddings or hospitals, right? Like, you do not need an advanced technology to kill civilians, even by accident, right? Like, even, let's, I don't often say this, let's give the U.S. military the benefit of the doubt here, right? And maybe this was genuinely an accident. I would argue a, you know, criminally negligent accident, but these things do happen based on recklessness and indifference to human life and carelessness. Historically, there are, you know, many cases predating generative AI or LLMs or whatever. So, you know, I get the appeal of connecting this new technology to something horrific in current events like that. But, yeah, just like I said, humans are perfectly capable of being indifferent to, you know, the facts on the ground and killing people over it.
Starting point is 00:27:15 So maybe it was based on, you know, an AI mistake. But, yeah, I think that requires more reporting and scrutiny. Absolutely. And just to add to what you're saying, it's like we see plenty of cases of that right now, whether that is the bombs that are being dropped in Tehran on oil facilities, like, you know, the infrastructure that regular people depend on, but also what we've been seeing over the past number of years in Gaza, in Lebanon, in Ukraine. You know, we can see that humans are going to do a lot of really terrible things to one another if they have the tools to do so, you know, and this kind of animosity is created and whatnot. So we don't need to say, we don't need to pin this all on the technology when we know that humans are going to do this. Unfortunately, if they have the ability, yeah. Yeah, I mean, so there was a report from the Washington Post in the first several days of the war
Starting point is 00:28:11 about the use of Claude, you know, Anthropic's LLM, which was being accessed through Palantir's Maven Smart System targeting platform, and just revealing that, you know, the Post reported that Claude was being used to plan air strikes in Iran. And what stood out to me was not, you know, just the sheer fact of using Claude for this purpose. But the Post report said that they were using it to speed, I mean, I'm paraphrasing here, but to basically speed up the targeting and execution process of these strikes.
Starting point is 00:28:49 And anytime you see that, I think alarm bells should go off. Because I think in Hegseth and Trump, you see a clear desire to just blow the shit out of Iran as much as possible, as fast as possible. And anytime you're accelerating killing, especially aerial warfare, air strikes, I think that really, really, really cranks up the likelihood of killing innocent people. And this, again, there are many examples of that pre-AI. I mean, like in the war on terror, there's been some excellent reporting around how JSOC really wanted to just, I think the term they used is like pick up the operational cadence, right? And that was, you know, it was pre-AI, but it was very much computer-enabled, you know, just like, you know, they wanted to just aggregate all the data, pull out
Starting point is 00:29:47 names, pull up places, and go kill people. And the speed was the name of the game. And so I think when you have a desire to, not just execute a war, but execute it rapidly, that is extremely dangerous. And I do think that, you know, whether AI was used in that school strike or not, a major function it provides a military is being able to, you know, hit the gas on killing. And, you know, that's something people should be extremely disturbed by. Definitely. It brings a whole new spin to move fast and break things. Yeah, no, absolutely. So long associated with them.
Starting point is 00:30:24 Right, right. Move fast and kill people is, I think, you know, the ethos of AI augmented warfare. And, you know, I do also, you know, I also don't want to say that this school strike, I'm not trying to say that it, like, wasn't done with AI. It's entirely possible. I just don't know. And I, you know, I don't want to say one way or the other. Certainly it's a question that people should be putting to both Anthropic and the Pentagon. I mean, I think we're owed an answer on that exact question.
Starting point is 00:30:57 Yeah, no, I completely agree. And, you know, I wouldn't give them too much what benefit of the doubt on that strike and try to explain it away with AI, especially when, you know, we've had the reporting now that shows it's a double-tap strike as well. So it's like, I don't know, I think they kind of knew what was happening if they're going that deep with it, right? It's inexcusable to matter what, like, there's no possible rationalization for killing a school. It doesn't matter what used to be there or what was next to it, right?
Starting point is 00:31:25 So I mean, I know that you agree with that. But the AI or no AI, it's a crime. So, you know, I think that's what's most important. Yeah, and maybe just to be clear to listeners, double tap strike is when they strike it once. And then once people kind of gather around to try to, you know, rescue people or whatnot, they hit it a second time. time, right? It's a tactic that has been used extensively by the Israeli Air Force. Yeah, absolutely. If you've ever thought, wow, that scam email feels weirdly personalized. That's not intuition. That's the data economy working exactly as intended. That's why I want to talk
Starting point is 00:32:03 about incogny. When your home address, phone number, email, and other personal details are sitting in searchable databases, it increases your exposure to fishing attacks, identity theft, and financial fraud. This isn't just about annoying spam. It's about real. risk. Incogni works to remove you broadly from the internet. They identify where your data appears and send legally binding removal requests under laws like GDPR and CCPA. Follow up until deletion is confirmed and continuously monitor so it doesn't quietly reappear later. And what I really like is their custom removals feature. If your information shows up at a specific site outside their automated system, their team will manually pursue that removal too. That's included with
Starting point is 00:32:41 the unlimited and family unlimited plans. And honestly, if you want the most comprehensive protection, especially if you're thinking about your household, that's the route I'd recommend. This is about shrinking your digital footprint and lowering your exposure to scams and identity theft and not pretending the problem doesn't exist. Right now, you can get 60% off an annual incogny plan. Go to incogny.com slash save us and use code save us. You can't dismantle surveillance capitalism overnight, but you can make yourself a less convenient data point. Again, that's incogny.com slash save us code save us for 60% off today.
Starting point is 00:33:12 You mentioned Claude there, and there's a whole conversation that we could have about what happened during the war on terror and the developments in technology then. And I think we might even have a previous episode on that. I'm forgetting that there's been so many episodes now that I forget. Let me link. I got to listen to that. But, you know, like even Palantir kind of comes out of that moment and things that were happening, right? As you've mentioned. But one of the big stories around this war in particular has been the Pentagon.
Starting point is 00:33:42 desire to have a generative AI partner, you know, for military purposes. And before the Iran War started, we basically had all of this reporting on weather, Anthropic, you know, which makes the Claude, you know, Chapot, LLM, was going to basically sign this deal with the Pentagon. And, you know, eventually they decided that there was a piece of that contract that they weren't going to go for, an Open AI signed a contract instead. And then, like, I think it was like the day after they signed the contract, they started bombing Iran or something. Like, it was a very bad look for Sam Altman at Open AI. And they had plenty of, you know, things to be feeling bad about.
Starting point is 00:34:22 I think it was literally later that day. Yeah, you're probably right. Yeah. Like, it was very not long after, right? But then, of course, we have these stories. And I will say that prompted a whole campaign like, you know, drop open AI, start using Anthropic and Claw. Instead, like a consumer campaign, quit GPT. I think it was called.
Starting point is 00:34:43 But then, of course, we started getting these stories about how the Pentagon was very reliant on Claude, actually, for this war. And as you're saying, for the target generation and things like that. So what are we seeing with the Pentagon's relationship to these AI companies and, you know, its use of different services and things like that in order to prosecute this war, but also just in general with the types of things that the military is doing? Yes. I mean, something that really stood up to me with the, the, the, the, the, the, the, the, the
Starting point is 00:35:11 You know, dust up and rivalry. And like you said, the sort of consumer response of like, you know, boycott, open AI. Let's all use clot. I think it sort of created this narrative around Anthropic that it's like Code Pink or something, that this was like people who were going to stand up to the Pentagon, which is just absurd. I mean, they are, they are a military contractor and a very eager one. I mean, the company put out a statement in the days following. the, I think it was right before the lawsuit they filed against the, about their designation as a supply chain risk.
Starting point is 00:35:50 But they put out a statement saying, and I'm paraphrasing here again, there's something like, we have more in common with the Department of War than differences. I mean, like, and that's in the midst of, you know, arguably an illegal war of aggression being perpetrated against Iran in which many, many innocent people are being injured, maimed, or killed. killed. So this is not like the Peasnick AI lab, right? Like they have, this is a disagreement over some very, very, very narrow specific use cases. The thing that Anthropics says they did not want to engage in, namely fully autonomous lethal weapon systems and mass domestic surveillance. Which, yeah, I mean, I suppose it's good for society that there was some conscientious objection to those applications, which clearly are not held by Open AI, but that still leaves a vast universe of death and destruction
Starting point is 00:36:50 that Anthropic is willing to assist in, right? They're not saying this, but it is, you know, implicitly, they are a company that is okay, helping a military that is killing civilians, right? So, yeah, I thought that was very funny. I mean, maybe funny is not the right word, But, you know, I think people are so, I think people feel really helpless, right? And so to switch from chat GBT to Claude, I think accomplished nothing and signaled very little.
Starting point is 00:37:22 But it gave people the, at least the approximation of some sort of political act. But, I mean, it's like switching your business from Lockheed to Raytheon, right? I don't, there's not that much difference at the end of the day. And, you know, I would not be surprised if this lawsuit prevails and Anthropics lawsuit against the Pentagon, to be clear, if they prevail on that and are right back where they started. I mean, again, the most important thing is this is a company that very much wants to be a defense contract, to be a military contractor. That's who they are. It's a real trip. I suggest, I recommend anyone who wants to feel really depressed and fucked up for a little while to read through Anthropics.
Starting point is 00:38:10 They have a like, clawed constitution and like a policy document of things you can't do. And there's so much talk in there about how important it is to Anthropic to avoid doing harm. And like it's, you know, it's absurd. Their flagship product is being used to plant airstrikes. I mean, to then talk in the same. breath about not doing harm is just again I mean it's it would be funny if it weren't so so so perverse you know totally because like the harm they're talking about it's like not creating the what the the aGI that might kill us or something rather than like you know the real harm that is
Starting point is 00:38:50 being caused I mean day and day out by the use of its tool right that's a whole other that's a whole other I'm sure I haven't I think you've probably done an episode of this but that's a whole other conversation about like yeah oh totally means in AI research or work world, right? And it is very much not about the common sense understanding of safety and harm, right? Like, yeah, I think you're probably right that if you ask them to defend that, they would say, oh, well, yeah, the harm we're talking about is the internet becomes sentient and starts building, you know, death robots or releases, you know, sarin gas into, I mean, right, I mean, like, far future sort of science fiction scenarios.
Starting point is 00:39:24 Meanwhile, they are actively harming people probably every day across Iran. Totally. And it's been, it's been fascinating, too, to see the evolution in that, too, you know, because a number of years ago, there was an explicit prohibition within, you know, open AI's rules or whatnot, you know, against military uses of its AI applications. And now it's like, they're the ones jumping on the Pentagon contract to try to do it, to try to get, you know, more use of this tool, to try to get a new, you know, big contract, big partner. And even though Claude has, has lost this contract. contract or, you know, was, they were not willing to agree to like one specific, you know, type of thing that they didn't like in, in the agreement, as plenty of reporting has showed, and as you've mentioned, it is being actively used to, you know, identify targets in Iran. And the reporting I've seen suggests that as much as the Pentagon might not want to use, like, the woke cloud or whatever, that is pretty impossible to get off of at this point because it's so, like, deeply rooted into the systems that they're using.
Starting point is 00:40:31 I think a lot of people depending on probably really love woke clawed, right? I mean, as evidence by the fact that they're using woke clawed, I mean, this was sort of a weird spat between, you know, Hegseth and his deputies. And, you know, I think, frankly, they just didn't like being told no or even have the suggestion of a no coming from a military contractor, which is, you know, I mean, the hypothetical that I've seen brought up a lot from people who are defending the Pentagon's course of action. here was, well, can you imagine if, like, Raytheon told the Pentagon how, like, when they could and couldn't drop bombs, which, like, yeah, fair enough is an absurd arrangement. And, and of course, Raytheon would do no such thing. But it's almost, like, taken, what's missing from this whole conversation is, like, Anthropic doesn't need to be a military contractor, right?
Starting point is 00:41:23 Like, they could be a consumer or research or, I mean, they could service every other part of the economy. and even government, right? Like, no one has forced them or coerced them to pursue Pentagon money. So I think that there is very much a sense in which they want to have it both ways. But I think the most important thing is that the purported and self-professed values of these companies are essentially meaningless. Like, you know, Google had a, you know, you can have a document that says, do no harm or don't be evil. but as soon as you want to do harm or be evil,
Starting point is 00:42:00 you just remove it from the document. Like they are, and like the fact that they're often called like, like Claude is called the Constitution, which like amending the U.S. constitution or any nation's constitution is like a really long, difficult process. I mean, in the U.S. it's nearly impossible. By design, the document is supposed to be not immutable,
Starting point is 00:42:20 but very, very, very hard to change, which is what gives it power. Whereas a tech company policy document, are constantly being changed. They don't mean anything. They are shifted according to the whims of executive leadership. So, I mean, I really do think that you should, anytime you're looking at one of these policy documents,
Starting point is 00:42:43 sort of acceptable use policy, and this goes for outside of AI stuff too, but they are not worth the pixels they are printed on. They are, I think, window dressing and sort of a public relations instrument and nothing else. If they want to change the rules, they'll just change them,
Starting point is 00:43:01 which is exactly what Open AI did. They removed the language from their permissible use policy, forbidding warfare, and then pursued all those contracts. So, you know, it's very, it's silly. Oh, definitely. It's not silly.
Starting point is 00:43:13 It's deadly serious, but it's the documents themselves are simply. Yeah, I get you. But I want to kind of broaden out what you're saying, right? Because we're talking about the AI companies specifically, you know, Anthropic, Open AI. But we started this conversation talking, talking about an Amazon data center, right?
Starting point is 00:43:30 And the targeting of this cloud computing infrastructure. And it's very clear that, you know, you were saying Anthropic could do everything but working with the military, right? And they have chosen not to do that and to pursue these contracts. And I feel like we're seeing that a lot from these tech companies going harder and harder at, you know, working with the military, working with the Pentagon, seeking out these contracts, whether that's for profit reasons or, you know, for broader political reasons, what do we see in the broader kind of tech industry turn toward the military,
Starting point is 00:44:03 and how does that contribute then to say an Amazon user, the center then being seen as a target in a war? I mean, I think the inflection point was Project Maven, which was the aborted Google deal to do aerial warfare targeting, basically to aggregate and analyze data that can. could then be used to plan and execute airstrikes. There was a pretty significant employee revolt over that. There was a lot of news media scrutiny.
Starting point is 00:44:37 There were protests. And Google walked away from it and said, look, you know, enough people within the company have said they do not want this to be the kind of, they don't want this to be the way we make money. So we're not going to do it. And there did seem like a time when opposition to militarism within Silicon Valley was sort of the prevailing mood. So Project Maven was launched in 2017.
Starting point is 00:45:03 By 2018, Google has abandoned it. So pretty rapid turnaround. And again, I mean, the organizing by workers at Google was very effective. I mean, they won. You know, that era is very much over. I think that companies like Amazon have real, and Google have realized that they can just plow forward, you know, fire employees who protest, get the money.
Starting point is 00:45:32 And also, I think the sort of worker sentiment has changed. I think you see more people joining these companies that say, no, I do want to help the military. You know, like I'm okay with building, helping build bombs and rockets and drones and stuff. And, you know, you see companies like Andrew and Palantir that explicitly recruit that way. You know, they say, come here if you want to build weapons. And they're able to get a lot of very, very talented engineers that way. You know, they have a lot of, they have a lot of strong talent pool there.
Starting point is 00:46:05 I mean, there are people who are willing to do that work. You know, in terms of Amazon, you know, Amazon then becomes one of the primary contractors for the JWCC, the joint war fighting cloud capability, which is just like a DOD-wide cloud computing contract. So, you know, Amazon is very much in the business of the U.S. military. You know, as it pertains to Israel, they're also one of the main contractors on Project Nimbus, which is the cloud computing platform that supports the whole Israeli military, but including the Ministry of Defense and the IDF and the Air Force and so forth.
Starting point is 00:46:45 Yeah, and there was a lot of controversy around that, I believe last year, when it was revealed that a lot of data that had been collected on Palestinians was stored actually in Microsoft servers in Europe, I believe in Netherlands in the end, right? And I know that Microsoft increased security around its data centers in response to that because it was worried about something happening, which certainly came to mind to me when I saw Iran actually target data centers, Amazon in this case, of course. Yeah, I mean, so the Amazon data centers that were targeted by Iran that you're meant that we talked about a little earlier. Again, unclear if there was any military disruption
Starting point is 00:47:24 there. All that was publicly reported was disruptions to things like banking apps and, you know, food deliveries, you know, consumer use cases. But Amazon and Google data centers within Israel are hosting Project NIMIS. They are hosting military workloads for the idea, and not just for the IDF, but for, I've reported the Israel Aerospace Industries and Rafis. which are the two main Israeli state-owned weapons manufacturers that have, you know, built the bombs that have been used to destroy so much of Gaza. I have to imagine that Iran would love to strike those, but obviously being within Israel, it's much, much harder to get through than it is in Bahrain.
Starting point is 00:48:08 But, you know, those would be certainly be squarely military targets. Yeah, and you mentioned before, or maybe didn't, but you mentioned in your piece that, you know, there was a news agency in Iran that effectively listed a bunch of data centers, dozens across the Middle East as potential targets, right? Many of them belonging to Amazon, Microsoft, these U.S. companies. Yeah. And, you know, the article that I wrote was sort of about the, you know, quote-unquote, legality under international humanitarian law or, you know,
Starting point is 00:48:37 what's referred to as the laws of armed conflict. You know, that was something I was interested in is, you know, according to these laws. And I want to use the word law. is loosely here because what we refer to as international humanitarian law is just like a patchwork of different treaties that countries have either ratified or not. In the case of the U.S. and Israel and Iran, actually, they have not ratified some of the most relevant frameworks.
Starting point is 00:49:05 So, you know, war is basically lawless, right? The countries do what they can get away with. But according to the letter of the law, a data center that carries military work loads on behalf of a, you know, a state military could be, you know, could be a legitimate target for bombing or for a drone attack or, or what have you. And so, you know, I think that particularly the Nimbus data centers in Israel, I think, I imagine, have a huge target on on their, on their roofs. Yeah, definitely. There's one other piece of this that I wanted to get to with you before we start to wrap up our conversation. And that is really, you know, you see these
Starting point is 00:49:44 data centers in Bahrain and the UAE getting targeted by Iran. And it's hard not to think about the fact that the Gulf has been trying to reposition itself for a number of years now as a major tech player and more recently a major AI player, right? You know, Trump did this big, you know, tour through the Middle East last year and secured a bunch of commitments, not just from U.S. companies investing in data centers in Gulf countries like Saudi Arabia and the UAE, but also a lot of of investment then coming back to the United States for the tech sector. Totally. Yeah.
Starting point is 00:50:20 They have a lot of infrastructure to rebuild now as a result of a US war, right? But I wonder what you think that, you know, maybe this war, but even just these attacks on data centers in the region mean for this ambition to become, you know, a major kind of tech region, a major tech player on behalf of the Gulf monarchies. Well, it's certainly shown how vulnerable they are. and will be for the foreseeable future. I mean, like I mentioned before, the actual computer hardware in question here
Starting point is 00:50:53 is very delicate and sensitive. I mean, even a small drone carrying a relatively small payload, if you set off the sprinklers in the data center, right? Like, your toast, a small fire could shut down a data center on its own, you know, an electrical outage. Right?
Starting point is 00:51:14 I mean, these things are vulnerable. They're sensitive. They're delicate. And I think that, well, I think obviously the countries that we're talking about here are sort of envisioning a kind of like post-oil economy, right? Like how, you know, I mean, this sounds like a TED talk, but like, you know, data is the new oil, right? Like maybe it makes sense to gradually, you know, transition from, you know, digging wells
Starting point is 00:51:40 to building these data centers. I mean, I can imagine that being in like someone's PowerPoint. It would also give you a lot of, you know, it's a way of linking, further linking your economy with the West, you know, with the U.S. and getting yourself in the good graces of, you know, the White House and, you know, U.S. American business leaders. But yeah, I mean, I think it reveals that this is not going to be an easy thing if there's a war. You know, if there is a, anytime there's a war in the Middle East, which is certainly not going to be a possibility that goes away. time, probably within our lifetimes, these are, these I think have been revealed as very soft
Starting point is 00:52:17 targets. And that gives a lot of leverage to a combatant on either side, right? Like, it's hard enough to protect military infrastructure, you know, like a, to protect like a base, right, or oil tankers. Now you have to worry about these massive, massive, massive buildings that are just literally giant targets. I think it has, you know, that was maybe not an aspect of this that was thought through entirely. On the other hand, this probably opens up a huge whole new economy of data center-based counter-drone weapon systems. I mean, I have to imagine there will be a feeding frenzy around, you know, buying countermeasures
Starting point is 00:52:56 for that kind of thing. But it's certainly complicated the plans of all those Gulf countries, yeah. Yeah, my God, I hadn't even considered the new market that would create, but I think you're absolutely right. There's going to be, I mean, the pitches are probably happening right now. all around the world, you know. Oh, I have no doubt. Like, you know, even, even you think it's not related to data centers,
Starting point is 00:53:18 but so quickly after the war happened and after Iran started bombing, you know, Gulf countries and infrastructure and things there, you know, Ukraine was there right away to start offering its, you know, kind of anti-drone technology that it has developed in recent years against Russia, right? Yeah, I mean, I, I, securing something so large. is, I think, I don't know, I don't want to say impossible, but very, very, very difficult. And anytime there's a very, very difficult military engineering problem, there's a lot of money to be made in at least in at least trying to solve it.
Starting point is 00:53:55 So, yeah, I mean, there are already firms that advertise sort of data center defense systems. But again, I think that is going to be a big, there will be further investment and things like that, I don't think it'll be sufficient, though. I mean, I think at the end of the day, if you really want to say one of these things on fire, you will probably find a way. Yeah, I completely agree with you, right?
Starting point is 00:54:20 You know, it's pretty easy to do something to one of these data centers to make it so that they are disrupted so that there's something happening. Maybe they'll start building them underground. You know, like, like, you know, like missile bases are, or, you know, there'll be some attempt to sort of harden them from air strikes, only adding to the colossal expense required to build one of them. I mean, they're already huge, huge undertakings. If you have to now guard against
Starting point is 00:54:45 aerial attack, that makes it even more of a pain in the ass. I mean, something I also referred to in my, the story I did about this was, you know, I have to imagine that this would be potentially concerning to people who live near data centers. You know, it's always very opaque as to which data centers are carrying which workloads, but, you know, Amazon, all the big hyperscalers have DOD-specific data centers that they operate that are, you know, exclusive to the Pentagon. I mean, Amazon, for instance, has dedicated DOD data centers in Virginia and in the Midwest. And, you know, I can imagine people feeling a little uneasy, living next to, again, a colossal, like literally enormous bombing target. And I think as time goes
Starting point is 00:55:37 on and these technologies get more and more deeply integrated into militaries, they only become more attractive targets. Yeah, it's like a separate issue than what I was thinking about when it was revealed that Microsoft was storing this data on Palestinians, right? In that case, I was like, okay, so these Microsoft data centers might become targets for, like, you know, protesters or individuals that want to, you know, bond them to try to stop them from aiding the genocide in Gaza, right? And I hadn't really considered the flip side of it where, you know,
Starting point is 00:56:14 it's not some individual doing it, but, you know, an actual state might be targeting these data centers because, you know, they're aiding in a military campaign against their territory. That was not really on my radar as something to even expect. I mean, in any like World War III scenario, North Virginia and D.C. are gone immediately. but I think that the concentration of data centers in Northern Virginia probably makes it one of the most bombable places in the world. I mean, that is like the beating heart of the American military, the computerized arm of the American military. And anyway, in the American economy.
Starting point is 00:57:00 But yeah, I have to imagine that would be one of the first things to get hit. Yeah, I would agree with you. And yeah, it's making me think about this in a whole different way now. Yeah, it's a fascinating subject, unfortunately, you know, regarding an absolutely devastating consequence for so many people, right? Whether it's we're looking at the people in Gaza or now the people in Iran, you know, who are on the other side of these missiles or bombs or whatnot whose locations are being determined by. these AI systems, right? And they are feeling the brunt of this. Certainly we can complain about gas prices or things like that,
Starting point is 00:57:41 but it's like, you know, that's the real consequence of what's happening. To be subjected to any of this is an unimaginable, you know, cruelty and nightmare. I literally cannot imagine it. But yes, I mean, to, I think the real horror is that it is being done at a, you know, as that Washington Post story, sort of nodded towards, it's being done at a deliberately accelerated clip. I don't think there's any way, well, I don't think there's any way to practice war in a humane matter, but I don't think there, I don't, I think that a fast pace and a concern for civilians are
Starting point is 00:58:20 mutually exclusive. I think caring, you know, looking out for, or trying to take precautions around killing innocent people cannot be done when you're trying to go as fast as possible. Yeah, especially when you're looking at some of the people who are prosecuting this war, right? They don't seem to be the type of people who are thinking very much about the collateral damage and the people on the other side of these bombs. No, certainly not, to say the least, yeah. Yeah, no, it's terrible.
Starting point is 00:58:48 But I really appreciate you coming on the show to talk about all of this with us, because I think it's a topic that we really need to be paying attention to. And unfortunately, it's one that we're probably just going to keep seeing evolving in the years to come. So yeah, keep up the fantastic work. Obviously, I highly recommend people read your work. Certainly read this piece, but the other work that you're doing. And thanks so much for coming back on the show. Happy to join you and happy to come back anytime.
Starting point is 00:59:13 Thank you. Sam Biddle is a technology journalist at The Intercept. Tech Won't Save Us is made in partnership with The Nation magazine and is hosted by me, Paris Marks. Production is by Kyla Houston. Tech Won't Save Us relies on the support of listeners like you to keep providing critical perspectives on the tech industry. And for our sixth anniversary, you can join hundreds of other supporters
Starting point is 00:59:32 by going to patreon.com slash tech won't save us and making a pleasure of your own to help us hit our goal of 100 new supporters this month. Thanks for listening. Make sure to come back next week.
