Big Technology Podcast - Dario’s Choice and Anthropic’s Future, Apple’s AI Devices, Netflix Loses WBD

Episode Date: March 2, 2026

M.G. Siegler of Spyglass is back for our monthly tech news discussion. Siegler joins us to discuss the latest on the Pentagon's clash with Anthropic, why OpenAI stepped in to take the deal, and what comes next for Anthropic and its CEO Dario Amodei. Tune in to hear what the "supply chain risk" label could mean and AI's growing role in defense work. We also cover Apple's rumored trio of AI devices, Siri's latest delays, and the Netflix–Warner Bros. Discovery deal falling apart as Paramount jumps in. --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. Want a discount for Big Technology on Substack + Discord? Here's 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b Learn more about your ad choices. Visit megaphone.fm/adchoices

Transcript
Starting point is 00:00:00 Anthropic's war with the Pentagon hits another level. Apple's preparing three AI devices, but the iPhone might be the killer feature, and Netflix will have to go forward without Warner Brothers Discovery. We'll dig into what it all means with Spyglass's M.G. Siegler, right after this. Fiscally responsible, financial geniuses, monetary magicians. These are things people say about drivers who switch their car insurance to Progressive and save hundreds, because Progressive offers discounts for paying in full, owning a home, and more. Plus, you can count on their great customer service to help when you need it so your dollar goes a long way.
Starting point is 00:00:40 Visit Progressive.com to see if you could save on car insurance. Progressive Casualty Insurance Company and Affiliates, potential savings will vary, not available in all states or situations. Michael Lewis here. My bestselling book The Big Short tells the story of the buildup and burst of the U.S. housing market back in 2008. A decade ago, the Big Short was made into an Academy Award-winning movie. Now I'm bringing it to you for the first time as an audiobook narrated by yours truly. The Big Short story, what it means to bet against the market, and who really pays for an unchecked
Starting point is 00:01:15 financial system, is as relevant today as it's ever been. Get the Big Short now at Pushkin.fm or wherever audiobooks are sold. Welcome to Big Technology Podcast. It's the first Monday of the month, which means M.G. Siegler from Spyglass is here to break down the month's news with us. And boy, am I glad that we have an episode for you today, because it feels like a year's worth of news has happened over the weekend. Since we last left you on Friday, the Pentagon declared Anthropic a supply chain risk, making it clear that Anthropic was not able to work with the government or its contractors on government work, which is going to be a major hit to the business if it holds up. We also have OpenAI coming in and signing a very similar deal to the one Anthropic was just about to sign with the Pentagon.
Starting point is 00:02:05 So we're going to dig into the latest in that story and what the implications might be for Anthropic and the rest of the AI industry. We'll also talk about Apple's forthcoming AI devices. It's a set of them. And Netflix, of course, losing the deal with Warner Brothers Discovery as Paramount swoops in and pays a lot of money for that shrinking property. All right, MG, great to see you. Thank you for being here. Great to see you, Alex. As you know, I'm happy to be here.
Starting point is 00:02:33 As a week ago, I was in Dubai. So this is, you know, my family was lucky in the timing of getting out of there. But, you know, obviously thoughts with all the people over there. And it's a terrible situation. Definitely, no. I'm very glad that you and your family made it out. And yeah, it seems like it's not just military infrastructure, but civilian hotels, airports, even a,
Starting point is 00:02:56 data center, an anthropic data center was hit. So we will talk about, oh, right, an Amazon data center that may or may not have been serving Anthropic. Let's pick up because let's pick up on this story of Anthropic and the Pentagon because we now have some more news about what exactly led to the dispute and what the fallout might be. We have movement, right? Anthropics lost the deal.
Starting point is 00:03:25 Not only that, they can't work with the government anymore. maybe and Open AI has now picked up that deal. So let me just take you all through what exactly happened because there's this Atlantic story inside Anthropics Killer Robot Dispute with the Pentagon. They say on Friday morning Anthropic received word that Pete Hexeth, Secretary of War, his team was going to make a major concession. It would pledge not to use Anthropics AI for mass domestic surveillance or fully autonomous killing machines, but then qualified those pledges with loopholey phrases like as appropriate,
Starting point is 00:03:59 suggesting that the terms would be subject to change based on the administration's interpretation of the given situation. And here's where it goes off the rails. But on Friday afternoon, Anthropic learned that the Pentagon still wanted to use the company's AI to analyze bulk data collected from Americans. This could include information such as questions you ask your favorite chatbot, your Google search history, your GPS tracked movements, and your credit card transactions, all of which would be cross-reference with other details about your life. Anthropics leadership told Hegsseth the team that was a bridge too far, and the deal fell apart. Just to pick up my perspective from Friday, where I said maybe there's not really a there there and this is likely positioning and marketing,
Starting point is 00:04:48 I think there's more of a there than I thought. It does seem like this is a good line for Anthropos. to draw. However, you know, as I kept reading more about this, it just seemed to me like this is a deal that did not need to fall apart, that there were ways to word the deal that you could basically include the carve-outs that everybody had agreed to. And it would have been fine. But the Pentagon just set this deadline for Friday at 5 p.m. And it stuck with it. Basically, Dario didn't return their calls in the way that they wanted. And then they went nuclear, substituted them out with Open AI and declared them a supply chain risk. That's that sort of my perspective on, on where we stand today.
Starting point is 00:05:28 Do these details change or have any change for you, M.G., in the way that you see this story and what's your general read on where it is and where it's going? So I haven't actually written about this, in part because I still feel like, yeah, obviously we're all digesting it a bit in real time and it's a delicate situation given what we just talked about sort of with the situation in the Middle East going down. It does seem, I mean, obviously the timing of that, you note the Friday deadline. It's the most wild thing to me about all of this is that, you know, Secretary Hegsseth is going through with these negotiations in the middle of major preparations for war, obviously. I mean, we didn't necessarily know that at the time, though, you know, clearly there was the buildup happening.
Starting point is 00:06:14 And, you know, in the middle of getting ready for these strikes, they are going back and forth. with a, you know, an AI technology provider to try to get them, you know, to agree to terms. And so, you know, part of me, a cynical part of me wonders if, you know, they weren't using that, not that they would disclose anything like that to Anthropic necessarily, but like that they knew that this was sort of coming. And so they knew like, we need to both, we need to get something done now because we're probably going to be using some of this technology in the forthcoming, you know, war preparations and execution of the of the war strategy. And or, you know, is this going to be the best position for us to sort of lay down the terms
Starting point is 00:06:59 that we want and maybe Anthropic will have to sort of, yeah, just yield a bit easier. But maybe he hadn't done enough research on Dario, listen to your interviews and many other interviews to know like what his response was likely to be, sort of these types of ultimatums. And so, yeah, it does feel like a bit that they probably could have hashed this out. But I do wonder, again, if the timing of the macro stuff of the actual war and attack situation just added the time pressure necessarily to some of this. Well, this is interesting because they actually did end up using Anthropic in these strikes. And last week, on Friday, I said it actually, Anthropics use was limited because I was reading the reports saying,
Starting point is 00:07:47 that Palantir has Anthropic involved and that was what started this entire discussion because Palantir systems were used in the capture of Maduro and Anthropic had some questions about how they were used. I think I got, I won't say I got that wrong,
Starting point is 00:08:02 but I had an incomplete picture of how deeply integrated anthropic already is in the U.S. government and this stunned me. This is from the Wall Street Journal. U.S. strikes in the Middle East used Anthropic hours after Trump ban. So by the way,
Starting point is 00:08:16 the ban, and we'll talk about this, it's going to be six months from now, right, that they can't use it. But already, with the Iran strikes, they are using Anthropic. Here's the Wall Street Journal story. Within hours of declaring that the federal government will end use of its artificial intelligence tools made by tech company Anthropic, President Trump launched a major air attack in Iran with the help of those very same tools. Commands around the world, including U.S. Central Command in the Middle East, use Anthropics Cloud's AI tool. The command uses the tool for intelligence assessments, target identification, and simulating battle scenarios, even as tension between the company and the Pentagon ratcheted up, highlighting how embedded the AI tools are in military operations. This isn't just military analysts like asking Claude questions. It seems like you have war games going on with Claude, which was much more than I expected.
Starting point is 00:09:12 And I mean, I'd love to get your reaction A to what your reaction is now that we're learning how deeply it is integrated. And B, why would the military risk having to substitute it out over, you know, language that they could have agreed to Anthropic with and they just didn't? Again, I sort of come back to the notion of was this sort of a, it's just like the worst possible timing in ways for both sides, right? Whereas if it were a more stable situation, you know, maybe the two sides could have sat down and hash things out a little bit more. But given the buildup to this, like it seemed like the administration got very fast, it was very fast to get exacerbated by, or sorry, exasperated by Anthropic. And now again, you might see why. It's like, look, we don't have time for this, guys. We are, we are preparing for some military action right now.
Starting point is 00:10:08 If you guys are not on board, unfortunately, like, you know, we already have the systems in place. We're using those right now. And, you know, we'd love for you guys to be on board. But if you're not, like, that's something we can discuss, I guess, down the road to your point, like six months later. And also to your points, like, it's not just that they're, yeah, using like Claude chatbot stuff. This is directly related. It seems like to their, you know, their contracts with Palantir and also Amazon, which has their own sort of government cloud stuff, right, that allows these things to operate behind, you know, their own
Starting point is 00:10:43 firewalls and insecure centers and whatnot. And so it's, again, this is not something they could swap out overnight. It's not something that even if they give clearance to Open AI or anyone else that they can just, yeah, put in there. Because again, these things have to be tested. Like, how would you know to trust that? You know, if you're all of a sudden swapping out your main model and you're running like literal war games on there, like, how do you know, like, you know, what to trust and whatnot. And so again, it just feels like this, this timing of it. Maybe it's like, guys, we need to make sure all of our eyes are dotted and T's are crossed before we go ahead with this operation. As you know, we're going to be using some new technology this go
Starting point is 00:11:27 around. So has anyone talked to Anthropic about like, you know, the latest with what they're thinking about it? And then as you noted, the Maduro situation. And obviously that, Palantir seems like was involved in that as well. And so that came to the forefront there. And so it is, this is, this is all sorts of ensnarled and weird entanglement going on right now. We, I feel like, I feel like all the talk that we've been doing about circular deals and all this, like we're now at new stakes now in terms of, uh, of where this is all getting integrated within these systems. Right. These like science fiction papers of AI potentially being used in the military somewhere down the line like in future years. Like,
Starting point is 00:12:08 Oh, wait a second. It's already being used. But it is interesting because they do have this six-month deadline to disentangle themselves from the federal government, or really the federal government has the six-month deadline, disentangle them from Anthropic. Are you suggesting because of the timing that basically it was like, it's not that we need this Friday, you know, meet this Friday deadline because we're going to swap out another model. It's like, we're going to give you this Friday deadline because we got some other shit that we need to handle in. And I mean, doesn't it feel like that? Obviously, I have no idea.
Starting point is 00:12:40 I've no inside sources at the Pentagon to know that they were giving this ultimatum, given their timeline for, for war preparations. But it does feel like, you know, they must have been at some level thinking, like, we don't have time for this right now. You know, if you, if we want to hash out something like great, here's here's X, Y, and Z partner on the team that can sort of talk you through it. But if not, like, sorry, we're, we'll revisit this at some of the time. But that's the thing I don't understand. because you're about to fight a major operation, right? Like, this is a war. And you're not going to be able to swap out Anthropic.
Starting point is 00:13:14 So if Anthropic, like, comes at, like, before the war starts, right? Like, you're not going to, like, switch to Deepseek or Open AI on Friday afternoon. So, you know. Yeah, deep seek. Could you imagine that? I could. That would go over really now. I could.
Starting point is 00:13:26 I mean, at this point, anything's possible, right? I mean, I wouldn't do it, but I wouldn't be stunned. I mean, look, look what they just did to an American country. company. But I guess it's interesting because like if you fight that war and you say, all right, Anthropic gave us problems during the war. That's maybe when you start the process of thinking about you're going to find out one way or the other. So and by the way, the attack was quite successful in the early going. I mean, I'm not sure if this is like all Anthropics doing. That would be, I think, a bridge too far to put it all on Anthropic. But if that's the AI tool you're
Starting point is 00:13:59 using, you're having a pretty successful campaign early on militarily, at least like, I don't know. Is that when you want to start seven and a month? So two things to that. One, again, I do think the Maduro, the Maduro thing situation obviously played a role in this. It's sort of weirdly like hinted at what was to come, right? Because all of a sudden we, we learned that maybe it was being used. The reporting had, there's conflicting reports, but some of the reports had, you know, the notion that Anthropic learned about how Poundtire was using it potentially for that raid. And they didn't like that maybe too much. And sort of that's where they raised, raised the, you know, raised it up the chain a little bit. And maybe the administration didn't like the fact,
Starting point is 00:14:41 Anne Palinter didn't like the fact that they were doing that. And so fast forward to now, again, we know, you know, the government obviously knows that they're, they're heading into this new situation. Either maybe they wanted to try to get it squared away before they did that, or again, to my earlier, to my first point, like, it's possible that they used it as like a point of leverage over Anthropic, right, to say like, look, we understand, you know, that there's been this back and forth about how we're potentially using, using the technology here. But like, look, we're, you know, we're going to be using these things going forward to your models. We'd love to keep doing that. And like dot, dot, dot, by the way, like, you know, again, they wouldn't,
Starting point is 00:15:21 they wouldn't tip their hands. But let's just look in a week from now and see what you think, like, how this is, how this is playing out. And if you really, you know, want to to be on sort of the wrong side of this from their advantage. Right. Look, the more I think about this, the more it just seems to me like I shared on Friday, that this is sort of an ultimate culture clash. And we'll get into the opening ideal in a moment. But you look at Emil Michael, the Undersecretary of War, who's been working on this,
Starting point is 00:15:50 clearly doesn't like Dario, clearly doesn't like the Anthropic team. And I wouldn't be surprised knowing about him and knowing about them that there would be a culture clash there. And in fact, so I read that basically at the beginning that Anthropics stood up against what they thought was going to be domestic surveillance. And they had seemingly both agreed on the autonomous warfare part. This is what Emil Michael said. He just, you know, it's like one of those, like I reported it for years and he just tweeted it out. He just tweeted it out sort of like what happened behind closed doors. He says, Anthropic wanted language that would prevent all Department of War employees from doing a LinkedIn search.
Starting point is 00:16:26 They wanted to stop the Department of War from using any public database that will enable us to, for example, recruit military service members, hire new employees. When I called to discuss cutting off the Department of War from using publicly available information that would hurt our military readiness, Dario didn't have the courage to answer, right? This is the now sort of infamous. Emil called him before the deadline and Dario was in a meeting. And then by the time you get out of the meeting, this whole thing was blown up. Now this is really where it gets wild. He says, we agreed in writing to act according to the National Security Act of 1947 and the Foreign Internet Intelligence Surveillance Act of 1978 and all other applicable laws.
Starting point is 00:17:07 They wanted the word pursuant versus consistent with and wanted to delete all applicable laws, which was less protective of Americans. Can't make this up. We also agreed to oversight of all weapons systems by saying the Department of War will use the AI systems for all lawful uses, use case in accordance with all applicable laws of the U.S. law and the Department of War directives. And we wanted to retain the ability to override or disable the AI system as appropriate. He goes, he didn't like, this is Emil talking about Dario. He didn't like the word, he didn't like as appropriate. Would he prefer inappropriate? I agreed, I even agreed to
Starting point is 00:17:47 take that out. He knows that his investors, customers, and employees should know about his lies, risking the safety and security of our country and our troops are a marketing vehicle for him. I mean, again, like, this is, I'm just going to say it. If you have two adults in the room, I think you should be able to work out this language. The other explanation is that the Department of Defense really did want to be able to override. These systems really did want to be able to conduct domestic surveillance. But again, we're talking about a tool that's so important to the military today that's being used in the use cases we described to,
Starting point is 00:18:23 blow it up over these terms to me seems like a complete like a ridiculous thing. I think there's a few things going on here. So first and foremost, like hearing you talk through those exact quotes, it's like I'm sure you've been involved with them. I've been involved in on, you know, in a deal side on a number of times like with lawyer, when lawyers get involved and want to use very explicit language to to make sure that everything is drilled down and there's no wiggle room like lawyers themselves, for lack of a better phrase, go to war over the little terms, right? And it's like, no, we can't say it this way. We have to say it exactly this way. And the other side's lawyers will say, no, we can't let them say it this way. So like there's
Starting point is 00:19:04 definitely some level of that. I know that, you know, they're talking about this on like the Emil and Dario level, certainly. But like the legalese stuff just seems like it's, it's lawyers like, you know, going back and forth on both sides to try to cover their own asses, right? And the company's asses in the in the case of the downside scenario. That's, I think you hit on it earlier where it's like, obviously these two sides just don't like each other from a philosophical level, right? There's long been the charge against Anthropic from the Trump administration that maybe Anthropic is the more quote unquote woke AI company, that they have all this, you know, effective altruism stuff going on that they don't like and that, you know, David Sachs has come out strongly on these on these issues and that they just feel like that they're misaligned philosophically. And I do think it's an awkward situation because they didn't. I'm not, I don't know this for sure, but I wouldn't be shocked if they didn't necessarily know just how vital anthropic was to some of the systems they're using.
Starting point is 00:20:05 Again, with regard to Palant's here. Obviously, they use Palantir for a lot of different things. And government famously has for a while for different services. And the fact that, you know, anthropic, because everyone, I think across the board, loves their models for different reasons, regardless of sort of your philosophical events about. the team that's building them, you know, they have great technology. And so the fact that Palantir and then Amazon, obviously, and a bunch of others have used Anthropic services. Like, maybe the government just wasn't savvy enough to know just how integrated Anthropic itself was and that they can't just, again, like we were talking about, swap it out overnight for everyone makes, you know, frontier models.
Starting point is 00:20:45 We can use Open AI. We can use Google. We can use anyone. Like, let's just get someone else in there. It's like, it's not going to be that simple. And so I think, you know, all of these things sort coming to a head leading up to the situation that we're talking about with the attacks last weekend. It just feels like there's a boiling point. And again, there's maybe some points of leverage that Emil and some others like thought about. And obviously we didn't even talk about the Trump tweet. He tweeted like, you know, to basically try to, you know, end Anthropic as we know it, you know, saying that like we're we're done dealing with them, you know, best of luck with whatever you do. We're not working with you anymore. And none of our partners are working with.
Starting point is 00:21:23 you and obviously the government has partnerships with Google with with with Amazon with everyone else and it's like you know it felt like it was an existential potentially threat to to anthropic itself and so there's so many layers going on into this and obviously they're reporting every single day comes up with more and more layers to it to unravel and it's just weird to think that again all this is unraveling while there's actually attacks going on like yeah insane there's general Jack Shanahan who's no friend to He sort of woke wing of the tech industry.
Starting point is 00:21:57 He's the general behind the Maven program that Google employees rebelled against, which was a partnership between Google and the Department of Defense. He said you might expect him to be sympathetic to the Department of Wars position. He's not. He says, I'm sympathetic to Anthropics position. No LM anywhere in its current form should be considered for use in fully lethal autonomous in a fully lethal autonomous weapon system. Despite the hype, frontier models are not ready for primetime in national security settings.
Starting point is 00:22:29 Over reliance on them at this stage is a recipe for catastrophe. Mass surveillance of U.S. citizens? No thanks. Seems like a reasonable second red line. That's it. Those are the two showstoppers, painting a bullseye on anthropic garner spicy headlines, but everyone loses in the end. This should never have been such a public spat,
Starting point is 00:22:49 should have been handled quietly behind the scenes, scratching my head over why there was such a misunderstanding on both sides about terms and conditions of use. Something went very wrong during the rush to roll out the models. Let reason and sanity prevail. I mean, that seems like a pretty reasonable take. It does. But again, I think that maybe it was a trickle down effect of the Maduro situation coming into this, knowing that the government knowing that they're going into this situation and not wanting to, you know, for this to come up. Like say that the, these attacks started and Anthropic got wind that their models were being used via Palantir or whatnot.
Starting point is 00:23:28 And, you know, they just start to raise like this giant PR campaign against the government for doing that. Now, you might say that that would backfire against them and it could have, but it's sort of a who knows how it would have exactly played out in that case. But I'm just trying to game through like what the government was thinking here in terms of like why engage this ahead of time. again, either it's that they viewed it as a point of leverage over Anthropic leading up to this, that they knew that they could get maybe, or they thought that they could get more of what they wanted out of, out of Dario leading up to this, or again, that they wanted to sort of cover themselves for if and when they went forward with this and using these models. But again, you point to the other stuff, which is, you know, there's the multi-layers here.
Starting point is 00:24:11 It's not just war game scenarios and things like that. It is the mass surveillance stuff, which obviously Anthropic cares about. And you would be hard pressed to find people who would be on the other side of that, right? Like, um, to your point on the general's comments, like everyone sort of is on the, not everyone, of course, but like a lot of people I think would be on the side. But the government's pushback against that, at least to date, has been like, we just don't want Anthropic to have, um, de facto say over anything. It's not like that they're saying, like, we want to mass surveil the, you know, the,
Starting point is 00:24:43 the American populate. And they would say, like, the laws are already in place against that. Like obviously there's gray areas with all of this stuff. But like they just, their stances, we do not think that a company should have de facto say over what, you know, what we would do in situations. And again, we're not going to, the plan is not to mass surveil the U.S., but again, these are, they're slippery slopes, which is what Anthropic would argue, I would assume. And so you can just go back and forth and continually will go back and forth over those issues.
Starting point is 00:25:14 Right. And I would, I would still hold that they should. have come to a deal, but they didn't. And so now the question is, what happens next? So as I mentioned earlier, the Pentagon has labeled Anthropic a supply chain risk, which as I understand it means no federal government agency can work with Anthropic after this six-month deadline. Not only that, private companies working with the government on certain contracting work cannot use Anthropic for that work. So, by the way, like if let's say you're a Boeing, you may not want to have have a certain model that your engineers use for, you know, government work and a different model
Starting point is 00:25:54 that they use, maybe for commercial work. You want to have standardization. So this is a potentially very big hit for Anthropic, not just the $200 million contract that it had with the Pentagon, but this is a potential billion dollar, multi-billion dollar hit if the Pentagon does go through with this designation. Would you agree? I totally agree. It's it's not just as you said the contract itself. It is the the trickle-down effects and the broader ramifications of if they lose that distinction. And again, like it might just, yeah, it puts a chilling effect on new contracts that are signed, right? Because it's like what if some other company is thinking about like, oh, we might do a government contract one day. And to your point of like, would we rather just use one model like, you know, to sort of do all of our work? work or would we really want to have to swap out Anthropic for Open AI if we do go forward with this government contract? And to that point, like, I do think that the two sides, you hear, you know, there's been
Starting point is 00:26:58 subsequent reporting that, like, there's still some talk that they want to, you know, figure out how to make this work. Again, for nothing else, maybe if we still have this six-month window, like, where they're going to be using the anthropic models. Like, the six months are probably going to be pretty intense. terms of what's going down from, you know, the war perspective. Yeah, a lot of tokens being used. And so they probably do want to find a way to hash things out.
Starting point is 00:27:23 So like the hope, obviously, is that cooler heads prevail, maybe once this initial wave of the, of this, you know, these attacks are are sort of behind us, hopefully that they can, you know, sit down again and maybe hash out the legalese as we were talking about and like the exact wording of like how to go forward with this. Because, yes, it's bad for Anthropic if, if they. get ripped out of the U.S. government as they're talking about. Right.
Starting point is 00:27:48 And we should say that this supply chain threat is not something that's typically used for domestic companies, right? It's typically like, right. It's Chinese. It's like all the threats that were used against Huawei and all the Chinese companies. And it's wild that this is. And that's like,
Starting point is 00:28:03 so it's the sort of backdrop behind all of this right now. I noted this earlier, but a number of people have seen this. Like, Claude is now the number one app in the app store, which is wild. For the first time ever. It's very clearly related to some of this. Yeah.
Starting point is 00:28:18 Like, right? Like, it's not just that, obviously it's been doing well. Anthropics been doing well with the new opus models that have rolled out and co-work and clod code and whatnot. But like some of this is certainly, you know, virtue signaling, if nothing else, right? Like people are saying like, oh, yeah, we want to be on the side of the AI company that is pushing back against the government that's trying to mass surveil or in the headlines at least, right? Like that's the way that it's being portrayed.
Starting point is 00:28:45 And who does that remind you of? Tim Cook, ten years and one month ago, was the time that he sent that memo out about standing up to the FBI. And Apple's basically capitalized on that for the last decade. Yeah. And so, you know, it's Anthropic running a similar playbook to that. I mean, maybe not explicitly, at least right now, like not doing PR campaigns. That would be, you know, not in great taste to do at the moment. But still, like, again, it doesn't.
Starting point is 00:29:13 It doesn't seem like it's completely unrelated, that Claude is shooting up, and people are sort of, yeah, thinking that this might be Anthropic positioning itself as the AI company that's going to be the quote-unquote moral one. And, you know, that's obviously a whole hornet's nest of a topic as well. That's right. I will give my hot take here, which is that this supply chain risk threat never manifests. Just never takes effect. Again, it's a six-month deadline. We've seen six-month deadlines a lot from the U.S. government. Often it's been around TikTok. Oh, we'll ban TikTok in six months. Yeah, we'll extend it for another six months. If Anthropic is this pivotal for the government,
Starting point is 00:30:01 then they will just continue to use it and extend this, or rescind it, or it won't hold up in court. So that's hot take one. The other side of it, though, is I've already heard some rumblings from big companies that are government contractors that they will preemptively take Anthropic out of their workflow, or at least are highly considering it, because they don't want to bank on the fact that this will get extended.
Starting point is 00:30:27 So even if this isn't going to go through completely, I do anticipate that it will hurt Anthropic when it comes to these private companies. I agree with you, but I would also just say, I wouldn't fully discount the notion that we talked about already, that these two sides just don't like each other, from the personnel involved, right? Like every indication seems that way. And so are they going to be able to get past sort of the grudge between the Trump administration and Dario, basically? Or is there some intermediary that has to come in to sort of assuage that in some ways? Because, yeah, like the TikTok thing and everything else, like, you know, the sort of TACO stuff, right? Like, Trump Always Chickens Out of the things that he threatens and goes back upon. Is this another one of
Starting point is 00:31:22 those? And again, it feels like, yes, it probably will be. Except if they view it like they want to make, you know, some sort of more philosophical, high-level point about, you know, quote-unquote woke companies, or companies that are misaligned, or that they view as misaligned, with sort of the American public and the electorate and all that. And, you know, they may dig in their heels a little bit more because of that. Yeah. I do think TACO really did apply with the tariffs, but maybe after this Iran thing, it's going to be tougher for that label to stick. Yeah, yeah. Maybe the intermediary that comes in is Sam Altman. Or maybe not. I mean, he swooped right in.
Starting point is 00:32:06 This is from the Times. You know, as these discussions were breaking down with Anthropic, Emil Michael had an ace up his sleeve. On the side, he had been hammering out an alternative to Anthropic with its rival OpenAI. A framework between the Pentagon and OpenAI had already been reached. Mr. Altman of OpenAI got on a call with Mr. Michael to discuss a deal for his company. Within a day they had drafted the framework,
Starting point is 00:32:30 OpenAI agreed to the Pentagon's requirement that its AI could be used for all lawful purposes, but it also negotiated the right to put technical guardrails on its systems to adhere to its safety principles. At 10 p.m. on Friday, as Anthropic's lawyers began working on a lawsuit against the Pentagon, Mr. Altman was on the phone with Mr. Michael finalizing the details of OpenAI's deal with the Department of Defense. Mr. Altman then posted the news of the agreement on social media. On Saturday, Altman invited people to ask him questions on X about the deal as OpenAI faced a backlash for swooping in. He goes, we don't want the ability to opine on a specific legal military action,
Starting point is 00:33:11 but we do really want the ability to use our expertise to design a safe system. Basically the same, a very similar deal to the one that Anthropic could not agree on with the Pentagon. Your thoughts on OpenAI's role in this whole situation? Classic, I guess. I mean, 100% this could have been predicted, right? Like, you see the opening. Sam Altman sees the opening. Sam Altman's going to take that opening. And he is going to immediately ring up Emil Michael and get him on the line and figure out a way to sort of swoop in there and not only potentially take over all these contracts, but also, obviously, he's trying to position this as like they are the peace broker here, right?
Starting point is 00:33:54 Like that they are the ones who are going to sort of iron out these differences between Anthropic and the U.S. government by cutting their own deal that paves the path to sort of do a new deal going forward. But I think that they wouldn't mind if, say, they got all those contracts instead of Anthropic going forward as well. And so, you know, that part was maybe left out. But they're the peace broker and they're going to come in here and make everything. I mean, again, this was so predictable. And it was also predictable, the backlash
Starting point is 00:34:23 And it was also predictable the backlash. to it, right? Because like no one believes that like two blood rivals that won't hold hands on stage at an event are going to, you know, one is going to help out the other in a major way. Now, to be fair to Sam Altman, like he might think at a high level like, yeah, I think we should probably take a stand on this. That's more in line with what Anthropic is is trying to project at least at the highest level. But still, we're going to do that in a way that's good for the business at the end of the day. And so, you know, both things can probably be true. But again, the optics around this are just not great. And, you know, again, to be expected there. I got a text
Starting point is 00:35:06 as this was all unfolding where Sam had said something like, we don't want Anthropic to, you know, not be able to work with the government. And someone sent me this text like, oh, well, looks like OpenAI is really changing their tune on Anthropic. And I was like, I don't think so. Wait and see. And there they were. So, yeah. Could be a potentially very lucrative deal for OpenAI. And especially if this thing goes through, OpenAI, by the way, in the middle of
Starting point is 00:35:35 the year where they're really emphasizing enterprise, they could potentially swoop in and get much more than just that one contract. One last thing I would add to this, because I was going over today, trying to look through some of the numbers for ownership stakes, which I like to do as like a hobby with these AI companies. Given the ownership stakes in Anthropic that we know, obviously from Google and Amazon, but now Microsoft bought in, right, famously, and Nvidia too. And so, you know, don't necessarily underplay those elements to it as well, especially someone like Amazon, right,
Starting point is 00:36:11 who has lots of government contracts as well. And Google too. If they can sort of step in and be a bit of an intermediary here and say, you know, look, we've got to pause on this. We can all work together. We can all get along. You can figure out how to use these models in ways that both sides sort of figure out. Because, again, it does ding their businesses too, those big players, if all of this gets ripped out. Yeah. By the way, I mean, Amazon just did this $50 billion funding deal with OpenAI.
Starting point is 00:36:42 So, you know, it's $15 billion first. I don't think that that was related. $35 billion next. So maybe they might just say, all right, creative destruction. They're hedging. They're always hedging. So they're fine either way, I guess. But yeah, wild.
Starting point is 00:36:55 All right. So could Amazon and OpenAI work on a potential device together to go against the Apple and Google alliance? And where is Apple's AI device bet going? That's where we will pick up when we come back right after this. If a driver in your fleet got in an accident tomorrow, could you prove what actually happened? Without footage, it's much harder. So your insurance rates spike and you're stuck paying for it. That's why so many fleets choose Samsara's AI-powered dash cams: clear video evidence, real-time alerts, and coaching tools that help prevent accidents before they happen.
Starting point is 00:37:36 Samsara AI helps reduce crash rates by nearly 75%. For instance, the city and county of Denver saw a 50% reduction in false claims against them and a 94% reduction in safety events overall. This is the kind of visibility that every operations manager needs. Don't wait for the next accident to take action. Head to samsara.com slash big tech to request a free demo and see how Samsara brings visibility and safety to your operations. That's samsara.com slash big tech. Samsara: operate smarter. You want to eat better, but you have zero time and zero energy to make it happen. Factor doesn't ask you to meal prep or follow recipes. It just removes the entire problem. Two minutes, you get real food and you are done.
Starting point is 00:38:25 So remember that time where you wanted to cook healthy but just ran out of time? You're not failing at healthy eating. You're failing at having three extra hours every night. Factor is already made by chefs, designed by dietitians, and delivered to your door. Inside, there are lean proteins, colorful vegetables, and healthy fats. It's the stuff that you'd make at home if you had the time. There's also this new Muscle Pro collection for strength and recovery. You always get fresh, never frozen food. It's ready in two minutes and there's no prep, no cleanup, and no mental load.
Starting point is 00:38:55 Head to factormeals.com slash bigtech50off and use code BIGTECH50OFF to get 50% off your first Factor box, plus free breakfast for one year. The offer's only valid for new Factor customers with the code and qualifying auto-renewing subscription purchase. Make healthier eating easy with Factor. And we're back here on Big Technology Podcast with M.G. Siegler of Spyglass. You can find it at spyglass.org. Highly recommend signing up for it, getting the newsletter. One of my favorite tech reads. All right, M.G., let's talk a little bit, switching gears from this big blow-up. Yes. A calm. Nice topic. Let's talk about Siri. Yes. Or more, let's talk about the devices
Starting point is 00:39:36 that Apple might be developing that will have Siri, or a Gemini-powered Siri, baked in. So recently we've gotten news that Apple is going to release maybe three devices all at once: smart glasses, a pendant, and AirPods with expanded AI capabilities. I've thought, and I think we've both discussed, actually, that this is going to be a pretty good year for Apple. And when this news hit, I was like, I've got to go to Spyglass to get M.G.'s perspective. And you started with a very surprising line at the beginning, that maybe we're seeing the beginning of Apple, if not pulling ahead in the AI race, really starting to assert itself and
Starting point is 00:40:17 make a strong play here. Talk a little bit about what you're seeing. Yeah. So there's a few things that I think feed into and fuel that idea. And, you know, this dates back to, obviously, when Apple at WWDC two years ago now was gearing up to talk about AI in a real way for the first time. And obviously they ended up doing that and falling flat on their face because they couldn't execute upon it. But now, in a way, it's almost like, are they going to run basically the same game plan, but now that they have the Google partnership for Gemini building these models, they can actually do it and execute on it in the right way? I wouldn't put it past them to basically do everything that they promised. And then, to your point
Starting point is 00:41:02 on these devices, extended a bit to sort of the world that we're entering now, I do think that they are potentially in a good position. We've talked about it before: if we believe that models are getting commoditized, and that there are going to be diminishing returns in spending billions and billions, hundreds of billions of dollars, on training these large language models, like, what's the next step after that? And if you're Apple and you believe that is the case, that they don't need to train their own massive frontier models,
Starting point is 00:41:39 that they instead can partner, as they're doing with Google on them, then the value might, in their eyes, come from the way that they implement them. And obviously, a lot of their value has always been derived from selling devices, the best devices, many would say, to the public. And so if they can create these devices that leverage that. And by the way, like, I do think the one key device to all of these things remains the iPhone. And I think that what you're seeing with these three devices that are being talked about,
Starting point is 00:42:09 that you put out there, you know, AirPods and glasses and a pendant, all of them, per Mark Gurman's reporting, would likely be reliant to some degree upon the iPhone. And that's where Apple has this unique advantage. You know, maybe you could say that Google and Samsung have similar capabilities because of their smartphones. But Apple has a very unique advantage, certainly ahead of the Metas of the world and others that are trying to create these types of newfangled devices, let alone any startup that's trying to do so, and OpenAI in that bucket. Apple has this unique position where they have the iPhone in billions of pockets, and now they're going to have these devices that rely upon that, at least for the
Starting point is 00:42:53 foreseeable future, as basically the central processing unit of those devices, potentially. And so you can close your eyes and it's not too hard to imagine a world in which Apple is sort of the device leader again in this new AI world. And if they're the device leader, who's to say they're not the overall leader, if they're the way that everyone's interfacing with AI, at least. Yes. I want to talk through this because, you know, recently I've gotten like the first wearable that I actually use frequently, which is this Garmin watch, which is not an Apple product. Yep. But it actually works quite well with the iPhone. There's this Garmin app. It mostly connects. I only had one situation where I've had to, like, reset the whole thing because
Starting point is 00:43:33 the Bluetooth connection was off. And this is basically, like, these AI devices probably wouldn't exist in their own ecosystem. For instance, when you want to set up the Meta glasses, you set it up with the smartphone. But still, it syncs pretty well. And there's technologies that have come out that let you, like, sync data through Wi-Fi that have made it much more seamless. So if the iPhone is going to give an advantage to Apple's AI devices,
Starting point is 00:44:03 how does its interoperability, which has always been Apple's calling card, how does that help in a way that would be that much better than, you know, the ways that these current wearables are connected? So it's a good question. It's hard to know for sure without obviously seeing what Apple's going to release out there. But I would just point to, you know, comments made by no less than Mark Zuckerberg, over and over again, complaining nonstop about how they don't get the full level of interoperability that they would like with
Starting point is 00:44:33 Apple's products, right? And some of that is obviously just a little bit of posturing because those two sides don't like one another. And obviously, Meta famously doesn't have a smartphone play. And so, you know, they're telling regulators that, look, you need to make sure that the iPhone is as open as can be to third-party products, like perhaps the ones we're making and others are making. And obviously, Europe is very open to that notion. They've basically installed some laws in various places to make it so that they have to be more
Starting point is 00:45:13 From the, you know, at the day to day level, it might not be all that different. But I do think that there's lots of low lying under under the hood stuff, you know, potentially as boring as like slightly longer battery life because Apple is able to, you know, more tightly hone the way that the connection is made between their device. and the iPhone. And I think there's all different sorts of things, background syncing, contact syncing, all this type of stuff that can come into play that you might not think on a on a day-to-day level as you're using it is like that big of a deal. But there are advantages that Apple has. And the question will become probably both certainly in Europe, but I think it will
Starting point is 00:45:52 ultimately become true also in the U.S., how much of that is too much of a competitive advantage, and are they hurting competition as a result of that? And we're going to hear a lot from Mark Zuckerberg, and probably some others, maybe Sam Altman as well, about that going forward. So it seems like these are all coming at the same time: smart glasses, a pendant, and these enhanced AirPods. Which do you give the best chance of being the most successful of those three? I would imagine, I mean, I do think that they'll all be for slightly different purposes.
Starting point is 00:46:29 I would imagine price will be a key factor in that, as it always is. But if I had to guess, I would think that the AirPods would probably be the most successful, just because you and I are wearing them right now. Everyone's wearing them out and about. Like, they're a known thing. As long as they don't look entirely ridiculous and different with some sort of camera sensor on them, I think that they will continue to be, obviously, a very popular product. It's a matter of, again, how much do they cost if they've
Starting point is 00:46:59 added a camera sensor. Is it a $500 product all of a sudden? Can they keep it at $300 or something around there? I think that will matter a lot. Glasses, obviously, Meta has already sort of proven somewhat of a market, but relative to Apple's other products, it's a drop in the bucket.
Starting point is 00:47:14 It's not very big. You know, the Meta Ray-Ban products are not huge compared to, say, AirPods or Apple Watch or anything else. And so can Apple take that to another level? I think that they'll have success with it. But, you know, we're already seeing there's starting to be backlash preemptively against Meta because they're talking about using facial recognition within the glasses, right?
Starting point is 00:47:38 Adding that after the fact. And so we're all of a sudden thrown right back into the glasshole situation from Google Glass a few years ago. And Meta has, to their credit, sort of avoided that to date. And now we're getting thrown back into that. And how does Apple deal with something like that if Meta is, you know, for lack of a better word, sort of poisoning the well, or the market, by making people think, like, I don't want any glasses with any sort of camera on your face? And obviously Apple's product will have that to some degree. And then the pendant itself, obviously, you think of Humane and, you know,
Starting point is 00:48:08 ex-Apple engineers and designers who were working on that. It didn't end up being successful, of course, and sold to HP in a fire sale, it seems like. But Apple has that unique advantage of having the iPhone itself. And it sounds like this would maybe be more of, I think Gurman even said it was like an internal phrasing of it, the eyes and ears,
Starting point is 00:48:37 But Apple's, as we're, you know, talking about, Apple is in the unique position to be more trusted than probably any other tech companies, certainly, from a privacy angle. And so, yeah, there's all those elements to it. Right. Yeah, I think the AirPods, that's my bet. I think we're going to see a battle of these AI devices in the earbuds space. But it does seem you're right. like we're just kind of, we are sort of doomed to just be videotaped by everybody at all,
Starting point is 00:49:03 although we kind of already are. I still like, like looking at us, you know, right now wearing these AirPods, like I've always been curious, like how they're actually going to do that, though, from a pure product perspective. It's like, so I have a beard. If like there's stems, you know, feature the camera, like, does it just record my beard, like looking forward? Or do they have to stick out more than as a result? And that will look ridiculous.
Starting point is 00:49:27 You know, everyone joked when the AirPods first came out about how ridiculous they thought they looked because they're sticking out of your ears. But, like, ultimately, they're pretty streamlined and you can't really tell all that often, you know, when you're looking at people, and we got used to it very quickly. But if you've got cameras sticking out of them. And then there was, like, talk where it wasn't necessarily camera cameras, but more IR cameras, used, you know, to potentially capture motion and things like that to help with gesture control of different devices. And that made a little bit more sense to me. But I am very curious how they end up doing it. But there was also talk that they were going to put a camera in the watch
Starting point is 00:50:02 and that you would have, like, almost like a Dick Tracy-style camera that you would, like, shoot people with, looking at your wrist. And so all these things are going to create situations where you just need new cultural norms to come in. And again, Apple has done much better than any other company. But to Meta's credit, they have done well with the Ray-Bans so far. That's right.
Starting point is 00:50:24 And I think the battle will definitely fall on whose assistant is better. And Siri has to get better. I mean, it feels like beating a dead horse at this point. But we didn't even talk about it because it's so regular that Siri got postponed again. Or features within Siri got postponed. Again, you had a really funny piece about that. You said it's almost like Apple is having some major issues with their AI implementation and strategy. They should probably look into that.
Starting point is 00:50:53 But it just keeps happening, right, that this keeps getting delayed. And, you know, you start to lose faith over time, even with the Google partnership, that they're going to be able to figure this out. Yeah, I was always like a little bit skeptical. I mean, I've obviously been super skeptical of Siri, having used it over the past 15 years. But when they announced the Google partnership,
Starting point is 00:51:13 I was always a little bit skeptical of the initial rollout, because it's sort of what we're talking about with the government, right? Like, you can't just swap these things in. It may seem like it's that simple, but there's a lot of, like, underlying things that need to be connected. Look at Amazon for an example of that,
Starting point is 00:51:29 right? Like, look how long it took them to rework Alexa to be able to work with things like Anthropic's models and all the models that they're using behind the scenes to sort of upgrade Alexa. It took over a year, and then they promised something and they couldn't deliver on the timing of it. And now we're seeing the same thing play out with Apple. It just takes a long time to get all of the little pieces in place. Because the last thing Apple can afford to do right now is put something out there, even in beta, I think, even in any forward-facing, user-facing service, and just have it flop again. That would just be a death knell, I think,
Starting point is 00:52:05 to the point where they would have to change the Siri name. We might have the Microsoft-style, like, funeral, where they would be, like, walking down Cupertino with a coffin with Siri in it, because they would need new branding if they fail one more time with this. Yeah, I think it's long past time to do that. Could Amazon and OpenAI
Starting point is 00:52:32 Open AI has a device. Program underway. Apple has, I mean, Amazon has the Echo. I think Alexa Plus is actually already pretty good. Could you see as part of that deal because Open AI will be helping Amazon develop some of some specialized AI technology
Starting point is 00:52:48 that this could also be part of a counter. It could be a team, team battle, Open AI and Amazon against Google. and Apple. Yeah, that's sort of where my mind went when I was reading about, yeah, these reporting. And again, like $50 billion. Yes, it's over like two tranches. It sounds like, you know, 15 and then 35. But still, $50 billion that Amazon is investing in a time when they're making cuts. They're famously doing layoffs, right? And they're getting dinged left and right for their for their cap expense. Like, $50 billion is no joke. And they're spending that for a reason,
Starting point is 00:53:20 obviously, with OpenAI. And so my mind again went to wondering, is this some sort of massive play to get sort of all of the models in-house? You know, there's a lot of talk right now about orchestration, and the idea that, like, Perplexity and others are now trying to move their businesses into being these layers on top of the LLMs, so that you as a user shouldn't have to worry about the model picker and things like that. You should just be able to say what you want and let a service pick the best one for you. And obviously that's harder with an Amazon because, as you noted, they make their own product in Alexa.
Starting point is 00:54:00 But given that they have the Anthropic partnership, and now given that they have the OpenAI partnership, is there a world in which they're using all those models behind the scenes? And sort of, they can use that to counter both Apple and Google, potentially, where they say, look, if you're using those products, you're only going to get, in both cases, Google as a result, because they're both using Gemini; you know, they're using their in-house models. Whereas if you use Amazon, if you use Alexa, maybe going forward, you know, you will have the power of Claude, you will have the power of ChatGPT, and you'll have the power of Alexa, all three, on top of maybe some others that they add in there as well. And it's sort of a playbook that they've run with the cloud in a way, too, right?
Starting point is 00:54:42 So it's like they view it as: you can pick which one you want to use, or let us pick which one we think you should use, from a product perspective. And so, no, you know, no indications that that's necessarily going to be what happens. But I wouldn't be shocked by that. Okay. Before we leave, I definitely want to talk briefly about this Netflix, Warner Brothers, Paramount deal. You've written about it. We haven't really talked about it on the show in depth yet.
Starting point is 00:55:10 But the CliffsNotes here: Netflix had agreed to buy Warner Brothers Discovery, which has CNN and HBO, and was going to build this sort of powerhouse streaming company, maybe the streaming company of the future, by adding these old-school assets. Netflix is obviously in the lead. No one really comes close to it in streaming. So this might just have solidified it as the dominant service. It reaches a deal with Warner Brothers Discovery. Paramount comes in and says, nope, we want to make the deal instead.
Starting point is 00:55:41 We weren't given a fair chance to bid. And it just keeps throwing out these bids until both companies decide that Paramount will be the buyer and not Netflix. Warner Brothers Discovery is going to have to pay Netflix about a $3 billion breakup fee. And the final deal is going to be $110 billion or so that Paramount will pay for Warner Brothers Discovery, whose market cap, as you noted in Spyglass, was $20 billion a year ago. So just give us your perspective on what happened here and what the implications are. Yeah. So it does seem like, on one level, at the highest level, this is just a masterful job by David Zaslav, who's the CEO of Warner Brothers Discovery, because he was able to take a company,
Starting point is 00:56:35 as you noted, that a year ago was a fraction of what this offer is in terms of market cap, and still is right now, and turn it into this offer. And they basically did that because, at first, it was Paramount that came out with an offer, a much lower offer than what these current offers are. I believe $19 a share, and we're up to $31 a share now with the newest one. And I think the wild card there was Netflix coming in, because Netflix was viewed as obviously a big player, and the biggest, if you want to call it a media company, the biggest one; its market cap is roughly double that of Disney. And so they obviously have the capital to be able to do whatever they want and do a deal like this.
Starting point is 00:57:18 But they had not historically done anything like this. And so I think that, you know, Paramount basically felt like they came in and stole this from under their nose. And there was a question of, was this a masterstroke by David Zaslav, sort of orchestrating this whole thing, knowing perhaps that Paramount basically needed this more than Netflix did? And so they were going to drive up the price to make it so that Netflix would walk away with their $3 billion all-cash consolation prize, which is a pretty nice, you know, offer. Yeah, it's about what they make in profit in a quarter. And so they just got that in one fell swoop. But still, yeah, this deal has gone back and forth and back and forth. And now the fact that Netflix walked away relatively quickly once the Paramount offer came in, kudos to Netflix; it seems like they had good discipline.
Starting point is 00:58:08 They weren't going to get into some sort of bidding war and go outside of their bounds. But also, I'm just overall sort of sad for Hollywood, because I do feel like they didn't like either of these deals, but I think that they're going to be in for a bigger world of pain with the Paramount deal than they would have been with Netflix. And, you know, you can talk about the streaming dominance of Netflix and whatnot. But the reality, in my view at least, is that this is much more about the future going forward. And the future is going to be Netflix versus YouTube and a few other key players. I think Prime Video will be in there. Disney Plus, obviously.
Starting point is 00:58:44 but like it's not just and TikTok which has interesting new ownership given this Paramount structure as well and so all of these players in there is really the battle going forward and we're talking about like this decaying sort of industry that in movie going
Starting point is 00:59:00 which is an industry I love but it's not like a giant growth industry and so we're talking about like these players battling over these assets and it feels like you know Netflix would have been a good safe haven for a for a studio that's like been owned by conglomerates for a hundred years. This isn't like a new thing. Everyone's all afraid because we're in the world of
Starting point is 00:59:19 tech now and AI is coming and all of this. But like Netflix would have been a pretty good save haven, I feel like for this. And instead, we're just going to get a straight down the middle sort of combination of two studios. And that's just going to be a lot of layoffs. And it's going to be just this brutal sort of, you know, decline over a longer period of time. Right. And I'll note that Netflix is up 26% in the past five days. So clearly the market has, really digested this and said, yeah, probably better that you didn't do the deal. I thought maybe it would be good. Like maybe it would be nice to roll all this content up. Obviously, as a consumer, you're not happy about that because you have you have less choices. But from a business perspective,
Starting point is 00:59:58 I understand why Netflix was interested. But obviously, different way. Market likes it. And everyone will just move forward. Yeah, we'll see until like there's going to be a lot of fallout from this. And I think it's going to happen both, both from, you know, the antitrust perspective because, you know, of the relationship with Trump and the Ellisons. And there's going to be a lot of different hearings on this type of stuff. And I think it'll play out over years and years and years because then they'll look back on it after it's approved and say, like, was it approved for, you know, less than above board reasons? And so I think we're going to just hear about this for years and years and years. And the reality is it's like, it is a bit sad. It just feels like, you know, obviously Paramount's play. is going to be to try to bulk up to compete with the Disney pluses and the Netflix's of the world. But are they really realistically going to be able to do that? Maybe if they can leverage TikTok or something in some way, you know, now owned in no small part by Oracle. Maybe. But like it feels more like that this is still a slow decay story.
Starting point is 01:01:00 And, you know, they'll just sell their products ultimately the content itself to Netflix just as they've been doing. All right, folks. The website is spyglass.org. MG, always great to speak with you. I'm so glad we got a chance to speak today, especially just, I mean, an incredible weekend of news that I think we're all still trying to wrap our heads around and I'm so glad we got a chance to digest it here together.
Starting point is 01:01:23 Indeed, good as always, Alex. All right, thank you so much. Thanks, everybody for listening and watching. If you haven't, if you could rate us five stars on Spotify or Apple Podcasts, it will go a long way to helping the podcast reach new audiences, which would help us, you know, recruit guests. And that would always be great.
Starting point is 01:01:38 So hope you do that. Hope you have a great Monday and the rest of your week. And we'll be back here on Wednesday with another new interview. I'm not quite sure who it will be, but we'll hopefully touch more on the Anthropic Pentagon saga. So thank you again for being here. And we'll see you next time on Big Technology Podcast.
