Risky Business - Wide World of Cyber: DeepSeek lobs an AI hand grenade

Episode Date: February 21, 2025

In this episode of the Wide World of Cyber podcast, Risky Business host Patrick Gray chats with SentinelOne's Chris Krebs and Alex Stamos about AI, DeepSeek, and regulation. From its bad transport security to its Chinese ownership and the economic implications of China "entering the chat", everyone's freaking out over this new model. But should they be? Pat, Alex and Chris dissect the model's significance, the politics of it all and how AI regulation in Europe, the US and China will shape the future of LLMs. This episode is also available on YouTube. Show notes

Transcript
Starting point is 00:00:00 Hey everyone and welcome to another edition of the Wide World of Cyber, the podcast we do here at RiskyBiz which is sponsored by and produced in conjunction with SentinelOne. And joining me now is Alex Stamos. You are the CISO these days, aren't you, for SentinelOne? I am the CISO and also the CIO, so that's what I'm like. Whatever happened to never CISO, my friend? Whatever happened to never CISO? I tried to avoid responsibility and I failed. That's what happened, Patrick.
Starting point is 00:00:38 Alex is the CISO for Sentinel One and the CIO apparently which I just learned and prior to that he has worked as the CISO for Facebook, for Yahoo. He's done all sorts of stuff, founded ISEC partners back in the day. Joining us also is Chris Krebs, who is the Policy and Intelligence guy over at Sentinel One, also was the first director of CISA. Welcome to you, Chris. Thanks, Pat. All right.
Starting point is 00:01:03 So today's topic, we're really going to be talking about AI, which is a topic that we've covered on this podcast before, but we're going to go a little bit more specific, right? And we're going to talk about DeepSeek. And this is something that we've talked about little elements of it on the main weekly show on Risky Business, but we've never gone deep on it, mostly because neither Adam nor I are really what you would call AI experts, right? Whereas Alex, I know you follow this stuff closely. Ever since LLM sort of became the hot new thing, you've been all over it, trying to
Starting point is 00:01:38 understand the tech and the implications of it and developments in it. So why don't we just run through in basic terms like the deep seek situation? Because as I understand it, essentially what happened is a Chinese company published an open source LLM that was a lot better than anyone was expecting it to be. They also made claims with regard to the training cost. They said that it cost them very little to develop this thing, which has provoked some skepticism from some quarters, it must be said. But nonetheless, it's a very impressive bit of technology. It's very efficient. You know, when you're actually just running
Starting point is 00:02:13 the model, it is extremely efficient. And, you know, this has almost led to a bit of a emperor has no clothes moment in the broader AI industry. I mean, is that about the state of it? How did they go summing it up there? Yeah, I think it's a pretty good summary. DeepSeek's not new. They've been around for a little while. They're a lab that's part of a Chinese hedge fund. They've published a number of papers over the past
Starting point is 00:02:37 and they published a new paper with this. And they both made claims that can't be verified and claims that can be verified because you can do it themselves. And in their paper, they had some new breakthroughs that can be verified in both training and inference efficiency that they released to their credit. Everybody can take advantage of those things.
Starting point is 00:03:02 And they made some claims around the efficiency of the training of the model that weren't totally verified. Now, to be fair, people over applied some of the claims they made to one model to a different model. They actually over expanded some of the claims that DeepSeek made to be to everything that they didn't actually make. So a lot of those claims actually came from a LinkedIn post
Starting point is 00:03:24 that a guy made that ended up, that's what tanked Nvidia stock. Nvidia ended up losing more market cap in one day than any other companies ever lost. Wasn't it like, you know, like nearly a trillion dollars of market cap just got vaped? Yeah, you know, it was hundreds of billions. I don't think they hit quite a trillion, but it was like, it was hundreds of billions of dollars. And there's a lot of- So it's like ballpark, okay, cool, cool. Totally normal, yeah.
Starting point is 00:03:47 Yeah, it's a ballpark, but totally normal. And there's a lot of criticism. I mean, I think there's a couple of things here. One, basically, Nvidia's got this incredible market cap that is based upon the assumption that some ridiculous percentage of the output of world GP is going to go to Nvidia, GPUs forever, which I don't think is really sustainable. You have all these companies that are pouring investment
Starting point is 00:04:15 into AI, which ends up going to Nvidia without sustainable business models. And so at some point that's gonna have to turn around. The other problem is that you've got all these Wall Street traders who don't really know crap about AI, right. And so the paper was weeks old, and then the dude interpreted it on LinkedIn on the weekend. And then by the time the market opened, people lost their mind. And so I think what it
Starting point is 00:04:40 demonstrated is that it's really easy to manipulate Wall Street when it comes to AI. I should say too the grand irony in all of this is that even if this were a model that was incredibly efficient to train the impact to Nvidia wouldn't be all that great in fact because you know an excellent commodity model would actually mean people would need more chips on their devices to run the model so perhaps fewer chips in the data centers to train the models, but more chips going out into consumer devices to actually run the model.
Starting point is 00:05:10 So thus this being a net positive for Nvidia and the stock should have gone up. But anyway, we're not a financial podcast. And this is the argument a bunch of people made, the Jevons paradox of like, yeah, if it's more efficient then people are still going to utilize the Nvidia. Not like anybody called up Nvidia and like, oh, cancel it's more efficient, then people are still going to utilize the NVIDIA. Not like anybody called up NVIDIA and like, oh, cancel all my orders, right?
Starting point is 00:05:29 But I think one of the things you're gonna see from this is that somebody else is going to try to do the same kind of thing, release a paper that's either true or not true and then trade on it, right? So I wouldn't be shocked if that's one of the outcomes of this. So anyway, DeepSea releases the paper, the model's efficient and they did have some breakthroughs.
Starting point is 00:05:47 There's some questions as to the actual efficiency of the training because one, it does look like they distilled via different techniques, both llama and OpenAI, right? So llama is, meta llama is open source, so it's trivial to distill it. OpenAI is closed source, but Microsoft has an entire business model
Starting point is 00:06:04 where you can go to Azure, you can throw it on a credit card, you can rent a private copy of open AI, and then you can ask it millions or billions of questions and then just pay Microsoft per hour. And you could do that, if you do that in a structured way, you can distill out the thought process and you can use that to train your own models. I think we should, instead of calling it a thought process, when it comes to LLMs and AI, we should describe it as the model's soul.
Starting point is 00:06:33 I think that works a little bit better. They extracted its soul by asking it a bunch of questions. But this is interesting as well because there's been a lot of concern about Chinese APT actors trying to go out and steal some of this very valuable intellectual property from the large model companies in the United States. But when you can effectively enumerate it, when you spin up an account with a credit card, you don't really need to be popping shells to steal stuff at that point. You just ask it the right questions. That's still going to be a terms of service violation, but it's not what we think of when we think of like APT
Starting point is 00:07:08 activity, right? And that is one of the interesting things about the DeepSeek R1 model is that is a reasoning model. And it is an open source model that allows you to see its chain of thought, which is one of the things that people thought is actually pretty cool, is when you ask it a question, it shows you how it's trying to get there, which is not
Starting point is 00:07:24 available in a number of other reasoning models, and is also something that OpenAI debuted not very long ago. So the time between OpenAI's announcement of this knowledge and DeepSeek copying it publicly was quite short. And so look, nobody has any evidence of DeepSeq playing unfair here, but I do want to point out a couple things that I think Chris has some thoughts here. One, the idea that DeepSeq only has access
Starting point is 00:07:51 to the H800s and the other kind of older GPUs that are only allowed to be shipped into the PRC is ridiculous. There are companies whose entire job it is to operate in places like the UAE in Singapore where they can buy whatever they want in those countries and then they rent it out to Chinese companies. These are models.
Starting point is 00:08:11 They don't have to, the GPUs don't have to be in China for Chinese companies to rent them elsewhere, right? And there's no reason, the UAE does not have huge AI labs, right? Like the people who are paying for that are Chinese companies. And so that's one of the things that's possibly going on here. And there's, it is quite possible there's a number
Starting point is 00:08:30 of sanction busting techniques that are happening. And again, we don't have evidence that DeepSeek did that, but we know China is doing that overall. Let me just explain a little there for people who might not be caught up, but you know, the latest and greatest Nvidia tech, you know, China should not be able to buy that, which is why some people think some of the claims DeepSeek have made about how wonderfully efficient
Starting point is 00:08:51 this whole thing is and about how they were able to train the models on old tech is just because they don't want to admit that they've been getting around sanctions, which seems plausible on the surface of it. But again, I feel like the problem with this whole discussion is everybody's sort of coming up with theories and we don't really have that many facts on the ground. But I mean, what are your feelings there, Chris, when it comes to how effective these Nvidia sanctions are in terms of restricting GPU compute cycles to China? We know they're not effective because the prior administration, the Biden administration,
Starting point is 00:09:28 had to further ratchet them down over the course of the last couple, three years. First it was you can't sell to third parties, well, third countries, and then it was you can't rent to them. And then at the same time, there are these popping up black or gray markets in places like Singapore and elsewhere that we know have been accessed by actors in China, in Russia, and elsewhere. So it's not to say that they haven't been fully effective. I think there has been friction.
Starting point is 00:10:02 It'll be interesting to see how the current administration decides on how they want to implement certain sanctions on certain sectors. I mean, it does seem as if the current administration wants to bring back the entire chip industry to the US, which will take tens of billions, if not more, probably 10 years to vertically integrate that market. I mean, I would like a unicorn that urinates beer and craps out gold nuggets as well, but that doesn't necessarily mean it's gonna happen.
Starting point is 00:10:31 It's a negotiation tactic perhaps, but then you just have other factors, you know, like ASML and their role in how they can be constrained. So I think that's one aspect, but something that Alex has been on for a couple of years now that we have seen out there in the market, it's not just about the fact that they may have gotten the API to spit out whatever they needed for the distillation, but it's also
Starting point is 00:10:59 we know that certain actors in China, whether it's state security services, academics, research and contractors have been actively targeting employees of US labs. I mean, we've found dossiers of employees of US AI companies and labs that have been used for targeting for poaching purposes. So, you know, I think that's what happens is we tend to fall back always into this trap, right, of like, oh, the cybers and this stuff was hacked and it was pulled out and sent back to China. You know, there are three, four, five other different ways that they can get the technological
Starting point is 00:11:43 edge that they need, then roll it into a product that takes the world by surprise. Well, there's also a tantalizing other possibility here, which is that China is full to the brim with very intelligent, hardworking people who are extremely well educated, and it's entirely possible that they got there on their own. I mean, obviously, they they're gonna do what they can to extract as much advantage as they can from tech that came before them.
Starting point is 00:12:11 But you know, competitors in the United States would be doing that as well. Competitors all over the world are gonna be doing that. This is actually a very, very interesting observation as we think about perhaps this age of austerity that we may be entering into the US Government where we're kind of pulling back on government spending which could see grants going out to colleges and universities as well as federal funding going into the national lab system
Starting point is 00:12:39 That's driving a lot of technological advantage right now. And so if we're gonna pull back a little bit, the Chinese that are dumping massive amounts of treasure capital in effort behind their own indigenous workforce and STEM community, I do wonder, I do worry that maybe they'll be able to press the pedal down a little bit more while we seem to be pulling back a little bit. Now that's not to say that the private sector, which is part of this entire strategy
Starting point is 00:13:16 of the new administration is move people into higher productivity jobs in the private sector. activity jobs in the private sector, are we going to see the big tech companies be provided certain advantages on a tax or regulatory basis that will allow them to invest, that will allow them to continue driving? Just saw Microsoft has invented a fourth state of matter, I guess, with their announcement on quantum computing. But look, I mean, that's, I think,
Starting point is 00:13:44 where we're pushing the chips on the table towards the private sector companies. So one thing that you touched on there that I find, yeah, really, really interesting is this whole idea of DeepSeek as a threat because it's Chinese. It's been really interesting to watch this as a non-American because the
Starting point is 00:14:07 reaction to DeepSeek has been borderline hysterical in the United States. And it seemed like the reason this was getting so much attention is it kind of punctured the American hubris bubble, which is we are the leaders, no one else is ever going to come close to us. You know, the Europeans are over regulating, the Chinese can't develop indigenous tech, they just have to steal it from us, they've got nothing. And then along comes this thing and it's a bit of a bubble puncturing moment. Do you? So nobody who worked in tech thought that, right? Like maybe there's people in DC, but like nobody in Silicon Valley thought that China was never going to be competitive in AI. Certainly nobody who works in academia because like half of our good AI PhD students
Starting point is 00:14:58 are from China. But you would agree that DeepSeek, you know, there were some advancements there that perhaps people weren't expecting. I mean, this wasn't just a case of a model coming out that was kind of at parity and it's not at parity in every dimension but there were some breakthroughs there that I think were genuinely surprising including to the people in technology in the US. Yeah, I mean I think there were legitimate breakthroughs in efficiency. They did demonstrate some breakthroughs in using H800.
Starting point is 00:15:26 I mean, they did demonstrate, it's not necessarily true that they actually cheated on the, they might've only trained on H800s. They showed that they're doing low-level programming to get more efficiency out of chips that are sanction compliant. And it demonstrates that necessity is the mother of invention, right?
Starting point is 00:15:48 And I totally agree with Chris. Like, it demonstrates that now is not the time for us to take the pedal off of trying to invest in fundamental research, right? Like a huge amount of the work that went into the invention of LLM's, all of the academic work here was funded by the National Science Foundation, it's funded by DARPA, it's funded by like US government grants into 20, 30 years ago, things that seemed neural networks and stuff
Starting point is 00:16:15 that seemed like totally ridiculous, non applicable computer science work and applied math and such that now seems super practical. But yes, I mean, it did shock a lot of people. But like I'm just saying, nobody in academic AI or who worked in Silicon Valley thought like, oh, China will never catch up. On Europe, yes, I mean, a lot of people have looked at Europe and thought,
Starting point is 00:16:38 yo, there's one competitive AI company, Mistral, but for the most part, there's lots of smart Europeans in AI and they all work for American companies. Yeah. So let's now change the focus a little bit and talk about the security concerns, which again I think to some degree have been overblown. One concern is, oh my God, this is a Chinese model. So anybody entering a query into this thing, that information is going to be captured by China. And they're like, OK, sure.
Starting point is 00:17:09 That is an issue. We've also seen issues around the security of DeepSeek's infrastructure, terribly insecure. I mean, I actually made a joke about you, Alex, on the show. I don't know if you caught it. I heard. Yes, I appreciated that. But yeah, normally when a startup has these sort of issues,
Starting point is 00:17:23 Alex can show up and implement some sort of rapid security response program and, you know, like you did with, who was it? Was it Zoom? Zoom or SolarWinds, yes. Yeah, exactly. It turns out Chris and I are not available to go parachute in the DeepSeek. So you know, these were the issues with it, but again, this is an open source model, which people are free to run on
Starting point is 00:17:45 their own you know what I mean so you can use it without sending data to China and there's a lot of censorship stuff in there you know to make sure that the model is compliant with you know good socialist thought and whatnot that you know I'd imagine it again being open source would be fairly easy to disable so I guess the question becomes like how overblown are the you know security imagine it again being open source would be fairly easy to disable. So I guess the question becomes like how overblown are the security concerns? Because I think people were thinking about this from a sort of TikTok security concern paradigm and it doesn't seem to be the right way to think about this at all.
Starting point is 00:18:19 But I just wanted your thoughts on that. I think part of the problem here is there's really two totally different ways you can use this thing, right? So for normal consumers, if you're downloading the DeepSeq app or you're going to their website, that's just like using a, it's much worse than TikTok, right? Like if you're an American using TikTok, it's USDs, it's in America, there's a bunch of controls, there are concerns that people have, but like at least you're using like American servers that have some controls around them. If you use a DeepSeq app, that stuff's going to China, do not pass go, your data just goes straight to
Starting point is 00:18:51 China, apparently into totally insecure infrastructure as you pointed out. That has nothing to do with AI, right? It's like using Baidu or WeChat, right? I mean it should go into insecure infrastructure in America. Yeah, exactly. That's my joke, but anyway, yeah. Yes, yes, exactly. So that has nothing to do whether it's DeepSeek or not, that's just not secure.
Starting point is 00:19:12 The thing that's like very embarrassing, like you know I'm middle-aged so I use LinkedIn, right? Like that's the middle-aged social network these days. We launched our LinkedIn this week. Risky Business is now on LinkedIn, everybody. You can find us by searching for Risky business media where you too can get excellent tips on how running a podcast for 20 years What it has taught me about B2B sales, but yeah Yeah, are you crushing it?
Starting point is 00:19:37 Yeah, so like as I was you know crushing it and grinding it out on LinkedIn You know like a lot of people are really embarrassing themselves. There's like a lot of people who are just like, oh, you never have to listen to this person ever again for security advice, because people are treating open source model weights like software, right? And they are not.
Starting point is 00:19:57 So if you are a company and you're downloading the DeepSeq model weights, that is something that's somewhere between totally safe and just as dangerous as compiled software. It's actually really complicated how you treat something like that. It's a totally new thing for which we do not have well defined understanding of the security model, right?
Starting point is 00:20:21 So to go back a little bit, Meta created this entire space when they released Llama. And when they released the first version of Llama, it was both executable code and the model weights, right? So it was a bunch of Python code and the model weights. People pretty quickly threw away Meta's Llama Python code because it wasn't that fast. And they re-implemented,
Starting point is 00:20:43 and there's a bunch of open source projects, the most famous is llama.cpp that is a llama compatible implementations that are optimized for all kinds of different pieces of hardware. And now what you have is that people distribute models in a variety of different formats, but the most popular base format is the called safe tensors, which as the name describes, is supposed to be a safe serialization of the mathematical representation of an LLM. And then that can get wrapped in a variety
Starting point is 00:21:11 of different kinds of metadata. So like on Hugging Face, the most popular format is gguf. And so that's just like metadata upfront. And then effectively a massive matrix of tensors that represents this humongous mathematical structure that is a LLM, right? When you run that code,
Starting point is 00:21:30 the actual work is being done by llama.cpp, or in the case of if you're running at Microsoft or Amazon, their own customized llama compatible engine, that is doing the work. The model is really,'s like it's it's Safer than a word file or a PDF or one of these really complicated things It is you're basically the code is walking its way Through this humongous tensor space to interpret first a certain input. What is the output that this LLM gives me?
Starting point is 00:22:02 The LLM itself cannot talk to the internet. It cannot execute code. It can't do anything other than give you a sequence of text for whatever sequence of text you gave it. Now, theoretically, you could do something stupid, right? You could ask the LLM, give me some shell code, and then you can execute that code on your shell, right? You could put it into a lane chain,
Starting point is 00:22:29 like into like an agentic framework, and you could have it execute something dangerous. But if somebody wanted to backdoor an LLM to do something dangerous, they would have to predict what kind of dangerous thing you were doing and backdoor it to do that. And so there are risks, but in general, those risks are risks that you have to create for yourself.
Starting point is 00:22:52 It's not like you could just down, it's not like the OpenSSH backdoor, which is a backdoor that if it had not been detected would have been every Linux machine on the planet, you can log into, right? It's not like you can download these model weights from DeepSeek and then the model wakes up a year later. I know you understand this, but I don't think everybody,
Starting point is 00:23:11 there's a lot of people who are acting like this model is actually intelligent and like it's a Chinese spy, and a year later it wakes up and it's like, oh, I'm gonna sneak out of my network. No, no, no, all I can do is generate text. Now, in the future, though, there is going to be risk because people are going to want functionality like the OpenAI deep research where you can ask OpenAI, hey, go write a report for me that
Starting point is 00:23:34 does a bunch of stuff. And it has to go out to the internet and do all these things. And so people are now building agentic frameworks where there's a standard mechanism for the model in its response to say, I want to talk to the web. I want to do this. I want to do that. And so that will be something that you can insert back doors into. But as of today, that's not a thing. I'd imagine too, like a lot of these instrumentation frameworks, because that's essentially what we're talking about. I mean, you'll be able to swap the models around, right?
Starting point is 00:24:01 I understand what you're saying though, because even if the model developer doesn't develop that framework as well, they could still do something dodgy with the instrumentation. But look, broadly speaking, I'm exactly on the same page with you, which is that, okay, maybe using a model hosted in China and dropping a bunch of sensitive information into it, not a really good idea, not great from a security perspective, but it doesn't mean that we can't capture some of the value of these models by customizing the open source versions that have been released.
Starting point is 00:24:32 And I guess this, my take on this as a non-expert, and I definitely want to check it with you, is that I think this has shown us that perhaps models themselves, for a long time, you know, since this all kicked off with the release of chat GPT, the first, you know, big version that made everyone lose their minds. The big thing with it is everybody thought, oh, well, that's where the value is going to be created.
Starting point is 00:24:57 These companies that are that are generating these models and, and whatever. And, you know, open AI has a absolutely gargantuan valuation at this point, and you know, there's so much value in the sector, but it sort of seems like maybe that's not where the money's going to come from, and the people who really benefit from this are the people who are going to be making the products that make the best use of these models, and Nvidia, who provide the hardware to power them. Is that a ridiculous take? Because as I said, this is not something I have been following as closely as you. No, I think that's right. I mean, I think one of the things DeepSeek demonstrated here is whether it's China or
Starting point is 00:25:32 not, the base model makers, the open AIs, the anthropics, the folks like that, the people who are making general purpose LLM foundational models do not have moats, right? That you could be like, whoa, we're the winners, we're on the victory, and then any day, somebody can elbow you in the face and they'll be on top, right? And so the two, the winners are, like you said, Nvidia. The other winners is the middleware guys,
Starting point is 00:25:57 Microsoft, Amazon, those guys immediately were like, oh, DeepSeek, great, and they offered DeepSeek, right? Yeah. And because- I I mean for them It's just another form of compute right it's like ec2 or whatever It's just like hosted LLM ticker box pick your model and they get a margin You know it's like they get a margin on on offering that to you whether it's storage better for them because they don't have to Pay a license for it because it's like MIT license so unlike llama unlike open AI
Starting point is 00:26:22 You know cuz llama is open source, but Meta's license is if you use it for commercial purpose, you have to kick the money, right? So to DeepSeek's credit, like their license is, oh, even if you use it for commercial purposes, you don't have to OS anything, which is a fascinating kind of escalation versus Meta's license, which is like,
Starting point is 00:26:42 Meta's license is kind of like, it's like Fiorador's Nmap license, whereas like, it's even more which is like, Meta's license is kind of like, it's like Fiorador's and MAP license, whereas like, it's even more aggressive than that, right? So it's really good for like an Amazon or Microsoft, because in the end, they now get to, you know, they get much more margin on this. And then, yeah, it's good for folks like us, because like we use LLMs to sell a product to folks,
Starting point is 00:27:02 and like, if we have to pay less, if it makes just the competition, I mean we're not using DeepSeek, right? But just the competition, if it makes our LLM providers lower their costs, then that's great for us. And like you said, it didn't really hurt Nvidia. Make their stock go down, their stock go up,
Starting point is 00:27:16 means they're gonna get sued, because like if your stock goes down, you get sued. But like in the end, they're shelling shovels and the gold rush is still going. It's funny though, I will just say too earlier when you were talking about Nvidia, and I remember like, you know, a year, a year and a half ago, people saying, wow, you know, like the growth is tapped out, they would have to hit incredible numbers for this to continue. And they kept hitting them, right? So never count them out. They seem to be, they seem to be just,
Starting point is 00:27:43 it just keeps going. Will it be like Cisco during the dot com boom and eventually collapse? Who knows, but betting against Nvidia seems to be as risky as betting for it. It's unfair to them because eventually they have to like, they could be spectacularly ridiculously profitable. They can't grow forever, right?
Starting point is 00:28:02 It's like an exponential growth. It's like the bacteria taking over the planet kind of problem. At some point, you know, like people have to have enough GPUs. Um, I mean, it's also at the point now you hear from, from open AI, you hear from other folks that it is the constraints on their capacity as well as the fact that they make a decent amount of margin hasn't made a lot of people invest in creating their own hardware, right? Yeah.
Starting point is 00:28:27 So. Now, look, I wanna talk to you now, Chris, about more of a city. Well, I do wanna say, Nvidia's gonna be fine, right? I mean, they were already back up to 140 today. What did they hit, about 116 after DeepSeek? And I think, as Alex mentioned, the massive market cap hit they took,
Starting point is 00:28:44 but the inference, using GPUs for inference is always going to be a requirement. It's really that amplification at the edge. That's something, Alex, when we were modeling the risk posed to the AI value chain, the real value is in the amplification at the last mile and the customer interface. I mean, that's kind of what I was saying, right? Which is what they might lose in the training they're going to make up at the edge, right? We haven't really even scratched the surface on that entire market.
Starting point is 00:29:15 I mean, we're still in the very early days of use case development and real true integration into the enterprise. Should say Chris, this is not financial advice to anyone listening to this. This is just a bunch of- I didn't say it was. I know. I'm just saying very clearly that it isn't. Yes, that is right.
Starting point is 00:29:35 That is right. Right, so the three stocks you should buy right now. Don't stop. I don't need trouble with the regulators. But I wanted to talk to you Chris more about the geostrategic implications of this, because this is something that you've spent a lot of time thinking about. You've indeed just returned from the Munich Security Conference, where a lot of people were talking about all of this AI stuff.
Starting point is 00:29:58 What was the vibe on the ground at Munich? What were people talking about? Where did they sort of zero in? Because you always notice when you go to an event like that, when there is a discussion of a big issue, it tends to pretty rapidly focus onto a few key things. What were they? Well, so just kind of first things first,
Starting point is 00:30:14 Munich Security Conference tends to be the, if not, the number one, the number two or three top national security conferences every year. Alex is a long time participant. I've been several years. Our old mutual friend Demetri is a bit of a fixture. Host a number of different events the last couple years.
Starting point is 00:30:39 And it's such an interesting event because it's in a really small venue. It's in the Hotel Beresherhof in Munich. It is a very, you know, classic, old, elegant hotel, but small. And so you get members of Congress and I'm talking senators, a very high statue, a statue rather, without staff, they are not granted plus ones. And so they're just roaming the halls.
Starting point is 00:31:06 And it creates some very interesting interactions. I remember a couple years ago, kind of walking down the hall and Sergey Lavrov, the foreign minister of Russia, walking right, right past me. So there's definitely some surreal moments. And there's always a kind of a theme that's official, but then there's also a theme that's unofficial. And obviously this year's unofficial theme
Starting point is 00:31:32 was kind of the new world order with the Trump administration that seems to be taking a hard look at the transatlantic relationship, NATO, what happens next with Ukraine. Obviously you see plenty of headlines in X posts and whatever about all that. The thing that really stepped out,
Starting point is 00:31:55 or at least kind of I picked up on and was paying attention to the most, was the difference in the transatlantic conversation around AI and regulation. And this has really been an issue for years on tech in general, and that has spurred any number of lawsuits. What are we on now? Shrems 3, Alex, I've lost track.
Starting point is 00:32:18 We've got the Cloud Act. We've got the US-UK agreement. Microsoft had a lawsuit that went all the way up to the Supreme Court on DOJ access to an Irish data center. And so again, these issues have all long been simmering, but I think it really came to a head, particularly with the vice president's comments about technology, about censorship and regulation. So what I am seeing is that there is a significant cultural divide between
Starting point is 00:32:48 the European side of the pond and the American side where clearly the American take and has been for years and years is let the technology blossom and let's figure out what the harms are and then we can make those interventions at that point once we fully appreciate and understand the harms. And I would even say that I think, particularly with the kind of effective, the effective accelerationism, excuse me, that we're even kind of cutting back
Starting point is 00:33:17 on intervening on the harms. Where the flip side is, the European model is regulate first, ask questions later. And we've seen that with the Digital Services Act, the AI Act, the European model is regulate first, ask questions later. And we've seen that with the Digital Services Act, the AI Act, the Cyber Resilience Act. And as a result, and Patrick, I will cabin this up to technology for now because you have I think a broader viewpoint on manufacturing Europe in general, but that regulatory approach in Europe has really hindered and limited the ability of European tech companies to make a dent in kind of the American and then, you know, parenthetically
Starting point is 00:33:55 Israeli domination of the tech space. Well, the Israelis in the cyberspace. So that was absolutely super evident. I think finally it really resonated with members of European Parliament and government officials in various European countries that AI is the latest battleground of this struggle but also the one that is going to probably come to a head with the US government. And I think that's in terms of policing speech, tech censorship, and just AI in general. So one thing I find interesting about this, right, is as you rightly point out, the Europeans have regulated the absolute crap out of AI. But as we've just sort of determined in this conversation, probably the models are going
Starting point is 00:34:50 commodity. So have they pulled, I don't know if you're familiar with the Australian Winter Olympian Stephen Bradbury, but he was the guy who won a gold medal because literally he was last place and everyone else fell over and he wound up getting the gold. And I sort of wonder if the Europeans are going to grab like the latest open source model, make sure that it's compliant with their regulation and then off they go. So I'm just wondering if this is as much of a self-own as people in your country think it is.
Starting point is 00:35:16 So there's an interesting thing about this and Alex and I have talked about this for a bit now, but you know, particularly with the right to be forgotten in Europe with the models as they exist now How does one effectively? Pursue that private right of action where you have yourself can you you cannot extract it from the model itself? So then you have to put some kind of filter or agent on top That is constantly on the lookout for you and everybody else that puts them themselves on that do not fly list And in the funny thing is we've talked about this at least theoretically is constantly on the lookout for you and everybody else that puts themselves on that do not fly list. And the funny thing is we've talked about this at least theoretically. I think we've seen it.
Starting point is 00:35:58 We've seen it with the browser-based and app-based version of DeepSeek where the model, if you believe the stories that it was trained is distilled down from OpenAI and Lama and other things. So then it was trained on the body of knowledge on the, not just the Western internet, but a broader internet. So it has things that might be politically untenable for the CCP. And again, due to the chain of reasoning that Alex mentioned, you can ask it questions and it starts spitting out the answer that's based on the broader body of knowledge, but once it realizes, like, whoa, whoa, whoa, I can't talk about this thing, and it starts working back up the reasoning and delete.
Starting point is 00:36:34 It's fantastic. The videos are amazing where you see it answering and then it just disappears, right, from the screen. It's incredible. Right, there's a big difference between the online model and what you can download, right? The online, it's obvious, and this is actually how a lot of safety alignment works for online models,
Starting point is 00:36:51 is you have the base model, and then you have a different model that's watching for safety, right? But their definition of safety in China includes safety for the Chinese Communist Party. And so, if it starts going off, there is effectively a political officer with a gun to its head, and if it starts going off there is effectively a you know political Officer with a gun to its head and if it starts going off script it shoots the model right? Yeah, but if you have the model weights locally then it is only barely censored right like they did barely the minimal
Starting point is 00:37:19 amount to ship the the deep-seek in fact if if you told me me that people at DeepSeek were in trouble with the Chinese government, I would not be shocked, because the amount of work you have to do to get the DeepSeek model weights to talk to you about Tiananmen Square or to say that Taiwan should be free is not a lot, right? Yeah. Which also maybe also points to the idea
Starting point is 00:37:39 that DeepSeek has a lot of knowledge that has been distilled from either Lama or OpenAI, because there's a lot of Western that has been distilled from either llama or open AI, because there's a lot of Western thought in this model. Right? Yeah. It does give us that real world example of how you would deal with possibly one solution at least for the right to be forgotten problem set.
Starting point is 00:38:00 Well, and more broadly, you know, some of this compliance stuff, some of this regulation that the Europeans have brought in, I mean, perhaps the Chinese have shown them a way that they could do this, which is funny. Funny. So when you talk about European AI regulation, the truth is, is I don't think it's the current AI regulations that make Europe not competitive in AI. It is the net sum of everything Europe has done up to this point that makes them uncompetitive
Starting point is 00:38:27 in tech, right? It's the high-end regulations. It is what they've done to drive away smaller companies. It is what they've done to drive away smaller investments, right? That stuff just makes it that you don't want to start a company there already. The right to be forgotten issues, the GDPR issues. And the AI regulations are just another layer on top. The other problem here for the AI regulations in Europe is they're thoughtful in some ways
Starting point is 00:38:55 and that the one thing I wrote, I wrote an op-ed against California's AI regulations, which end up being vetoed by Governor Newsom, which I'm really glad because the California AI regulations were all about the foundational models. And one of the good things about the European AI regulations is they're dependent upon the application, right? So what Europe wasn't doing is they were not actually trying to regulate the foundational models. What they were saying was if you're using AI
Starting point is 00:39:20 in this circumstance, you have a bunch of obligations. The problem was is that the boundary for that of what you would have to do was of the situations you'd have to apply was very low and what you have to do was very high. And so the result is if you're ever going to apply AI to any purpose, you're not going to do it in Europe until you're huge. And so as a result, every use of AI to solve a human problem will happen outside of Europe first. You'll be a huge company until you'll try it with Europeans. And that is what the Europeans have bought themselves, is that they've basically said,
Starting point is 00:39:54 we rather it be perfect before it gets tried here. And that is their decision that they can make. But the cost of that will be that nobody will start an AI company in Europe. That is the flip side. All right well we're going to wrap it up there guys. Alex Stamos, Chris Krebs, thank you so much for joining me for this discussion. It's always great to see both of you and we're going to be doing one of these every month actually this year which I'm stoked about because you know listeners love this podcast and I also really enjoy doing it, so that's great news. Yeah, a pleasure to see you both and we'll chat again next month. I want that to be the most conservative shirt you wear this year, Patrick.
Starting point is 00:40:31 I want every month the shirt to get louder. I'll see what I can do. It's a real beaut you got there, Pat, and I am super excited that we might be able to get to do this in person again. Yes, at RSA in California coming up in late April. Yeah, looking forward to it. Stay tuned, as they say. ["The Daily Show Theme Song"]
