Risky Business - Wide World of Cyber: DeepSeek lobs an AI hand grenade
Episode Date: February 21, 2025. In this episode of the Wide World of Cyber podcast, Risky Business host Patrick Gray chats with SentinelOne’s Chris Krebs and Alex Stamos about AI, DeepSeek, and regulation. From its bad transport security to its Chinese ownership and the economic implications of China “entering the chat”, everyone’s freaking out over this new model. But should they be? Pat, Alex and Chris dissect the model’s significance, the politics of it all and how AI regulation in Europe, the US and China will shape the future of LLMs. This episode is also available on YouTube. Show notes
Transcript
Hey everyone and welcome to another edition of the Wide World of Cyber, the podcast we
do here at RiskyBiz which is sponsored by and produced in conjunction with Sentinel
One and joining me now is Alex Stamos who is, you are the CISO these days aren't you for Sentinel One?
I am the CISO and also the CIO, so that's what I'm like.
Whatever happened to never CISO my friend?
Whatever happened to never CISO?
I tried to avoid responsibility and I failed.
That's what happened Patrick.
Alex is the CISO for Sentinel One and the CIO apparently which I just learned and prior to
that he has worked as the CISO for Facebook, for Yahoo.
He's done all sorts of stuff, founded iSEC Partners back in the day.
Joining us also is Chris Krebs, who is the Policy and Intelligence guy over at Sentinel
One, also was the first director of CISA.
Welcome to you, Chris.
Thanks, Pat.
All right.
So today's topic, we're really going to be talking about AI, which is a topic that we've
covered on this podcast before, but we're going to go a little bit more specific, right?
And we're going to talk about DeepSeek.
And this is something that we've talked about little elements of it on the main weekly show
on Risky Business, but we've never gone deep on it, mostly because neither Adam
nor I are really what you would call AI experts, right?
Whereas Alex, I know you follow this stuff closely.
Ever since LLM sort of became the hot new thing, you've been all over it, trying to
understand the tech and the implications of it and developments in it.
So why don't we just run through, in basic terms, the DeepSeek
situation? Because as I understand it, essentially what happened is a Chinese company published an
open source LLM that was a lot better than anyone was expecting it to be. They also made claims with
regard to the training cost. They said that it cost them very little to develop this thing,
which has
provoked some skepticism from some quarters, it must be said. But nonetheless, it's a very
impressive bit of technology. It's very efficient. You know, when you're actually just running
the model, it is extremely efficient. And, you know, this has almost led to a bit of
an emperor-has-no-clothes moment in the broader AI industry. I mean, is that about the state
of it? How did I go summing it up there?
Yeah, I think it's a pretty good summary.
DeepSeek's not new.
They've been around for a little while.
They're a lab that's part of a Chinese hedge fund.
They've published a number of papers over the past
and they published a new paper with this.
And they both made claims that can't be verified
and claims that can be verified,
because you can do it yourself.
And in their paper, they had some new breakthroughs
that can be verified in both training
and inference efficiency that they released to their credit.
Everybody can take advantage of those things.
And they made some claims around the efficiency
of the training of the model that weren't totally verified.
Now, to be fair, people over-applied some of the claims
DeepSeek made about one model to a different model.
They expanded some of DeepSeek's claims
to cover everything,
claims DeepSeek didn't actually make.
So a lot of those claims actually came from a LinkedIn post
that a guy made
that ended up, that's what tanked Nvidia stock. Nvidia ended up losing more market cap in
one day than any other company has ever lost.
Wasn't it like, you know, like nearly a trillion dollars of market cap just got vaped?
Yeah, you know, it was hundreds of billions. I don't think they hit quite a trillion, but
it was like, it was hundreds of billions of dollars. And there's a lot of-
So it's like ballpark, okay, cool, cool.
Totally normal, yeah.
Yeah, it's a ballpark, but totally normal.
And there's a lot of criticism.
I mean, I think there's a couple of things here.
One, basically, Nvidia's got this incredible market cap
that is based upon the assumption
that some ridiculous percentage of the output of world GDP is going to go to Nvidia
GPUs forever, which I don't think is really sustainable.
You have all these companies that are pouring investment
into AI, which ends up going to Nvidia
without sustainable business models.
And so at some point that's gonna have to turn around.
The other problem is that you've got all these
Wall Street traders who don't really know crap about AI, right. And
so the paper was weeks old, and then the dude interpreted it on
LinkedIn on the weekend. And then by the time the market
opened, people lost their mind. And so I think what it
demonstrated is that it's really easy to manipulate Wall Street
when it comes to AI. I should say too, the grand irony in all of
this is that even if this were a model that was incredibly efficient to train,
the impact to Nvidia wouldn't be all that great, in fact, because, you know, an
excellent commodity model would actually mean people would need more chips on
their devices to run the model. So perhaps fewer chips in the data centers
to train the models, but more chips going out
into consumer devices to actually run the model,
thus this being a net positive for Nvidia,
and the stock should have gone up.
But anyway, we're not a financial podcast.
And this is the argument a bunch of people made,
the Jevons paradox of like, yeah, if it's more efficient
then people are still going to utilize the Nvidia.
Not like anybody called up NVIDIA and like,
oh, cancel all my orders, right?
But I think one of the things you're gonna see from this
is that somebody else is going to try to do
the same kind of thing, release a paper
that's either true or not true and then trade on it, right?
So I wouldn't be shocked
if that's one of the outcomes of this.
So anyway, DeepSeek releases the paper,
the model's efficient and they did have some breakthroughs.
There's some questions as to the actual efficiency
of the training because, one, it does look like
they distilled, via different techniques,
both Llama and OpenAI, right?
So Llama, Meta's Llama, is open source,
so it's trivial to distill it.
OpenAI is closed source,
but Microsoft has an entire business model
where you can go to Azure,
you can throw it on a credit card, you can rent a private copy of OpenAI, and
then you can ask it millions or billions of questions and then just pay Microsoft per
hour.
And you could do that, if you do that in a structured way, you can distill out the thought
process and you can use that to train your own models.
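To make that concrete, here's a minimal sketch of the pattern being described, assuming the openai Python SDK for Azure; the endpoint, key, deployment name and output file are all hypothetical, and a real distillation effort would run millions of prompts rather than two.

```python
# Hedged sketch of "distillation via a rented API": collect prompt/response
# pairs from a hosted model and save them as training data for a smaller model.
# Endpoint, key and deployment name below are placeholders, not real values.
import json
from openai import AzureOpenAI  # assumes the openai>=1.x SDK

client = AzureOpenAI(
    azure_endpoint="https://example-resource.openai.azure.com",  # hypothetical
    api_key="YOUR_KEY",
    api_version="2024-02-01",
)

prompts = [
    "Explain step by step how a TLS handshake works.",
    "Summarise the CAP theorem and give an example.",
]

with open("distillation_pairs.jsonl", "w") as out:
    for prompt in prompts:
        resp = client.chat.completions.create(
            model="gpt-4o",  # hypothetical deployment name in Azure
            messages=[{"role": "user", "content": prompt}],
        )
        # Each prompt/answer pair becomes one supervised training example
        # for the "student" model being distilled.
        answer = resp.choices[0].message.content
        out.write(json.dumps({"prompt": prompt, "completion": answer}) + "\n")
```

Done at scale and in a structured way, that loop is the terms-of-service problem described here, not a hacking problem.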
I think we should, instead of calling it a thought process, when it comes to LLMs and
AI, we should describe it as the model's soul.
I think that works a little bit better.
They extracted its soul by asking it a bunch of questions.
But this is interesting as well because there's been a lot of concern about Chinese APT actors trying to go out and steal some of this very valuable intellectual property from the large
model companies in the United States.
But when you can effectively enumerate it, when you spin up an account with a credit
card, you don't really need to be popping shells to steal stuff at that point.
You just ask it the right questions.
That's still going to be a terms of service violation, but it's not what we think of when we think of like APT
activity, right?
And that is one of the interesting things
about the DeepSeek R1 model is that it is a reasoning model.
And it is an open source model that allows you
to see its chain of thought, which is one of the things
that people thought is actually pretty cool,
is when you ask it a question, it
shows you how it's trying to get there, which is not
available in a number of other reasoning models, and is also something that OpenAI
debuted not very long ago.
So the time between OpenAI's announcement of this knowledge and DeepSeek copying it
publicly was quite short.
And so look, nobody has any evidence of DeepSeek playing unfair here,
but I do want to point out a couple things
that I think Chris has some thoughts here.
One, the idea that DeepSeek only has access
to the H800s and the other kind of older GPUs
that are only allowed to be shipped into the PRC
is ridiculous.
There are companies whose entire job it is
to operate in places like the UAE and Singapore
where they can buy whatever they want in those countries
and then they rent it out to Chinese companies.
The GPUs don't have to be in China
for Chinese companies to rent them elsewhere, right?
And there's no reason, the UAE does not have
huge AI labs, right?
Like the people who are paying for that
are Chinese companies.
And so that's one of the things that's possibly going on here.
And there's, it is quite possible there's a number
of sanction busting techniques that are happening.
And again, we don't have evidence that DeepSeek did that,
but we know China is doing that overall.
Let me just explain a little there
for people who might not be caught up,
but you know, the latest and greatest Nvidia tech,
you know, China should not be able to buy that, which is why
some people think some of the claims DeepSeek have made about how wonderfully efficient
this whole thing is and about how they were able to train the models on old tech is just
because they don't want to admit that they've been getting around sanctions, which seems
plausible on the surface of it.
But again, I feel like the problem
with this whole discussion is everybody's sort of coming up with theories and we don't really have
that many facts on the ground. But I mean, what are your feelings there, Chris, when it comes to
how effective these Nvidia sanctions are in terms of restricting GPU compute cycles to China?
We know they're not effective because the prior administration, the Biden administration,
had to further ratchet them down over the course of the last couple, three years.
First it was you can't sell to third parties, well, third countries, and then it was you
can't rent to them.
And then at the same time, there are these popping up black or gray markets in places
like Singapore and elsewhere that we know have been accessed by actors in China, in
Russia, and elsewhere.
So it's not to say that they haven't been fully effective.
I think there has been friction.
It'll be interesting to see how the current administration decides on how they want to
implement certain sanctions on certain sectors.
I mean, it does seem as if the current administration wants to bring back the entire chip industry
to the US, which will take tens of billions, if not more, probably 10 years to vertically
integrate that market.
I mean, I would like a unicorn that
urinates beer and craps out gold nuggets as well,
but that doesn't necessarily mean it's gonna happen.
It's a negotiation tactic perhaps,
but then you just have other factors,
you know, like ASML and their role
in how they can be constrained.
So I think that's one aspect,
but something that Alex has been on for a couple of years
now that we have seen out there in the market, it's not just about the fact that they may
have gotten the API to spit out whatever they needed for the distillation, but it's also
we know that certain actors in China, whether it's state security services, academics, researchers
and contractors have been actively targeting employees of US labs.
I mean, we've found dossiers of employees of US AI companies and labs that have been
used for targeting for poaching purposes.
So, you know, I think that's what happens is we tend to fall back always into this trap,
right, of like, oh, the cybers and this stuff was hacked and it was pulled out and sent
back to China.
You know, there are three, four, five other different ways that they can get the technological
edge that they need,
then roll it into a product that takes the world by surprise.
Well, there's also a tantalizing other possibility here, which is that China is full to the brim
with very intelligent, hardworking people who are extremely well educated, and it's
entirely possible that they got there on their own.
I mean, obviously, they they're gonna do what they can
to extract as much advantage as they can
from tech that came before them.
But you know, competitors in the United States
would be doing that as well.
Competitors all over the world are gonna be doing that.
This is actually a very, very interesting observation
as we think about perhaps this age of austerity
that we may be entering in the US
government, where we're kind of pulling back on government spending, which could affect grants going out to
colleges and universities as well as federal funding going into the national lab system
that's driving a lot of technological advantage right now. And so if we're gonna pull back a little bit,
the Chinese, who are dumping
massive amounts of treasure, capital,
and effort behind their own indigenous workforce
and STEM community,
I do wonder, I do worry that maybe they'll be able to press the pedal down a little bit more
while we seem to be pulling back a little bit.
Now that's not to say that the private sector, which is part of this entire strategy
of the new administration, to move people into higher-productivity jobs in the private sector,
are we going to see the big tech companies
be provided certain advantages on a tax or regulatory basis that will allow them to invest,
that will allow them to continue driving?
Just saw Microsoft has invented a fourth state of matter,
I guess, with their announcement on quantum computing.
But look, I mean, that's, I think,
where we're pushing the chips on the table
towards the private sector companies.
So one thing that you touched on there
that I find, yeah, really, really interesting
is this whole idea of DeepSeek as a threat
because it's Chinese.
It's been really interesting to watch this
as a non-American because the
reaction to DeepSeek has been borderline hysterical in the United States. And it seemed like the reason
this was getting so much attention is it kind of punctured the American hubris bubble, which is we are the leaders, no one else is
ever going to come close to us. You know, the Europeans are over regulating, the Chinese can't
develop indigenous tech, they just have to steal it from us, they've got nothing. And then along
comes this thing and it's a bit of a bubble puncturing moment. Do you? So nobody who worked
in tech thought that, right? Like maybe there's people in DC, but
like nobody in Silicon Valley thought that China was never going to be competitive in
AI. Certainly nobody who works in academia because like half of our good AI PhD students
are from China.
But you would agree that DeepSeek, you know, there were some advancements there that perhaps
people weren't expecting. I mean, this wasn't just a case of a model coming out that
was kind of at parity and it's not at parity in every dimension but there were
some breakthroughs there that I think were genuinely surprising including to
the people in technology in the US. Yeah, I mean I think there were
legitimate breakthroughs in efficiency. They did demonstrate some breakthroughs
in using H800s.
I mean, they did demonstrate,
it's not necessarily true that they actually cheated on the,
they might've only trained on H800s.
They showed that they're doing low-level programming
to get more efficiency out of chips
that are sanction compliant.
And it demonstrates that necessity
is the mother of invention, right?
And I totally agree with Chris.
Like, it demonstrates that now is not the time
for us to take the pedal off of trying to invest
in fundamental research, right?
Like a huge amount of the work that went into
the invention of LLMs, all of the academic work here, was
funded by the National Science Foundation, it's funded by DARPA, it's funded by, like,
US government grants going back 20, 30 years ago, neural networks and stuff
that seemed like totally ridiculous, non-applicable computer science work and applied math and
such that now seems super practical.
But yes, I mean, it did shock a lot of people.
But like I'm just saying, nobody in academic AI
or who worked in Silicon Valley thought like,
oh, China will never catch up.
On Europe, yes, I mean, a lot of people
have looked at Europe and thought,
yo, there's one competitive AI company, Mistral,
but for the most part, there's lots of smart Europeans in AI and they all work for American companies.
Yeah.
So let's now change the focus a little bit and talk about the security concerns, which
again I think to some degree have been overblown.
One concern is, oh my God, this is a Chinese model.
So anybody entering a query into this thing, that information is going to be captured by China.
And they're like, OK, sure.
That is an issue.
We've also seen issues around the security
of DeepSeek's infrastructure, terribly insecure.
I mean, I actually made a joke about you, Alex, on the show.
I don't know if you caught it.
I heard.
Yes, I appreciated that.
But yeah, normally when a startup has these sort of issues,
Alex can show up and implement some sort of rapid security response program and, you know, like you did
with, who was it?
Was it Zoom?
Zoom or SolarWinds, yes.
Yeah, exactly.
It turns out Chris and I are not available to go parachute in to DeepSeek.
So you know, these were the issues with it, but again, this is an open source model, which
people are free to run on
their own, you know what I mean? So you can use it without sending data to China.
And there's a lot of censorship stuff in there, you know, to make sure that the
model is compliant with, you know, good socialist thought and whatnot, which, again,
being open source, I'd imagine would be fairly easy to disable.
So I guess the question becomes like how overblown are the security concerns?
Because I think people were thinking about this from a sort of TikTok security concern
paradigm and it doesn't seem to be the right way to think about this at all.
But I just wanted your thoughts on that.
I think part of the problem here is there's really two totally different ways you can
use this thing, right?
So for normal consumers, if you're downloading the DeepSeek app or you're going to their website,
that's just like using a, it's much worse than TikTok, right?
Like if you're an American using TikTok, it's in US data centers, it's in America, there's a bunch of
controls, there are concerns that people have, but like at least you're using like American servers that have some controls around them. If you use the
DeepSeek app, that stuff's going to China, do not pass go, your data just goes straight to
China, apparently into totally insecure infrastructure as you pointed out. That
has nothing to do with AI, right? It's like using Baidu or WeChat, right?
I mean it should go into insecure infrastructure in America.
Yeah, exactly.
That's my joke, but anyway, yeah.
Yes, yes, exactly.
So that has nothing to do whether it's DeepSeek or not,
that's just not secure.
The thing that's like very embarrassing,
like you know I'm middle-aged so I use LinkedIn, right?
Like that's the middle-aged social network these days.
We launched our LinkedIn this week.
Risky Business is now on LinkedIn, everybody.
You can find us by searching for Risky Business Media, where you too can get excellent tips on what running a podcast for 20 years
has taught me about B2B sales, but yeah.
Yeah, are you crushing it?
Yeah, so like as I was you know crushing it and grinding it out on LinkedIn
You know like a lot of people are really embarrassing themselves.
There's like a lot of people who are just like,
oh, you never have to listen to this person ever again
for security advice,
because people are treating open source model weights
like software, right?
And they are not.
So if you are a company
and you're downloading the DeepSeek model weights,
that is something that's somewhere between totally safe
and just as dangerous as compiled software.
It's actually really complicated
how you treat something like that.
It's a totally new thing for which we do not have
well defined understanding of the security model, right?
So to go back a little bit,
Meta created this entire space when they released Llama.
And when they released the first version of Llama,
it was both executable code and the model weights, right?
So it was a bunch of Python code and the model weights.
People pretty quickly threw away Meta's Llama Python code
because it wasn't that fast.
And they re-implemented,
and there's a bunch of open source projects,
the most
famous is llama.cpp, which is a Llama-compatible implementation that is optimized for all
kinds of different pieces of hardware.
And now what you have is that people distribute models in a variety of different formats,
but the most popular base format is the one called safetensors, which, as the name describes, is supposed to be a safe serialization
of the mathematical representation of an LLM.
And then that can get wrapped in a variety
of different kinds of metadata.
So like on Hugging Face,
the most popular format is GGUF.
And so that's just like metadata upfront.
And then effectively a massive matrix of tensors
that represents this humongous mathematical structure
that is a LLM, right?
When you run that code,
the actual work is being done by llama.cpp,
or in the case of if you're running at Microsoft or Amazon,
their own customized llama compatible engine,
that is doing the work.
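As a rough illustration of that split between the weights and the engine, here's a minimal sketch of running a downloaded GGUF file locally, assuming the llama-cpp-python bindings are installed; the model filename is hypothetical.

```python
# Hedged sketch: the GGUF file is just weights; llama.cpp (here via the
# llama-cpp-python bindings) is the engine that does all of the execution.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/deepseek-r1-distill-7b.Q4_K_M.gguf",  # hypothetical local file
    n_ctx=4096,  # context window
)

# The model can only map input text to output text; nothing in the weights
# file runs on its own or talks to the network.
out = llm.create_completion("Explain what a GGUF file contains.", max_tokens=128)
print(out["choices"][0]["text"])
```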
The model is really, it's
safer than a Word file or a PDF or one of these really complicated things.
Basically, the code is walking its way
through this humongous tensor space to interpret, for a certain input, what is the output that this LLM gives me?
The LLM itself cannot talk to the internet.
It cannot execute code.
It can't do anything other than give you a sequence of text
for whatever sequence of text you gave it.
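To underline why the weights are closer to data than to software, here's a sketch of peeking at a safetensors file, whose documented layout is a length-prefixed JSON index followed by raw tensor bytes; the shard filename is hypothetical.

```python
# Hedged sketch: a .safetensors file is a JSON header describing tensors,
# followed by the raw numbers themselves -- no executable code.
import json
import struct

path = "model-00001-of-00002.safetensors"  # hypothetical downloaded shard

with open(path, "rb") as f:
    (header_len,) = struct.unpack("<Q", f.read(8))  # 8-byte little-endian header size
    header = json.loads(f.read(header_len))         # tensor names, dtypes, shapes, offsets

for name, meta in list(header.items())[:5]:
    if name != "__metadata__":
        print(name, meta["dtype"], meta["shape"])
```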
Now, theoretically, you could do something stupid, right?
You could ask the LLM, give me some shell code,
and then you can execute that code on your shell, right?
You could put it into LangChain,
like into like an agentic framework,
and you could have it execute something dangerous.
But if somebody wanted to backdoor an LLM
to do something dangerous,
they would have to predict what kind of dangerous thing
you were doing and backdoor it to do that.
And so there are risks, but in general,
those risks are risks that you have to create for yourself.
It's not like you could just download it,
it's not like the OpenSSH backdoor,
which is a backdoor that, if it had not been detected,
would have meant every Linux machine on the planet
you could log into, right?
It's not like you can download these model weights
from DeepSeek and then the model wakes up a year later.
I know you understand this, but I don't think everybody,
there's a lot of people who are acting
like this model is actually intelligent
and like it's a Chinese spy, and a year later it wakes up
and it's like, oh, I'm gonna sneak out of my network.
No, no, no, all I can do is generate text.
Now, in the future,
though, there is going to be risk because people are going to want functionality like
the OpenAI deep research where you can ask OpenAI, hey, go write a report for me that
does a bunch of stuff. And it has to go out to the internet and do all these things. And
so people are now building agentic frameworks where there's a standard mechanism for the
model in its response to say, I want to talk to the web. I want to do this. I want to do that.
And so that will be something that you can insert back doors into.
But as of today, that's not a thing.
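A minimal sketch of that agentic pattern, with everything here hypothetical: the stand-in model call, the JSON convention and the tool names. The point is that the framework, not the model, executes the requested action, so this is the layer where a backdoored or manipulated model could become dangerous.

```python
# Hedged sketch of an agentic loop: the model's text output is parsed for a
# "tool request", and the surrounding framework decides whether to run it.
import json

def fake_llm(prompt: str) -> str:
    # Stand-in for a real model call; a real agent would query an LLM here.
    return json.dumps({"tool": "web_search", "args": {"query": "DeepSeek R1 paper"}})

TOOLS = {
    "web_search": lambda args: f"(pretend search results for {args['query']!r})",
}

def run_agent_step(prompt: str) -> str:
    response = fake_llm(prompt)
    try:
        request = json.loads(response)        # the model asks for an action...
    except json.JSONDecodeError:
        return response                        # ...or just returns plain text
    tool = TOOLS.get(request.get("tool"))
    if tool is None:
        return response
    # This is where the risk lives: the framework runs whatever the model asked for.
    return tool(request["args"])

print(run_agent_step("Write a report on DeepSeek."))
```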
I'd imagine too, like a lot of these instrumentation frameworks,
because that's essentially what we're talking about.
I mean, you'll be able to swap the models around, right?
I understand what you're saying though, because even if the model
developer doesn't develop that framework as well, they could still do something dodgy with the instrumentation.
But look, broadly speaking, I'm exactly on the same page with you, which is that, okay,
maybe using a model hosted in China and dropping a bunch of sensitive information into it,
not a really good idea, not great from a security perspective, but it doesn't mean that we can't capture
some of the value of these models
by customizing the open source versions that
have been released.
And I guess this, my take on this as a non-expert,
and I definitely want to check it with you,
is that I think this has shown us that perhaps models
themselves, for a long time, you know,
since this all kicked off with the release of ChatGPT, the first, you know, big version
that made everyone lose their minds.
The big thing with it is everybody thought, oh, well, that's where the value is going
to be created.
These companies that are generating these models and whatever.
And, you know, OpenAI has an absolutely gargantuan valuation at this point,
and you know, there's so much value in the sector, but it sort of seems like maybe that's not where
the money's going to come from, and the people who really benefit from this are the people who
are going to be making the products that make the best use of these models, and Nvidia, who provide
the hardware to power them. Is that a ridiculous take? Because as I said, this is not something I have been following as closely as you.
No, I think that's right.
I mean, I think one of the things DeepSeek demonstrated here is whether it's China or
not, the base model makers, the OpenAIs, the Anthropics, the folks like that, the people
who are making general purpose LLM foundational models do not have moats, right?
That you could be like, whoa, we're the winners,
we're on the victory, and then any day,
somebody can elbow you in the face
and they'll be on top, right?
And so the two, the winners are, like you said, Nvidia.
The other winners is the middleware guys,
Microsoft, Amazon, those guys immediately were like,
oh, DeepSeek, great, and they offered DeepSeek, right?
Yeah.
And because- I mean, for them
it's just another form of compute, right? It's like EC2 or whatever.
It's just like hosted LLM, tick a box, pick your model, and they get a margin.
You know, they get a margin on offering that to you, whether it's storage or whatever. Better for them because they don't have to
pay a license for it, because it's MIT licensed, so unlike Llama, unlike OpenAI.
You know, because Llama is open source, but Meta's license
is, if you use it for commercial purposes,
you have to kick them the money, right?
So to DeepSeek's credit, like their license is,
oh, even if you use it for commercial purposes,
you don't have to owe us anything,
which is a fascinating kind of escalation
versus Meta's license, which is like,
Meta's license is kind of like
Fyodor's Nmap license,
or even more aggressive than that, right?
So it's really good for like an Amazon or Microsoft,
because in the end, they now get to, you know,
they get much more margin on this.
And then, yeah, it's good for folks like us,
because like we use LLMs to sell a product to folks,
and like, if we have to pay less,
if it makes just the competition,
I mean we're not using DeepSeek, right?
But just the competition,
if it makes our LLM providers lower their costs,
then that's great for us.
And like you said, it didn't really hurt Nvidia.
It made their stock go down, their stock went back up,
but it means they're gonna get sued,
because, like, if your stock goes down, you get sued.
But like in the end, they're selling shovels
and the gold rush is still going.
It's funny though, I will just say too earlier when you were talking about Nvidia,
and I remember like, you know, a year, a year and a half ago, people saying, wow, you know,
like the growth is tapped out, they would have to hit incredible numbers for this to continue.
And they kept hitting them, right? So never count them out. They seem to be, they seem to be just,
it just keeps going.
Will it be like Cisco during the dot com boom
and eventually collapse?
Who knows, but betting against Nvidia seems to be
as risky as betting for it.
It's unfair to them because eventually they have to like,
they could be spectacularly ridiculously profitable.
They can't grow forever, right?
It's like an exponential growth.
It's like the bacteria taking over the planet kind of problem.
At some point, you know, like people have to have enough GPUs. Um,
I mean, it's also at the point now, you hear from OpenAI,
you hear from other folks, that it is the constraints on their capacity as well
as the fact that they make a decent amount of margin that has made a lot of
people invest in creating their own hardware, right?
Yeah.
So.
Now, look, I wanna talk to you now, Chris,
about more of a... Well, I do wanna say,
Nvidia's gonna be fine, right?
I mean, they were already back up to 140 today.
What did they hit, about 116 after DeepSeek?
And I think, as Alex mentioned,
the massive market cap hit they took,
but the inference, using GPUs
for inference is always going to be a requirement.
It's really that amplification at the edge.
That's something, Alex, when we were modeling the risk posed to the AI value chain, the
real value is in the amplification at the last mile and the customer interface.
I mean, that's kind of what I was saying, right?
Which is what they might lose in the training they're going to make up at the edge, right?
We haven't really even scratched the surface on that entire market.
I mean, we're still in the very early days of use case development and real true integration
into the enterprise.
I should say, Chris, this is not financial advice to anyone listening to this.
This is just a bunch of-
I didn't say it was.
I know.
I'm just saying very clearly that it isn't.
Yes, that is right.
That is right.
Right, so the three stocks you should buy right now.
Don't stop.
I don't need trouble with the regulators.
But I wanted to talk to you Chris more about the geostrategic implications of this, because
this is something that you've spent a lot of time thinking about.
You've indeed just returned from the Munich Security Conference, where a lot of people
were talking about all of this AI stuff.
What was the vibe on the ground at Munich?
What were people talking about?
Where did they sort of zero in?
Because you always notice when you go to an event like that,
when there is a discussion of a big issue,
it tends to pretty rapidly focus onto a few key things.
What were they?
Well, so just kind of first things first,
Munich Security Conference tends to be, if not
the number one, the number two or three
top national security conferences
every year.
Alex is a long time participant.
I've been several years.
Our old mutual friend Demetri is a bit of a fixture.
Host a number of different events the last couple years.
And it's such an interesting event because it's in a really small venue.
It's in the Hotel Bayerischer Hof in Munich.
It is a very, you know, classic, old, elegant hotel,
but small.
And so you get members of Congress, and I'm talking senators,
of very high stature, without staff,
they are not granted plus ones.
And so they're just roaming the halls.
And it creates some very interesting interactions.
I remember a couple years ago,
kind of walking down the hall and Sergey Lavrov,
the foreign minister of Russia, walking right, right past me.
So there's definitely some surreal moments.
And there's always a kind of a theme that's official,
but then there's also a theme that's unofficial.
And obviously this year's unofficial theme
was kind of the new world order
with the Trump administration
that seems to be taking a hard look
at the transatlantic relationship,
NATO, what happens next with Ukraine.
Obviously you see plenty of headlines in X posts
and whatever about all that.
The thing that really stood out,
or at least kind of I picked up on
and was paying attention to the most,
was the difference in the transatlantic conversation
around AI and regulation.
And this has really been an issue for years on tech in general, and that has spurred any
number of lawsuits.
What are we on now?
Schrems 3, Alex, I've lost track.
We've got the Cloud Act.
We've got the US-UK agreement.
Microsoft had a lawsuit that went all the way up to the Supreme Court on DOJ access
to an Irish data center.
And so again, these issues have all long been simmering, but I think it really came to a
head, particularly with the vice president's comments about technology, about censorship
and regulation.
So what I am seeing is that there is a significant cultural divide between
the European side of the pond and the American side where clearly the American take and has
been for years and years is let the technology blossom and let's figure out what the harms
are and then we can make those interventions at that point
once we fully appreciate and understand the harms.
And I would even say that I think,
particularly with the kind of effective,
the effective accelerationism, excuse me,
that we're even kind of cutting back
on intervening on the harms.
Where the flip side is, the European model
is regulate first, ask questions later.
And we've seen that with the Digital Services Act, the AI Act, the Cyber Resilience Act.
And as a result, and Patrick, I will cabin this up to technology for now because you
have I think a broader viewpoint on manufacturing in Europe in general, but that regulatory approach in Europe has really hindered and limited the ability of
European tech companies to make a dent in kind of the American and then, you know, parenthetically
Israeli domination of the tech space.
Well, the Israelis in the cyberspace.
So that was absolutely super evident. I think finally it really resonated with
members of European Parliament and government officials in various European countries that AI
is the latest battleground of this struggle but also the one that is going to probably come to a head with the US government.
And I think that's in terms of policing speech, tech censorship, and just AI in general.
So one thing I find interesting about this, right, is as you rightly point out,
the Europeans have regulated the absolute crap out of AI. But as we've just sort of determined in this conversation, probably the models are going
commodity.
So have they pulled a, I don't know if you're familiar with the Australian Winter Olympian
Steven Bradbury, but he was the guy who won a gold medal because literally he was in last
place and everyone else fell over and he wound up getting the gold.
And I sort of wonder if the Europeans are going to grab like the latest open source model, make sure that it's compliant
with their regulation and then off they go.
So I'm just wondering if this is as much of a self-own as people in your country think
it is.
So there's an interesting thing about this and Alex and I have talked about this for
a bit now, but, you know, particularly with the right to be forgotten in Europe, with the models as they exist now,
how does one effectively
pursue that private right of action when you cannot extract yourself from the model itself?
So then you have to put some kind of filter or agent on top
that is constantly on the lookout for you and everybody else that puts themselves on that do-not-fly list.
And the funny thing is we've talked about this at least theoretically. I think we've seen it.
We've seen it with the browser-based and app-based version of DeepSeek where the model, if you believe the stories, was distilled down from OpenAI and Llama and other things. So then it was trained
on the body of knowledge on the, not just the Western internet, but a broader internet. So it
has things that might be politically untenable for the CCP. And again, due to the chain of reasoning
that Alex mentioned, you can ask it questions and it starts spitting out
the answer that's based on the broader body of knowledge,
but once it realizes, like, whoa, whoa, whoa,
I can't talk about this thing,
and it starts working back up the reasoning and deletes it.
It's fantastic.
The videos are amazing where you see it answering
and then it just disappears, right, from the screen.
It's incredible.
Right, there's a big difference between the online model
and what you can download, right?
The online, it's obvious, and this is actually
how a lot of safety alignment works for online models,
is you have the base model, and then you have
a different model that's watching for safety, right?
But their definition of safety in China
includes safety for the Chinese Communist Party.
And so, if it starts going off script,
there is effectively a, you know, political
officer with a gun to its head, and it shoots the model, right?
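Here's a minimal sketch of that pattern, with stand-in functions for both the base model and the watcher; the banned-topic list is purely illustrative. The serving layer generates a draft, the second model vetoes it, and the user sees the answer disappear.

```python
# Hedged sketch of online safety alignment: a second model watches the first
# and withdraws the answer if it violates the operator's policy.
def base_model(prompt: str) -> str:
    return f"Here is a detailed answer about {prompt}."  # placeholder generation

def safety_model(text: str) -> bool:
    banned_topics = ["forbidden-topic-example"]  # whatever the operator bans
    return any(topic in text.lower() for topic in banned_topics)

def serve(prompt: str) -> str:
    draft = base_model(prompt)
    if safety_model(draft):
        # The "political officer" step: the draft is withdrawn, which is why
        # answers visibly vanish from the hosted web UI mid-stream.
        return "Sorry, that's beyond my current scope."
    return draft

print(serve("the history of open source licensing"))
```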
Yeah, but if you have the model weights locally, then it is only barely censored, right? Like, they did barely the minimal
amount to ship DeepSeek. In fact, if you told me that people at DeepSeek were in trouble with the Chinese government,
I would not be shocked,
because the amount of work you have to do
to get the DeepSeek model weights
to talk to you about Tiananmen Square
or to say that Taiwan should be free is not a lot, right?
Yeah.
Which maybe also points to the idea
that DeepSeek has a lot of knowledge
that has been distilled from either Llama or OpenAI,
because there's a lot of Western thought in this model.
Right?
Yeah.
It does give us that real world example
of how you would deal with possibly one solution at least
for the right to be forgotten problem set.
Well, and more broadly, you know,
some of this compliance stuff,
some of this regulation that the Europeans have brought in, I mean, perhaps the Chinese have shown them a way
that they could do this, which is funny.
Funny.
So when you talk about European AI regulation, the truth is, is I don't think it's the current
AI regulations that make Europe not competitive in AI.
It is the net sum of everything Europe has done up to this point that makes them uncompetitive
in tech, right?
It's the high-end regulations.
It is what they've done to drive away smaller companies.
It is what they've done to drive away smaller investments, right?
That stuff just makes it that you don't want to start a company there already.
The right to be forgotten issues, the GDPR issues.
And the AI regulations are just another layer on top.
The other problem here for the AI regulations in Europe is they're thoughtful in some ways
and, one thing, I wrote an op-ed against California's AI regulations,
which ended up being vetoed by Governor Newsom, which I'm really glad about, because the California AI regulations
were all about the foundational models.
And one of the good things about the European AI regulations
is they're dependent upon the application, right?
So what Europe wasn't doing is they were not actually trying
to regulate the foundational models.
What they were saying was if you're using AI
in this circumstance, you have a bunch of obligations.
The problem was that the boundary for the situations
where you'd have to apply it was very low, and what you have to do was very high.
And so the result is if you're ever going to apply AI to any purpose, you're not going
to do it in Europe until you're huge.
And so as a result, every use of AI to solve a human problem will happen
outside of Europe first. You'll be a huge company before you try it with Europeans.
And that is what the Europeans have bought themselves, is that they've basically said,
we'd rather it be perfect before it gets tried here. And that is their decision that they
can make. But the cost of that will be that nobody will start an AI company in Europe. That is the flip
side. All right well we're going to wrap it up there guys. Alex Stamos, Chris Krebs, thank you
so much for joining me for this discussion. It's always great to see both of you and we're going
to be doing one of these every month actually this year which I'm stoked about because you know
listeners love this podcast and I also really enjoy doing it, so that's great news.
Yeah, a pleasure to see you both and we'll chat again next month.
I want that to be the most conservative shirt you wear this year, Patrick.
I want every month the shirt to get louder.
I'll see what I can do.
It's a real beaut you got there, Pat, and I am super excited that we might be able to
get to do this in person again.
Yes, at RSA in California coming up in late April.
Yeah, looking forward to it.
Stay tuned, as they say.
["The Daily Show Theme Song"]