Risky Business - Wide World of Cyber: Krebs and Stamos on How AI Will Change Cybersecurity
Episode Date: May 17, 2024
In this podcast, SentinelOne's Chief Trust Officer Alex Stamos and its Chief Intelligence and Public Policy Officer Chris Krebs join Patrick Gray to talk all about AI. ... It's been a year and a half since ChatGPT landed and freaked everyone out. Since then, AI has really entrenched itself as the next big thing. It's popping up everywhere, and the use cases for cybersecurity are starting to come into focus. Threat actors and defenders are using this stuff already, but it's early days and as you'll hear, things are really going to change, and fast.
Transcript
Risky Biz Wide World of Cyber is produced in partnership with SentinelOne.
Welcome back to the podcast that now has a name, The Wide World of Cyber.
This is the podcast where we talk about the big issues with SentinelOne's Chief Trust Officer, Alex Stamos,
and its Chief Intelligence and Public Policy Officer Chris Krebs.
And we're recording this one in the flesh in the United States.
The topic is AI, which is why I AI generated that intro music.
We live in interesting times, as they say.
It's been a year and a half since ChatGPT landed and freaked everyone out.
And since then, AI has
really entrenched itself as the next big thing. It's popping up everywhere. It's on the lips of
policymakers around the world. It even has, you know, geostrategic implications and the use cases
for our corner of technology, cybersecurity, are starting to come into focus. Threat actors and
defenders are using this stuff already, but it's early days.
And as you'll hear,
things are really gonna change quite quickly.
Here's Chris Krebs to kick things off now
with a summary of the current state of play
when it comes to AI and cybersecurity.
Enjoy.
Yeah, so just generally,
my view is that on the AI versus AI conversation, defense is, air quoting here, winning. I think
they're closer to market. They're more effective. They're ready for use. I mean, we have them at
SentinelOne with Purple AI: AI-enabled threat hunting, natural language processing. It makes
it easier. Hey, go find any activity that looks like it could be Sandworm or APT44 or whatever you're supposed to call it now. Go find this stuff.
And then what should I do if I find any indicators? That stuff's helpful. And then
when you look at the product roadmap and what's coming next, it really is encouraging.
On the offensive side, based on research put out by OpenAI and Microsoft, it's what they have seen across their models,
and this is just their stuff for now, but it seems to be limited to social engineering and research,
automation of basic functionality, and some scanning, automated scanning, things like that.
So it really isn't super high-powered, both at the state-actor level and at the cybercriminal level. And I think that's probably where we
see it the most right now on network. It's cyber criminals using it to write the better phishing
lure, which is going to have an impact on email security. And some of the kind of indicators and
tools we've used historically aren't going to work because this stuff is going to be much slicker, much higher speed. But as soon as you get the human
back into the loop, particularly on ransomware negotiations, that English is their third language
becomes immediately obvious, and the advantage is gone. So there's a first-mover element here.
But once you get into the real kind of back and forth of it and the meatspace-y side of it,
the AI advantage falls away for the adversary. Alex, I want to bring you into this. You know,
we are seeing these defensive use cases spring up. I've just signed on as an advisor to a company
called Dropzone.ai, and they do the sort of tier-one SOC analyst emulation and automation, right? But
what I really thought of when I was watching the way that this thing works, and I really
appreciate that the founder, Edward Wu, talks about this in terms of it mimics reasoning.
You know, he'll never say it's doing reasoning.
He says it does a good job of mimicking reasoning in these sort of cases.
I realized you could use these models in a similar way to sort of mimic the reasoning that's often involved
in pen testing and hacking and obtaining access to networks. Knowing you as I do, I'm guessing
you put a bit of time into thinking about that. What are your thoughts there? Because I hear Chris
say that the offensive use cases are the next thing, the next generation, but I could kind of
see how you could do it with the current technology already. If you wanted to scale up, it's not going to give you the best, most,
you know, precise way of hacking into some hard target or something. But instead of just doing
dumb automated scanning, I think you could do something here already.
Yeah. The way I look at how AI is going to affect this is I think we have to think about it in
waves, right? And in some ways, these waves are recreating the 80s, 90s, and early 2000s, but we're just speed running
30 years of development of the infosec industry. So the first wave is network and operational.
And like Chris said, there is a defensive advantage. And I think there will be
a persistent advantage in each of these waves. Defenders will move first because we're the ones
spending the money. We're the ones with engineering teams. SentinelOne has been using AI for 12 years,
right? So it's like, certainly LockBit has not been doing that. And so just all of the venture
capital and all of the public company money that's been spent on this will mean that the
defenders will go first, but we will not get the last laugh, right? And so this first wave is operational.
It is in using AI to give you operational leverage to empower your existing people to make them more efficient and then to replace the SOC engineers that you can't hire anyway, right?
Like everybody talks about the workforce problem.
And as hard as it is to get H100s, it's a lot easier to get a bunch of H100s to do inference on than it is to hire
a really good tier two SOC engineer, right?
Especially one who's willing to work
from 2 a.m. to 10 a.m.
And so I think first the defender side
is about making people more efficient.
Like Chris said, I see this at SentinelOne.
We have this product, Purple AI,
that gives you something that kind of looks
like a Python notebook.
And so you say, show me all the endpoints that downloaded a binary from a Russian IP address. And you could always
ask someone that, but you'd have to write this humongous query with all these brackets and
custom query language that someone wrote there 10 years ago who has long since left.
Exactly. So it will write that query for you and then it'll put it up there. So if you want to
tune it, you can do it.
But it'll give you the query and the answer, and you can kind of work in this notebook, like an analyst notebook.
And one of the things that SentinelOne's done is one of the kind of acceptance criteria of this was they've tested non-expert users of the product versus expert threat hunters.
And the non-experts are faster than the experts now, right?
So you can – it really has that benefit.
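To make that concrete, here is a minimal sketch of the pattern being described: natural language in, reviewable query out. It is illustrative only; call_llm is a hypothetical stand-in for whatever model API you use, and the query syntax shown is invented, not SentinelOne's actual query language.

    # Minimal sketch of natural-language-to-query translation as described
    # above. `call_llm` is a hypothetical stand-in for a real model API, and
    # the emitted query syntax is invented for illustration.

    SYSTEM_PROMPT = (
        "Translate the analyst's question into a single EDR hunting query. "
        "Return only the query, with no explanation."
    )

    def nl_to_query(question: str, call_llm) -> str:
        """Return a candidate query the analyst can review and tune."""
        return call_llm(system=SYSTEM_PROMPT, user=question)

    # Usage:
    # nl_to_query("show me all endpoints that downloaded a binary "
    #             "from a Russian IP address", call_llm=my_model)
    # might come back as something like:
    #   event.type = "FileDownload" AND src.ip.country = "RU"

The key design point is that the query is returned to the analyst rather than executed blindly, which matches the tune-it-if-you-want workflow described above.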
And I think the next phase for this operational,
this first wave,
is then getting the models to be good enough that you can hook them up into your source systems
and then the actions that can be taken
will be automatic, right?
So, okay, great.
Somebody logged in from a Bulgarian IP address.
They passed the two-factor, but it's looking weird.
Automatically lock their account in Azure and pull all of this data.
And by the time it gets to a human, it's all been summarized for them.
This is everything that this person did once their account was compromised.
So I think that's the operational phase.
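A rough sketch of that contain-first, summarize-for-the-human pattern might look like the following; the idp, edr, and summarize objects are hypothetical stand-ins, not real SDK calls.

    # Rough sketch of the automated-response flow described above: contain
    # the account, gather what it did, and hand a human a summary rather
    # than raw logs. All client objects are hypothetical stand-ins.

    def handle_suspicious_login(alert, idp, edr, summarize):
        if alert["passed_mfa"] and alert["geo_anomaly"]:
            idp.disable_account(alert["user"])            # lock the account first
            events = edr.fetch_activity(alert["user"])    # everything the account did
            return summarize(events)                      # digest for the analyst
        return None                                       # nothing actioned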
The next part of operations for the attacker side is going to be automating all of their steps.
Like right now, the big ransomware actors are completely automating initial exploitation steps, right?
They scan all of IPv4 space.
When a new bug comes out, they don't have to go scan for Palo Altos or Cisco ASA bugs.
They just look up in their database their local copy of Shodan effectively and then go hit things really quickly.
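That "local copy of Shodan" is essentially a pre-built inventory lookup. A toy sketch, with an invented table layout:

    # Toy sketch of the pre-scanned-inventory lookup described above: when
    # a new bug drops, query yesterday's internet-wide scan data instead of
    # scanning fresh. Table and column names are invented for illustration.

    import sqlite3

    def targets_for(product: str, db_path: str = "scan_inventory.db"):
        con = sqlite3.connect(db_path)
        rows = con.execute(
            "SELECT ip, port FROM banners WHERE product = ?", (product,)
        ).fetchall()
        con.close()
        return rows

    # e.g. targets_for("cisco-asa") returns every ASA seen in the last
    # sweep, ready to be hit the moment an exploit is available.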
But past that, once you have initial exploitation, that's where they often have to have humans.
Well, and that's the bit where I can imagine some of these models being quite useful because they
could automate or at least semi-automate a lot of that process of going from, okay, we've shelled
something that is domain joined. How do we go
from there to the point where we get to deploy ransomware? Right, right. You're on a Cisco ASA
and then it decides, okay, I'm going to dump memory and I'm going to find a token. That token
allows me to go back to Azure and utilize it. And that person turns out to be a
domain admin or tenant admin or something. So yes. So I think that's the next phase of this wave: we'll start to see the attackers catch up in automation for their attacks.
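Abstractly, what is being described is the generic observe-decide-act agent loop, the same shape defenders use for automated triage. A deliberately non-operational sketch, where llm and execute are hypothetical stand-ins:

    # Deliberately abstract sketch of the reasoning loop being discussed:
    # a model proposes the next step from observed state, an executor runs
    # it, and the result feeds back in. `llm` and `execute` are hypothetical
    # stand-ins; no real tradecraft is encoded here.

    def reasoning_loop(llm, execute, goal: str, state: str, max_steps: int = 20):
        for _ in range(max_steps):
            step = llm(f"Goal: {goal}\nState: {state}\nPropose the next action.")
            result = execute(step)                   # human supervision goes here
            state = f"{state}\nTried: {step} -> {result}"
            if "goal reached" in result:
                break
        return state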
And then what that's going to mean is you're just going to have to patch, right?
Like I think that we've been saying this for decades.
Well, I mean, that was literally what I was about to say, which is we've been yelling at everyone to patch quickly for 20 years and they still don't.
So I don't think that that's, I think at this point, that's just not useful advice.
Well, it's still the accurate advice. I can't make people do it, but, like, the mean time-to-live on these vulns being out there, as bad as it's been in the past,
it's going to get worse. And so that's the first wave. And I think the second wave that's coming
behind is the appsec wave. So just like the infosec '90s were about firewalls, about patching, about exploit kits and such, and then the 2000s were about appsec. And I think that's the next wave coming: first, defenders will do better,
and that you've already got application security testing tools, fuzzing tools,
co-pilots that sit there in your IDE and watch over your shoulder while you write
vulnerable code. These systems, we've had these systems, but the problem is that they've never
had a good understanding of kind of the technical context in which application developers are operating, of all the different pieces.
And these LLMs are going to be incredibly helpful.
And so it's basically non-deterministic reasoning.
And you're right.
And that will be super useful for app security, first for defenders.
But then the offensive guys will come behind, and they'll be using it for,
look at this binary, find the bugs,
and then write exploit code for me.
So they will get the benefit of not having to have their incredibly expensive exploit developers.
They'll have LLMs that help them do that kind of work.
And so again, I think the defenders
will have the advantage here
because the money's being spent
by the application security folks.
So that was gonna be, yeah.
But they will follow behind.
And what's the third step after that?
Maybe pulling it all together.
But we are definitely ending up in a world
where it's AI versus AI,
where the offensive guys,
the best offensive guys,
are going to have sub-models
where you have a master AI system
that then has a bunch of subsystems
that are smart about their specific thing,
and it is coordinating along the entire kill chain,
and then a human being is supervising it.
And the defenders are going to have to be the same way, where you have a master kind of defensive AI that's looking across your entire org, and then it has the sub-AIs: it has the AI that's watching authentication, it has the AI that's watching malware. And that will be an interesting challenge. We haven't said it, but we're all sitting here in a hotel overlooking RSA, the worst week of the year. And down there, getting all these companies to play well together in that way, I think it's gonna actually be a big, interesting challenge.
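Structurally, that master-plus-specialists arrangement looks something like the sketch below. The specialist checks are placeholders, and a real system would keep a human over the escalation path, as Alex notes.

    # Hand-wavy sketch of the master-AI-with-sub-AIs shape: a coordinator
    # fans each event out to domain specialists and merges their verdicts.
    # The specialist logic here is placeholder, not a real detection model.

    def auth_specialist(event: dict) -> bool:
        return event.get("geo_anomaly", False) and event.get("passed_mfa", False)

    def malware_specialist(event: dict) -> bool:
        return event.get("binary_score", 0.0) > 0.9

    SPECIALISTS = {"auth": auth_specialist, "malware": malware_specialist}

    def coordinator(event: dict):
        verdicts = {name: judge(event) for name, judge in SPECIALISTS.items()}
        action = "escalate_to_human" if any(verdicts.values()) else "allow"
        return action, verdicts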
I mean, one thing I just want to comment on there is that when I had this argument with
Dmitri Alperovitch, and this was a little while ago, about this idea that people are
going to be using AI models to spit out high quality exploits for current software.
I mean, I do see that one.
I don't think that's going to be asymmetric.
I think we've got a really good shot at it, you know, because the companies who publish software really have a vested interest in doing this first. So I think there's every chance that this is going to wind up
being a net positive
in terms of like exploit volume
and criticality, right?
So I don't know that I agree with that.
I think it's a net positive,
but I do think it is going to
change over time.
It's going to change over time.
The reason it's a complicated thing,
I think that the magic of these systems,
the reason they seem so good
at creating what Chris was talking about, they're already using it for phishing emails and lures and
such, and information operations. The number of emails that you, a human being, will look at in
English, the search space is effectively infinite, of the number of emails that you will say,
this looks like a legitimate email from a human being, right? Your brain will interpret a massive, almost infinite
number of inputs as actually being real English. The number of inputs into a really esoteric bug
in modern fully patched 64-bit Windows 11 that will bypass ASLR, that will bypass exploitation
protections, and that will actually work is much, much smaller. And so I think that's one of the reasons why it's a challenge, is that the LLMs create answers
within this huge output space,
and that has to get very tightly constrained.
That being said,
there is a lot of cutting-edge academic research
and research at the foundational model companies
about creating LLMs that are specifically tied
to very constrained output models,
especially with feedback loops. And so
that is going to be completely a doable thing. People say it's going to be
impossible for LLMs to write exploit code. It's like, for me: write down the day you
said that, right? Because I do think, from the academic stuff I see, it is absolutely
possible. Because especially, like, you look right now, LLMs are doing drug discovery, right? You've got gen AI doing drug discovery stuff. That's the exact same kind of problem where you have to have the exact perfect protein for it to work. But if you build a feedback loop in which you get 500,000 attempts, 500 million attempts, and you get feedback every time, and the model is intelligent enough to learn, then eventually it will hit something that works.
I think the question I have is, what's the time horizon? So pharmaceutical companies have been doing drug discovery using AI/ML now for a decade or more, right? And I don't know if we have a knowledge base of adversarial training of models for that.
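The generate-and-test loop Alex is describing has a simple shape. A toy sketch, where model.generate and score are hypothetical stand-ins for a constrained generator and a feedback oracle:

    # Toy sketch of the feedback loop described above: generate candidates,
    # score them, and feed the best signal back into the next attempt.
    # `model.generate` and `score` are hypothetical stand-ins.

    def feedback_search(model, score, budget: int = 500_000, threshold: float = 0.99):
        best, best_score = None, 0.0
        for _ in range(budget):
            candidate = model.generate(hint=best)   # propose near the best-so-far
            s = score(candidate)                    # how close is it to working?
            if s > best_score:
                best, best_score = candidate, s
            if best_score >= threshold:
                break                               # hit something that works
        return best, best_score

Now, for the policy and lawyer types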
that listen to this show, to take Alex's kind of operational wave of adversarial
use, you know, putting this into a use case. I think Patrick, we've talked about this before,
but when Kim Zetter did her teardown of here's what the SVR did inside SolarWinds,
and I was having a conversation with her and walking her through this, I think it was based
on a conversation I had with Alex, was like, just imagine, right? The SVR breaks into SolarWinds,
does their recon, and then pulls back for six months,
and then writes their code, goes to the range, tests it out.
Instead, what that, I think, operational model is,
boom, they inject that intelligent implant
inside the operational environment, and it moves.
It just goes.
There was no six months.
It was instantaneous.
And that's the speed that we're talking about.
And I am, you're saying, hey, it's not here yet.
Alex says it's the next.
I want to know when is that coming?
And that's the question.
It comes back to model versus model, right?
Which is a theme that just every time you see smart people talking about this topic,
it always comes back to, it's going to be white hat versus black hat AIs, right?
And so you had Rob Joyce on the pod a year or so ago
talking about AI and what did he say?
Speed kills.
So the faster, the movers, the first mover advantage here
that defense has right now is helpful.
And this, again, if we're using the SolarWinds example,
look at what the SOC analyst at FireEye, Mandiant, whatever it was
called then, FireEye, I guess. All he did was pick up an alert, triage it, call the employee about a
device enrollment, who said, no, I didn't enroll that. And that blew up the entire SVR operation.
So that shows you that if we can get scale on some of that lower level tier one stuff,
then yeah, there is, there is some,
some hope here. And we've seen this kind of play out in other examples where we're getting dwell
time and we're cramming it down. Even what happened, we talked about it last time, with the State Department and, you know, the alert they ran over the top, Big Yellow Taxi or whatever it was called. It's those sorts of things. If you've got the right signal and it's flowing through the system and you can move on it, and then, to Alex's point, you can automate the mitigations, assuming you trust it, then we're going to keep cramming down on that dwell time. But how much damage can they do in the meantime?
Well, and how quickly can the defensive models stop the rapid-speed offensive models?
But again, it goes to the trust. Do you trust the defensive model to take automated action? What are those automated actions? Is it, you know, flattening the box and standing it back up, or is it flattening the box and then going and investigating?
Well, these are some of the open questions. This actually leads us into the next part of this conversation, which I think is a really interesting one, which is, you know, there's the attacker
versus defender conversation. Then there's the sort of more macro conversation about how this
all works at a national defense level and asymmetry of capability level, right? So one positive,
I guess, you know, I'm in a liberal democracy myself, not the United States, but Australia. But one of the positives out of all of this is that it's night and day between Western technology companies and nations like China.
They are nowhere near the capability of the West when it comes to this stuff.
I feel like that's going to matter, right?
So first off, yay capitalism, right? And particularly in the US, vice the EU. I mean,
the regulatory model in the EU has stifled innovation in quite a significant way. You
look at the 10 most valuable companies in the world, six to seven of them are US technology
companies. You also have Alibaba
up there too. I think it's questionable as you look across the AI value chain,
whether the West has dominance in all elements from software and research, hardware,
compute and infrastructure, the models themselves, the amplification layer, and then the final
consumer layer. The amount of research the Chinese are doing on AI is absolutely off the charts. It's insane. So they have some element of advantage vice... I think, maybe not advantage vice the West, but, you know, they're not token players in the game.
No, but they face challenges that the United States doesn't, like access to the
right kind of hardware at scale, at the scale that they would need to push this sort of technology
into every corner of their economy. They don't have that. So in terms of scale, this is very much
currently the West, which is just absolutely dominating here. Yes, but they are working very
hard on this.
First, I mean, they're doing a lot of great fundamental research.
I believe it was the Stanford AI Index that talked about,
you know, they had like the most cited institutions, and the Chinese Academy of Sciences is like one or two, right?
So now that is like somewhat of a misnomer
in that they combine people from multiple universities under that umbrella.
But still, they're doing good cutting edge research.
Second, their students are coming into America and learning all this, right? And so like they
have done a much better job in the long game compared to a number of other countries of
intentionally sending students to go learn as much as possible from the American academic world and
to bring it back. And then third, their offensive capabilities have been massively pointed against AI companies. And this is like, you know, obviously we all know that China has
done intellectual property theft in the past. This is like an interesting next level. I think
partially because it's very rare to find a company where actually access to the source code really
matters that much, right? Like, you know, people have had, Microsoft's had their source code stolen or leaked. And, you know, we used to joke
at Facebook that's like, oh, if you steal our source code, the next thing you have to do is
just build 20 data centers and then get a billion users and then you're a competitor, right? Like,
nobody really cared about the source code. But that is not true for AI companies, right? Like,
the internal papers, internal documentation, internal source code, and the model weights, all of which fit in a backpack, right, are incredibly important.
I mean, let me just jump in there, though, because on that panel that I was talking about just earlier, you know, we did see Heather Adkins say, look, could they get the source code? Theoretically, yes. I mean, it's very closely guarded. Could they get the weights? Yes. But they can't get the training data. That does not fit in a backpack.
No, well, I mean, some of the smaller sets do. But yes, for, like, Google scale, right? Like, for Google, that probably doesn't fit in a data center, like, they're dreaming data. But that's not... I mean, all respect to Heather, who is one of my favorite people in all of security, there's a lot of cutting-edge research that's being done at the smaller levels, right? So, like, anything you can do in an academic institution can obviously be done in the PRC
and anything that most of these foundational companies can do. Now, I think the Biden
administration was totally right to try to control hardware, but what it's doing, it is creating
a very massive black market for hours on H100s, right? And so we see this in the UAE, you see it in a variety of countries that
have data centers, power, and that are places that are happy to host both the United States and
China, that are permissive environments for Chinese operations, that for probably really
important geopolitical reasons the US can't really crack down on, those are becoming hotbeds for
where East and West can meet to bypass all of
these controls on Chinese importation. I mean, I understand that. I guess where I was coming from
is it's an issue of scale, right? China will not be able to scale. I mean, okay, so for the national
defense use case, okay, fair enough, right? They're going to have good models. No one would
accuse the Chinese of not having brilliant researchers across,
you know, all sorts of fields. Very, very smart people. But when we look at it from a,
I guess, more from the economic side than the national defense side, I mean,
they're just not going to get the same scale. Maybe. So I was going to do a pitch for like
a personal public policy issue here. I will go and see an incredible talk by a Stanford PhD candidate who's from the PRC.
And what makes me very sad is the odds of them being able to stay in the US is not that great.
So from a global competitive perspective, we are doing a terrible job of trying to
keep the best talent in this space in the United States. Like if you get a PhD from Stanford in AI,
you should have a green card stapled to your diploma.
Right, absolutely.
But yet because of our political situation,
we're doing quite a poor job.
And I think you cannot underestimate the investment
that has been made to bring wave after wave of students
who like, if we could keep them in the US,
if you could keep them in Australia,
would probably make that decision for themselves to do so. But we don't make it easy for them to do it, right? And so I think the Chinese are finally getting the benefit of all of that work.
Let me kind of pull a thread on something that Alex mentioned. It's the political aspect of this, but it's also the geopolitical aspect, which is, I think, what we're talking about in this podcast in general. But the Chinese have kind of figured
it out, right? They've figured that they have an advantage if they want to use it, and that's
investment. So for years and years and years, Chinese students, like Alex said, would come here
to the US and stay because there was significant investment. They could work on projects that had
a future, and they could make money. Chinese saw that, said, you know what?
We can do that back at home.
So they have been investing significantly, as well as some of the foreign direct investment
from the US that's going in.
So there was opportunity.
So they were able to pull people back in, but they didn't stop there.
There's a counterintelligence element here too.
So the Chinese security services are looking out, as Alex hinted at, across the world
and saying, where do we have people of Chinese descent that are in significant companies that
are working in some of these firms that we may be able to pull back, that we may be able to corrupt?
We've seen this in many, many other cases in
other industries. They're using it in AI, and they're using it to poach and bring people back. And with
them, bring that backpack full of secrets. The FBI is all over this, but it's really,
really hard to track. It's really hard to, in a kind of a shift left mindset, detect and stop.
It tends to happen as they're already on the plane
halfway across the Pacific. I mean, you just touched on the idea that there's geopolitical
ramifications to all of this. I mean, when we think about what's happening with China and Taiwan,
I mean, there's an AI angle to that, right? Which is TSMC: whoever controls the chips, you know, gets to have the
big, shiny, bright AI future. Once you start thinking into hypotheticals of Taiwanese fabs
being seized by the Chinese, what, you think they're going to keep exporting that stuff to
the United States? You know, this has become, this stuff has become very, very important. I just wondered if you had any thoughts
you want to share there.
I mean, you just spent a long weekend
with Dmitri Alperovitch,
whose new book, World on the Brink,
talks about this and chips.
And he does take a more expansive view.
It's not just the frontier chips.
It's also the foundational chips.
And we're kind of struggling to compete or win on both
fronts. Even though I will say that the CHIPS and Science Act, even if it's 20 billion upfront,
which may not seem like enough, is having an impact. You're seeing TSMC, you're seeing NVIDIA,
you're seeing Intel, they're all building out and investing here domestically in the US.
And we're building stronger partnerships down the road with some of those key European parts of the value chain.
Yeah, yeah.
So I think the makers of the things that make the thing.
But it's going to take a decade or more, really, to make a market shift where you see more
than a single digit percentage shift.
And it's going to take vertical integration in some of these companies, which is going to be hundreds of billions of dollars, again, and it's going to take time. But yeah, I mean, to your
point, the silicon shield that Taiwan has is real. I mean, there is something here that is
both attractive for the Chinese beyond just Xi wanting to bring Taiwan back in the fold. There is a significant
economic driver and benefit of having Taiwan in the fold. And what happens next after that?
Do I think they're going to stop exporting? No. I think that's the golden goose. This is part of
the economic domination that Xi has laid out in the Made in China 2025 plan. This gives them one way,
at least on the frontier chip side, to make a significant leap ahead.
Well, I mean, I could just see them keeping that for themselves. Because you get more of an economic
gain by actually using the technology than exporting it. You just seize that.
I think they can do both.
But why would you want to? Why would you want to help the United States put AI, plug AI into every facet of its economy?
I mean, look, this is why we need Dimitri sitting right here next to us.
I think they still are going to sell it because of the revenue, the GDP side.
They're going to have to sell this stuff.
Alex has thoughts here. I think there's not a lot of scenarios
in which China takes over Taiwan,
in which TSMC is just sitting there
as an ongoing, wonderful concern.
That was something I was going to mention,
which is there has been some signaling
out of various American voices
of various levels of influence
saying that should China manage
to successfully invade Taiwan and, you know,
integrate it into the mainland that, you know, the United States would target and destroy those fabs.
Yeah. And I think the Taiwanese, just effectively fighting in the streets of Taipei is not
conducive to running the world's most complicated business, right? Like, and I think the Chinese know that too,
which makes this dance very hard, right?
On their side, it is very hard for them
to get the golden goose without killing it
in the first place.
But isn't it amazing that we've got this, like,
single point of vulnerability
in what has rapidly become, you know,
one of the most critical things for our economy.
It is insane that there's this little island
off the coast of China
that has become the center point
of the world economy for this.
Although not totally unprecedented, right?
Like Taiwan is basically playing the role
of New York in the financial system
for much of the 20th century.
And, you know, Saudi Arabia for oil.
And gold rushes through the 19th century.
We've seen this geopolitically.
None of those are good.
Generally, those are not great unless those centers
are part of a world superpower or an empire.
They end up being fought over or having real issues.
So yes, the Taiwan stuff,
this is actually something I'm talking about
in my RSA talk this week,
is China's targeting of AI companies is pretty intense,
like Chris was talking about.
Our intelligence team has actually found
targeting documents about individual people and such,
effectively shopping lists for the Ministry of State Security
to try to recruit people
and to steal information from people's heads, which it's sometimes a lot easier to
get information out of people's heads than it is from secure cloud systems.
But all the Volt Typhoon stuff, all the things we have seen around Chinese backdoors in American
infrastructure, probing against infrastructure in situations where there's no intellectual property benefit for them to be in the water system
for Guam or the power system for San Diego. It's only useful in a World War III situation. And that,
that is pretty terrifying. I mean, I wonder, when we were talking about, you know, automation of offense, what does an AI-enabled Volt, because that's the campaign you were talking about, Volt Typhoon. What does Volt Typhoon look like once you, you know, integrate some of these models into it? That's a thought.
We'll go back to what Rob Joyce said: speed kills. So if you do have that horsepower, if they have the foothold, and really, that's, again, to be clear about Volt Typhoon, that is just about persistence.
That is about getting a foothold and maintaining a point inside these environments.
It's not a malware game here.
It is just they have the creds.
They're in the environment.
They roll back in every six months, make sure they still have access.
And if they needed to delete a server, they could do it.
That's the game.
To your point, though, once you speed it up, once you have a much more disruptive and destructive capability, how can you get an impact at scale? And I think the scale and duration, that's the question that I have. You know, I see the Volt Typhoon as two prongs. I think we've talked about this. The first is hitting things in Guam, Diego Garcia,
Okinawa, Philippines, Pearl Harbor, Australia. It's within the sphere of where you'll be able
to immediately launch defensive support for Taiwan. But the second prong is more of a
psychological operation. It's hitting US critical infrastructure, water systems, as I just mentioned, more to hit us, the people, so that we are
frustrated and harried and confused and we don't politically...
Overwhelmed, I think, is the desired state.
But ultimately, it's like, hey, Congress, what are you guys doing over there? Fix our stuff.
Don't worry about that island that I couldn't even find on a map, fix our stuff. That is part of the entire military doctrine.
And you can achieve some of those effects quickly and for a limited duration of time,
or you could drop the grid for a month. Now, whether they can do that now, no. Could they
do it in the future? I don't know. But I think that's the mentality and that's the objective
of the Volt Typhoon campaign from the PRC side.
And in part, that's why you have the entirety of the U.S. government cybersecurity leadership ranks.
Chris Wray, Director Wray of the FBI, was at a conference in Nashville at Vanderbilt two or three weeks ago.
And this has been his thing.
He has been on this for years, talking about the threat from China.
And it's going to continue.
Jen Easterly, my successor
at CISA, is going to be here at RSA talking about this.
So this is the top of mind issue for the US government
and how we need to be prepared.
But again, it's just right now, it's all about creds.
It's all about getting access.
But what does that look like?
Yeah, once this stuff iterates.
So look, we were going to talk a little bit about energy, because, I mean, the amount of energy that's going to be required for this sort of AI-ification of everything is mind-boggling, and it's going to have some pretty big implications for energy policy. But because we're a bit stretched on time, I'm going to skip that one. And I'm going to commend you, Chris, for your discipline, because we've been sitting here talking for over half an hour, and we haven't touched on really the implications for election security.
And you have been incredibly disciplined by not going there.
And you just mentioned Jen Easterly, who succeeded you at CISA, and I just saw her
speak as well the other day. It was extremely clear to me how deadly seriously they're taking
the possibility of election interference later this year. I know that AI already,
the current iteration of AI, is a big concern for them.
You know, I guess the question is, you've had a bit of experience around this.
Is this 2016?
Is it 2020?
Or is it something completely different?
What is going to happen this November?
And do you think AI is going to play a role?
I think we have to keep in mind that there's kind of a stair step or an evolutionary arc in so many aspects of what we're doing in cyber.
So just think about, for instance, what the GRU, Sandworm, whatever, has been doing with operational technology.
So they hit the Ukrainian grid in '15, very, very kind of basic stuff.
'16, they're advanced.
Then you get into Triconex.
Then you get into Hitachi MicroSCADA.
And they're improving every time. The same thing goes with election
interference. And they were doing it before that. But '16, very basic. They evolved capabilities
through '18 and '20, and then now '22 and '24. But others are watching. So you have more entrants
into the game. You have Iran that ran the, you know, acting like they were the Proud
Boys, releasing the videos, sending the emails. You have the Chinese that are in there with these
longer, slower burn influence operations. So they're evolving. They're constantly A-B testing
and they're learning, hey, that didn't work last time. And we're going to do this thing the next
time. So AI just gives them one more feature. It gives them one more kind of
facet of their operations. How do I think it's going to play in 24? It's a little unclear,
but I do think it's going to be more basic stuff. It's going to be imagery. It's going to be audio,
static pictures, video, but it's not going to be really nefarious, below-the-table,
under-the-radar stuff. It's going to be obvious. It's going to hit you in the face.
We've already seen a little bit of it. The Russians have been playing around in Eastern
Europe and some prior elections in Moldova and Slovakia. And again, it's modifying video
for support of a pro-Russia candidate, things like that.
We would probably see something like leaked audio or leaked video here.
That's going to get picked up by the national media at a national level and debunked pretty
quickly. And it'll burn through social media channels that are ideologically fractured anyway,
and it'll kind of fall out naturally. I mean, I think the issue is, okay, when it's one or two, you're going to get that picked
up by the national media and say, look at this interference. But when it's so voluminous,
and once we've become desensitized to it, I don't think the media is going to bother
with the debunking anymore. And I think that's probably not this year, but maybe four years
from now.
Okay, so there are a lot of different things going on here.
I think it is going to start picking up, and, just to be perfectly clear, this is not exclusive to foreign state actors.
There's enough capability out there that's accessible to domestic actors as well.
So yes, is it going to be voluminous?
Absolutely.
Is there going to be kind of a defensive pushback mechanism from campaigns and from the media?
Yes.
Honestly, if I was a candidate right now,
I would probably cryptographically sign or watermark everything that is associated with me.
And I would say, if it doesn't have that, it ain't real.
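As a rough illustration of what Krebs is suggesting, here is a minimal sketch of signing and verifying campaign media with Ed25519 via the Python cryptography library. Real deployments would hinge on key distribution and signed metadata, which this glosses over.

    # Minimal sketch: sign each piece of campaign media so anything
    # unsigned or tampered with fails verification. Key management and
    # distribution, the hard parts in practice, are out of scope here.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()   # published somewhere verifiable

    def sign_media(path: str) -> bytes:
        with open(path, "rb") as f:
            return private_key.sign(f.read())

    def is_authentic(path: str, signature: bytes) -> bool:
        with open(path, "rb") as f:
            data = f.read()
        try:
            public_key.verify(signature, data)
            return True
        except InvalidSignature:
            return False   # "if it doesn't have that, it ain't real"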
It's so funny you say that, because I wanted to go there. Because just in discussions with
my own people, right? We're a media company, and one of my team members has said, geez, it's so worrying, isn't it, that this AI stuff might put us out of a job?
And I said, no, because trusted channels are going to become so relevant, so important again.
I mean, you can already probably use a model to generate a fake CNN clip of an interview with a candidate and then push that out through social media. And under that paradigm, it's going to be so common for that to take place that I just don't see the debunking
really working. So what option does that leave people but to verify sources, go to trusted
information brokers, trusted media platforms, check watermarks, things like that, check signatures.
And even just to kind of like bring it back to the core cyber piece here, you know, one
of the elements that we always talk about is election systems and machines and the election
management system and things like that.
I still think it would be incredibly difficult to have any sort of operation that would achieve
an effect at scale that would affect the vote or the counting or the certification process.
Again, just the decentralized fragmented nature of the way elections work.
But we have to kind of get out of our, I think I said it at another event that we were at,
I think at Dmitri's thing, but we in this community tend to think just about
technical operations and technical impacts. Our adversaries don't think that way. They think as
much about the psychological impact of a technical operation. And so they may just want a little bit of a
tremor in the force, a little bit of a wobble in a system in Pennsylvania, in Georgia, in Arizona.
Those are the battleground states. And then they can amplify and boost, say, see the system
can't be trusted. This whole thing is rigged. Sounds familiar.
Where are we going from here?
That's what I think the real problem is.
Alex, just before we wrap this up,
I mean, you've worked on disinformation stuff extensively
during your time at Stanford.
Do you have any thoughts on this?
Yeah, the way I would think of,
I think there's way too much focus on AI
generating just a deep fake of the video, of the photo.
I mean, I think that that will be a problem.
But the truth is, is in a big election,
those are the kinds of things that will be investigated
and possibly debunked pretty quickly.
To me, what's most concerning about AI and disinformation
is the force multiplier effect and the fact that it brings
Russian-style campaigns from 2016 within the reach
of a huge number of groups.
Like, to do what they did in 2016,
Yevgeny Prigozhin, who at the time was pretty,
not a lot of people knew who he was,
now somewhat famous for having
a totally unfortunate airplane accident.
Yes, whoops.
It's amazing how dangerous windows and airplanes are.
He was playing with a grenade in the airplane.
Yeah, really, really not advisable.
For those of you who are listening to this
who don't know who Yevgeny Prigozhin is,
he ran the Internet Research Agency,
which did a lot of this troll farming stuff.
And the Wagner Group, which then turned against Putin
and then made peace with Putin.
Then he had a plane accident.
It was very sad.
So Prigozhin was a billionaire, right?
And so he could afford to rent a building in St.
Petersburg and fill it with several hundred young Russians who can write English well enough to
pretend to be Americans, even though they didn't do a fantastic job. But he could do that. The
number of groups that could do that was only states and only large states with lots of budget.
Now with AI, you only need one nerd with a couple of RTX 4090s, open source models
like Llama running locally, perhaps that you've retrained and you've done a checkpoint on,
uh, based upon certain kinds of news articles or certain kinds of conspiratorial writing.
And then you need an editor who understands the political situation, can read English,
but they don't even have to be able to write it that well.
And you can run a Russian style campaign.
And so that's, you know,
the IRA had writers, they had researchers, they had a graphic design department that made like
posters. I see where you're going. You can collapse all of this into a model. I mean,
I think it's, you know, often when we're talking about AI, the phrase that keeps popping into my
head is it's a trillion monkeys at a trillion typewriters, right? Yes, yeah. And so now it's not just Russia and China.
It is any state,
and a huge number of non-state actors
can run humongous campaigns.
And it also means for the large actors,
like the Russians and Chinas,
they used to not do a lot.
In 2024, we have 435 members of Congress
being elected, 33, 34 senators,
something like 20 some governors.
There's a lot of races going on in the United States. You would only see them mess with a
couple of the big ones. Now you can totally afford to write propaganda for some local house race
where the guy is anti-China or the candidate said something about Ukraine. And that's what
really scares me is that the AI gives such an amplifier. It's scale. At scale. Yeah. Okay. So back
to the scale, unwinding it a little bit. I think the thing that really concerns me the most is,
not today, but maybe for 26 or 28, but taking the Cambridge Analytica model
and then putting some super precision targeting against individuals and then creating kind of
the consensus bubble around them in everything they see. A completely fringe issue, using AI,
you can convince them that, no, this is mainstream now. And then doing that at scale.
Now the challenge is I don't think you're going to see a foreign state actor do that.
I think it's going to be a domestic operator.
I think it's going to be either a political campaign
or some other special interest group,
dark money group that's doing that.
And you're not going to see it until the deed is done.
I don't think under US law that would even be illegal.
As long as they weren't paying for ads,
as long as that was quote unquote organic content,
you could run a dark money campaign
that did that kind of widespread manipulation.
I expect it's happening.
You see this, the canary in the coal mine here is X slash Twitter, the platform formerly
known as Twitter.
And now that Musk got rid of all their trust and safety people to do this work, it
is completely full of blue check marks that are clearly state operators or are political
operators.
All they do is talk about politics all day. A huge amount. Well, look, I woke up here in the United States this morning
to the news that a Chinese aircraft had dropped flares on an Australian helicopter that I think
was operating in the South China Sea and international waters. And you look at the
mentions on X of this, and all of the replies are flooded with what really do look like Chinese
government propaganda bots, right? And that's new. Oh my God, that aggressive helicopter you mean?
I read about that, Patrick. Where was it? And boy, like the Chinese, I feel like this whole South
China Sea is China's backyard and really Australia-ish. Yeah, yeah. We're not flying stuff
around near Tasmania, you know, is the vibe. Yeah, absolutely. And China is where, I mean, Russia is famous for this, but China is where the game's at.
Like two things happened that caused China to massively invest in their disinformation infrastructure in non-Chinese languages.
One, the Hong Kong protests, and that the Chinese got their butts kicked by teenagers in Hong Kong who are internet-native, much smarter about social media, and very good at English. And so they lost the propaganda war about Hong Kong, and then COVID, that their
attempts to control the narrative around COVID completely failed. And so they've invested heavily
at the exact same time that AI becomes available, which for China is a big deal, just because like
kind of history and language issues, it's a lot harder to hire hundreds and hundreds of Chinese
people that can speak English well, or Spanish,
or, you know, Russian, or any of these other languages, than it is in Russia, where it's a kind of much more cosmopolitan educational system. And most educated Russians speak English quite well,
right? That's not necessarily true in China. And so the fact that now AI allows you to generate,
like we said, an infinite amount of stuff that looks like it's from a native English speaker
is a big difference maker for
Chinese propaganda. All right, Alex Stamos, Chris Krebs, thank you so much for joining me for this
conversation. It's been, you know, fascinating as always. Look forward to doing it again.
Thanks for having us, Pat. Thanks, Pat. It was a lot of fun as always. Bye.