3 Takeaways - Former National Security Advisor Jake Sullivan on What Xi and Putin Are Really Like Behind Closed Doors (#254)
Episode Date: June 17, 2025. Jake Sullivan spent four years at the highest level of U.S. foreign policy, sitting across the table from Vladimir Putin and Xi Jinping and leading the national response to crises like Ukraine, Taiwan, cyberattacks, and AI risks. He shares a rare look behind the scenes of global power, including: what intelligence gets wrong (and why); how AI, drones & disinformation are reshaping war; why the U.S. is more vulnerable than it seems; and what a China-Taiwan conflict might actually look like. His insights are sharp, urgent, and surprisingly personal.
Transcript
Most of us read the news about war, rising tensions and shifting power.
Today's three takeaways guest lived it and led it inside the rooms where decisions are
made.
What does he see that the headlines miss?
And what are China's leader Xi Jinping and Russia's Vladimir Putin really like?
Hi everyone, I'm Lynn Toman and this is Three Takeaways.
On Three Takeaways, I talk with some of the world's best thinkers, business leaders,
writers, politicians, newsmakers, and scientists.
Each episode ends with three key takeaways to help us understand the world and maybe even ourselves a little better.
Today, I'm excited to be with Jake Sullivan. He was the U.S. National Security Advisor from 2021 to 2025.
He was educated in the U.S. and the U.K. He earned a BA summa cum laude from Yale
and then a Rhodes Scholarship to attend Oxford.
He then earned a JD from Yale Law School
and he started his career as a lawyer
before joining government as a foreign policy advisor.
In 2020, he was named
as President Joe Biden's National Security Advisor
and he served as National Security Advisor from 2021 to 2025.
I'm excited to find out how he sees the world today and what China's leader Xi Jinping and
Russia's Vladimir Putin are really like. Welcome, Jake, and thanks so much for joining
Three Takeaways today. Thank you for having me.
It is my pleasure.
And thank you for your service in government.
Well, thank you.
It's good to be able to get some sleep and also to go on podcasts.
So I'm excited to be here.
Jake, we've had surprises.
It seems like almost every year or every couple of years. Some are market surprises like the 2008 financial crisis.
Some are health crises like COVID.
How can we prepare for unforeseen events?
Well, it's interesting.
I think we're never going to be able to fully predict everything that happens.
And that certainly was the case during my four years as national security advisor.
There were unexpected events, there were unexpected crises, and that's always going to be a feature.
So the question is, how do you create the kind of policy resilience that is necessary
to be able to withstand and weather those crises?
And for me, that really comes down to having a basic reservoir of strength nationally and
internationally. Nationally, it means that any given country, and for me the focus is the United States, cannot be dependent on any other single country or single
source for critical goods. And we learned that during COVID, having only a single source for certain
medical supplies, or frankly, for semiconductors meant both health and economic consequences.
So that's one.
Second is we have to prepare our publics to be resilient, to be able to deal with crises,
to get information effectively, to look out for one another, to have the government be
able to move rapidly and nimbly to provide the kind of support that people need.
And then third, at an international level, we need habits of cooperation and institutions
of cooperation, which worked during the financial crisis to allow the United States and China,
along with other countries, to steward our way through the Great Recession so it did
not become the Great Depression, part two.
And it totally failed in COVID when we just couldn't get
the nations of the world in 2020 to come together around a common strategy and everybody pretty
much went their own way.
So those are some of the things that we can do in advance to be ready for when a surprise
or an unexpected crisis hits.
But preparation will only take you so far.
Being able to react swiftly, calmly, effectively and in coordination with others is really the name of the game.
We've had multiple failures in intelligence. You were told that Kabul would hold and Kiev
would fall. And instead, the Taliban swept to power before the withdrawal of American
troops from Afghanistan.
And in Ukraine, rather than Kiev falling within days of the Russian invasion as forecast, they've held on now for three years.
In the Middle East, both Israeli and American intelligence
were blindsided by the Hamas attack.
How can we effectively plan and act when intelligence is,
shall we say, unreliable?
I want to start by saying that the professionals in the American intelligence community who
I worked with day in day out are really truly remarkable.
And they get a lot of stuff right, including, by the way, the fact that Russia was going
to invade Ukraine and roughly when and how it was going to happen.
So I think we have to start from the proposition that this is a human exercise.
There's not some scientific algorithm that can tell you precisely what will happen on
the battlefield or when a terrorist group will strike.
But one thing we can do is keep trying to improve the tools and capacities of our intelligence
community.
And then the second thing is to recognize that any judgment the intelligence community
makes is necessarily limited by their access to information or by frankly human nature.
So just to take your example of Kabul and Kiev, so much of what drives whether a war
goes one way or another is not the size of a military or the precise equipment
they have or their geographical position on the battlefield, it's morale.
It's the human element.
It's will it hold or will it collapse?
And in Kabul's case, the bottom fell out and in a matter of days, the Taliban swept across
the country.
In Kiev's case, the Ukrainian forces held and they held in the face of great odds and
then the United States and a coalition of countries came in behind them.
So part of what we have to do is recognize when we're making these kinds of judgments
is to account for the possibility that we are wrong, that in fact the alternative may
come to pass and be prepared if that happens.
That is the best way for policymakers
to conduct their affairs in ways that are most likely to lead
to good outcomes.
Chinese malware is pre-positioned
in American companies and infrastructure.
Where do you think that the US is most vulnerable?
Is it our electric grid, our water supply, our financial system, our
transportation system, our phone and communication systems, or other areas?
And how do we fix this and counter it? Historically, the United States has been
the least vulnerable major country in the world because we have these two
great assets, the Atlantic Ocean and the Pacific Ocean, and
we have two neighbors who, whatever the current occupant of the Oval Office says, have been
pretty darn good neighbors in Canada and Mexico from the point of view of not representing
any material geopolitical threat to the United States.
So we've been the most insulated, the least vulnerable.
But when it comes to cyberspace, we are the least insulated and the most vulnerable for
two reasons.
One, we are connected, across every sector you just named and many more, on a massive open-architecture internet.
We don't have a great firewall like China does.
And two, most of the responsibility for security
when it comes to cyberspace lies not with the government,
but with the private sector making their own decisions,
often decisions that require them to spend money
to protect critical infrastructure.
And you just named a bunch of areas, all of which, I believe,
are in one way or another vulnerable.
The key is we've got to up our game everywhere.
And part of the challenge we have from my perspective is there are very few sectors
that have clear mandatory minimum cyber requirements.
We have mandatory requirements for all forms of insurance for good reason.
The fact that we do not have broad-based, basic levels of mandated cybersecurity
protection in areas of critical national security priority, energy, water, pipelines, finance,
that to me is why we are going to keep having these vulnerabilities and we've got to find
a way to do better. The other thing that we have to keep in mind is that artificial intelligence is going to
create massive new vulnerabilities because AI systems can detect bugs in code and go
after them.
But AI systems will also be critical to defense, because if you can detect a bug up front, you can fix it and make sure before you've ever deployed the system that it is strong
and robust against cyber attack. So I think the most important thing for us to
do in the next few years is, one, get that baseline level of cybersecurity protection across all of critical infrastructure and, two, begin to
incorporate and adapt AI tools for defense before the bad guys are able to
fully incorporate and adapt AI tools for offense.
American strategy has relied on deterrence with our massive military,
but a massive military is not the advantage it once was,
nor are, as you mentioned, the Atlantic and the Pacific Oceans.
Where, in your opinion, is the U.S. most vulnerable?
Well, we've just talked about cyberspace, and I think that that is an area of huge challenge.
A second area of huge challenge is I believe that we are very vulnerable to avalanches
of misinformation and disinformation that corrodes our democratic discourse, that leads
to the spread of conspiracy theories that cause all kinds
of harms, health harms, potential security harms, democratic harms. So we're very
vulnerable in that respect. And then increasingly, I worry that the United
States will be vulnerable as many other countries are through what I call the
democratization of lethal technology. The ability of small groups or even individuals to send a bomb
on a drone over a very long distance to hit a very precise location.
And we've just seen this incredible operation that was conducted by the Ukrainians deep
into Russian territory, deep into the most protected parts of Russia, at the air bases that house their strategic bombers.
And they were able to score direct hits on many of those bombers at massive distances
with these relatively small hidden drones that could pop out of the top of a cargo truck
and go fly and hit their target.
One can imagine a risk to the homeland, not to mention a risk to fielded US forces around the world,
from this kind of evolution of lethal technology. And it's something that's going to require us to
have a more sophisticated understanding of the countermeasures necessary to combat this.
And another example is the Houthis in caves.
Absolutely. I mean, if you think about what the Houthis have been able to do with components they're
importing from other countries, Iran among them, China among them, slapping them together
in their own little makeshift factories that they can move around underground to various
parts of this quite large, geographically large country, Yemen.
They can put a seeker on top and they can fire that missile or that drone thousands
of kilometers and hit targets that they have selected in a place like Israel.
Some of those missiles get through quite sophisticated air defenses and some of those drones do too.
They can hit moving ships at distance and hold the Red Sea at risk, all for relatively cheap, particularly when you compare it
against what we're spending on aircraft and air defense to try to fight back against what the Houthis are doing.
That cost trade-off is dramatic. It's billions against, call it, thousands or hundreds of thousands over time. So that's a preview of what's to come.
Now, we can't lose sight of the fact that our peer competitors like China are also developing massively sophisticated weapon systems that can hold the United States and our forces and our allies at risk.
So we cannot entirely shift our focus to these kinds of smaller, more attritable lethal technologies.
We have to prepare and plan for and deter against both.
What could a modern war look like and how well equipped is the US to fight a modern
war? I think we're to a certain extent watching a modern war in Ukraine.
It has elements, obviously, of an old war, World War I, literal trench warfare, but also real elements of a modern war.
A war being fought with uncrewed systems, software, artificial intelligence, and quite
precise targeting and counter-targeting. And to me, a modern war is going to have that core set of elements: very large numbers of uncrewed aerial, surface, and even potentially ground systems.
A war run with a backbone of data through various software applications, with artificial intelligence improving the
ability of both sides to see the battlefield, to target the other side, to run their logistics.
And I expect that that is the kind of war that we will see even when you have a much
larger and much smaller foe fighting out into the future.
What Ukraine has proven is that you can build this kind of capability under fire from a
relatively limited resource base.
Now, the United States helped a lot in funding the drone program in Ukraine, but Ukraine has created a playbook that other countries can follow and, I believe, will follow.
Now, is the US ready for this?
I would say sitting here today, we have work to do to get fully prepared and that largely comes from
being less focused on just buying more of the old stuff that we've relied on
and putting more of our resources into this set of implements of modern warfare: uncrewed systems, large numbers of attritable munitions so that we have real magazine depth, and software and AI that works for us with the appropriate guardrails.
And if we can do all that and integrate every dimension of war,
space, cyberspace, air, sea, ground, subsea, I think then we will be in a position
where we can have the kind of overmatch that
we've historically had, but there's a gap we need to get across between where we are
and where we need to be.
And that requires the Congress, the Defense Department, the White House, and frankly,
our defense industrial base, the companies that make up our defense industrial base,
all to pull together to say we need a new business model for how we create the
kinds of capabilities necessary to prevail in what you describe as a modern war.
How do you think that the Ukraine war will end?
And do you think Russia can be trusted to adhere to a peace agreement?
I don't think Russia can be trusted.
And the Ukrainians definitely don't trust the Russians
because they have not in the past adhered to peace agreements.
And that is why the Ukrainians have said the only way we can have a just peace is if we,
Ukraine, have both the necessary deterrent capability ourselves to ensure Russia won't attack again, and security commitments from our partners, which many of us, including the United States, have signed with Ukraine. That is the sine qua non of a durable peace: Ukraine having that confidence and that capacity and that backing from allies and partners.
Unfortunately, right now, I see this war going on for a while because I don't think Vladimir
Putin has achieved his objectives.
He would like to break Ukraine.
He would like to subjugate Ukraine.
He does not just want a neutral Ukraine.
He wants a neutered Ukraine.
He wants a Ukraine that doesn't have the deterrent capability to stop Russia in the future should
they choose to start again.
Because of that, he is going to keep fighting and the Ukrainians are going to keep defending bravely, fiercely
and continue to show Russia that they can reach out and touch Russia deep inside Russian
territory as well.
Ultimately, a lot of the question of how this war ends turns on whether the United States
of America under this administration is actually prepared to throw in behind the Ukrainians
in a fuller way.
Put more pressure on Russia, give more support to Ukraine.
If the answer to that question is yes, then I believe that Ukraine can achieve a just
peace.
If the US walks away or just throws its hands up, then I think it will be much more difficult
for Ukraine.
But you can never count them out.
And to your first question, about us having thought that Kiev would fall in a week:
I am now very humble about making any predictions about what exactly is going to happen on the battlefield in Ukraine.
And we also see there the importance of leadership.
Yeah, I mean, look, at the end of the day, what was really lacking
in Afghanistan was a president who stood up to say, I'm going to stand.
We're going to fight for Kabul. The president fled.
In the case of President Zelensky, he went out that night, filmed a video with his closest
advisors, put it out to the world and said, we're not going anywhere.
And I think that had a huge impact and the galvanizing leadership that Zelensky has shown
over the course of this conflict has been vital.
I would also say that the United States of America showed real leadership in warning the world
this was going to happen and in building a coalition of 50 nations to support Ukraine and pushing back.
And if the U.S. walks away from that role, I think that will come at a real strategic cost.
Yes, it will come at a cost to Ukraine.
I think it will also over the long term come at a cost to the United States.
You've met many times with both China's leader Xi Jinping and Russia's Vladimir Putin.
What surprised you about both of them?
Well, I've met many times with Xi. With Putin, only once in the Biden administration was I in a face-to-face meeting and participated in that meeting.
That was the summit in Geneva.
Before then, I had seen President Putin when I worked at the State Department many years
ago and saw him on multiple occasions.
He's always been the same guy, President Putin, but his historic, almost destiny-based obsession
with Ukraine is palpable today in a way that I feel it was not to the same extent a decade ago.
It was there, but not revved up to the point where he has bet his entire country's future
on trying to prevail in this conflict.
He is a person of passionate intensity about his views of history and of his own place
in it.
And you can see that emanating from him when you're in the same room.
With President Xi, I think what's surprising about him is that he has the vibe of a natural
politician, which is interesting for someone who leads in a very hierarchical Communist
Party structure in China.
He has a natural way of interacting.
He will put his notes aside, have a back and forth.
He'll tell stories. He'll tell jokes.
It's a much more relaxed manner than you typically see from Chinese
leaders or leaders in systems that are in the same vein as the Chinese system.
What are the most likely scenarios you see with respect to China and Taiwan?
I think the scenario that we need to keep driving for
is the basic maintenance of peace and stability across the strait.
That is to deter an outright conflict in which China attacks Taiwan.
That is a scenario we need to avoid.
I believe that that scenario is achievable.
If the United States, on the one hand, takes steps to help support Taiwan's self-defense by
providing it with the necessary equipment, and here Taiwan also will be learning lessons from
Ukraine, and along with deterrence, the U.S. has to continue to engage in very strong,
robust, principled diplomacy
with Beijing.
We had that formula over four years of the Biden administration.
We deterred or averted any kind of larger conflict.
So if that can be deterred and averted, and I think it can, it is a risk, but it is not
inevitable, then the things that we really have to watch for are more in what I would
call the gray zone, the kinds of things that we're watching China do even now, encroaching closer and
closer to the island of Taiwan with their military, engaging in disinformation campaigns
to try to unsettle the politics on Taiwan, potentially using economic coercion, potentially using legal mechanisms to try to stifle Taiwan's
ability to operate.
These are the kinds of things that I do expect we will continue to see and that we need to
vigorously call out and push back on, even as we continue to state that we will remain
committed to the One China policy and that we do not support Taiwan independence. We have to have that kind of approach.
And if we do, I think we can continue to manage this issue in an effective way.
But the risk is there for something more catastrophic.
And it truly would be catastrophic for the world if there was an outright war over the
Taiwan Strait.
Dare I ask you what an outright war could look like?
Well, in the most extreme scenario,
it could look like the PRC deciding to invade Taiwan,
a massive amphibious assault on Taiwan.
That is one scenario that is constantly
war-gamed to kind of play out what that would look like,
what the US options are in responding to that.
And then there are scenarios short of that, scenarios ranging from a blockade of Taiwan
to embargoes on Taiwan to a seizure of one of those smaller outlying islands that is
not the main island of Taiwan itself, but is part of Taiwan's territory under their jurisdiction and control.
So those are also some of the scenarios.
And one of the things we did when I was national security advisor was really pose this question,
what are all the possible scenarios?
Let's write them out and then let's form a playbook of all the steps we could potentially
take and then the presidential decisions that would have to be made so that
we can tee them up quickly and effectively if one of these scenarios broke out.
We did all of that at the direction of the president who said, if I get called down to
the situation room because something's underway over Taiwan, I want us to have thought in
advance about what the questions are, what the options are, what we've got ready to go
in every dimension
of America's response capability. And so we did a lot of that work and I hope that the
current administration is continuing that work.
Artificial intelligence is developing much more rapidly than anyone expected. Before
we talk about the US and AI, how do you see the future of AI and the risks of AI?
It's so interesting.
I'm trying to think of another example of a technology
where such smart people have such different predictions
about how far the capability will go and how fast.
Within the last couple of weeks, we've
heard from the CEO of Anthropic that we're right
around the corner from massive breakthroughs that are going to have huge effects on our
economy and our society, including massive job displacement.
Within the last couple of weeks, we've also heard from an Apple report that AI is faltering
when it comes to basic reasoning.
And what's true, what's not true,
what's likely to happen, I think it's just extremely difficult to predict.
But here's what we know.
What we know is just based on the capability that has come so far, there are real national
security risks that have to be managed.
There are risks with respect to how AI can identify bio weapons. There are risks, as we talked about earlier,
about how AI can be used for cyber offense.
There are risks of misinformation and disinformation.
And yes, there are economic risks,
that as these technologies evolve,
it is going to create real disruption and displacement
for large numbers of people.
That may or may not be compensated for by productivity gains and other jobs and so forth,
but what we do know is that a lot of people are going to be adversely impacted as this technology continues to mature, and we need answers for them.
And then finally, there's this kind of larger existential risk, which a lot of strategists
are grappling with right now, which is, are
we going to end up with an intelligence equal to or greater than human intelligence?
Is that intelligence something that we actually can effectively control?
Is it potentially going to be misaligned in ways that can be more broadly harmful to humans?
These are very difficult, very real questions that anyone working in
policy is going to have to contend with big time. And these are the kinds of things that
I'm personally giving a lot of thought to right now, especially because as I look at
the current administration, they've sent a pretty strong signal that it's laissez-faire,
it's hands off. We're not really going to try to put guardrails around this technology
because we're in a race and we've got to win the race and so forth.
I believe in the race and I believe the U.S.
should stay in the lead when it comes to the frontiers of AI.
But I also believe that we do need to build guardrails to manage the risks
that I've just laid out and that we need an overall framework for approaching
AI so that it works more for us than against us.
This is easier said than done.
It's so challenging because we don't really understand how AI works right now.
How do you see the geopolitical risks of AI for the United States?
There's a kind of macro issue and then a more micro issue.
The more micro issue is that artificial intelligence is going to
supercharge every aspect of national security. Military capability, intelligence capability, the capability to hold your adversary at risk. And other armies and other militaries are going
to be incorporating elements of that into their military doctrine. The United States is going to have to do the same. So there's that level, and in my last year as National Security Advisor we issued an entire memorandum to the national security enterprise of the United States basically saying, we
have to tool up to be ready for the incorporation and adoption of AI capabilities into modern
war fighting, modern intelligence gathering, modern defense as well as modern
offense.
And frankly, I think there is a gap between where we are and where we need to be on that.
But then in a macro level, I would say the real question for me is not just how AI provides
additive capabilities in these national security areas, it's this more fundamental issue of
general intelligence or super intelligence.
Is that going to come?
And if that's going to come, what is that going to mean literally for the future of
humanity?
Henry Kissinger in his late years before he passed away was really focused on this kind
of question of whether the enlightenment endures in the face of a general intelligence that
is synthetic, that is not human, that is computer generated.
And in this regard, it seems to me every country, including the US and China, has to have a
deep and robust dialogue about what these kind of larger existential questions say about
policy, about risk management, about security and safety, about alignment issues.
And for that reason, I think it's really important that the US and China, even as we're racing at the frontier of AI,
are sitting down and talking about these issues in a structured, sustained and mature way.
There's the issue of AI use in the military. AI, of course, is instantaneously fast. And if there's a human in the loop, it will slow that down.
How do you see this risk?
Well, President Biden and President Xi, when they met in November of last year in Peru, agreed on a simple statement, which is that humans, not AI, should make nuclear targeting decisions.
That should seem very obvious to all of us.
But as a statement by the leaders of the two top AI
countries in the world, the US and China,
and two countries with significant
nuclear arsenals, I think this was a very important statement for the two of them to
make.
In general, from my perspective, a human in the loop for lethal decision making is vital.
And that is because, at present, when you look at AI systems, we should have a sufficient lack of confidence that they're going to get every decision right, and so we should have humans ultimately there.
AI can do a lot with respect to the provision and synthesis of information, but at the end
of the day, it should be a human making the call.
The question is, and this is the question Henry Kissinger was grappling with, if we
get to an intelligence that is just making the call better than our human reason that
is the basis of the enlightenment could make.
Well, what then?
Well, we're not there.
And I don't think we have to really quite answer
that question.
We should focus on today where I do
believe in the centrality and necessity of a human in the loop.
And what are the three takeaways?
First, policy making is an incredibly human exercise.
It is just a group of people with all of their faults and their foibles, but also all of
their brilliance and creativity too, sitting around a table doing the best they can.
And their best is going to be damn imperfect because that's just in the nature of policymaking.
A second takeaway is now more than ever, it is critical that people distinguish between
conviction and opinion. You should hold fast to your convictions, things that shape your
faith, your family, your basic values. But you should be prepared to change your opinions
as you get new information and hear new arguments. And in a moment of profound change and
transformation in the world, being open-minded and willing to change your
opinion as you're presented with something new, man, the people who are
able to do that are going to be the people who succeed and thrive the most
in the future. My third takeaway actually goes to something you said earlier,
which is leadership really does matter.
We can all do our part, but at the end of the day, we need leaders to step up,
to help point the way, and we need strong, determined, principled leadership.
I am worried about the direction of things right now, but I also believe that
when the moment calls for it, leaders emerge in this country, and I think it will happen again.
Jake, thank you so much. Thank you for your service in government and thank you for your time today. This has been great.
Thank you so much. Really appreciate it.
If you're enjoying the podcast, and I really hope you are, please review us on Apple Podcasts or Spotify or wherever you get your podcasts.
It really helps get the word out. If you're interested, you can also sign up for the Three
Takeaways newsletter at ThreeTakeaways.com, where you can also listen to previous episodes.
You can also follow us on LinkedIn, X, Instagram, and Facebook. I'm Lynn Toman, and this is Three Takeaways.
Thanks for listening.