No Priors: Artificial Intelligence | Technology | Startups - America’s Plan to Dominate the Full AI Stack with Sriram Krishnan
Episode Date: July 31, 2025Sriram Krishnan was never interested in policy. But after seeing a gap in AI knowledge at senior levels of government, he decided to lend his expertise to the tech-friendly Trump administration. Senio...r White House Policy Advisor on AI Sriram Krishnan joins Elad Gil and Sarah Guo to talk about America’s AI Action Plan, a recent executive order that outlines how America can win the AI race and maintain its AI supremacy. Sriram discusses why winning the AI race is important and what that looks like, as well as the core goals of the Action Plan that he helped to author. Together, they explore how AI is the latest iteration of American cultural exportation and soft power, the bottlenecks in upgrading America’s energy infrastructure, and the importance of America owning the “full stack” from GPUs and models to agents and software. Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @skrishnan47 | @sriramk Chapters: 00:00 – Sriram Krishnan Introduction 01:00 – Sriram’s Role in Government 03:43 – Impetus for the America AI Action Plan 06:14 – What Winning the AI Race Looks Like 10:36 – Algorithms and Cultural Bias 12:26 – Main Tenets of the America AI Action Plan 19:13 – Infrastructure and Energy Needs for AI 22:56 – Manufacturing, Supply Chains, and AI 24:52 – Ensuring American Dominance in Robotics 26:30 – Translating Policy to Industry and the Economy 29:30 – Should the US Be a Technocracy? 32:33 – Understanding the Argument Against Open Source Models 36:07 – Conclusion
Transcript
Discussion (0)
Hi, listeners, welcome back to No Pryors.
Today, Alad and I are here with Sri Ram Krishna, a top White House official currently serving as the senior White House policy advisor on artificial intelligence.
A former tech executive and venture capitalist, he's one of the lead authors on the American AI Action Plan released this past week.
We talk about the national implications of the AI race, what position we hold today, the workforce and energy needs of the future, and how to win.
Sjuram, thank you so much for joining us today for no priors.
Thank you for having me.
I'm a long-term fan.
I've never been invited before.
I was always a bit sad, but thank you for having me for the very first time.
And before, I just have to point out for folks who are listening on audio,
that Elad has never looked as good, as dashing, as handsome as he does.
Now, Eli, you're dressed up for me.
I'm honored.
This is how you can tell that Shuram is in politics now.
He has the liquid tongue of gold, which he coaxes everybody into doing his bidding.
So it's very good.
So, you know, for our audience,
Fioram has been a well-known Silicon Valley individual.
He worked on Andreessen Horowitz.
He worked in a number of this sort of marquee companies and names in Silicon Valley
over the last decade plus.
And now he's in government and he's really working on a variety of exciting
initiatives around AI and other areas.
Could you tell us a little bit more about your role?
And should we be calling you your excellency?
Or is there some special title we should be using now that you're in government?
You don't have to, but I will take it.
But no, thank you.
It's fascinating for me to be here talking to you in this capacity
because I've known both of you for forever and ever.
You know, we've had hundreds of interactions and also been such a fan of the part. Congratulations. Just a little bit of backstory. I've been in Silicon Valley for a long time. I feel very old. I did a tour of all the large consumer's social media companies. And then I was at Andreessen Horowitz for the last four years, competing actively for CDSA terms. With both of you folks, I'm sure. And all this while, I had no real intention of joining government. I wasn't very particularly interested in policy.
But what went up happening is a couple of years ago, I moved to England to head up all of Andresen's international efforts. And at the time, the UK was kind of a hotbed of all the AI policy debates. They had this AI safety summit at Bletchley Park. And this was kind of the peak of, I would say, the effective altruist versus the EAC kind of drama which was going on. And I got pulled into a lot of those discussions. And I remember thinking to myself, like, wow, a lot of
of people who are in very senior roles in governments in the United States back then and
other parts of the world didn't know what they were talking about when it came to AI.
I was convinced that they were doing the absolute wrong thing on many topics, for example,
open source or helping startups.
And it was just really, really bad in a way which I think the industry didn't really appreciate
until much later.
And that got me interested into just policy, which by the way, it was a word which I didn't
even understand what it means.
we didn't even get into what that means, but I got me interested in policy.
When President Trump got inaugurated, in the first week, he did two things.
One, he rescinded the Biden executive order on AI, which was bad and awful in many, many ways, which we can get into.
And then he signed a new executive order, which basically said that America should dominate and win on AI.
And then he called upon a few of us to say, you guys need to come up with a plan within six months to figure out how America is going to dominate and win.
And so that put us off with the races.
And I think everything that has happened since then,
culminated in the event that we had yesterday,
where we put together, you know,
we brought this long, 28-page document on America's the action plan
with a bunch of weeks ago, all over.
So that's kind of a little bit of history.
That's great.
And could you tell us a little bit more about what the main things
that you considered as you put together this plan were?
What are the things that you worry about geopolitically?
How do you think about AI and competition,
big tech versus small tech?
Like, it feels like there's a lot of threads in that.
And it would be great just to get a view
what are the main issues that created this plan,
and then it'd be great to talk through the plan itself.
One of the catalytic moments which happened was the day before I started this job,
I get a call, and this was the weekend Deep Seek had come out,
and there's actually some chatter that I've heard online that China timed it
so it can come out right after the president got sworn in.
And we were like, hey, we just want you to come in and brief a lot of people at the White
House on Deep Seek, because people like, hey, what is this?
Is it cheaper? Is it faster?
Do they have some magic way of training these models,
which only cost a few million bucks and not, you know, hundreds of billions,
all of what's going on. You folks might remember that narrative which existed that week. And so I got
to go and me and David helped brief all of the White House leadership. But it was really a starting
gun because I think that moment was profound because it immediately told us a few things. It told us
that America doesn't have a huge lead on AI. It actually has a very, very small lead. If you remember
at the time, DeepSeek was the only reasoning model, which was not open AIs. I don't think Claude had
come out of the reasoning model yet. I don't think Google had yet. It was the only non-open
AI reasoning model. It was very high up in the leaderboards. It was a bit unclear as to what
their cost claims were and how they had gotten there. I think we know a lot better.
And I'll not end up turning out to be very overstated, right? Like, basically, there was a
claim that is a few million dollars to train the model and they didn't really talk about
the hundreds of millions of dollars are probably spent to get to that point. It's sort of the last
training run, you know, was sort of what they paid for. Yeah. Yes, absolutely. I think I would say
there were claims were inflated and the claims we'd take seriously.
The claims were inflated was, to your point,
they kind of, I would say they put out like the final training run
and as to all the ablations and training costs.
And if you look at the paper, by the way,
I don't think they make the claim that the total cost was a million bucks.
I think it's what the press imputed.
But I do think they deserve a lot of credit.
I always make a point to say, look,
Bipzig did some very, very good technical work.
If you think about, they didn't have as good hardware as the American model companies do,
what they did with KV caching, MLA.
You know, there's some, you know, multiple theories on how they
actually got chain of thought. Maybe they did by themselves. Maybe they had some help from
American companies. But there was some really novel new work there, right? And I think it told
us that, okay, we don't have a lead that we can take for granted. They were definitely the best
open source model. So that way, and I think to even today, I would probably say the Chinese
models, Deep Sea, Quinn are the best open source models. But symbolically, it told us that we are
now in a race, in a very close race, by the way. What does it mean to win the AI race? Like,
why do we need to win that? And what, like, what was losing me and how we know we've won?
Well, I'm unsuspect you folks agree.
AI might be the most transformational, economic, cultural force of our lifetime.
And I believe that if the country or the ecosystem, which winds up getting ahead, is going
to have these cyclic effects, right?
Like, you're going to power productivity.
You're going to have drug discovery.
You're going to, you know, discover new material sciences, new technologies, which then feed back
into your infrastructure, feedback into your economy.
And you're going to get this flywheel effect.
where who winds up getting ahead could wind up really accelerating ahead in kind of a classic
network effect ecosystem way that all of us in Silicon Valley will understand. Now that is purely on
the civilian economic context. You can also imagine a military context, right? Think about everything
from drones to autonomous weapons. I'm pretty sure it's not in our best interest to have another
country have that same economy of scale and flywheel and race ahead of us. So that's the race.
Now, one interesting question that we have been pondering, which we can get it to, is how do you actually
measure what it means. How are we doing in the race? And one measure I've been playing around with
and maybe I'll already get your take on is I think Google just announced this morning that they
inference one quadrillion tokens a month or a quarter, I forget which one. And one of the measures
I've been thinking about is let's say the world inferences, I don't know, maybe it's called
10 quadrillion tokens a month. We don't know what the number is. What share of those tokens are
being inference on American hardware on American models, right? And how do we maximize that
market share? And that's kind of one of the mental models I've been playing with. And it's,
in a way, you can think about it as we are America Inc. We have a product stack starting from
GPUs with NMedia and AMD and a bunch of others. We have a model layer with obviously open
AI and GROC and Gemini and many, many others. We have an application layer. You had many, many
of them on your podcast, for agents, to all kinds of software. How do we make sure this American
stack is dominating that market share of tokens in France? Very good method. Yeah, it's really
interesting because one other thing that you didn't mention, I feel, is a cultural exportation
through the models. And so if you look at prior waves of sort of culture spread, it was the movie
industry, it was social media, and then now it's these models. Because a lot of people go to these
models is a source of truth for history, for information, for other things. And there have been some
famous examples in some of the Chinese models where there's a mission of Tiananmen Square or
a mission of other facts. And relatedly, there's some things in some of the U.S. models that
seem very politically slanted or otherwise not quite great. But it's interesting to also think
about it from the perspective of broader cultural exports. I just wanted to add that to your points
on defense and scientific progress and other areas. I think that's another key thing.
Exactly something we are actually addressing. And you're absolutely right. I grew up in India
and a lot of my exposure to Western culture was in the Internet and Google. And obviously, a lot
for the Internet was American. And, you know, that kind of
introduce me to Americana. And imagine if 1995, the internet was drawn by America, but
done by one of our adversaries. And so in a similar context, you're absolutely like when Deep Sea came
out, I think that all these great examples of lots of stuff in there, which doesn't probably
align to American values. Now, we are actually addressing this. President signed an executive order
yesterday. It's called no woke AI in the federal government. And what it does, and it's probably
going to be one of the spicy bits for your audience is that it basically says that, look,
from the day one of the Trump administration, we have tried to fight back against D.I, you know,
wokeness, critical race theory, whatever we want to call it in all parts of the federal government,
right, and all kinds of propaganda. And what this E.O. does is actually very simple. It says
that all models that the federal government will procure, aka your taxpayer dollars will be spent on,
has to do two things. They have to be truth-seeking, and they can't have artificial,
ideological bias added. If bias is added, you just have to be transparent about where you're
getting that bias from. It should be very simple for both people, but to your point, you know,
that cuts to the heart of, you know, if you're seeing nothing happened in Tiananmen Square in
19, you know, in the early 90s, that cuts the heart of that. It also cuts the heart of many,
many other things from the culture wars that we have now been trying to fight against.
Hey, Sri Ram, you used to work in social media for a long time, right?
Like, this sounds a little bit familiar in terms of, like, is it a platform?
Is it a publisher?
What is the information consumption that most consumers have?
Like, where does that analogy apply or breakdown?
It's a good question.
I think in some ways, that's for the industry and the ecosystem to answer a bit.
You're right.
I spent a lot of time at Facebook, now meta, at Twitter.
One of the things I saw when I was at Twitter was how easy.
easily you could inject cultural bias in your algorithms. I have so many stories about how if you
pick the right kind of Twitter accounts, which then feed into the trending algorithm, which then feed
into Twitter moments, and then which every journalist or editor will wake up. And next thing you know,
it's like one of the news stories off the land and Bathfield will write a piece saying,
people on the internet are talking about this. I saw this over and over again. And it left me
this profound appreciation of how algorithms can shape culture. And, you know, one of the
things I always say is Twitter or X is the memetic battleground upon which we fight a lot of these
ideological battles. So when it comes to AI, I think it's probably going to be very similar.
Like my kids use Chad GPT to answer everything, right? From history to geography to, you know,
just kind of silly kids questions. And you can easily imagine a world where people inject their own
cultural biases into this. And, you know, in the EU, we have a few good examples. We have
But, you know, we have examples of the Pope being seen as a black person, misgendering someone
being seen as worse than thermonuclear explosion.
And a lot of it's meant to say that you can easily imagine a world where these systems,
which are at the heart of so many things that the government is going to use, we all of us
going to use, we don't want them artificially injected with an ideology, at least without
being transparent about it.
What are some of the other main points of the announcement from yesterday?
So one of the ways David and I try to think about this with some of the people who work on is
it should make sense as a strategy for almost like a technology company.
And I hope that, you know, please go read the document.
It's actually pretty readable.
And, you know, and hopefully for those of you who work in the tech industry, it should kind of make sense.
And we think if America is going to win the race with China, they need to do three things.
And they ladder up to the strategy.
The first is we need to build infrastructure, right?
At the heart of this, if you kind of go back to the scaling loss, what do we need?
We need computation.
We need data.
And in the United States, it's been really challenging with the grid we have, with kind of this crazy permitting that we have around constructing new data centers to get some of these projects off the ground.
So the first part of the action plan really dives into what the president calls build baby build, kind of playing on drill baby drill, which is all about how do we make sure we are building infrastructure because obviously some of the other countries are.
And just an example, one of the things it talks about is to make permitting on federal land a lot.
easier for data centers when it comes to old environmental laws or other regulations which get in
the way. So think of that of like let's make sure we are building the infrastructure to power
these models as we scale up. So that's number one. The second pillar is innovation, which I would
kind of say as let's make sure all these amazing companies, everyone that you know of and, you know,
or maybe some companies which don't exist yet, can build applications and models or anything
they want as fast as they can. And what we're trying to do is,
a couple of things I really want to highlight. The first is we want to cut through red tape.
You know, like until last year and a half ago, I was in California, along with all of you.
California almost passed SB 1047, which if that had happened, it would have been the end of
open source, by the way, in the United States. We would not have a llama. We would not have an
Owen mini coming out. And a lot of states which want to do versions of this. And we think that
AI is a national priority. And if you're going to compete with China, we need to make sure that
these are things that we take a deal with at the national level, rather than every single state,
especially states which have ideologies that, you know, you and I may not agree on, you know,
try and set their own rules. And by the way, some people may not understand this. I don't understand
this. If you have a small state set rules, it can often become the de facto law for the country,
because if you're a company, you're like, well, I have to operate in the state or I have an office.
So let me just do that for everybody. It's kind of just like the EU does that. So we want to make sure
cut through red tape. Let's make sure if there's regulation.
it happens at the federal level. So that's very, very key because I think that's going to enable
not just the big companies, but every series A, series B, act with higher companies, whatever the kids
are doing these days, you know, we make sure that they are off the list. That's number one.
The second part is open source. Now, I think we probably talked about it's a bit offline.
Open source was one of the big reasons I actually got into the policy world. The Biden administration
really, really tried to scare people about open source, talk about how unsafe it was.
SB 1047 obviously try to kind of basically ban it in many ways.
And what the EU does is say, like, open source is a space where the United States needs to win.
It actually points to some resources that is going to be made available to research.
Because I think you and I know, open source is the way everyone from a kid in their bedroom or their dorm room all the way to a startup to all the way to somebody who wants a lower cost of inference in their IoT device or a robotic startup.
That's what they're using.
For context, too, much of the internet runs on open source software software.
Right? So the server software and other things. Much of that is open source. The protocols are all open for the internet. That's also true for crypto. And so, you know, it's interesting because removing open source from things like AI actually just centralizes power, right? It centralizes power and a small number of companies that could then be controlled by the government. And so to some extent, the fact that you all are supportive of open source means you actually are supportive of a thousand flowers blooming, but also a lack of direct government control and literally everything AI. So it's a very interesting counterstance to take.
So Elat has our talking points better than I have because that is absolutely right.
One very fundamental difference I think we have with the Biden administration is the Biden team
really looked as AI as something to be centralized and controlled.
Everything was about how do we make sure that we regulate these three or four companies and
only three or four companies can build AI.
They got to submit their models for testing.
It was all about control in a centralized fashion.
Now, when I moved to D.C., one of the things I realized is that's kind of the way D.C.
thinks, which is control and centralized in one place.
You and I know that's not how Silicon Valley thinks, and one of the reasons Silicon Valley is the envy of the world is because anybody, any day, can go to a Ycombinator seed round or race or just go off to the races and they could just build something amazing.
And it catches everyone's imagination.
And I think what we want to do is enable just that rather than say, okay, we want to centralize power, you know, within a 10-mile radius of where I am right now.
Yeah, in general, too, central planning tends to lead to very bad economic outcomes.
And so that's the collapse of the Soviet Union, et cetera.
And so it's something that's been tried many times before in many industries and it tends to lead to a very bad place in terms of innovation, in terms of economics.
I think one of the things that people like under price about open source models is they're going to happen and it's a strategic weapon.
They're happening and Western companies are using Chinese open source models very broadly already.
And so like if you believe that like not every model is going to be ideal.
like neutral or, you know, aligned with American and democratic values, then you probably
have a problem, right? And so the ability to like support, you know, whatever your point of view,
like pluralism and openness and innovation and have some control of like as a, as an ecosystem
versus in a centralized way is a very different point of view than like, you know, we'll let China
develop it. Yes. And I think you make a profound point. And you already seeing that where when
somebody's using deep seek or quen, that's an expression of soft power. And I think I would much
rather have, you know, them using a model built by somebody who kind of agrees with us and has our
values. That's number one. The other issue I would point to is that these models, we don't know
what's inside them. Interpretability is still a nascent field. And you could very easily see ways
where you plug a model into a cursor or windsurf and you generate a piece of code. And then two
years down the road, it turns out that code had a little if statement saying, if I'm running in some
piece of critical infrastructure, go do something else. And we don't have ways to validate all that.
And so a lot of reasons why we want to make sure that our American models or Western models
wind up winning. And this is something I think we're going to put a lot of it focus on.
Just because you have such a good view into this, can we talk a little bit about infrastructure and
energy, since you kind of made that like point number one in terms of what sort of stack we mean.
Like, people hear these claims from the leaders of the large labs that, you know, we're building a data center the size of Manhattan or, you know, it's the energy that a city uses at any point.
Can you contextualize, like, how much capacity we really need to build and sort of like what the biggest bottleneck is?
Like, is it, is it the grid?
Is it sources?
Is it workforce?
Like, you know, when you want to solve this problem, like, as a systems person, like, what is the first problem?
So the first thing I would say is it is a system.
And this system was one that wasn't really battle tested for decades.
Somebody showed me this number.
I think the United States basically had like one to two percent of power usage growth
for a very, very long period of time.
And so you can imagine this whole system of everything from gas turbines, coal, renewable energy.
There was a regulation of which kind of really stopped nuclear.
And then you had these per state utility companies, which often didn't have the incentive to innovate.
you basically kind of ran the state.
You weren't really, you know, getting new demand or competition.
You had a grid which wasn't really pushed because, again, you didn't need to.
And then you have essentially a patchwork of environmental laws, regulation, everything from
water to emission to a whole other sort of things, which I'm sure I'm forgetting, right?
So somebody kind of explains me as kind of this tangled spaghetti mess of things,
which, again, until two years ago, was just fine because you and I were not dramatically using more
energy than what we were doing 10 years ago. Now, that obviously changed. The scaling loss arrived and
everybody is trying to build new things. And I think the way we are trying to attack it is that every
single step of the way, which is, one, is how do you make generation better? Second is how do you make
sure we make constructing these data centers better, making it easier to kind of these regulations and kind of
get these red tape out of the way, making sure we put focus on the right energy sources and making
sure, like, you know, we have those lined up. So we're trying to take an approach to all of this,
but it is a complicated problem just because there is so many different players, so many different
states and so many different patchwork of laws and regulations involved. But I think what, you know,
I encourage folks to look at the executive water yesterday, which the president signs on
infrastructure, which I think is going directly at this. We also have something called the
National Energy Dominance Council, which works very closely with Secretary Bergam and Secretary
right, you know, for interior energy. And I think you're going to see a lot more from us on that front.
The short answer is it's complicated. I think we're taking a very, very strong approach to this,
but there's going to be more to come. How do you think energy infrastructure is going to feed into
these big data center buildouts? And so, you know, one theory I heard is that fiber is cheap and easy
delay while grid is hard, building out the electrical grid. And so therefore, you're going to centralize
data centers in your sources of cheap power and then you just run, you know, fiber into them versus
is moving things around based on, you know, other types of capacity from a telecommunications
to their perspective. Are there specific sources of energy that you think are going to power
this AI revolution of the things we need to be invested? Obviously, the president has issued
some executive orders around nuclear. I'm just sort of curious how you think about what that future
really will be and what are the major sources of energy that we really need to be dependent on
and how does that all shape up from an infrared perspective. What I think we see our role as is like,
get rid of the red tape. Let's make sure the permitting on these things are super easy. Nuclear,
There's another case where I think for decades and decades, the climate lobby and the doomers
that kind of stop any real efforts over there.
So I think we've seen a lot of effort.
Let's get the red tape out of the way.
Let's get construction going and see where we get.
The other thing that I think is interesting from an infrastructure perspective is manufacturing
capability and supply chain.
And a subset of AI supply chain is dependent on China or other countries.
Are there certain areas of supply chain that we should be repatriating back?
Or how should we be thinking about more generally American manufacturing?
I say that America needs not just engineers, but it needs people up and down the stack.
It needs electricians, technicians, we need to get construction going, and we need to get these
jobs and this whole ecosystem back in the U.S.
So if you look at the action plan, there's a bunch of stuff in there about this.
I think I mentioned two parts of the action plan, which is building and then innovation
on cutting out red tape and open source.
And the president also talked a little bit about copyrighters today.
The third piece of the app action plan, which I also think is a pretty dramatic switch away from how the Biden folks thought about it, is around making sure the world uses our standards and our technology.
So just for context, and again, this is something unless you are a policy one, you may not be super familiar with.
Under the Biden era, there was something called the Biden Diffusion Rule, which basically was a 200-page document, which basically made it illegal for America to export GPUs.
It was really hard for, you know, if you're Jensen or if you're Lisa Sue, to really kind of get your GPUs out to other countries, even some of our allies who want to act, who are really enthusiastic about AI and they want to help us out, but we're not actually giving them GPUs.
So we listen to that order, and one of the things we talk about is how do we make sure that we get all of our allies around the world using the American stack.
So that means how do you make sure, and we just did this in the Gulf with the American AI acceleration partnerships,
How do we make sure we are getting our GPUs over?
And one of the other ones are doing that is we get our GPS over.
We probably get them to run our models as opposed to models from another country and we go from there.
So having the sense of an American stack that we can export and the world standardizes on,
that's, I would say, is the third part of the action plan.
One of the topic that I think people believe that China has a lead in right now is certain areas of robotics.
That could be human form or other.
It's drones.
it's potentially catching up on self-driving and autonomy.
So if you think about that,
both from a societal perspective,
it's obviously automotive.
So if you look at European market share, cars,
D-Y-D and others are really taking enormous amounts of share.
And if you look at these,
these are the same technologies
that would also be used from a defense perspective.
And so to some extent, one could argue that there's two parts of AI,
there's the digital forum side of it,
and then there's the real-world robotics and drones and interactive side.
How do you think about that in the context of American policy
and what in the action plan addresses
as that capabilities to build these physical world products.
The Action Plan actually has a section in it of making sure, you know, we are set up for
robotics. I think it's obviously that's going to get super key within the next 18 to 24 months.
I would say it ladders from everything else that we talked about, both in the U.S. and
internationally. The first is making sure that our model companies can actually build as fast as
they can. Our startups can go innovate as fast as they can. The second piece, I would say,
is we want to make sure that the world is using our robotics companies and our models and not, say,
deep seek or quent. And that's actually one of the things. Because when I was talking to a bunch of
robotic startups, you're seeing a lot of distilled deep seek, a lot of distilled quen out there.
And what we want to do is to make sure that we have an open source response and American response,
which pushes our products as a standard out there. But, you know, it is a focus. I think it's going to
increasingly come into focus in the next, say, six months to 12 months. And we are spending a lot of time on it.
Related to that, there's always a question of how do things actually get done in politics
and how does it translate into the real world? And I think, you know, you've gotten something like
90 different agency actions listed in this action plan. And how do you think about these things
actually translating into industry, the economy, action by companies and other players? Like,
what are the mechanisms that you all have to sort of ensure that these things come together or
happen? And if they don't come together, what's plan B? Well, there is no plan B. We want to
get this done. And I think one of the things about the Trump administration you will see is that
the administration moves really, really fast, which is why in the first week we had a bunch of
executive orders. Look, we already work at all of it. We had three executive orders signed yesterday,
one for infrastructure, one for export, which can apply to a lot of things we talk about,
and one to stop ideology and wokeness in DEI, and I think you're going to see a lot more.
We had already at work on pretty much all of it. That is no plan B. We're going to go get this done.
And the other part I would say from yesterday is I've been indundated.
with just great response from the industry, a lot of folks that you and I know,
who are just really excited to see the government actually maybe understand AI
and is actually happy to, you know, make sure that American companies can go build American
AI. So I think they're also very excited to go partner with us. So it's go, go, go, no time to
waste. We're getting it done. There is no plan B. It's actually exciting because I think,
to your point on understanding AI in government, when I've looked at prior administrations,
be the Republican and our Democrat, a lot of the people who went into them from tech were
weren't the core driving forces of tech in the technology world. In other words, they get great
people, very nice people, but it wasn't the top of the industry. It wasn't necessarily the
deepest technical experts in some cases. Obviously, there's kind of examples of that. And so I think
one thing that's striking about this administration is the caliber of tech people that they actually
got this time around is very high relative to prior administration. So I think that impacts on the
understanding. It impacts how you all are thinking about the world. So I found that very exciting and
inspiring in terms of just having a really strong technical basis for what you all are doing. So I
I think that's really good.
Thank you.
There's a lot of great people in the administration for tech industry, not just in AI, but for example,
you have Emil Michael, you know, as the Undersecretary for R&E who's running DARPA in the Pentagon.
You have many, many others.
You know, one of things I think about is we just bring a understanding of how the tech industry works.
What is possible?
What is in?
We bring a sense of urgency.
We also just really deeply understand the technology.
Like, you'd be shocked at how often I've seen David Sachs in a meeting explain how inferencing works, you know,
what high bandwidth memory is, you know, how the world has shifted from a, you know,
pre-training context to a post-training context. And so we can just really mix it up on the technical
details. And we obviously also have a lot of strong social ties to the industry. So we can call
upon them to help us out. So I think it just adds a very different flavor of understanding of
AI where, again, to go to my other point, I think DC kind of just suffered from a lack of real
technical understanding of both the industry and of the products involved.
exposing my cards a little bit here, but it sounds from both your policies and what you're saying
that you're on the same page. Do you think that the U.S. should be like a technocracy,
like if it just said that simple statement? What does a technocracy mean?
Leading with technology and then having a bunch of people in technology leading the country.
I'm not sure I would think of it that way. The way I see it is America has been blessed to have the leading
technology ecosystem of the world. And that is an ecosystem, which is in an intense competition
right now. And I think we could have easily lost that competition, and it's still a very, very close
race. And we need to do everything we can to protect, preserve, and extend our lead. But at the end of
day, if you look at this administration, we are still trying to make sure that, you know, we serve the
American worker, the American workforce. If you look at the action plan, you know, that is at the heart of
everything we do. So I don't think I see it maybe exactly the way you describe it. I see it more as,
you know, we have something in a technology ecosystem that is the envy of the world. The president,
by the way, you know, when he was on stage yesterday, he talked about a lot of the inventions
that the United States had made, right? Like, we did the integrated circuit. We had shockily
invented the transistor. We had the fat children. All the way we, you know, internet came from us. We did
page rank in Google. We did the iPhone and Cupertino. So many of these things that the world winds up
using. So what do we do to make sure we preserve that lead, especially when it comes to AI.
And if you look at AI, look, there are so many potential timelines that AI could take.
I have read AI 2027 from Daniel. I have read much more optimistic takes on AI. I think there's
not going to be an event horizon beyond which you and I can have reasonable discussions on how
AI could play out. But in any one of those scenarios, I want to make sure that the United States
is well positioned where we can take advantage of the productive.
and the science and the technology breakthroughs
that's going to happen
and then be set up
for whatever happens next.
So I'm not sure
I really,
I kind of answer the question
you phrased it.
No, no, no.
You did?
I was trying to ask the question
in a bit of a triggering way
because I think a lot of people
would say, like,
it shouldn't just be driven
by the technologists.
And it's like,
what good does that, you know,
do us in sort of winning the AI race?
But I think that's actually
a really profound claim that you made,
which I hear as, like,
the country that builds the most
capable AI systems, they gain a lot of upstream control and influence that has been traditionally
very American, right? And we should all care about that. Like you use the examples of accelerating life
sciences, new materials, optimizing industry, being more efficient in healthcare and education
and things that matter to every American and like compounding national wealth. And so I actually
think that like sometimes a lot of this discussion becomes like, you know, an argument about like
what parties have influenced versus like what position do we want to have.
as a country, right? And whether or not we want that edge. That's right. And I think very simply
we want to win. Yeah. So I have two questions for you before we run out of time. One is just going
back to this idea of you being the strong proponent of open source and open weights. What is the
strongest counter argument to the people who would raise concern that the P Doom, the probability
that there's some sort of like a cycle of key man risk or, you know, some factor of abuse of
these powerful models increases with open source models. If you look at the
action plan. It's kind of a manifestation of how we think about things, right? Like, we do talk a lot
about risk. We talk a lot about having systems in place to identify cyber risks, bio-risk, etc.
I think the difference from the Biden administration or the folks who talk about Pidom a lot
on Less Wrong is that we are just inherently more optimistic. If folks haven't seen it,
I encourage them to watch the vice president's speech in Paris where he talked about,
look, we want to embrace AI with optimism rather than fear. And I think one of the things which
happened is that there was such a lot of fear, I would say, mistakenly placed on open source.
I think there are two kinds of fears people talked about. One was what you talked about,
which is, hey, what are the risks if these models could do really bad things?
The second was, hey, are we actually giving away our secrets to China? And what Deepseek showed
us is that China are actually building these models just perfectly fine just by themselves.
and exactly American models which were far behind.
So, you know, immediately, I think that argument got refuted.
On the P-Doom question, I think that's a perfectly fair question,
and I think we need to be vigilant about it,
and the action plan talks about it.
But we have to remember we are in a race with China,
and there are going to be catastrophic consequences
if Chinese models are running on every robot,
every camera, every car, every device around the world,
and we just got to face that reality.
I think also the people who are driving the P. Doom arguments, to some extent, are coming from one or two large companies that have close source models. And so I think we also forget the incentives of who's actually pushing for this. It's a very traditional form of regulatory capture. If you have a big pharma company, they work with the government to prevent other entrants into the industry. And this is exactly what it felt like is at least partially happening in the AI world. Now, that may be for perceived altruistic causes or other things, we're worried about humanity. But I do think the reality is,
a small number of companies have been kind of pushing this narrative pretty strongly that
open source is bad. And these are companies that control the close source models.
Oh, absolutely. I think, you know, I think there's a, I think there's a few things going on.
One is that, you know, people pushing for regulatory capture. Second is obviously, you know,
the schools have thought from effective altruism and a lot of people kind of worried about this
all kind of mixed together. Here's my rebuttal to that. Like, I think one of things that
open source software has shown us on the internet is that by default, open source is just safer and
more secure. What is Linus's law stage?
More eyes make every bug shallow.
And over the last 20 years, what has the security industry learned?
More scrutiny you put your libraries to, the more scrutiny you put your browser rendering engines to, the safer it becomes.
If you have seen that time and time and time again, and I think the same holds true for open source and open weights, where I sort of, you know, if you have a model up on hugging face and somebody downloads a 500 gigabyte file and the thousands of students and researchers just kind of pounding away on that,
I think there's a good chance they're going to find issues a lot better than a very small safety team inside a large lab.
So I'm a big fan of open source sometimes being a lot more secure than closed source as well.
Awesome.
Thanks so much, Sri Ram.
Thank you, your excellency, your governorship, your grace.
I'm not sure again what the right title is.
Your policy advisership.
Feel free, you know, the more inflated it helps my ego.
So thank you.
It was your excellency.
That's what we started with.
We really appreciate the time of day, your Sherramship.
So thank you for joining.
Thank you so much. It's such an honor, you know, and I love the work you folks do, and thank you for having you.
Find us on Twitter at No PryorsPod. Subscribe to our YouTube channel if you want to see our faces,
follow the show on Apple Podcasts, Spotify, or wherever you listen. That way you get a new episode every week.
And sign up for emails or find transcripts for every episode at no-dashpriars.com.