a16z Podcast - Intelligence in the Age of AI with new CTO of the CIA
Episode Date: March 11, 2024

Artificial intelligence has taken the world by storm. But despite the hype around personalized avatars or podcast language translation, artificial intelligence is not only impacting the creative spheres; in fact it's hard to find an industry that isn't being touched by this technology – and defense of our country is far from excluded.

In this episode, originally recorded in the heart of Washington DC this January during a16z's American Dynamism Summit, a16z General Partner Martin Casado and a16z enterprise editor Derrick Harris are joined by the first-ever CTO of the CIA, Nand Mulchandani.

In this wide-ranging conversation, they discuss the evolving relationship between analysts and AI, how governments can keep up with this exponential technology, and finally, how it's impacting both offense and defense. This episode is essential listening for anyone interested in the intersection of technology, national security, and policy-making in the age of artificial intelligence. Stay tuned for more exclusive conversations from a16z's second annual American Dynamism Summit in Washington DC.

Topics Covered:
00:00 - Intelligence in the Age of AI
02:28 - Rethinking Jobs and AI's Asymmetric Power
05:00 - The History of AI in the Intelligence Community
07:00 - Operational Utilization of AI
10:40 - Analytic Capabilities and Uncertainty
12:56 - AI's 'Hallucination' Concerns
16:37 - Analyst Skill Sets and AI Tools
26:29 - Supply Chain and Open Source
31:35 - Public-Private Partnerships
41:33 - Government as a Customer and Partner in Tech
42:43 - Policy, Technology, and Regulation

Resources:
Learn more about AD Summit 2024: www.a16z.com/adsummit
Watch all of the stage talks at AD Summit 2024: https://www.youtube.com/playlist?list=PLM4u6XbiXf5pAKmk1AeZ9964KGScf4lHM
Read the CIA's announcement around the new CTO role: https://www.cia.gov/stories/story/cia-names-first-chief-technology-officer/
Find Nand Mulchandani on Twitter: https://twitter.com/nandmulchandani
Find Martin on Twitter: https://twitter.com/martin_casado
Find Derrick on Twitter: https://twitter.com/derrickharris

Stay Updated:
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://twitter.com/stephsmithio

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Transcript
We're a human intelligence operation.
We can now make anybody be anybody and sound like anybody and look like anybody.
Stop thinking about automating, getting a 10%, 20%, 30% on your job.
Tell me in 5 to 10 years how you're going to reimagine your job.
This is no like asymmetric superpower, which by the way is very, very different than the internet.
It's because policymakers now have a knob where we now have to decide explicitly this or that.
These are the largest compute projects that mankind has ever done before. Like, we've never done anything close to this.
Life finds a way, which is if there's demand for it, people will supply it.
Artificial intelligence has taken the world by storm. I mean, just think about
it. Here in 2024, anyone with an internet connection and a few minutes to spare can literally
spin up a Disney avatar of themselves, translate a foreign podcast into their native language, and even get
help in writing their vows. But artificial intelligence is not just impacting the creative spheres.
In fact, you'll be hard-pressed to identify an industry that's not touched by this technology.
And the defense of our country is no exception. In today's episode, originally recorded in
the heart of Washington, D.C., back in January, during A16Z's American Dynamism Summit,
A16Z general partner Martin Casado and A16Z enterprise editor, Derrick Harris, are joined
by the first-ever CTO of the CIA.
Yes, that is the Central Intelligence Agency.
We're joined by CTO Nand Mulchandani
to discuss the future of defense intelligence.
In this wide-ranging conversation,
they discuss the evolving relationship between AI and analysts,
how governments can keep up with this exponential technology,
and finally, how it's impacting not just offense, but also defense.
I hope you enjoy this conversation.
As a reminder, the content here is for informational purposes only, should not be taken as legal, business, tax, or investment advice, or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any A16Z fund.
Please note that A16Z and its affiliates may also maintain investments in the companies discussed in this podcast.
For more details, including a link to our investments, please see a16z.com slash disclosures.
Martin, I'll assume most people watching or listening to this are fairly familiar with you and your role at A16Z. But, Nand, because the CIA CTO role is relatively new,
can you give us a quick background on that role and kind of what your objective is?
Yeah, so there are really two stories here.
One is the agency needing a CTO and kind of what created that, and then my own journey to it.
It all starts with Director Burns taking over the agency beginning of the administration, and he, just like any great business leader, sat down and did a business review, which is what business are we?
and what are the big threats and the other pieces.
And the conclusion was actually fairly interesting
and not a surprise, was that we had to pivot from CT, counterterrorism,
which had been the big sort of focus for the agency
for a couple of decades,
to great power competition.
And then the interesting, unusual sort of thing
was this big amorphous thing called technology.
There was huge interest, obviously,
from policymakers in technology that we needed to start looking into
and build policymaking around.
And I think it might be helpful
to the listeners to understand kind of what CIA actually does
because I had to actually learn a lot about the agency
when I came on board and sort of what Amy Zegart called spytainment
to like what we actually do.
And so this pivot in terms of rethinking the agencies
focus on technology, there were three things that happened.
One is we created the China Mission Center,
which is how we actually focus on threats and opportunities.
T2MC, which is the Transnational and Technology Mission Center,
and this weird thing called a CTO.
and we're a 76-year-old spy agency.
So we've been doing technology for a long, long, long, long time.
But technology has been somewhat latent.
What happened is with this focus on tech,
we basically needed to focus on three different things
that I think were different from what we were doing before.
Number one is we are as an agency fairly vertically focused.
So we have five directorates that focus on five things
that sort of come together in the mission centers.
So the idea of the CTO is really to go
horizontal versus vertical.
The second thing is we as an agency are very focused on the here and now, which is there's
a crisis and we jump on it.
And the CTO function really gives us the luxury a little bit of looking a little bit out rather
than focus sort of inward.
And the third thing is a little more external versus internal.
So engaging with the outside world, those are kind of the big dimensions there.
And I guess, Martin, people know you as investor and founder, but maybe it's worth noting.
You also worked in the intelligence community for a little bit, right?
So you come to this with a little bit of background, huh?
Yeah, that's right.
Yeah, 20 years ago.
So before we zoom back out to maybe a policy and kind of broader discussion, like, let's start with talking about AI, which is kind of disrupting, I think, everything at the moment.
I would love to get both of your perspectives.
Maybe, Martin, we can start with you on, like, how are you thinking about AI as it relates to the intelligence community, like where we're at and probably where we're headed?
AI has been around for a very long time.
And I will say that even when I was part of the intelligence community 20 years
ago, we talked a lot about, if you have all of this information, how do you kind of detect
signal. A lot of this was like very significant big data processing. And a lot of the more kind
of advanced notions actually came out of the intelligence community. I mean, a lot of what
the intelligence community deals with is things like entropy and covert comms, etc. A lot of
these ideas are just fundamentally tied with AI. And so I just think there's a longstanding
history. What's interesting to ask is how does this kind of new generative AI world impact
the intelligence agency or intelligence in general.
And one idea is like, well, okay, so we can now make anybody be anybody and sound like
anybody and look like anybody.
And, oh, that could be a huge problem because now deepfakes could be a problem.
And I'll tell you my conclusion on a lot of this is it actually turns out that if a computer
generates something, the ability to kind of fingerprint that isn't that difficult.
It's actually not that hard, right?
So I actually think it becomes much, much easier to detect if people are using AI
in tooling, actually, is the reality. But that also means that we can't use them as tools. And then
we've got to go much more to the fundamental. So in this kind of weird irony, like we've got this
set of new tools that allows anybody to sound like anybody or be anybody. And that's going to be
heavily, heavily used around the world. But like for those that are sufficiently sophisticated,
it's going to be quite possible to detect them. And I think that in some ways, this is kind of this
nice cover in chaos for what our agencies and many agencies are very good at, which is kind of much
more human focus, less than core technical approach to these intelligence problems.
I guess it's probably fair to say, yes,
AI's been around for a long time, machine learning, even deep learning back
a decade ago now, but generative AI, LLMs, and that sort of thing are definitely
newer.
How do you think about it?
Like, Martin referenced kind of the defensive and the offensive side.
Yes.
And I'll just complement what Martin said.
Really, when you look at the two big functions that we have as an agency, right, we've got
the operational side and we've got the analytic side.
And of course, those are composed to different things.
But those are really broadly the two big functions.
So what Martin absolutely nailed is the operational side of it, it's spy versus spy, it's cat and mouse, it's all the usual stuff, right?
The democratization of this stuff and the availability, we know is going to drive each one of our competitors to be driving this up.
We're going to be aware of that.
We're going to drive our own stuff up.
So there's this aspect of we don't know where this is going to go, but it's definitely not pulling back.
This is where the stops are off.
However, the thing we've always got to keep, at least we keep in mind,
we're a human intelligence operation.
We're a foreign intelligence operation.
We're all source.
So we have a particular focus within the 18 intelligence agencies to focus on a particular thing.
So everything he said in terms of using or wielding or scaling this AI in operations,
all is within the context of taking our case officers
and operations teams and making them successful, right?
Because in any good sort of team sport,
we play a particular role on the field
and applying this tech to scaling
and making our teams, basically, our folks, more effective
and win is basically the game.
Now, on the analytics side,
it's a complementary, but a different problem.
And this is where the big data
and the other pieces come in is,
to me, what's revolutionary about this
is, underlying AI, the promise
and excitement always is the pattern, the ability to discover patterns in large amounts of data
that typically humans can't see. What's different now in the older days of when we were doing
big data analytics and things, I call this the pull model versus the push model, which is
the analyst had to come up with creative stuff to think about first and say, I'm going to put
a query in to go find the information. The problem is, the conceptual boundary of an analyst
to hit the query that hits the data,
it was like spearfishing, right?
You're like going in and trying to get this one analytic idea
out of this massive data.
What's beautiful about this tech is
now this stuff can actually push stuff, right?
You can almost invert the process
where a lot of the stuff gets pushed to you
because it starts to understand
what you're looking for in some sense
and starts tailoring the stuff
and gets deeper and deeper and deeper over time, right?
this is the stuff you've been driving to.
Now, that has an evil twin problem to it
that I've been spending time on thinking about
just as it relates to our work
is what I call the sort of rabbit-holing problem,
which is the very thing that makes these products
and technology so effective,
which is learning about you and knowing
these algorithms are built to please you.
They want to make you happy.
Well, it can then also, unfortunately,
take the things that you as an analyst are weakest at, or it's your weak knee, or get into your
head and start rabbit-holing you down this thing, which amplifies your biases.
So we've got to be very careful, very smart about where we apply it when and how modulo
all these particular things that are out there. How close are we actually? I mean, maybe going back
to the early days of talking about big data, like how personalized something will be or how easy
it's going to be to sort. I mean, are we still
a long ways away from, like, very much automating, like, maybe a CIA analyst?
So I think we're now understanding the kind of power and limits of this technology.
I think there's a very important topic for us, maybe a prediction to make, which is, will
this change the existing equilibrium that is now an intelligence between kind of like
offense and defense, et cetera?
And like my deep belief is exactly what that says, which is like it doesn't change
the equilibrium in the sense that there's going to be more telling for offense, it's going to be
more telling for defense.
This is no, like, asymmetric superpower, which, by the way, is very, very different than the Internet.
And this is why this is such a bad analog.
The Internet was asymmetric, which is, like, the more capabilities you had with the Internet, the more vulnerable you were.
Which is why, like, when we were working on the terrorist threat, we're like, we're the most vulnerable nation in the world.
And, like, many of the people that we were focused on didn't even have laptops.
It's nothing like that.
AI is something that anybody can use.
It doesn't change some fundamental equilibrium in an asymmetric way.
So that's the first answer to your question is, does it allow for personalization or does it allow for kind of anything specific?
I think that it doesn't create any imbalance between two opposing sides.
So the second answer to that is, I think we're pretty clear about even the individual limits
of, let's say, LLMs right now, which is they're very, very good for a few use cases.
They're very, very good for, like you ask one question, they give you one response.
And if that question is in the corpus of training data, it'll give you a good response.
The problem is, if the question is out of the corpus of the training data, like, you don't
really know if the response is good or not. Which is fine if you're just asking one question
because most humans will be able to check that, right? So it's very good if you have like a co-pilot
for an analyst and you help them with their job. The thing that we've not seen any evidence of
is agentic behavior. By agentic behavior, I mean, like, you ask one question, then you step
away and you go get a coffee and it does its own thing. And the reason is, if it generates
anything out of distribution, like, and by the way, out of distribution means it's not
commonly represented in the training set, then that error is going to accrue,
and it tends to accrue exponentially, provably, right?
And so I think right now we view these as new tools that you use side by side,
but they don't become their own separate kind of entity.
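(An illustrative aside: a back-of-the-envelope sketch of the error-accrual argument being made here. The per-step figure and function name are invented for illustration, not numbers from the episode. If each autonomous step stays in-distribution with probability p, the chance an n-step agent never goes off the rails is p to the n, which decays exponentially.)

```python
# Hypothetical sketch of the error-accrual argument: if each autonomous
# step of an "agentic" chain stays in-distribution with probability p,
# the chance the whole chain stays reliable is p**n, which shrinks
# exponentially with chain length n. The 0.95 figure is an assumed
# number for illustration only.

def chain_reliability(p_per_step: float, n_steps: int) -> float:
    """Probability that all n independent steps stay in-distribution."""
    return p_per_step ** n_steps

for n in (1, 5, 10, 50):
    print(f"{n:3d} steps at 95% per step -> {chain_reliability(0.95, n):.1%} reliable")
# 1 step -> 95.0%, 5 -> 77.4%, 10 -> 59.9%, 50 -> 7.7%: one checkable
# answer at a time is fine; stepping away for a coffee is not.
```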
Yeah, and that absolutely brilliant point is this idea of the accrual of the probabilities
times probabilities times probabilities, which gets to the hallucination stuff, et cetera.
When it comes to my 11-year-old drawing unicorns, that's a feature, not a bug.
No problem.
No problem. Awesome.
Hallucinate away.
That's awesome.
That's great.
Playing games, doing all kinds of crazy stuff.
Like, that's amazing.
When it comes to analytic capabilities, when it comes to operations, we cannot have this level of uncertainty and not knowing and explainability.
I mean, the piece that's sort of interesting is I think that we're in such these sort of early stages of this game, right?
So everything we're talking about here is like we were in 1995.
I know. Like, this just showed up two years ago.
Exactly. I mean, this thing just happened. And we were just talking before this, like all of us, three old fogies here sitting around the porch. This very well could be a porch with rocking chairs. And us sitting around talking about the early days of the Internet, we could not have possibly imagined the stuff that's happened over the past years. So at the agency, what we're doing is saying, great, there's a whole bunch of these basic use cases that are just, there's no question.
this stuff can get applied.
And by the way, we're all in on it, right?
So I never wanted to give you the impression
that this is something that we're slow-rolling
or overthinking it.
But to Martin's point, the applications
on a per-use basis, we have to think them through.
This is not peanut butter that you can just spread everywhere
and you can get goodness everywhere with no thinking.
So, for instance, we're public with the fact
that we actually have LLMs in production at the agency.
Awesome.
We have it in production in the open source team.
So those easy use cases, business
automation, other pieces. We're experimenting, we're trying, we're doing stuff. Now, the
co-pilot piece, the way I view it is, typically people jump immediately to the hardest of hard
problems and say, we're just going to go replace this. We're going to go do this. What we're
challenging everyone to do inside the agency, though, is it's one thing to look at the low-hanging
fruit, get that stuff automated, get the value. The thing that's most interesting, though,
is we're challenging folks to say,
stop thinking about automating,
getting a 10%, 20%, 30% on your job.
Tell me in five to 10 years,
how are you going to reimagine your job?
Now, here's where things get tricky,
because many of the people
who are challenged to reimagine their job
are looking at this technology
and learning it, right, for the first time
and understanding the power of it.
I'm still, I mean, all of us are still very far away
from knowing where this is going to go.
So it's really hard for them to imagine, with this prototype thing
that they're, like, still playing around with,
how is it going to impact or rethink my job?
So this is where we're pushing and experimenting
and encouraging everyone, try stuff, learn stuff,
get up the experience curve.
But we're not going to settle on an answer
because that's not going to just appear magically off of that.
So I read something recently.
It was basically a CIA analyst imagining their job.
I don't know what the time frame was like five or ten years out.
Right.
It was very much what you described, right?
Like, enter a prompt, go get a cup of coffee.
Right, that's right.
And then by the end of the day, we've arrested someone in France for something.
Right.
That's how people worry about.
It's like, we took the kids to Disney World and you go on the Epcot ride, which was the
ride of the future.
It was from the 1950s or 60s imagining what the world would be like in 2024.
And it's like a handheld phone with a video monitor on it.
Like, okay.
So it's one of these things, which is really hard to see where the stuff is going.
Again, noting that it's early.
I mean, realistically, is this a thing where analysts just have different skill sets going forward?
They have different tools at their disposal.
Well, I think we can say something relatively specific on this.
And then there's a bunch of stuff we don't know.
But here's the thing that's relatively specific.
So the way that these large language models in particular work is they get a whole bunch of data.
And then they basically have a distribution of, you know, how commonly the data was represented.
That's what they do, right?
They do basically what's called kernel smoothing over positional embeddings, which is just like averaging a bunch of words.
So it averages a bunch of words.
And then for any time that you ask it a question, it kind of gives you like the most common outcome.
So what does this mean?
This means for mean things you want to do, for the average thing you want to do, it would be very good at getting you an answer.
So for any kind of standard rote thing you want to do, it's going to give you an answer.
The problem is, if you want to do something in the tail or in something that's new or an exception, it doesn't know how
to do that. Like, there's no mechanism within it that will do that. And so much of intelligence
work, I would argue, is actually in the tail, right? I mean, it's like, these are the problems
the intelligence agency is particularly good at. And so I think we can believe that there's a world
where every analyst will have strapped on an LLM that'll help them with the routine stuff. But, like,
so much of the job is tail reasoning, reasoning in the tail, that this is not going to remove the
humans. And by the way, this is just the intelligence community. I think this is most work,
but I think it's particularly acute in intelligence.
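(To make the "averaging" intuition above concrete, here is a deliberately toy sketch, a one-dimensional kernel smoother, not how an LLM is actually built: predictions are weighted averages of training data, so they are good near the mean of the data and fall apart in the tail. All names and numbers are invented for the example.)

```python
import numpy as np

# Toy Nadaraya-Watson kernel smoother: every prediction is a weighted
# average of training targets. It tracks the data well where the data
# is dense ("the average thing") and degrades out in the tail, loosely
# illustrating the in-distribution vs. tail-reasoning point. This is a
# simplified stand-in, not an actual LLM mechanism.

rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 5.0, 200)                 # dense, "in-distribution" region
y_train = np.sin(x_train) + rng.normal(0.0, 0.1, 200)

def kernel_smooth(x_query, x_train, y_train, bandwidth=0.3):
    """Gaussian-weighted average of training targets around x_query."""
    w = np.exp(-0.5 * ((x_train - x_query) / bandwidth) ** 2)
    return np.sum(w * y_train) / (np.sum(w) + 1e-12)

for xq in (2.5, 4.5, 8.0):                           # 8.0 lies far outside the training data
    print(f"x={xq}: predicted {kernel_smooth(xq, x_train, y_train):+.2f}, truth {np.sin(xq):+.2f}")
# In-range queries land near sin(x); the out-of-distribution one does not.
```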
Yeah, and I break it up into the first piece is,
does this technology replace the analyst?
The second piece that Martin talked about is the co-pilot model,
which is I do my work and I have a little wing person
that helps me with all the routine stuff or scaling,
but it's exactly his point,
which is it's not the creation of new information from old information.
Like human beings uniquely create new things, new information.
It's unclear whether these systems actually produce new
information or new thought. It's just finding it or routinizing it. The third one is what I call
this sort of crazy drunk friend problem, which is the hallucinating, which has a role in some
disciplines, right? Making new art, poetry. And in the analyst function, it's this point of finding
that point on the distribution. Because if the average policymaker could think through the average
use case, then what's the role of the analyst? The role of the analyst is to have this holistic
piece of thinking through probabilities. I mean, it's kind of what an AI program would do to some
extent if you start modulating where on the sort of distribution you want to go.
That was a great explanation, and I think it's exactly right. So imagine you're an analyst,
and imagine if you get paired up with some person that's been in the agency for like 40 years.
So this person doesn't know new technology. It just knows what everybody's done a whole bunch.
Is that the entire perspective? Of course not. Like you have to evolve, but it's a very important
perspective. So LLMs are very good at being that old person. They're very good at being like,
well, this is how we've done it in the past. Here's a recommendation. But that's why we have
people to make a decision. Like, do I want to do something new, something that's in the tail,
or do I want to listen to this person that's been around for a very long time? So it's a very
concrete kind of mental bookend for doing things. But the majority of the value,
which is the new stuff, will still remain with the person.
One interesting thing for me has been, having spent 25 years in the Valley, done a bunch of
startups, and now being on the government side, is a lot of the tech discussion around this is
about the possibilities and all the great stuff on the creation side, which is awesome.
Innovation, invention, getting great people to do awesome stuff.
But you have to flip over to the buy side of technology.
This is my second run, Pentagon was two and a half years and now at CIA, of being on the buy side of technology and seeing all the stuff happening.
I just actually took a red eye in from California.
And over there, it's all about possibilities.
And here it's about how do we take the job, the function that we have to operate in with all the constraints and things, which are, by the way, not constraints that are artificially imposed.
I mean, the CTO office at the agency right next door is where the PDB gets made for the president.
And so by the time something hits that threshold of getting into the presidential daily brief,
you can imagine the level of scrutiny and analysis and focus that our analysts put into this work.
So it's funny because, like, the excitement and hype about this technology versus us absorbing it
and making it battle ready is a long, long distance.
And so hopefully portraying or representing that side of the equation,
which doesn't happen rapidly, right?
It takes a long time for folks
in their own disciplines and life and careers
to understand what's the actual impact.
The absorption of this technology does take a long time
and for it to disrupt a particular individual
or an individual discipline.
Yeah.
So I experienced when I was in the intelligence community
something very similar, but modestly different,
which is the following.
I was in ops, and we would have missions and stuff to do,
and they would have very
specific requirements, and the pieces of technology that came from the private sector just were not
suited. And in fact, little-known story. So my PhD work was in software-defined networking.
Yep. That was back when it was done in hardware; it was too tough to program. Like, honestly,
the insight for that work all came from my time working with the intelligence agencies.
We had to build a network that had to like deal with certain things. And I actually came from
the computing side of the world. I'm like, these things aren't programmable. And to do what we need
to do here, I have to program them because Cisco just doesn't know what we need to do.
So there's also a flip side.
So everything you're saying, I totally agree.
It takes a long time to be adopted.
But also your requirements and needs are a bit different than what the market is solving for.
I've been a huge software-defined networking fan for a very long time.
But I stole it for a paper that I co-wrote with General Shanahan called Software-Defined Warfare.
I love it.
But yeah, basically this idea of like how do you actually take something that's on a hardware
curve and push it onto a software curve.
How do you do this reprogramming, et cetera?
And so the question we've been asking inside is what does software-defined intelligence look
like?
Right? So what's the next level of this stuff? To your point,
it's maybe more push versus pull.
It's this idea of going across that distribution curve and starting to understand.
One other piece that's been sort of right in the middle of this whole thing is the whole
policymaking debate inside.
One key point I wanted to make was even in this analyst
function. The work that each of us does has encoded in it the policies and outlines of what we have
to do as part of a job function. I call this code as law, which is that when you look at the applications
that you probably use at the agency and that we use there, we encode all of those rules and regulations
inside of it. And I call this the thresholding problem to some extent, which is inside a line of
code in our application, there is something that says, if probability of X is
beyond this, then do this versus that.
Totally.
Right.
We have lines and lines and millions of lines of code,
but it has those if statements in there.
Now, it's interesting because what that means
is we've implicitly taken
human decisions that a programmer
or a policymaker made and encoded them into code.
Now with these new sort of AI-based systems,
both the previous sort of supervised learning
and unsupervised, and now with these newer algorithms,
these are still probabilistic algorithms.
Except now the probabilities actually stare you
in the face in a way that previous systems didn't push, right?
So previous application systems never came up and said,
do you want the 49% or the 69% answer?
And now you decide whether 69 is high enough or not, right?
It would basically encode it and say, great,
there's an arbitrary number, 50, and anything above versus below.
Now, why this is becoming such a debate
is because the probabilities are now surfaced to the user's face.
And if they aren't, we have to train people to start thinking about when the system punches out a number.
How do you make a decision on a probability?
So I think that that's the big difference between before and now that we're having to retrain everybody
and why it's become a policy issue all of a sudden.
It's because policymakers now have a knob where we now have to decide explicitly this or that.
So to me, it's actually we're in a kind of the same world, but just more in an explicit world.
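(A minimal sketch of the "code as law" thresholding idea just described. The function names and the percentages are invented for illustration; the point is only where the policy decision lives.)

```python
# Minimal sketch of the thresholding idea: in the old style, the policy
# choice is buried in an if-statement a programmer wrote; in the new
# style, the probability is surfaced and the threshold is an explicit,
# adjustable knob. Names and numbers are invented for illustration.

def old_style_flag(match_probability: float) -> bool:
    # The "law" is hard-coded: 0.5 was someone's implicit policy decision.
    return match_probability > 0.5

def new_style_flag(match_probability: float, policy_threshold: float) -> bool:
    # Same logic, but the probability stares the user in the face and the
    # threshold is an explicit, reviewable parameter.
    print(f"model confidence: {match_probability:.0%} "
          f"(policy threshold: {policy_threshold:.0%})")
    return match_probability >= policy_threshold

# The same 69% answer is flagged under one explicit policy and not another.
print(new_style_flag(0.69, policy_threshold=0.60))   # True
print(new_style_flag(0.69, policy_threshold=0.80))   # False
```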
Before we get into policy specifically,
is there a sense where, like, I mean,
the advent of open source and then just the general
acceptance of open source now?
Especially in AI and also other emerging tech.
Does that ease the adoption to some capacity?
So open source tech, right?
Because in intel, we have the open source intelligence question as well,
which are sort of two different things.
And in terms of like you want to adopt something, right?
So there's commercial technology, not up to par,
but you need to be able to remake it in your own image
and then get something that's actually functional for you?
I would love to hear Nand's answer.
I've got a historic perspective on this,
but I'd like to hear your current perspective.
So the thing we're challenged with
on just this whole landscape right now
is each of the companies is offering, right?
A particular LLM.
And to me, it's turning into sort of like
different types of wine or different varietals
and I'm not a wine snob, nor do I know wine that well.
But I'm imagining it has something to do with
the lineage,
The data, exquisite data.
This one was grown in the hills of Normandy,
and this one comes from this part of Italy.
You're saying it's all nonsense.
He is, ultimately.
Right, so you're getting these LLMs
with these different lineages and different vintages
and different data, et cetera.
And each of these systems
are going to behave very differently over time, right?
Now, the question is the going big problem
of like, well, everybody's going to train
on the whole internet.
So everything's just going to look the same, right?
This versus that.
But I think there's a second question that we're having to deal with,
and this is where the open source question comes in,
is the ability to start training with your own data.
And the question becomes,
do I take a base algorithm or system that somebody has built?
That, by the way, has ingested the entire Internet,
which is both good and garbage, that has been ingested in.
And now I use that as a base platform,
and it may have a certain set of biases,
that it brought along with the garbage and cesspool
that the internet is, and all of a sudden now
I'm using that as my base.
Or do I want my pristine, handcrafted,
sort of made-in-bespoke fashion thing?
That's where the availability of these algorithms
becomes really interesting.
However, it has the opposite problem:
it doesn't have the imprimatur
and the stamp of a large company
that has the experience building
large software systems and training
and verification and other pieces.
So we then have to sort of know and understand the stuff ourselves and do all that work.
So I think that's a tradeoff that we're dealing with.
Yeah, yeah, yeah. That makes a ton of sense.
Here's a bit of a historical perspective on this, which is similar to what I touched on before,
which is, in my experience, the government and the intelligence agencies have to solve problems
that market forces don't really solve for.
In order to do that, there has to be some sort of flexibility, programmability, and open sources
always been a key component of this, right?
I mean, quite famously, SELinux came out of the NSA, and they used Linux to do it because of their requirements.
And, like, there's been a number of changes to algorithms like crypto, right?
I don't remember the details, but remember the NSA was like, oh, quadratic linear programming will break this,
so go ahead and do this kind of change in the algorithm, then it's better.
I mean, so these are types of things that have been coming out of intelligence for quite a while.
Now, I remember 20 years ago when I was in the depths working on one of these problems, and there's an old timer there.
And he goes to me, he's like, man, he's a tech old timer.
He's like, it's so great you have this open source because we can work with it.
He says, I remember in the time when all we would get was supercomputers.
And they would come out of IBM or come out of SGI or come out of KSR or whatever it is.
We'd buy one of everything.
But we only got what the vendors created.
And something that I do think about, it would be great to get your perspective, Nand, which is it's one thing to open source weights and biases.
That's one thing.
But these are the largest compute projects that mankind has ever done before.
Like, we've never done anything close to this.
And so even if the weights and biases are open source, I don't know how much you can modify it.
It almost feels like we're going back to this old mainframe day where it's great to have it and you can operationalize it,
but you're not going to have the same level of flexibility as you have with traditional software.
At least this would be my guess.
No, absolutely.
Anybody's ability to do something with that information is limited because it's not just the numbers.
You have to have the expertise you need to compute.
You need all the other pieces there.
Many of these trainings are hundreds of millions.
That's my point.
I have that information.
I just don't have the compute cycles to be able to do anything.
or modify or change it. So you're totally right. It will help with verification. It'll
help with training, testing. It'll help with all that stuff. That's fine. But you're totally right
to retrain. It does feel like we're going back into this kind of almost supercomputer.
You're completely right. By the way, the government is the best at, like, 100% the best at
procuring. I mean, there's a long history of that, but it's a very different ballgame. The ballgame
is completely different. The other piece with open source that we have to deal with a lot,
by the way, is the supply chain issues. Of course, yeah. Right. And this is a whole new level of
paranoia that you need to have working in a spy agency is it's one thing for your program to not
run or some e-commerce customer having a bug or something. These issues are still, again, also
really, really important for us. So what does that look like? If you go back, if you make
the supercomputer analogy, right? Like, what does a new, let's say, public, private, DC, Silicon
Valley partnership look like in terms of actually implementing and operationalizing AI models
and AI systems? I'll take a shot at that because it's something that we're, well, I think the U.S.
government is thinking about this writ large, is when you go talk to universities right now,
one point they make is they don't have the compute power to be able to rival Microsoft and
OpenAI and these companies, which is mind-boggling, right? Because, to your point, the supercomputer
systems and everything, when I was at Cornell, I remember we had a supercomputing site on campus
with money that the government had put in place to have these supercomputing centers, right? Illinois
had one, Cornell, et cetera. And we don't have that equivalent now. So now the National Science
Foundation, I think, some of the money that the government's allocated, I'm assuming, is going
towards building these large scale or not? Or the opposite. So we're not a policy-making organization.
We're actively trying to keep people from building them. Well, we should. I think there needs to be
this model of it has to come out from, away from just corporate organizations doing this, to
you know, and we're part of government, so it's a different thing. But the question is
whether this needs to get sort of opened up in a bigger way. So the thing that brought me into
the intelligence community is I was doing computational physics on supercomputers at a national
lab in the weapons program when 9-11 happened, right? And they're like, you have all the
clearances, you need to move to this kind of new area. And they moved me to the intelligence
community. I learned a bunch of stuff that way. I will say the work that we did on those
supercomputers, the government was the best. It created entire new
disciplines of scientists and career professionals in universities. It leaned totally into this.
And that's why we maintained a leadership position in the world through that in compute. And we still
do. I do think that it's a risk that we don't take the same attitude. Listen, I'm a VC here and I'm
saying I don't think the private market solves everything. I do believe in public-private partnerships.
I do believe in institutions. I do believe that the government has a big role to play here.
But I think that role to play is investing heavily in people and tech and careers and reaching out.
And my fear is that they're kind of doing the opposite, where back in the 90s it was exactly what you said.
I mean, I went to like Northern Arizona University, small mountain school.
And we had ASCI programs where they would come out and invest in us.
And I just feel that now, even though we have this new technology, it's very powerful.
And we are the leader and it came from the United States.
Instead, we're kind of pulling back from it.
And so I do think that this is a moment where I think the U.S. has to kind of
take pause and understand, are we undergoing a doctrine change where, when new technologies come,
we run away from them instead of towards them. And, you know, I think it's a real quandary.
Yeah, it does seem like that. Maybe there was a shift at some point, too. Maybe it was
the internet. I'm not sure where, like, in the supercomputer days, yes, you bought a system from
IBM or SGI. Cray. Or something. Cray. But that was a hardware system, right, running some
very specialized software. But today, yes, everything comes out of these huge companies that have
access to all the data and all the computing power.
And, like, I don't know how that affected, like, the power shift, but do you sense
a way to project it?
Yeah, I mean, so here's the other side of the argument, which is the very dynamics that
have led us to this point of creating these algorithms and these systems, these breakthroughs.
There's also hundreds of companies that you and other VCs are funding hacking away
at the problem to make these things available, right?
There's huge amounts of work going on in AI-specific chipsets, both on the
training side and the inferencing side, a whole bunch of algorithmic changes that are going to happen
that refactor these algorithms to do better job in terms of scaling and being able to shrink them
without a dramatic loss of performance. So again, we're in such the early innings of this
game that we don't know what the next five years is going to bring. But for sure, you've got
thousands of really, really smart people hacking away at the problem that I think will come to some middle ground
where, yes, hopefully the government, or maybe the government funds or academia ends up with these large compute places to be able to rival commercial.
And at the same time, the availability of hardware commoditization and other pieces will get to a point where we'll be able to run all kinds of interesting algorithms at scale with really cheap, readily available hardware, right?
So that's a sort of techno-optimist aspect of it, which is, as I say, life finds a way, which is if there's demand for it,
people will supply it.
Yeah, just the question is, will the private market solve the problems needed for things
like global defense or, like, national security, stuff like that?
And just historically, the government has played a role in innovation, in training.
I mean, think about, like, nuclear engineers.
Like, a lot of this actually came up from government programs.
So this is a slightly different topic, is this idea of what are we doing in government
to do a better job of working with industry, right?
A large portion of this job that I have is this idea, this new idea of, well, it's American dynamism.
It's this idea of Silicon Valley leaning in and the government having to lean in together for us to meet in the middle to be a better supplier and a great customer.
So in my role at the agency, one of the big areas of focus for us is how do we become a dramatically different customer?
I spent two and a half years at the Pentagon, which was its own gigantic problem; that's where the software-defined
warfare ideas came from. And in all honesty, we don't do a great job in certain things where we
could be world class or better. And we are working really, really hard to change that. So for instance,
as we point out inside many cases, there is no ready app store for spy software. So there are
absolutely certain things that we need to build and write inside the agency that's very specific.
It's also our competitive advantage, right, which is we're not going to be buying this stuff that's
readily available for everyone. We have our secret sauce. We build it. It's our competitive advantage.
However, what we don't do sometimes is realize that we take that too far, which is there's
stuff that's readily available outside from commercial land that we don't think about buying,
deploying, and implementing at scale. And in the past year and a half, we have actually spent a lot
of time, I've personally spent a lot of time, focusing on what I call commercial first, which
is this idea that we need to be rethinking our strategy,
that if something is available on the outside, how we bring it in.
However, we have procurement processes, we have ATO processes,
we have security processes that don't lend themselves well
for rapid acquisition in pieces.
So we're trying to hack away at those.
On top of it, the other issue is that the security needs
and requirements to run the stuff on the high side is very expensive.
And for commercial vendors to provide and go through that process
is expensive and an investment.
And so we have to create incentive structures to be able to bring them in. So it's not as if we can simply will it into existence; it's a systemic problem that we're trying to attack and hack away at. There's certain things that have been big breakthroughs inside the agency over the past year and a half, I can say. Probably can't tell you what those are, but we've made huge, huge progress in rethinking in other ways there. And as an agency, there's a number of cultural shifts that we're going through internally, right? So first is the human
versus tech, right, is this idea that as a human intelligence organization, it's very
interesting because of the changes in tech outside that are case officers and we operate in,
right? We're publicly talked about what we call ubiquitous technical surveillance, right,
UTS, video cameras, biometrics, et cetera. So we as a spy agency hate tech when it's applied
against us, but we also wield it, right? So it's that aspect. The other interesting cultural thing
at the agency that's been fascinating is the power of the individual, which is we train
individuals to go do heroic efforts and things, which that's our job, right? That's the agency's
job, is to go into foreign countries and spy. However, the other aspect of the thing, there's no other way of
putting it, is that, applying tech, what is
the big change in tech that we've seen over the past couple of decades? Scale. And so this idea
of the individual versus scale is a cultural thing that we're trying to rationalize, right, is
applying enterprise large-scale tech to an organization that teaches individuals to basically
have agency to go do things. So that's a very interesting one. The short term versus long term,
which is the idea of us being an agency that's ready to go at a moment's notice, which the agency does
incredibly well, but again, to do large-scale enterprise-wide technology transformations and things
takes time. It's the open versus sort of clandestine, which is as an agency, folks aren't
trained to be out there in public. And Director Burns has made this a big priority in
terms of engagement with the outside world, engagement with the technology industry, which
happens out in the open, right? The idea of carrying a business card and being present and being
on podcasts. These are all culturally new things for the agency to deal with. So we ourselves are going
through this huge, huge transformation, dealing with tech, how do we actually change the thinking
in this new world? And then the AI stuff on top of it, which then is another layer of complexity
in terms of changing how we operate, what we do in the discipline. So it's a very interesting,
exciting, but also somewhat confusing and transformational time for the agency.
Do you think that, given American dynamism having a moment right now, right,
D.C. and Silicon Valley are going to be talking, I don't know, more than ever,
certainly more than since the early days, but, like, people are talking.
Like, you guys both have both intel and software experience and startup experience.
Yeah, we're at the exact intersection, right?
Government tech.
Do you think that starts to help maybe ease some of the friction in terms of, I don't know,
whether it's making procurement process faster, making adoption a little easier,
or making it easier to kind of hack on stuff?
So maybe I'll say something that would be great to take
us home, which is the issues you just talked about have been around for a very long time.
And I don't think that's the high-order bit, right?
Like, of course, we can make procurement better.
Of course, we can make communication better.
Of course, we can have better public-private partnership.
I remember talking about this whenever in the 2000s and the 90s, et cetera.
Here's what I think is the most important.
And you alluded to it before, and I think it's so important, which is the Internet caught the
U.S. flat-footed a bit.
There was this notion of asymmetry.
It ended up having exponential effects because there's so much connectivity.
And so when it came out, it took us a while to come to grips with it.
And what I fear, my biggest fear is with AI, people are fighting the last war.
And it's to our detriment, which is a lot of things people are concerned about with AI
or really internet things.
And we've kind of gotten on top of it.
So you can't take that mindset.
You can't take this mindset of investing in this stuff is bad because it's asymmetric.
You can't take the mindset of like this is inherently exponential.
You can't take the mindset of this is core critical infrastructure
because it's just very, very different.
This is a new type of technology.
It's as useful for us for doing good
as it is for changing the threat environment,
which I actually don't think it changes the threat environment a lot.
And so I think that both the government and us in industry
need to come together and acknowledge this is a new technology
that's beneficial and then we're better learning about it
than running away from it.
And we can't take these old lessons from the Internet.
and somehow kind of
apply them because then we're going to miss the train.
And so for me, this is kind of like the high-order bit
and the most important thing.
And then a lot of the things you've talked about
are important, but they will kind of follow in due course.
Wow, I'm not sure I'm going to top that.
And I do actually have to be careful in the sense
that, again, sort of the big disclaimer,
I have to attach on this is, CIA, you know,
we're not a policymaking shop.
Our job is to support policymakers
with very objective by the book
analytic support on the questions that they're asking.
What Martin, I think, just outlined is exactly the policymaking debate and discussion going
on, is this idea of how do you regulate versus incentivize, right?
Because I think the thing is that what happened with things like privacy and security
and other things has impacted consumers and therefore it impacts lawmakers.
And so we got pulled into leaning in, and now whatever is happening in that area, right?
The issue around 5G was a big
national security concern all of a sudden. Then all of a sudden, now the Chips Act, in terms of
semiconductors, look at what happened in crypto, and now AI. So I'm listing them because at CIA
we track, you can list 5, 10, 15. There's all these emerging tech areas that we follow now. And we've
got world leading domain experts that follow each emerging tech area because, again, there's
demand and support from downtown on those questions. Okay. So I listed, what, four, and then the fifth one
is privacy, security regulation around social media, et cetera.
So these five areas of technology now, there's a spotlight on them.
To Martin's point, where it lands and ends up, that's purely a policymaker domain.
Our job, though, that's tricky in many of these situations, is that these are, by definition,
emerging tech areas.
I mean, 5G and semiconductors are scaled industries, but the rest of these industries are
emergent industries.
And emergent industries, just having been in the industry,
in startups, we don't know where this stuff's going to land.
Totally. And so all of a sudden
it becomes really hard to understand
who to talk to, who to believe,
do we forecast, do we not forecast,
where we think it's going to land. And there are no
solid answers to any of these questions
because every six months it's going to be
something new. And so
how do you build policymaking
on top of emerging tech areas?
That is an art.
And I mean, again, it's up to lawmakers
and policymakers to figure
this out, which is where then you end up with things like, for instance, executive orders rather
than laws, right? It's very interesting how the policymaking apparatus works, where you end up with
100 pages of executive order stuff that outlines generally some ideas and thoughts and questions,
and I think there's going to be this leaning in and convergence that happens between industry
and regulators and stuff, because this stuff's moving, the tech is moving, policy makers are
learning more. They learn more. They ask more questions. The tech industry moves this way.
So it's an iterative long-term process, but it's up to the players, including the folks with
the money and the investors, having a seat at the table to play this. The good news is the agency's
playing very much a, we're friends with everyone, we talk to everybody. That's our job. Gather a lot
of intelligence, analyze it, hopefully with AI. And then help our
policymakers sort of create this stuff. We very much appreciate what you do. And thank you for
that. Thank you. Now, if you have made it this far, don't forget that you can get an inside
look into A16Z's American Dynamism Summit at a16z.com slash adsummit. There, you can catch
several of the exclusive stage talks featuring policymakers like Deputy Secretary of Defense
Kathleen Hicks or Governor Wes Moore of Maryland,
both founders from companies like Anduril and Coinbase
and funders like Mark Cuban,
all building toward American dynamism.
Again, you can find all of the above
at a16z.com slash adsummit.
And we'll include a link in the show notes.