a16z Podcast - Building the Real-World Infrastructure for AI, with Google, Cisco & a16z
Episode Date: October 29, 2025

AI isn't just changing software, it's causing the biggest buildout of physical infrastructure in modern history.

In this episode, Raghu Raghuram (a16z) speaks with Amin Vahdat, VP and GM of AI and Infrastructure at Google, and Jeetu Patel, President and Chief Product Officer at Cisco, about the unprecedented scale of what's being built, from chips to power grids to global data centers. They discuss the new "AI industrial revolution," where power, compute, and network are the new scarce resources; how geopolitical competition is shaping chip design and data center placement; and why the next generation of AI infrastructure will demand co-design across hardware, software, and networking. The conversation also covers how enterprises will adapt, why we're still in the earliest phase of this CapEx supercycle, and how AI inference, reinforcement learning, and multi-site computing will transform how systems are built and run.

Resources:
Follow Raghu on X: https://x.com/RaghuRaghuram
Follow Jeetu on X: https://x.com/jpatel41
Follow Amin on LinkedIn: https://www.linkedin.com/in/vahdat/

Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Transcript
The good news is infrastructure is sexy again, so that's kind of cool.
This is like the combination of the buildout of the Internet, the space race, and the Manhattan Project all put into one, where there's a geopolitical implication of it, there's an economic implication, there's a national security implication, and then there's just a speed implication that's pretty profound.
I mean, I think it's easy to say.
I've seen nothing like this. I'm fairly certain no one's seen anything like this.
The Internet in the late 90s, early 2000s was big, and we felt like,
oh my gosh, can't believe
the buildout, the rate.
This makes it, I mean, 10x is an
understatement. It's 100x what the
internet was.
The AI boom isn't just changing software.
It's transforming the physical infrastructure
that runs it. Today,
you'll hear a conversation with
Amin Vahdat from Google, Jeetu Patel from
Cisco, and Raghu Raghuram from
A16Z on what it takes to build
the real world systems behind large
scale AI, from chips and power
to data centers and networking.
They discuss the scale of the current buildout,
the new constraints on compute power
and interconnect
and how specialization
in hardware and architecture
is reshaping both
the industry and global geopolitics.
It's a grounded look at how
infrastructure itself is being reinvented
for the AI era and what comes next.
Let's get into it.
What better time and place to talk
infrastructure? All right.
So we were back in the green room,
and just as
the first question was getting answered,
I got cut off. So this could be
an entire repeat for all I know.
So, but anyway, let's go, right?
The first question is similar.
So both of you, firstly, welcome
and thank you for being here.
And I hope you'll have a great day and a half
as well. Both of you have been
in the industry for a while.
And both of you have lived
through many infrastructure cycles, right?
So have you seen anything
like this cycle from your vantage
point? Not from an investor
vantage point, but from your
internal vantage
point where you are responsible for building things and planning for things and so on.
Any one of you, where do you want to start? You want to start Amin?
I mean, I think it's easy to say: I've seen nothing like this. I'm fairly certain no one's
seen anything like this. The internet in the late 90s, early 2000s was big, and we felt like,
oh my gosh, can't believe the buildout, the rate. This makes it, I mean, 10x is an understatement.
It's 100x what the internet was. I think the upside is as big as the internet was. Same
thing, 10x and 100x.
Yeah, nothing like it.
Yeah, I'd agree.
I don't think there's any prior to this in size, speed, and scale.
I'd say the good news is infrastructure is sexy again, so that's kind of cool.
It was a long time where it wasn't sexy.
The thing I would say that's really interesting is this is like the combination of the
build out of the internet, the space race, and the Manhattan Project all put into one,
where there's a geopolitical implication of it, there's an economic implication,
there's national security implication
and then there's just a speed implication
that's pretty profound
so yeah none of us have ever seen it
at this size and scale
on the other hand I think we are grossly underestimating
like, the most common question
I'm asked right now is, is there a bubble?
I think we're grossly underestimating the buildout
I think there's going to be much more needed
than what we are putting the projections towards
So that's the follow-on question:
where are we, do you think, in the CapEx spend cycle?
But more importantly, what are the signals that you guys use internally, right, in your thinking?
I mean, you have to plan data centers, whatever, four or five years in advance, you have to buy nuclear reactors and whatnot.
So how do you think about the demand signals as well as your technology signals?
And Jeetu, same thing for you, but from the point of view of enterprise and neoclouds, etc.
We're early in the cycle, is what I would say, certainly relative to the demand that we're seeing.
And internally, externally, we're, I mean, I can say here, over-subscribed tremendously.
In other words, our internal users are, we've been building TPUs for 10 years.
So we have now seven generations in production for internal and external use.
Our seven- and eight-year-old TPUs have 100% utilization.
That just shows what the demand is.
Everyone, of course, prefers to be on the latest generation.
But whatever they can get.
So this tells me that the demand is tremendous.
but also who we're turning away and the use cases that we're turning away.
It's not like, oh, yeah, that's kind of cool.
It's, oh, my gosh, we're actually not going to invest in this.
And there's no option because that's where we are on the list.
Same with many of you in the room.
We're working with many of you in the room,
and many of you are telling me directly and thank you.
We need more earlier.
Now, the challenge here, though, is, as you said,
we're limited by power,
we're limited by transforming land,
we're limited by permitting,
and we're limited by
backed-up delivery of lots of things in the supply chain.
So one worry I have is that
the supply isn't actually going to catch up to the demand
as quickly as we'd all like.
I heard in the previous session
some of the discussions of the trillions of dollars
that we're going to be spending, which I think is accurate.
I'm not sure that we're going to be able to cash all those checks.
You all have some money;
you can't spend it all as fast as you want.
I think that's going to extend
for three or five years.
Wow.
And how do you deal with the depreciation cycles
that are involved there?
Do the demand curve
and the depreciation cycle curves
match up?
Well, fortunately, we buy
just in time for the hardware.
But the nice thing is,
the depreciation cycle for the space and power
is more like somewhere between 25 and 40 years,
so we have benefits there.
I think if you
think of the networking side and you look at both enterprise and the hyperscalers
as well as neoclouds, I think the story is quite different.
So the enterprise is pretty nascent in its buildout of true infrastructure.
I just don't think that the data centers, like if you assume that 100% of the data centers
at some point in time will need to get re-racked and you will need a very different level of
power requirement per rack that's going to be there compared to what used to be there
in the traditional data centers.
I just don't think that the enterprises are far enough along.
Maybe the few enterprises that are at super high scale might be there,
but I don't think the enterprises are far enough along.
Hyperscalers and neoclouds is a completely different story.
And to a mean's point on this notion of scarcity of power, compute, and network,
being the three big kind of constraints in this thing,
I would say right now that because there's not enough power,
singularly in one location
data centers are being built where the power is available
rather than power being brought to where the data centers are
and that's why you're seeing a lot of projects
that are being built out all throughout the world
and the other point though is
the lion's share of the constraints
that we're going to have
I think are going to be sustainable for a long period of time
and as you have data centers that are being built
farther and farther apart
one there's going to be a huge demand
for scale up network
so that you can have a rack
that gets more and more networking for scale up.
The second is you're going to have a lot of demand for scale out
where you have multiple racks and clusters
that need to get connected together.
But we just launched a new piece of silicon
as well as a new chip and a system
for scale across networking,
where you might have two data centers
that act as a logical data center
that could be up to 8,900 kilometers apart.
And you will see that just because
there's not going to be enough concentration of power
in a single location.
So you'll just have to have different architectures that get built out.
Actually, that brings us to the next topic that I want to discuss,
the future of systems and networking and so on and so forth.
So Google bought the first, or at least, large scale,
scale our commodity servers in production for the web revolution,
and now Nvidia is bringing back the mainframe in a different form.
So what do you think happens next?
I mean, is this a new style of coherent cluster-wide computing
that we need and there's going to be shared memory and all sorts of things, or do you think
the pattern changes again?
I don't think we're quite back to mainframes, in that it is still the case that people
are running on scale-out architectures across these pools.
In other words, whether you have GPUs or TPUs, you're not necessarily saying, hey, that's
my GPU supercomputer.
You're saying I've got 16,384 GPUs.
And maybe I'm going to go grab some subset.
Now I've got uniform all-to-all connectivity in many cases, which is fantastic.
same with TPUs.
It's not like I say
I have a 9,000 chip pod
and I have to make my job fit on that.
Maybe I actually only need 256.
Maybe I need 100,000.
So I do think that actually
the software scale out
is still going to be there.
I'll note two things, though.
One, you're absolutely right
that, say, about 25 years ago
at Google and other places simultaneously,
there was really a transformation
of computing infrastructure.
Like the notion that actually
you would scale out
on commodity PCs, essentially,
the same ones that you could buy off the shelf
running a Linux stack
and that's what you would do for disk,
that's what you would do for compute,
that's what you do for networking.
I mean, you all take it for granted.
This is sort of, it was radical.
There are many people who thought this was a terrible idea
that wasn't going to work.
I think the exciting thing about this moment right now
is actually that we're going to be reinventing,
I'm not saying Google,
we are going to be reinventing computing.
And five years from now,
whatever the computing stack is,
from the hardware to the software,
it's going to be unrecognizable.
And by the way, there was this co-design
because if you think about it,
I'll use Google examples
because I know those best,
Bigtable, Spanner, GFS, Borg, Colossus,
they were hand-in-hand co-designed
with the hardware,
the cluster scale-out architecture.
And we wouldn't have done the scale-out hardware
if you didn't have the scale-out software.
Same thing is going to happen in this moment.
So I think actually the mainframe
is going to look very, very different.
Okay.
Yeah, I do think that there'll be this
extreme demand for an integrated system. Because right now, we are very fortunate at Cisco,
where we do everything from the physics to the semantics, and you think about the silicon to the
application. And other than power, one of the constraints is how well integrated are these systems,
and do they actually work with the least amount of lossiness across the entire stack. And so that
level of tight integration is going to be super important. And what that means the industry will
have to evolve into is, we will have to work like one company,
even though we might actually be multiple companies
that actually do these pieces.
And so when we work with hyperscalers like Google or others,
there's a deep design partnership
that actually goes on for months and months,
ahead of time, before we actually even do the deal.
And then once a deal is done,
of course, there's a tremendous amount of pressure
to make sure that they're moving pretty fast.
But I think the industry's muscle of making sure
that you operate in an open ecosystem
and not be a walled garden
is going to get important at every layer of the stack.
Really great.
So let's talk about, to segregate the stack a little bit, one of the most interesting
topics: processors, right?
Clearly there's an amazing vendor producing an amazing processor that has massive market share
today, right?
And we see startups all the time doing all sorts of processor architectures.
You've got an amazing processor inside your fortress.
What do you think happens next in processor land?
Yeah, we're huge fans of Nvidia.
We sell a lot of Nvidia products and chips.
Customers love them.
We're also huge fans of our TPUs.
I think the future is actually really exciting.
And actually, I don't think that we've hit the endpoint of,
okay, there's TPUs, there's GPUs, there's whatever, Trainiums or something else.
We're really seeing the golden age of specialization.
And that's my observation.
If you look at it, a TPU, I'll use that example again
because I know it best, for a certain computation
is somewhere between 10 and 100 times more efficient per watt than a CPU,
and it's the watt that really matters.
That's hard to walk away from 10 to 100x.
And yet, we know that there are other computations
that if you built even more specialized systems for,
but not just a niche computation,
computations that we run a lot of at Google.
For example, maybe for serving,
maybe for agentic workloads
that would benefit from
an even more specialized architecture.
So I think that actually
one bottleneck is how hard is it
and how long does it take
to turn around a specialized architecture?
Right now it's forever.
Yeah.
Right.
For the best teams in the world,
really from concept to live in production,
speed of light is two and a half years.
Yep.
I mean, that's if you nail everything.
Right.
And there are a few teams that do.
but how do you predict the future
two and a half years out
for building specialized hardware?
So A, I think we have to shrink that cycle
but then B, at some point
when things slow down a little bit
and they will,
I think we're going to have to build
more specialized architectures
because the power savings,
the cost savings,
the space savings are just too dramatic to ignore.
And this will actually have
a really interesting implication
on geopolitical structures as well
because if you think about what's happening in China,
China actually doesn't make
two nanometer chips.
They make seven nanometer chips.
And so if you think about what,
but they have unlimited amount of power
and they have unlimited amount of engineering resource.
And so what they can do is do the optimization on the engineering side,
keep the seven nanometer chips
and make sure that they give people unlimited amount of power.
We might have a different architectural design
where you have to get extremely power-efficient.
You don't have as many engineers as you might enjoy in China
and you can actually go to two nanometer chips
and those might be power-efficient in some ways,
but they might have thermal lossiness in other ways.
There's a whole bunch of things that have to get factored in
on the architecture that will get more specialized
even by geo and by region.
And then depending on how the regulatory frameworks evolve,
how that geo then expands.
Like if China expands to different regions in the world,
you will have a very different architecture
that plays out than if,
America expands to different regions in the world.
So this is a very interesting kind of game theory exercise
to go through on what happens in the next three years
in tech in general.
And no one knows right now.
That's the beauty of the world that we live in.
So we'll soon be measuring systems by engineers per token
in addition to watts per token.
All right, so let's turn to another topic, which is...
Engineers per kilowatt.
In the US.
...networking, right?
Obviously, you alluded to it, scale up, scale out.
In your case, you mentioned scale across.
So it seems to me that networking is also going to get reinvented
in a fairly significant way.
So what are the leading signs that you're seeing
and the signals that you're seeing
on the direction networking is going to take?
Yeah, networking is going to need a transformation for certain.
In other words, the amount of bandwidth that's needed
at scale within a building
is just astounding.
I mean, and it's going up.
The network is becoming a primary bottleneck
which is scary.
So more bandwidth translates directly
to more performance.
And then given that the network winds up
actually being a small power consumer,
the delivered utility you get per watt,
like, it's a super-linear benefit.
Like spend a little bit here, get way more there.
So I think that that side
is absolutely there.
I'll put in a plug here in that,
for these workloads,
we actually know what the network communication patterns are,
a priori.
So I think this is a massive opportunity.
In other words,
do you then need the full power of a packet switch
when actually you know what the rough circuits
are going to be?
I'm not saying you need to build a circuit switch,
but there is an optimization opportunity.
The other aspect of this here
is these workloads are just incredibly bursty,
and to the point where,
we've written about this,
power utilities notice when we're doing
network communication relative to computation.
At the scale of tens and hundreds of megawatts,
like, massive demand for power
stops all of a sudden to do some network
communication, and then bursts back to
computing. So how do you build a
network that needs to go at 100%
for a really short amount of time
and then go idle
And then same, actually, for the scale-across use case,
which we're absolutely seeing.
You don't run large-scale pre-training
across all your wide-area data center sites
12 months of the year.
so and then you're going to
this is the problem I think about a lot
is let's say you build the latest, greatest chips
in these three data center sites
how long are you going to be there
before you migrate to the latest latest chips
and three other sites
and then what do you do with the network
that you left behind? People are going to run jobs on them
but you're not going to need nearly the network capacity
that you did for large-scale training, pre-training, anyway.
So the shift of needing massive networks
for like 5% of the time,
I don't know how to build a network like that.
So if any of you do, please let me know.
I mean, if you don't know how to build this,
there's nobody that knows how to build this.
We're trying to figure it out.
It actually is a fascinating problem.
Yeah.
Yeah.
I do think, like, if you think of power as the constraint
and compute as the asset,
I think network is going to be the force multiplier.
Because, you know,
if you have low latency, high performance,
and high energy efficiency,
every kilowatt of power you save moving a packet
is a kilowatt of power you can give to the GPU,
which is, you know, super important.
The other thing is, you know,
when you think about scale up versus scale out
versus scale across, you'll also need,
especially on inference versus training,
there are different things that get optimized.
You might optimize for latency much more on training runs.
You might optimize much more for memory on inferencing.
There's architectural...
And so I also feel like the way that networking will evolve
is rather than it being a training infrastructure
that then gets applied to inferencing,
you might have inferencing native infrastructure
that gets built over time.
time. And so there's good considerations to look at on, like, how all of the architectural
components are moving. But in my mind, like, if I were to say, strategically, one of the
biggest things that's happening in networking from our vantage point is: if you're just a wrapper
around Broadcom, then you've got a monopoly that's going to be a very predatory one. And so
one of the big reasons why Cisco is super relevant is you don't just have a Broadcom world,
with people just wrapping Broadcom, their systems all on Broadcom; you will actually have
a choice of silicon.
And that choice in diversity of silicon is going to be super important, especially for
high volume consumption patterns.
So last question on the system, since you brought that up and we'll move to use cases.
Inference, both of you have mentioned,
you talked about in the context of the processors,
you just started talking about the architecture.
Are you deploying today specific architectures for inference, Amin?
Or is it still shared workloads?
We are deploying specialized architectures for inference.
And I think as much software as hardware,
but the hardware is also deployed in different configurations
is the way I would say it.
And then the other aspect of inference
that is becoming really interesting
is reinforcement learning,
especially on the critical path of serving,
because latency just becomes absolutely critical.
And I think that,
so how you would build your system
and how you would connect it up to one another,
and of course networking plays a key role there,
becomes increasingly interesting.
But are there singular choke points
that, if removed, would accelerate the thousand-fold
reduction in the cost of inference that we need,
or is this a natural curve that we are riding down?
So we're massive.
I mean, two things here.
One, again, maybe many of you are familiar with this.
Pre-fill and decode on inference look very, very different.
So ideally, you would actually have different hardware.
The balance points are different.
So that's one opportunity.
It comes with downsides.
We can talk about that.
What I would say, though, is that maybe something people don't realize
is that we're actually driving massive reductions in the
cost of inference. I mean, 10x's and 100x's. The problem, or opportunity, is the community,
the user base, keeps demanding higher quality, not better efficiency. So just as soon as we
deliver all the efficiency improvements we're looking for, the next generation model comes out,
and it is whatever intelligence per dollar is way better, but you still pay more and it costs
more relative to the previous generation. And then we repeat the cycle.
And it's almost like the longer the reasoning that you have,
the more impatient the market gets, right?
So, for example, if you have a 20-minute reasoning cycle,
like for example, with deep research,
you could have autonomous execution for about 20 minutes.
That was interesting.
Now you have, you know, most of the coding tools
that can go up to 7 hours to 30 hours of, you know,
duration of autonomous execution.
when that happens, there's actually a greater demand
for saying compress the time down.
And so it's kind of a self-fulfilling prophecy
where you need to have more performance
because of the fact that you've been able to go out
and do things for a longer autonomous amount of time.
And so it's almost a never-ending loop
where you'll need to have more performance for inference
in perpetuity.
Yeah, though intelligence per dollar
is a business-model metric,
so it is not just the processor
capability. No, it's end-to-end, absolutely.
Yeah, so, okay, so let's
change topics and talk about actual usage,
right? So both of you
have massive
organizations. Where
are the key wins
that you're getting today with
applying all the AI that's
available to you?
And then we'll talk about
what your customers are doing, but I'm
actually curious about what are you doing internally.
Within the teams? Yeah.
So I mean, coding is the obvious one,
and that's actually picking up increasing traction
and increasing capability.
We just, actually, in the last couple of days,
published a paper that showed
how we applied AI techniques
to do instruction set migration.
So in other words, we actually had a fairly massive migration
from X86 to ARM, making our entire code base
and at Google it's a very, very large code base
sort of instruction set agnostic
and including to future RISC-V
or whatever else might come along.
Tens of thousands, hundreds of thousands
of individuals. Your entire code base,
you're going to make it agnostic?
Entire code base, because we
want, we need, all of our code base to be
agnostic. Man, that's a crazy-ass project.
Yeah, so we, it was.
And the motivation, though, for this
actually was a few years ago. We had
this amazing legacy system called
Bigtable, and then a new amazing system called Spanner.
And we decided to tell the company,
hey, everyone needs to move from Bigtable to Spanner.
And by the way, Bigtable was amazing for its time,
but Spanner was better.
The estimate for doing that migration for Google
was seven staff-millennia.
How much?
How much?
Seven staff-millennia.
It was a new unit that we actually had
to coin.
And it wasn't like made-up, people being lazy.
It's like, this is what it was going to take.
It's endearing that they came up with that, though.
And you know what we decided?
Long live Bigtable.
It just wasn't worth it.
Honestly,
the opportunity cost was too high.
And we have these sorts of migrations,
TensorFlow to JAX.
We actually, I mean, again, somewhat private
but not too secret, we've effected this
internally with AI, and it's gone
integer factors faster. Now,
there are other tasks which the tools
probably aren't quite yet up to the
whatever standard for
but the area under the curve
it's getting bigger and bigger and bigger.
So we're seeing probably like three or four really good use cases,
and then we're seeing some use cases which are not working yet.
And so what is working, code migrations is working relatively well.
So far we use largely a combination of Codex, Claude, and Cursor, some Windsurf.
And so code migrations tends to work pretty well.
Debugging, oddly enough, has actually been very, very productive with these tools,
especially with CLIs.
And then front-end, zero-to-one projects tend to do extremely well.
Like, the engineers are super productive.
Where we've not done as good a job: when you go to code that's older,
and especially further down in the infrastructure stack,
it's much harder to go out and get that to happen.
The thing that we have to orient our engineers on,
and this is actually much more of a cultural reset problem
than it is just a technical problem,
is if someone uses something
and says this isn't working right,
you can't put it back on the shelf saying
this doesn't work for another six or nine months.
You have to come back to it within four weeks
and see if it works again,
because the speed at which these tools are kind of advancing
is so fast that you almost have to kind of get,
I was with 150 of our distinguished engineers today,
and what I had to urge them to do is assume that these tools
are going to get infinitely better within six months
and make sure that you get your mental model
the way that tool is going to be in six months
and what are you going to do to be best in class in six months
rather than assessing it for where it is today
and then putting it aside for six months,
assuming that that's not going to work for the next six months.
I think that's a big strategic error.
So we've got 25,000 engineers.
I'm hoping that we can get at least
2 or 3x productivity
within a very short amount of time within the next year.
And we'll be able to see if that happens.
The second, a couple of the big areas
that we are starting to see some good responses
is in sales.
Preparation going into an account call.
Really good.
Legal contract reviews.
Actually, much,
better than what we had thought.
And then the last one is not super high in volume,
but product marketing.
I think the first ChatGPT take on a competitive analysis
is always better than what any product marketing person
comes up with by themselves.
So we should never start from a blank slate;
start from ChatGPT and then go from there.
Okay.
We could be talking about this topic for a long time,
but they showed me the two-minute warning.
So I want to focus on one last question here.
So we've got a lot of founders here,
building amazing companies.
So what is the most interesting development
they should look forward to
in the next calendar year, let's call it,
or the next 12 months,
A, from your company, and B, from the industry.
If you were to look at your crystal ball.
I mean, I think to build on the point,
these models are getting more spectacular
by the month,
and they'll be from whatever companies you like,
a bunch of really exciting ones, including ours.
Oh, I forgot to say, you're not allowed to say models will get better.
Yeah.
Everybody knows.
The models are going to get... but I mean, they're getting scary good, is the part that I would say.
But I think that then the agents that get built on top of them and the frameworks for making that happen are also getting scary good.
So the ability to have things go quite right for quite long over the coming 12 months is going to be transformative.
Do you want to leak any aspect of your roadmap?
Next 12 months?
No, not right now.
Okay.
Jeetu?
I'd say the big shift
and what I would urge startups to do
is, don't build thin wrappers around
other people's models.
I think the combination of a model
working very closely with the product
and the model getting better
as there's feedback in the product
is going to be super important.
So you are going to need foundation
models, but if you just have a thin wrapper, I think the durability of your business will be
very, very short-lived.
So that would be something that I would urge you on.
And I think an intelligent routing layer of some sort that says, I'm going to use my models
for these things, I'm going to probably use foundation models for other things, and dynamically
keep optimizing, will be... I think Cursor does that pretty well.
But that'll be a good way that the software development lifecycle will evolve.
What you should expect from Cisco is,
look, truth be told, for the longest time
people thought Cisco is a legacy company,
like they were a has-been.
And I think in the past year,
hopefully you've paid attention,
and if you haven't,
our stock's actually been doing pretty well.
I think there's a level of momentum in the business,
there's a spring in the step in the employee base.
so you should expect
like I said from the physics to the semantics
in every layer from silicon to the application
a fair amount of innovation in silicon and networking
and security and observability in the data platform
as well as applications from us
and we're excited to work with the startup ecosystem
and so if you ever feel like you want to work with us
make sure that you reach out to us.
Were you going to say something?
I mean one aspect that I want to highlight about the models
is where we were with let's say text models
two and a half three years ago.
they were fun
like hey write me a haiku about martin
did a great job
now they're amazing
I think that what's going to happen
in the next 12 months is the same thing
is going to be happening with
input and output of images and video
to these models
and to the extent that
even for images
imagine them as productivity
and educational tools
not just, okay, here's Martin
as Superman in
high school, too.
Right, but
using it for productivity gains
and learning I think is going to be
really, really transformative.
Awesome.
So on that note, we're allowed to end this session.
Thanks for a great conversation.
I mean, thanks to you too.
Thanks for listening to this episode of the A16Z podcast.
If you like this episode, be sure to like, comment, subscribe,
leave us a rating or review, and share it with your friends and family.
For more episodes, go to YouTube, Apple Podcast, and Spotify.
Follow us on X at a16z and subscribe to our Substack at
a16z.substack.com. Thanks again for listening, and I'll see you in the next episode.
As a reminder, the content here is for informational purposes only. It should not be taken as legal
business, tax, or investment advice, or be used to evaluate any investment or security, and
is not directed at any investors or potential investors in any A16Z fund.
Please note that A16Z and its affiliates may also maintain investments in the companies
discussed in this podcast. For more details, including a link to our investments, please see
a16z.com/disclosures.
