a16z Podcast - The Agent Era: Building Software Beyond Chat with Box CEO Aaron Levie
Episode Date: April 8, 2026Erik Torenberg, Steve Sinofsky, and Martin Casado speak to Aaron Levie, CEO at Box, about what happens to enterprise software when agents become the primary users. They discuss why coding agents succe...ed where other knowledge work agents struggle, what abstraction layers mean for the workforce, and how data access and systems of record must change in an agent-first world. Resources: Follow Aaron Levie on X: https://twitter.com/levie Follow Steve Sinofsky on X: https://twitter.com/stevesi Follow Martin Casado on X: https://twitter.com/martin_casado Follow Erik Torenberg on X: https://twitter.com/eriktorenberg Stay Updated:Find a16z on YouTube: YouTubeFind a16z on XFind a16z on LinkedInListen to the a16z Show on SpotifyListen to the a16z Show on Apple PodcastsFollow our host: https://twitter.com/eriktorenberg Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Transcript
Discussion (0)
the diffusion of AI capability is going to take longer than people in Silicon Valley realize.
It's just absurd to think you're going to vibe code your way to, like, SAP.
All of that domain knowledge, it's not just represented in some well-orchestrated data layer.
The engineering compute budget conversation is going to be the most wild one in the next couple of years.
The biggest problem right now is everybody is trying to figure out the economics of all of this,
when they're off by at least an order of magnitude on how big the opportunity is.
If you have 100 or 1,000 times more agents than people,
then your software has to be built for agents.
People in the abstract say things like,
now you're marketing to agents,
you're like an API, you've got a good idea.
I actually think that's almost exactly wrong.
Wow.
This is breaking podcast news.
Every major technology wave promised to eliminate the middleman.
Marketplaces would dismantle hotels.
SAS would replace on-premise.
But the taxi medallion was the only real casualty.
The layers persisted because they encoded organizational logic, not just software logic.
Now, agents are arriving, and the assumption is the same.
They will flatten everything.
But the first enterprise teams deploying agents at scale are discovering something different.
Agents do not want simpler systems.
They want better ones.
They choose backends based on durability, cost parameters, and reliability, not interface polish.
The question for every software company is no longer whether to support agents,
but what it means when agents outnumber employees 1,000 to 1.
I speak with Aaron Levy, CEO at Box, alongside A16Z board partner, Steve Sinovsky,
and A16Z general partner, Martine Casado.
If you start to imagine that we all have to build software for agents,
I think we're all clear on that, right?
So, like, that trend is happening, which is, like, we spend as much time now thinking about the agent interface to our tool as we do the human interface.
Sure.
Okay.
And the reason we're doing that is because our hypothesis would be that if you have 100 or a thousand times more agents than people, then your software has to be built for agents.
And then what is the way that those agents are going to interact with your system?
It's going to be through an API or a CLI or MCP or whatever.
and the paradigm that appears to be taking off
and is quite successful so far in terms of efficacy
is what if you give a coding agent
access to your SaaS tools
and a coding agent access to your knowledge, work,
sort of workflows and context,
and that kind of becomes this superpower,
which is the agent is not only capable of reading some data,
understanding some information.
It can actually code its way or use APIs
through whatever task that's trying to achieve.
That appears to be like a paradigm,
that is starting to compound.
And that's the Claude Co-Work phenomenon.
That's the whatever Open AIs is kind of cooking up, you know, with the super app,
perplexity computer, et cetera.
And I actually think it kind of makes sense as like the ultimate manifestation of this stuff.
I mean, I think you're right.
It makes sense in a theoretical way.
Yeah.
But in a practical way, we have to be really careful in that the way to say it is algorithmic
thinking.
Yeah.
Is really, really, really hard for the vast majority of people.
who have jobs.
Yeah.
And so the easiest way to think about it is if you were to go into any person
and ask them to create a flow chart for a particular thing that they have to go do,
they would probably fail at producing that flow chart.
Yep.
So within any organization, say doing a marketing plan and there's 50 marketing people
working on a giant product line.
Yep.
One person probably understands and could document the flow chart.
100%.
So if you put one of these agents or you put this tool, this co-working,
tool in front of people to create these things. Their ability to explain to it what to do is
really, really limited. 100%. But what if that becomes the new, this is the new way you have to
interface with computers and you just have to cycle that through? Well, then you're basically just
developing the next abstraction layer for how people interact. Yeah. And the developing an abstraction layer
has historically at each level of the abstraction layer been a highly skilled, very specific individual
within an organization,
developing that.
Yes.
And then the little parts
that they build
just become little toolets
in the world of people
doing particular tasks.
And some people are able
to stitch them together
and some can't.
But that happened with paper clips
and thumb tags before.
And it's going to happen
with whatever we do next.
I think basically the timeless part
is the job just moves up a rung
and you learn a new set of skills.
And that's why I actually don't think
anything about this is any different.
It's just now the leverage you get
is obviously fantastic.
There was this viral kind of tweet that went around,
which was the anthropic growth marketer.
Do you guys see this?
Basically one person,
and he was using Claudecote at the time
to basically more or less automate
what maybe five or ten people would have done
in various kind of silo jobs.
And I think the reason why it's interesting
is you had to have been a systems thinker
to be able to accomplish that.
So clearly he already was technical enough
to be able to pull that off.
But it did kind of represent
what would each of these jobs look like
if you have, like,
imagine you had, you know, X job in the economy, and right next to that person was an infinite
pool of engineers that could automate whatever that person wanted. And what would that job look
like in the future as a result of that automation that now is possible? Yes, I agree that you'd have to
find a way to think through your job as a system to be able to pull that off. Maybe the agent gets
better and better over time at being able to like nudge you in that direction. But it does sort of
stand a reason that like you will start to try and automate a lot of that kind of work of like,
well, why don't I take the keywords that are working in this, in Google AdWords,
and then port them over to Facebook and make sure that those are replicated and then
taking the new signal from what's happening in the market.
That's a big leap.
Yeah.
So one thing first.
I almost had you.
You were nodding a little bit.
And then I said something that went too far.
Using the anthropic growth person as an example, that's a job.
That is the rest of work.
Yeah.
I could do that job.
Everybody is nothing good.
When demand is infinite and frankly, supply is infinite, and frankly, supply is infinite, this is
not a difficult job. And so let's let's think I that runs the petrol puppet Australia right now is
amazing. Right, right. So like be instead be the $600 PC marketing person and see how you can do
against the Neo. That's a real job. All right. Fine. We need a better example. But there is, I mean,
it is really interesting. Like I here, let me do an old example of an old person example.
Like my cousin, MBA, elite school joined her first job. She's a little older than me, joined right on the
cusp of computing. Like, she actually didn't use a spreadsheet in grad school. And then
the spreadsheet showed up, but she wasn't a spreadsheet person. So instead, they told her hire as many
interns as you want. And so her first year on the job, she like supervised, like essentially a
whole room of agents. Yeah. And the kids, who was me, not literally, but they were in college,
came and just did all the spreadsheeting. Yeah. But then what happened sort of magically over the next
couple years was she and her cohort all became the spreadsheet people.
And then this idea that you being a manager in a bank or just two years in
meant you had a cadre of people doing this.
No, the whole abstraction layer moved up.
And the old job before those interns was you just sat there with basically calculators
and an HP calculator figuring out the model for some M&A deal or whatever.
And you only got to do like two iterations before you had to put out the pitch deck.
or just go to the customer or the client or whatever.
And then all of a sudden, they're doing 30 iterations themselves.
But they see, and so I think where we are with agents is just at this step where you think
you need 50 and the abstraction layer is such that we're dividing up in these really small pieces
with one super smart person coordinating them on.
And pretty soon that whole thing is just going to, they're all going to collapse on each other.
And there is just going to be like a skill set amount of code, call it an agent that is like
marketing-ish.
And you'll be able to ask it in marketing stuff.
Yeah.
And then the next step will be and have it go do things.
I'm a little skeptical of the, until the whole, like, non-reproducible, non-random element of this AI stuff goes away, the doing stuff is going to get very costly.
Yes.
And so then you get into the human and a loop discussion and all of that.
But I feel like when I talk to people trying to do stuff that we're right.
I feel like I'm at Thanksgiving dinner talking to my cousin's six months in her job.
When I'm using a spreadsheet already.
And I'm like, I don't know why this is so hard.
You should just use one.
And then two years later, she's doing it.
And I think this right now, you have to be a rocket scientist
and the growth marketing person to create 42 agents
and spin them all up and do all of this stuff.
But the rocket science part of it just is going to evaporate in very short order.
And then you're talking about, wow, there's a giant chunk of domain expertise.
Yeah, it goes back to the domain expert.
So I actually think something that you said,
I'll take the other side of, which is,
I think it's very tempting to be like,
these agents are going to code and do X.
Yeah.
But I think we're going the opposite way.
So I think actually where we started was we'd take like a piece of SaaS software
and we'd add AI.
Yeah.
And then that's like the new kind of like AI enabled.
So that's like the extreme version of using code for these types of things.
But now what are we actually doing?
We're like, okay, the SaaS software is still SaaS software.
And the agent uses it as a computer because it's actually very good at that.
So I'd say like we started with code.
Then we went to the terminal, which is actually less code.
Yeah.
And now this year is going to be the year of computer use.
Yeah.
So it's almost like they're much more like humans using computers than them generating
code.
And that feels like very much like this mezzanine step.
Yeah.
And I actually come from like the generating code type of the world.
Yeah.
Like I would argue that that's happening less, not more.
Yeah, I think so to me, whether it's computer use, API use, or writing code on the fly,
I kind of maybe erroneously put that all in one blank category.
Well, they're very different.
They're very different.
But we have an agent that we're working on where it just makes a determination whether it should use an existing skill.
It should using an existing tool from Box or it should write code to solve that problem.
And its ability to do any one of those three at any moment ends up being incredibly useful.
Because sometimes there's just some specific operation you want to be able to do.
We're writing code to be able to do that operation is just faster.
And we can't possibly kind of pre-plan for everything that anybody would ever want to do on their documents.
And so the fact that the model is good enough to also write code on the fly for that use case
ends up just being like an amazing property,
even though maybe 90% of the things that it's going to do should just be using an existing API.
And over time, Predo takes over.
And over time, there's literally like seven apps on her iPhone.
There's seven SaaS apps we end up.
Like, over time, these things tend to consolidate.
But the seven apps on the iPhone is a issue of humans don't want to learn these things over and over again.
And so I as a human, I don't, I can't, I don't have the mental bandwidth.
to learn that many apps.
But an agent that is going to use tools and APIs
and be able to code things
doesn't have any of the same constraints that we have.
So I don't know.
Like, I don't mind.
Well, you could argue that there's just so many things to do
and you can make interfaces sufficiently general.
Yeah, fine.
Fair.
I think I like what you said then because...
Oh, I'm back.
Okay.
We're aligned.
We're aligned.
No, but I think there's something super interesting here,
which I do really, really like,
which is that where,
software has evolved, you know, like I use SAP all day. I work in finance. I have to go and generate all these
reports. And then somebody shows up and says, I want a report that does this view slice this way.
And I'm like, oh, God, I don't know how to make that. And like, now let me go wade through the
SAP help system and try to find it. One thing that, that let's just say AI could be very good at is
it actually can navigate that surface area much, much better. You know, the help is all
there. And so it's a matter of finding it, mapping language. And humans have been a bottleneck in
tapping the past 25 years of software capabilities. I mean, like, I spent my life, my life with
sitting next to people on airplanes saying, how can I make PowerPoint do X? And just go to the ribbon.
And, you know, it was because it hurt, physically hurt to watch somebody suffering with bullets and numbering
in Word or trying to figure out, you know, like, oh, let me just make a two-sided, a two-axis graph in Excel.
which like is rocket science.
Like almost no one can do that.
But yet it's super common.
And so people are like have not.
And so that impedance mismatch was a human user interface design.
I totally buy it.
On the consumption layer, I totally buy it,
which is like the perfectly fluid like UI or consumption layer.
I just feel the back in like the systems of record.
Yeah.
Oh, yeah.
It'll probably converge into like some database,
like some generic set of APIs like that they'll connect to.
And like that seems to be the direction.
it's going. I agree. I think, you go ahead. Sorry. So, like, so I spent all weekend, like,
implementing my nanoclaw bot. And when you first start out, it's like, you're building an
integration for everything. The nanoclaw is very, like, like, open claw has all of the integrations.
NanoCla has very few of them. And so you have it build all of its own tools. But after, you know,
two or three days of these, like, you know, you kind of have the tool integrations that you need.
And, you know, like. Yeah. But back to the, I mean, we're talking about personal productivity,
probably, like, you're like organizing your life or something. Well, it's work
productivity. Okay, if I work productivity and then an SAP system and like, and like, so there's like an
infinite, like, there's an infinite amount of complexity when you get to, okay, some company that has a
global supply chain and they're dealing with 75 pieces of information across, you know, 30 different
systems that does require a certain amount of, of horsepower from the agent that is just, we have,
I mean, we just haven't been able to get from, from any architecture up until now. But like, take,
but that, what you just described is literally what IT has been.
been doing for 50 years and will continue to do, which is, yeah, I have a friend who was the
CIO of, of the VA. And he spent, all he spent his time on was gluing the 75 VA systems together.
And it's all just integration.
Okay.
Perfect.
For integrate, yeah, this I told me.
Oh, okay, great.
For integration, these things are the best, but it's integration.
Yes.
It's literally, how do I stitch these two systems?
But it's in it.
But now the thing that I think is happening is it's kind of like integration on.
demand. It's my new query in the system that the IT team didn't pre-wire. Now I needed to happen at
runtime. Let me get off my lawn. Okay. So the reason I just was in a room filled with a bunch of
CFOs and CIOs and this, they all looked at me when I said something along these lines,
although not as optimistic as you can imagine, but they just they, they like, no, it caused like six
of them to come running up afterwards and say, you're insane.
You've lost all credibility with me.
Because it's back to...
Wait, wait, what specifically that the agents are going to do integration?
That the integration is a problem that will get a lot easier.
Yes.
They were against that?
No, there are no one's against me.
I know.
But their fear is like unleashing not just the agents themselves,
but humans to do integration.
Because you put people creating new integrations and you just say,
please break my system of record.
Oh, yeah.
And so this idea that you just create,
create like a new API between, you know, System 27 and System 38.
Yeah.
And then you're, that might be fine for a report.
Yeah.
Because if that person wants to be wrong, that's their business.
Yeah.
But you're not going to.
I think we have a read-only version of this for a number of years before.
Where N is, and is very large.
Yeah.
And a lot of it's just a consumption layer where the consumer is a human being.
Right, right.
It really feels right now a lot of the A stuff is consumption layer.
But, yeah, I mean, it's, you know, we actually have, so we just rolled out
the official box CLI.
Thank you for liking the tweet on that.
I used it.
I have some feedback.
I'll talk about it.
I'll take all the feedback.
But it's a really interesting thing.
So we had all these debates internally of like,
okay, you give Claude Code the box CLA and you can now interact with your entire box
system via a natural language and you get the horsepower of Opus 4,6 being the
orchestrator of doing a bunch of operations.
And it's like, it's like, you know, blows your mind.
I guess I'll get some feedback, but it blows your mind in some ways because you could
just be like, upload this entire folder from my desktop into box and it'll work or process all
these documents in this folder and it'll work. And it's amazing. And then we started thinking through
like, well, let's say you were a company with, with, you know, 5,000 employees and everybody had access
to some shared repository like, you know, engineering documentation and, you know, marketing assets
or whatever. And everybody had Claudec or Codex, you know, running with the CLI. Wow, we now
have some really interesting new challenges, which is like, like, how do you coordinate,
you know, possibly the fact that you might be hitting the system like, you know, 10,000
times an hour or something, not from a like a performance standpoint, but just like, how do you
make sure that people didn't move like a file from one thing accidentally from one folder to
another folder while the other person was trying to do a right operation and somebody else
was trying to delete something because you have these agents running wild.
This is, this is going to be like the new big question that every CFO, CIO,
is running around
trying to, with her hair on fire,
trying to figure out.
There's just,
that's exactly what I ran into,
which is I played around
with your example,
which is create,
the video example,
which is create like a marketing plan directory or something.
And like all of a sudden,
I'm like in some loop creating directories.
Yes.
Yeah.
And it's going to go on as long as it can.
Right.
And I was like,
I wonder what the limit is on box
for nested directories because I'm about to hit it.
I actually,
we're going to find out too.
Yeah,
yeah.
But it does feel to me that,
like a lot of the intuition
is to, like, build a new layer of controls and whatever.
But what's actually happening on the ground is the opposite.
So I'll give you an example.
Like, when we all picked up a lot of these personal agents,
we would, like, give them our API keys.
Yeah.
We would give them our email addresses.
And then they would kind of access those things.
They're like, oh, but how can I stop it from, like, whatever?
Yeah.
And so what everybody's doing now is you give it its own phone number.
Yep.
I actually gave my nanoclaw, its own credit card.
Yeah.
Yeah.
Yeah.
Yeah.
Hopefully just a visa debit card that you bought at CVN.
Yes.
It's got all the money.
But I haven't.
No, no, but then I gave it its own Gmail account, which you can log into.
And then Gmail actually has all of these Rback permissions that you have.
So you could make an argument that, like, you know, we've actually built in a lot of these permission systems.
You have to treat it like a human as a separate human.
And then instead of like building another off layer, being another.
Okay, now can I instantly do a take down of this element that we're going to run into?
Please.
Yeah.
Okay.
So that is fantastic for personal productivity.
Yes.
And the question that we're going to run into is in an enterprise,
let's say I have, let's just make a simple example.
I have a 50-person team of something.
Should everybody also, basically, will we have 100,
will we have 100 people now collaborate?
I mean, basically 50 humans and then 50 credit cards.
And then 50 agents in that same shared space.
And do I have, I obviously have complete oversight over my agent,
but what if my agent collaborates with somebody else and then accidentally gets
access to some resource because they were sharing with that other person, and I'm not supposed to
have access to that resource, and now this autonomous sort of stateful, you know, agent is
running around working on somebody else's information.
The default end to an argument is you treat them like human beings.
It doesn't work.
So you can't fully treat them like humans, because here's the thing.
And with regular humans, you don't get to look at the Slack channel of the person that is
working with you or working for you.
You don't get to log in as them.
You don't get to oversee them.
you are they are accountable for their own set of execution in the real world you don't get penalized
for how they screw up the agent you have all the liability of whatever they're doing you do have
complete oversight and you're probably going to need to have that complete oversight they have no right
to privacy so so there's going to be these some of these breakdowns that aren't as clean as just
treat them like a person because i need to be able to kind of i need to be able to give access to
something to them but i also need to be able to like log in as them at some point and
and be like, no, no, you fucked up the whole thing.
Right, right.
And I need to undo it all.
But if I can log in as them, how could they have operated in the real world working with
other people and keeping anything, you know, confidential or secure or whatever?
So it really is still an extension of you.
It's like almost impossible to get around them being an extension of you.
So now the thing that we're thinking through that we're not going to be able to do any time soon.
I just doesn't logically follow.
Yeah, maybe.
But for example, for my employees, I can log in as them.
You don't, though.
You don't, you don't log in as them.
I can get access to their email.
Yeah.
No, in like if you get like sued.
You're not logging in, you're not logging in as them on a regular basis because they sent one email.
Isn't the right operating model with an agent, the same thing?
The risk is like a thousand times greater.
Like these people, like, they will just leak your information whenever they want.
Like, they will happily just go and send some email to somebody because they got prompt injected.
You think the terminal state is that these things are still these sloppy computers and therefore they will always.
I don't like the word sloppy unless we're saying it very in a colloquial sense.
But like.
They'll never be able to contain information.
they'll never...
So, like, I think the ability
for you to keep something
in the context window a secret,
like, as in like you tell it,
do not reveal X thing in the context window,
I think that's a very hard problem to solve.
What's take?
And so then, so then thus,
if anything can ever enter that context window
because they have access to a resource,
then in theory you should assume it can be,
you know, prompt ejected out of the context window.
And I don't know that we know of a way to solve that at the moment.
Like, that's, like, in so.
So if I know your new agent's email address,
and I email it like it's an assistant,
but like I can social engineer it 10 times easier than a human,
like it'll be hard for you to pull off
that that agent is now also has access to your like M&A documents and stuff.
But isn't this like literally all of AI right now?
Which part?
I mean the fact that we've got these shared systems
that we use the intelligence for that have shared context.
But what do you mean it's all of AI?
Well, I'm just saying like right now,
when we use AI in terms,
internally in agents.
Yes.
This is exactly how we use them.
But this is why they're working as you effectively right now,
and we don't yet know how to make them not work as you.
Let me offer an example.
Let me offer an example.
And then solving this problem, though.
Like the issue will be like you will just be able to trick the agent to reveal information.
So then that's why like having them have access to their own resources
where they can fully make their own decisions is not yet,
that we've been able to pull off.
There's a perfect example for solving your problem,
which is we already lived through this with open source.
Yes.
The model for open source was it's all there,
and you just use it, and you pick and choose,
and then, like, nobody debated it
because the world was much smaller then,
and we weren't all on X doing podcasts
when this was all happening.
But then quickly, everybody realized all the problems
you were just talking about.
Like, if you're running a big company,
you can't have some person just go copy in a bunch of source code
from open source into your commercial product like that.
There was a whole licensing problem, a whole quality,
a whole bunch of stuff.
And so all these norms got developed.
The debate that we're happening that's happening right now
is just, is this really interesting modern artifact
of how new technologies develop,
which is this is all happening in real time.
During open source, like, we met in a conference room this big
and debated how much open source we could use in Windows or office.
And nobody in the internet knew we were having this debate.
It was a very, and I think it's just so interesting that not just the debate about specifics,
but this whole notion of where is this heading is happening in writ large.
And everybody is just trying to get to the end state, like way, way more, like, in a sense,
more quickly than we can actually reach the end state.
And so what really needs to happen is people just need to go build.
We need standards.
What?
We just need some standards.
No, I think we've got different intuitions on the end state.
the statement. No, no, we don't want my intuition.
What could make an end-to-end argument that these things actually converge on the same type of reliability
as a human being, which is exactly how we view like self-driving. And in that case, you use
the exact same mechanisms that we use to protect with human beings. Like, you consider insider
threat, you consider the fact that people can be bought off. You consider the fact that people
make mistakes. Yep. And that's a risk. And that's operational processes. So one intuition is like,
that will be the unscate. Yeah. Yeah. There's another intuition.
Well, don't point to me.
I'm just saying, I'm talking about where we're at now.
I actually, I don't know that we disagree in the end state.
Okay.
And, and by the way, like, strategically, we're hedging because we're going to build,
we're going to build agent users and we're like, so we're, like, I love the idea of OpenClawe
having a box account and it operates and share with it.
Yeah, you just like twice as many accounts.
Yeah, exactly.
This is great.
Double the seats.
No, no, I love it.
I'm just saying on the ground right now, we don't yet know how to give it an M&A data room to fully
securely be able to.
Right.
But that, yeah.
But that.
It's actually, it is harder than that, though, because the threat vectors are going to be way more sophisticated.
So we do have a cat and a mouse game going on where you can't just assume that the agent acts like a human does today
because it's going to be the fastest, most thoughtful, craziest-ass human that ever existed trying to actually leak the information because it got injected in some way.
And so part of what's going to happen is we're going to.
go through this phase where like the enterprise customers are just going to like close everything
off until yes there's some sense of sanity in all of this and then but in the meantime the individual
and specifically the developers are going to have such a big gap are going to there and that's going to be
that that i think is the most exciting tension yes that's going to happen is that the enterprises
are going to be are going to get left behind by these sort of advanced individuals which will then
starts to look like the startups. Yes. And the startups will start to move much, much faster than
enterprises because they just don't have any of these problems. And you know, you could end up
with like the agent going rogue in a startup and doing that. And it's fine because you had no
lot. You had no asset in the beginning of. Yeah. Yeah. Well, it'll just be an episode of Silicon
Valley. And so, you know, big deal. I agree with you on like the, okay, it's people, et cetera,
the same risk. I think you, there's a couple, you know, differences, though, in the sense that
that I can't really threaten, you know, the, like, Claude Code that it's just, I'm going to pull the plug on it.
In the same way that you do have that threat as a regular employee is like you at least, like, 95% of people are not, you know, trying to do bad stuff, you know, within an order.
Yeah, but they're not trying, but the ability to inadvertently do bad stuff.
Yeah.
To your point about it still not having that stuff fixed is real.
I would argue that it's a lot easier to have people not share, let's say, files with somebody outside the company.
in a wrong way more than it is for an agent right now
to have the same set of instructions.
And also you have the tools so that you can basically stop that
at a whole different level of abstraction.
Which is why you have to build this into software.
But I do think actually if you were to like,
if you were like put a bow around your last point,
a lot of this is actually why the diffusion of AI capability
is going to take longer than people in Silicon Valley realize
because what's happening is like we see startups
that can start from the ground up
without any of the risks that we're talking about
because they have nothing to blow up.
And so we look at that as the trajectory that we're on.
And then you go to like J.P. Morgan and you're like,
how are you going to set up Nanoclaw to be able to actually like, you know,
automate your business anytime soon?
And it's like, oh, okay, there's going to be like a little bit of a gap there.
Well, what do you guys think?
I think that that opens up a pretty interesting problem,
which is this split between big and small startup and enterprise,
which is just that the enterprise.
the current SaaS vendors who are all struggling in this SaaS apocalypse weirdness that I don't really agree with,
but they are struggling with this problem that they don't really sell the line of business data.
They actually sell this intelligence and domain expertise in this whole system.
And the agent side of things wants to only buy the data now.
And they only want to license the data and they want to have unlimited access to the data.
But they've actually never really enabled that.
Like, that's never been their business.
And it's been a longstanding tension point with the likes of Workday and SAP and stuff,
like how much API access to have.
I mean, Salesforce went through three different massive platform redesigns.
I think that that's a particularly interesting problem.
Not for the same reason that Wall Street does.
Wall Street's all wrong about the economics and the problem and all that stuff.
But from a technology perspective, what does system of record mean in the face of people wanting to access the data?
when the data
for training or for
Well they are
You're talking about
Exceeding the work club
I think of it as executing
the data to operations
Their concern is that somebody
That they want to do the training layer
Yeah
On your data
Like I'm a big customer
They want to do
The my vendor wants to build a training
Actually even if you don't even get into training
They're concerned because
Oh yeah
Because like monetizing
You know
Sending a little bit
Over the internet versus like
You're in my UI
Oh all of it
is a very different level of monetization initially that you could perceive.
But that's sort of that monetization part is the Wall Street point.
Because I think like, look, there is so much domain stuff in an SAP just to pick an example,
not to pick on them or anything.
But like, they're not going anywhere.
Like, it's ridiculous.
It's just absurd to think you're going to vibe code your way to, like, SAP.
But also, all of that, those, all of that domain knowledge, it's not so, it's not just
represented in some well- orchestrated data layer as much as they tried.
there's like a whole bunch in the UI,
there's a whole bunch in middle tiers,
there's a whole bunch in just how you use it.
And so I'm really unsure how this thing evolves
because SAP isn't going anywhere.
So then that's going to slow the diffusion of AI
on that particular data source,
independent of whether or not it's agentified AI
that's doing stuff or just read-only reporting on stuff.
So where do you come down on it?
Where do you think that's going to go?
I'm afraid of saying something that...
Well, I'm watching to say something.
Okay.
Like, that's,
otherwise you're not going to get invited back.
So say something good.
I'm, I'm, I think I've, I think I'm, I've drunk to Kool-Aid on,
on, uh, build something agents want.
Um, so this kind of the Paul Graham term, uh, kind of like emerged on, you know,
the past the year on this topic, which is just like, like, like, event, I, I think we would actually
then I fully agree on this, which is at some point, it's, you do enough sort of iterations of this.
And at some point, that, you do enough.
the agent is largely in charge of what tools it wants to implement and use and whatnot.
And yes, it can't, the agent is not going to be able to change out an enterprise system.
But like, again, enough generations later, the agent might just run into so many walls with your
software that it's just going to say, you need to finally rip out your legacy HR system or
I'm not going to be able to automate this workflow for you.
Yeah.
So I do think you have this really interesting dynamic, which is back to this whole point of
imagine that there's 100 or a thousand times more agent volume on software than people.
You do that enough times, and eventually the software stack that agents talk to has to be built for them.
And maybe there'll be a couple holdouts.
Maybe a couple ERP systems are like the final holdouts that don't do that.
But everything else, you basically, like your business will be, your business performance will correlate
to how well your agents can get access to the information they need to do their work.
And so thus, your enterprise IT stack has to be.
set up in such a way to support that.
And so agents are kind of in charge
because basically your software
has to support those agents being effective.
And that's going to mean
everybody that built a SaaS business
or a software business is like the game
is can you build really, really high quality
APIs? Can you have a way of
monetizing that? Do you have a way
of handling the identities and all
of the access controls for agents? And like
that becomes the new problem you have to solve
if you're building a software company.
And so
So yeah, like, and then how you monetize it?
Like, do you monetize it?
Like, does workday charge a penny for every HR record of polls?
Like, we'll figure that out.
I do think that in some businesses, it could mean less revenue.
And then in other businesses, it can meet a lot more revenue.
Like, the thing we get excited by is, like, every agent really loves working with files.
So there'll probably be more files in the future than there was going to be before.
And so, you know, can we build a platform that, like, makes it really easy for agents to work with that data?
You know, we're betting that that's actually a really optimistic outcome for, for, you know, our kind of business model.
there might be some business models that are, like, more constrained
because, like, the agent is doing more of the value than,
than the software is in that kind of future scenario,
and then there'll be everything in between.
Can I, can I quibble with one thing?
You're going to quibble with that?
I thought that was, like, so not controversial.
No, no, no, I generally...
We're here to quibble.
No, no, no, but there's one thing I think, like,
Paul Graham and many actually gloss over,
which is they focus on the interface.
They'll say things like, you build something for the agents.
Yeah.
And I actually think that's exactly wrong.
Okay.
In the sense that...
And to be fair to Paul Graham,
he didn't...
He had been extrapolated.
Yeah, yeah, yeah.
I have brought Paul Graham into this.
Paul Graham is great.
So, okay, let me talk about something.
People in the abstract say things like,
now you're marketing to agents.
The most important thing is to being like, whatever,
you're like an API, you've got a good idea.
I actually think that's almost exactly wrong, which is...
Wow.
That's...
This is breaking podcast news.
That's the one thing agents are really good at.
Oh, okay.
Is finding their way through...
And at the end of the day, like, it's the semantics that end up mattering a lot more.
Right?
And so, like, the agents...
in my recollection, or in my experience,
are very, very good at picking the right
back end for whatever they're doing.
So they're not like, oh, like,
the interface for this is very good,
the document, it's none of that.
They're like, the cost parameters of this,
the durability of that.
And so, like, they actually have the collective wisdom
of our experience using these platforms.
Like, let's take cloud platforms.
There's a bunch of cloud platforms out there.
And whenever I ask an agent to choose a platform,
it's actually using meaningful stuff,
not interface stuff.
So I think as an industry, we're so focused on these interfaces,
like, oh, you need to, like, market to agents, this and that.
Yeah.
But really, I think that we're going to be pushed to actually build better systems.
Yes.
And that's what's going to be chosen.
Okay, actually, so then there's probably no quibbling.
I think we're actually a foil line.
I'm sorry to ruin the quibble thing.
I don't treat this as, like, you know, kind of a marketing-esque thing.
I more mean, like, if your tool is closed off to the agent,
the agent eventually will find a better tool for that company to go use.
And so what will happen is, is it used to be that you would go to, like, Gartner,
to be like, tell me what system.
Tell me what to do. Tell me what system to use or whatnot.
At some point with enough iterations,
the agent is going to say,
you should probably use this kind of database for this type of operation.
And if you're not in there,
then it's your DOA.
And I think we should actually be celebrating this
because agents are actually pretty smart
at choosing the right technology.
In the past,
I really think it was a lot of the other things
that caused people to buy.
Yeah.
But don't worry, we will, in Silicon Valley,
we will ruin the meritocracy of this very quickly.
Because you'll just like, I'm going to outspend.
Well, the agent, they'll bring an API to incent the agent to get things.
But, you know, there's a, the marketing, the marketing agent at workday, well, the marketing agent at work day will have the ability to purchase the recommendation.
We don't find a way to replicate steak dinners for agents.
Yeah.
Like, what is the, there is a real, here's a thing that, again, that happened with the web sort of internally.
Like, internal, like, just pick internal sites.
Like, every company had file shares with.
like the best documentation, the best slide shows,
the best financial models for any department or working area.
And people sort of got familiar with that.
And then when they didn't find the one they wanted,
they created a new one.
And many organizations sort of operated,
like that was essentially a free market.
In fact, because before the world of box,
like IT didn't, if it was in a file, they just didn't care.
Right.
They only cared about if it was in SQL.
And so one of the risks with the model you're describing
is that the agents themselves will spin up,
what becomes like a de facto new system of record.
Oh, they're going to fragment the heck out of.
In what the IT people think of as some middleware, end-user BS area.
And I think that that is a real risk.
100% is that, like, in a sense, like the macros end up running the corporation.
Yes.
And so I think that they've seen this movie.
And they've seen what happens when you let marketing just go buy a website on the internet
to do an event.
and then it's like a huge security vulnerability
and the mailing list is leaked
and the whole company gets sued.
Totally.
And so I think there's a lot more real-world tension
in this dynamic than we just let on.
But I also think it's one of these ones
where organizations are going to run at different paces.
And J.P. Morgan is going to be the slowest at doing this
and the startups are going to be the fastest.
But the delta is huge,
but even the startup one is a little far off.
because even startups do need some systems of record at some point.
Oh, 100%.
And they are going to all start with some SaaS,
and they're not going to replace it very quickly.
So I think it's a little bit trickier.
So it feels like there's two very competing viewpoints on this one.
And like Elon said, it was like, okay,
we're going to issue a prompt and it's going to like spit out machine code.
And that's basically the collapsing of layers view.
Like whatever existing interfaces and layers that we've created in the past
are all going to go away,
and it's literally like prompt and machine code.
The other argument, like the history of
systems, layers never go away.
They just get layered, right?
Because a lot of the layers are actually more of like organizational
boundaries or like state boundaries or regular boundaries.
Or compatibility?
They're just, they stay for compatibility.
Right.
So the other argument is, is like we've actually evolved these layers very
specifically because of like more human and organizational needs and they're not going to change
and the agents are going to go ahead and map to those.
And I tend to be in that latter camp.
Like I don't think that we're, I think like systems are going to continue to use in fairly
similar ways.
Maybe there's more agents.
using them, but I don't think they're going to evolve as much.
Elon might be back in the like anthropic category of the anthropic growth marketer,
which is like he like, you know, over the years when you kind of like study the various
IT, you know, departments of his companies, like they are the most, I mean, he could do that.
He can do it.
He's the most home growth.
Like, he's, like, Elon AI would do that.
Exactly.
But also it's.
And then from your mortals, you're like, yeah, we kind of just want to see our M system.
That kind of works the same way every time.
I mean, this is not, this is not,
it also hasn't been, been not tried before.
Like, if you were to look at an ERP system
from first principles, you know, well, in 1970,
whatever, when SAP started, there were a bunch of different assumptions,
and today you would start from a different set of assumptions
about what's important.
And you would architect a thing completely differently,
but then it would still only last like 10 years
until you thought, wow, that was a broken decision.
And so I think that, that,
There's intentionality in layers, but there's also this first principles thing.
And that always will exist because the decisions you can make at first principles at any given time mandate a whole bunch of different stuff.
And so even if you don't go with LIDAR, which made total sense 10 years ago, you still need 10 or 15 years to get to where LIDAR not having LIDAR worked.
And then now there's going to be a whole bunch of other things that you're like, wow, we could have done that completely different.
And so I feel like this is, again, like this discussion about trying to race to an endpoint.
But let's see a first example of what you described happening.
And I think that that's going to be the real tell because I think that there were just, companies will figure all this out.
And I think that they will just fall back on layers and architectural models because it's the only way to.
We know how to think about it for policy.
We know what I think about it for security.
But it's also the only way to build a system.
Yeah.
Otherwise, you're just building an app.
And if you're building an app to do one thing, we don't need all of this.
Like, there's a whole different way to do it.
The thing that I'm pretty fascinated by is, and I don't even have any amazing data points or anecdotes,
but at least the notion of these sort of companies that are emerging in these kind of services categories
from the ground up from the pure first principles approach, which is like, okay, well,
if I could start a marketing agency or an engineering consulting company or I don't know,
maybe somebody's doing this for law firms.
Construction work or anything.
Yeah, like, well, maybe construction design, construction design, architecture, yeah, exactly.
Architecture, yeah, exactly.
Architecture design, anything that would be like a knowledge worker kind of services company,
because you could kind of build your company pretty differently if you had no constraints
of, I have no information barriers and boundaries of what people should have access to.
I can give the agent just all the context it needs to do its work.
I can write software on the fly for particular things.
Like, like, I do think that will be relatively disruptive, you know, for some time until the,
the bigger incumbents can kind of, you know, get out of the way on this.
And that will at least create, you know, some precedent or case studies of what,
what this new sort of corporation could look like.
But I do, you know, over time, they'll still run into the same exact problems of every other corporation.
Well, they'll run into, yeah.
They'll run into geography or market segments, you know, or distribution challenges.
Yeah.
Like those, those things, anything outside your little walls, you will run into the physical world.
Right.
I do kind of like the idea
that there are some new business models
that open up now.
Oh, of course.
Oh, yeah, yeah, yeah.
Because, like, there's so much
either information or software
that basically goes underutilized
by like 100x relative to, like,
what its economic value is,
simply because, like,
nobody wants to pay five cents
for accessing a, you know, a piece of data
or use a tool for $1 once.
But, like, you do give these agents,
you know, a budget and a protocol to work with.
And all of a sudden, you're like,
oh, like on the fly, they can go get medical research
for some deep research tasks they're doing,
and I'll pay like $3 for that,
and the agent is able to go in transact.
Like, it kind of opens up a whole new world
of business models for the internet.
Let me, oh, I'm going to, that was too nice.
Oh, okay, okay.
No, no, that one is one where that's actually the biggest,
I think that the biggest sort of in the air problem right now
is everybody is trying to figure out the economics of all of this.
when they're off by at least an order of magnitude
on how big opportunity is.
Because the new models that people will come up with
that nobody knows what they are right now,
but they will absolutely come out with new models
because that's what happens with every new technology.
And the thing that holds back with sort of the discussion now
is you basically have a bunch of finance
and Wall Street people trying to justify GPUs and tokens
and things like as if we're in some old world.
So they're viewing the world.
of revenue as sort of this linear step, literally linear growth curve.
And so they're thinking too small.
All the, all the expense.
When people are going to create, like, this was the problem with PCs.
People viewed PCs as a finite market because they just viewed the consumption of MIPS
as some finite thing.
And they didn't think what would happen if we put all those MIPS on every desktop.
And in particular, people thought software just came with the MIPS.
And nobody thought, oh, well, they'll just sell the software.
One guy did.
And it turns out that was like a really good idea.
Was it Bill or somebody?
Yeah, Bill and Paul.
And the same thing happened,
but the same thing happened with the cloud,
which was people looked at the cloud
and they said, oh, we're going to take all of the server business,
which was like literally like 60,000 units a year.
Right.
And we're just going to move it to someone else's data center.
Right. And that's the club.
And that would be the business.
And then we'll divide up the price.
Right.
And nobody went, oh, people are going to use a thousand times as much
of the resource leveling
if we move it there.
And that's exactly,
I mean, that's the thing that I,
it just drives me absolutely bonkers
that the Wall Street models
have this fixed revenue right pie.
Zero sum thinking.
And it's, it's this weird zero sum
where they just think that the amount of money
that a company is going to spend.
And like this was the problem with sales force
that they faced when you were starting too.
But like Mark was just blazing the trail,
which was like the CRM business
was like, was $2 billion a year.
And it was $2 billion in like,
you had to go buy
all these servers and these Oracle licenses
and this huge headache and years of deployment
and consulting, when if you could just get
salespeople to sign up individually, they all will sign up
with no friction.
And that is exact, there is no, no doubt
that that is what's going to happen with AI.
Let me give you an example of this.
So I, you know, I've been in for investing for 10 years now.
I probably have a portfolio of 240 companies at work.
With some visibility, let's say, in the 50 of them,
these are all infrastructure companies.
Some historically have done well, some not so well.
Every single one of them has gone asymptotic in the last six months.
And you're like, okay, why is this?
It just turns out there's so much more software being written now than ever has been before.
And so it's like, and it's not because they've got enterprise customers, you know,
it's just because there's just so much consumption of the infrastructure layer right now.
And so with more software, with more agents, there's going to be a lot more consumption of computer resources.
So certainly in the case of the computer side of things, we see a massive.
Well, we haven't even gotten to the point yet where everyone's phone,
is a huge consumer of AI.
Right.
So once everybody's phone,
and on device,
like once your phone on device
is consuming AI,
the amount of it is going to go up by a billion.
So do you like the micropayment piece?
All of them.
The micro payments,
there's a little bit of micropayments
that has come with every technology.
Yeah, exactly.
Where they always think that, like,
you'll be able to get, like,
a fraction of a penny.
But in the end,
especially in the enterprise,
like, people are just going to consume things.
It's just cheaper and easier
to buy, like, a bulk license
for a bunch of stuff.
Yeah, yeah, yeah.
You want some predictability on that.
Well, you want predictability,
and you just want, like, to not have to think about it.
I just, I like the idea that it is the first time that you could,
like, there's just, the agent doesn't care about the friction of a small transaction.
Right, right.
First time that you can have resources behind a paywall that something will actually be willing to pay for that.
And the resource.
The world has built up to infrastructure to aggregate those payments into something efficient for a customer.
Right.
And, and because tokens are such a significant part of COGS, right, now it is push
the industry to do usage base in a way that we had.
Like, I remember when we went from like perpetual
to recurring and that required like a bunch of huge changes.
Like we're going through the exact same change right now
towards usage base and usage base is pretty granular
and it actually allows.
I mean, again, you will have a contract with like, you know,
AWS or Google.
We went through this with AWS.
Yeah, yeah.
Like people learned to do the, yeah, to do.
And we went through the phase where like people were like so terrified
of cloud compute that they were like,
we need companies in the middle to help us find
the cheapest and to arbitrate it all.
Okay, well, now you write tokens into this,
and I don't see how we possibly have time
in this conversation.
I mean as long as you guys can say.
Oh, okay. But like, the
engineering compute budget
conversation, to me, is going to be
just like the most wild one
in the next couple years. It's just like,
how much should you allocate of your engineering
expense to token?
And it's like, you know,
depending on who you read on Twitter, it could be
1% and the other side could be 100%.
And it's like, yeah, but this stuff,
No, no, no. CFOs have to literally, they actually have to know the answer to them.
I understand they have to know.
CFOs always want to know the answers to things that don't have answers.
No, Wall Street is going to make them know the answer.
No, Wall Street is going to make them come up with some number and hold them to it,
then they'll get fired, and then it'll, but it's, okay, I hear you.
R&D is somewhere between 14 to 30% of revenue of any public technology company, let's just say, okay?
The difference between compute being 2x the cost of your engineering team,
or, you know, three percent more is like, that's all your EPS.
I get it.
So, like, we will have to know the answer.
I'm perfectly willing to sacrifice a few CFOs at the altar of this.
I want that.
That's a good clip, by the way.
But the reason is, is because, again, this is trying to know what we just don't know right now.
Yeah.
And this has happened with internet bandwidth.
This has happened with-
No, this is not even close to internet bandwidth.
Oh?
No, no, no.
I beg to differ.
Like people were free.
It happened with vacuum tubes.
It happened with transistors.
It has happened with every technology.
There was this, oh, my God.
It happened with programmers.
There was a time when programmers were going to swallow every company.
Yeah.
And that's not, it was in my lifetime, not some made-up weird thing.
Yeah, but I don't think we've ever had a point where every end user in an organization
has sort of a completely elastic ability to spin up a resource on their behalf.
Well, it's certainly, that actually is actually in many cases very very very,
for them to go spin it up.
But it certainly rhymes with what happened in the early 2000s with the cloud.
I remember very similar discussions when we went from CAPEX to OPEX and then unlimited spend.
Oh, no, and remember there were companies who the CFOs would sit in our briefing center here and say,
you don't understand, we are like, we are an agriculture.
I can see the rhyming.
We are an agriculture company.
We only know CAPX.
We have no.
Or no, we sold through this.
Right, right.
No, we both did.
Or like, oh, no, we are an OPEX based companies.
So if you tell us, we love the cloud because we just shifted everything to OPEX.
And so all of the stuff, like the rules of accounting work out, also don't, I keep thinking,
do not discount the local compute engine as being a release valve for all of this.
When's that can happen?
Well, the question is it's not when does it just happen with today's view of the technology,
but how all of a sudden, wow, there's a whole-
Has that historically ever gone that direction?
Yeah, exactly.
It goes the opposite, right?
No, it went all to the client.
Well, okay.
And then...
You go back to the 80s, yes.
No, that's...
Most of the examples that we're hearing so far.
Whoa.
That was uncalled for.
Okay.
Since the...
Vacuum tubes.
He's talking about vacuum tubes.
But I do those examples because you can't argue with them.
And it's much easier that way.
You're right.
I can't prosecute it's all kind of...
No, but it's only been, you know, 10 or 15 years that it's all...
That it moved back to all cloud.
And then what has happened recently with that?
A lot of people wake up in the morning and they say,
oh, we're moving back to doing some...
critical but stationary workflows on on-prem.
With AI, that's true.
Dude, you wrote the blog post, man.
Don't let me go through the archives.
I had to deal with so many Wall Street questions on that one, by the way.
Also, because your competitor went back to.
Oh, yeah, yeah.
We're talking about two very different.
I agree.
I agree with building your own data center.
I'm talking about this, this notion of edge computing where things go to devices.
Like, that seems to be.
I'm more in the cloud maximalist camp.
But sorry, so you just don't think,
you don't even think for like one second
that it matters whether, like,
how you're supposed to be an engineering leader right now
managing the compute budget of the engineering team?
Of course it matters.
This thing in the long term, this thing will get.
Oh, sure.
Oh, why are you?
Who cares?
We don't even need broadcast in a longer.
Here's what I think.
Let me.
Here's a rule of, here's a rule of thumb.
First, like, the startups are going to burn
through available capital
pretending like it's not a problem.
And they are going to do that.
Yeah, but they do that anyway.
Right, right.
And a lot of big companies
are going to be so terrified
they're just going to freeze and not do anything.
And then people are going to actually
start buying it on their own
and they're going to do all the things
that companies do when they're big,
have a lot of money but don't want to spend it.
And in the middle, we are going to see
like if you pick a category of product
or go to market or something,
there are going to be people
who are willing to make the bet
for whatever reasons
that they can because of their financials.
And they are going to go ahead, and they are going to become the people who lead in the space so long as they can maintain the financials.
Now, they might do it in, they might say, oh, we're going to just do it here in this particular application space or here in this particular usage space.
But this idea that nobody is going to go in, because they're so terrified that the CFO is going to get fired or something, is just crazy.
Yeah.
But then there are going to be CFOs who make a mistake and like, okay, everybody gets a little.
Yes.
Well, if they do that, that's a complete fail.
Yes.
But also, or like you get, like you, there is a really interesting, like, you know, finesse here,
which is like you don't really want your engineers right now having to think about compute budget
because we're still developing the, oh, okay, so that set you over?
No, but I just feel like we've been having the discussion for 15 years when it comes to Clagg.
This is totally new.
Like, only like 10% of your engineering had to think about cloud infrastructure.
In 2016 to 2018 timeframe, there was a whole set of companies that was basically like the dashboard for, what was it called, Finops?
Yeah, Finops.
Where the developers is very cool right now because of AI.
Developers would have access because cloud spends were getting out of control and API spend were getting out of control.
And so it was like, you know, here's your Twilio spin.
But, but you know, it's pretty different.
And I'm going to wait for all the comments to come in on YouTube to call you out of this.
Like, it's like you can get into a conference room.
and just be like, hey, can you make that one, you know, kind of algorithm a little bit more efficient
so you don't use as much, you know, of our cluster at this time of night or whatever?
And then you get out of the meeting, somebody goes improves it and you're good.
This is like every single prompt that every engineer is doing.
Like, do you, like, you have to decide?
Like, do you want that to be a long-running prompt?
Do you want to be a long-running agent?
Do you want to parallelize that?
Like, like, do you want, like, what is your comfort level of wasted tokens?
Like, for me right now, I'm like, yeah, we should probably waste a lot of tokens because that
means that we're like trying new things.
Yeah.
And like, should your head of engineering be happy if, if you run 10 experiments in parallel
and thus you're obviously going to be wasting 90% of the tokens, but you're going to
choose one of the successful paths?
Or do you want to tell the team no, before you go do that, make sure to like, like really go
and design the perfect system.
Like, we actually have a whole bunch of open questions that are going to start to
happen.
Like literally, as of this recording time, people are freaking out right now on the new
new ClaudeCode Max Plan, you know, whole, like, because.
because they're getting blocked after like three prompts.
Well, this is, this is going to be like a very, like, real topic
until we can actually find a way to build data center capacity.
Oh, that's a different problem.
Okay.
No, because, no, one is, well, assuming, well, wait, no, you can assume that if we build
more capacity, the price will drop because there is more capacity.
Yeah.
And we're priced now based on limited capacity, whatever.
But, like, this is just going to get worked out.
And I feel bad for those that have to make a decision immediately about which 17 people
get no more tokens this week.
or whatever, and that the whole company is walking around with, like, a token card.
And the person in the lunch line, the person in the lunch line is punching their card every time they do something.
But, but like, you know, I, I don't know, like, somebody we were talking about today about performance and how, like, you know, we used to write command line tools that spit out the time it took after you ran a command line just so you knew.
And if you knew you were getting better or worse.
and, you know, but you, the thing is,
this is all going to go away.
There's absolutely no doubt that this just goes away.
I think on the 10-year time frame, 100%.
And the biggest reason it does is because you have to do
the Benioff kind of math,
which is if you're paying an enterprise salesperson,
you know, a million dollars a year,
you have to ask how much is their tool worth?
Yeah.
And if you're paying an engineer X dollars a year,
well, at some point, their tooling is worth
It's absolutely worth it.
And it's not going to even be an issue.
Yeah, yeah.
I don't think it's...
And so if there's a capacity thing in the short term,
that's a different...
That is a different problem driving the price
than this just we're going to forever have to be
in some budgeting exercise.
I think law of large numbers solves this
because eventually you have enough engineers
and using this much compute.
But like we're in a transition phase
where like most people thought, you know,
the two year ago level of spend on AI,
which is like, ah, it's a chatbot.
Yeah, yeah, but they were wrong.
Yeah, right, okay.
But they were wrong, but they were wrong.
We tried to warn them.
No, but they were wrong because they saw it as this particular use case.
Yes.
And, but again, like, you know, like the vacuum tube thing you made fun of.
Yeah.
But, like, there was a time when they thought that, like, all of the Dakotas would be covered in vacuum tube warehouses,
and people on roller skates would be running up and down the aisles, replacing vacuum tubes,
just so we could fight World War II.
too. I mean, like, that was how, that was the, and they thought that, and then someone said,
hey, how about a transistor? Right. And, like, we are going to have a transistor moment with all of this.
It might just be more supplied the way we think of it, but it also might be an actual algorithmic
fundamental change. It could be a change in the hardware. There's a lot of stuff that can happen
that changes this particular moment in time. It's just this, I think it's particularly weird
that everybody has just gotten to token, which is the same thing that happened with,
IBM and mainframes. People were on MIPS. And then one day, the reality was IBM was selling more MIPS for fewer dollars every year and didn't even realize it. And they were still pricing their mainframes by MIPS until it got pointed out to them that they were on a decreasing curve because they were making MIPs faster than they can charge for. And that's what's going to happen. Guaranteed. I just said that in a hardcore way. I think that was great. Like it sounds really great to sound like I know what I'm making. Guarantee. Guarantee. I'm going to be. I
I should probably believe it.
Thanks for listening to this episode of the A16Z podcast.
If you like this episode, be sure to like, comment, subscribe,
leave us a rating or review, and share it with your friends and family.
For more episodes, go to YouTube, Apple Podcasts, and Spotify.
Follow us on X at A16Z and subscribe to our substack at A16Z.com.
Thanks again for listening, and I'll see you in the next episode.
As a reminder, the content here is for informational purposes only.
Should not be taken as legal business, tax, or investment advice, or be used to evaluate any investment or security and is not directed at any investors or potential investors in any A16Z fund.
Please note that A16Z and its affiliates may also maintain investments in the companies discussed in this podcast.
For more details, including a link to our investments, please see A16Z.com forward slash disclosures.
