a16z Podcast - Aaron Levie and Steven Sinofsky on the AI-Worker Future
Episode Date: August 25, 2025

What exactly is an AI agent, and how will agents change the way we work? In this episode, a16z general partners Erik Torenberg and Martin Casado sit down with Aaron Levie (CEO, Box) and Steven Sinofsky (a16z board partner; former Microsoft exec) to unpack one of the hottest debates in AI right now.

They cover:
- Competing definitions of an "agent," from background tasks to autonomous interns
- Why today's agents look less like a single AGI and more like networks of specialized sub-agents
- The technical challenges of long-running, self-improving systems
- How agent-driven workflows could reshape coding, productivity, and enterprise software
- What history, from the early PC era to the rise of the internet, tells us about platform shifts like this one

The conversation moves from deep technical questions to big-picture implications for founders, enterprises, and the future of work.

Timecodes:
0:00 Introduction: The Evolution of AI Agents
0:36 Defining Agency and Autonomy
1:54 Long-Running Agents and Feedback Loops
4:49 Specialization and Task Division in AI
6:20 Human-AI Collaboration and Productivity
6:59 Anthropomorphizing AI and Economic Impact
9:10 Predictions, Progress, and Platform Shifts
11:31 Recursive Self-Improvement and Technical Challenges
13:20 Hallucinations, Verification, and Expert Productivity
16:20 The Role of Experts and Tool Adoption
22:14 Changing Workflows: Agents Reshaping Work Patterns
45:55 Division of Labor, Specialization, and New Roles
48:47 Verticalization, Applied AI, and the Future of Agents
54:44 Platform Competition and the Application Layer
55:29 Closing Thoughts and Takeaways

Resources:
Find Aaron on X: https://x.com/levie
Find Martin on X: https://x.com/martin_casado
Find Steven on X: https://x.com/stevesi

Stay Updated:
Let us know what you think: https://ratethispodcast.com/a16z
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Transcript
We thought that we were looking at the form factor of AI,
which is you're talking back and forth to something,
the real ultimate end state of AI and thus AI agents is these are autonomous things
that run in the background on your behalf, executing real work for you.
The more work that it's doing without you having to intervene,
the more agentic it's becoming.
Somehow it produces the output that it feeds back into itself.
It's literally just the ampersand in Linux,
which is, it's a background task.
And it's like the worst assistant in the world.
And agentification is just hiring a lot of these really bad interns.
What exactly is an agent, and how will agents change the way we work?
To unpack this question, we brought together three people with deep but very different vantage points:
Aaron Levie, co-founder and CEO of Box; Steven Sinofsky, former Microsoft exec and a16z board partner;
and Martin Casado, general partner here at a16z.
From the old-school idea of agents as just background tasks to today's vision of fully autonomous
systems, we'll explore what this means for coding, enterprise
workflows, and how whole industries might reorganize around agents.
Let's get into it.
I thought I'd start this wide-ranging podcast by asking the very simple, very provocative question,
what is an agent?
Oh, to who?
Steven.
So I actually have a very old person view of what an agent is, which is literally just the ampersand
in Linux, which is, it's a background task.
Because, like, you type something into O3,
and then it's like, hey, I'm trying this out.
Oh, wait, I need a password.
Can't do that.
And it's like the worst assistant in the world.
And really, it's just because they need to entertain you
while it's taking a long time to answer your prompt.
And so that's my old person view of what an agent is.
And agentification is just hiring a lot of these really bad interns.
They're getting better.
They are getting better.
But they still don't remember if I have a password to Nature.
Is it possible you guys just had bad interns in, like, the '80s and '90s?
We had terrible interns.
I have like a very high esteem for interns.
But now a real answer.
No, no, I mean, I think collectively we're seeing what these are becoming.
So if you think about two years ago, the post-ChatGPT moment.
We thought that we were looking at the form factor of AI, which is you're talking back and forth to something.
And I think to Steven's point, the real ultimate end state of AI, and thus AI agents, is these are autonomous things
that run in the background on your behalf,
executing real work for you.
And ideally you're
interacting with them actually relatively little
relative to the amount of value that they're creating.
And so there's some kind of metric
where the more work that it's doing
without you having to intervene,
the more agentic it's becoming.
And I think that's sort of the paradigm that we're seeing.
The only addition I'd have to it,
in addition to long running, which I agree,
is that somehow it produces the output
that it feeds back into itself
as input. Because you can actually do just
long-running inference. Like, you can make a video that's really long-running, but it's just
basically a single-shot video, and you just throw more compute at it. I think there's, like,
technical limitations if you start feeding the output back in, because we're not quite sure how to
contain that yet. And so I think you can measure things based on how long they run, and you
could also measure it by how many times it's actually taken its own guidance, which would be kind of
more of an agency measure. Yeah, because I do think it's important in this transition, look,
what Aaron described is where we're going to be. It's just, what are the interesting steps that
happen along the way?
Because we are going to need for the time being it to stop and say, am I heading in the
right direction or not?
Because putting aside all the horror stories about taking action without consent and using
accounts and data or whatever, there is this thing where you just don't want to waste your
time on the clock while it's churning away, way off in the wrong direction.
Yeah, so the question is to what extent do they have their own agency, which to me means
they've spit something out and they've kind of consumed it back up again and it's still
a sensible thing.
Which, by the way, as you start thinking of these things in distribution,
it's actually a very difficult thing to do
because it doesn't know if it's going to be spitting something out
that's still in distribution when it brings it back in.
They don't have that self-reflection.
So I think there's actually a very kind of technical question here
to what extent we can make these things have independent agency.
But we can make them long run pretty easily.
Yeah, yeah, we're good at the long run.
The long-running part is what you've got, yeah.
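[Editor's note: a minimal sketch of the loop being described here, with the model's output fed back in as its next input. Both `model` and `in_distribution` are hypothetical stand-ins; the episode names no specific API.]

```python
# A minimal agentic loop: output becomes the next input, and the run
# stops when the output drifts "out of distribution."
def agentic_loop(model, in_distribution, task: str, max_steps: int = 10) -> list[str]:
    steps = []
    context = task
    for _ in range(max_steps):
        output = model(context)
        if not in_distribution(output):  # the self-reflection agents lack today
            break                        # drifted off course; hand back to a human
        steps.append(output)
        context = output                 # feed the output back in as input
    return steps
```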
Yeah, I mean, I think the interesting thing is how the ecosystem
is sort of solving or mitigating the issue,
like you're seeing sort of this logical division of the agents.
So they might be long-running,
but they're not actually trying to do everything.
And so the more that you subdivide the tasks out,
then actually the more that they can go pretty far
on a single task without getting kind of totally lost
on what they're working on.
Well, Unix is going to prove to be right,
which is like you're going to want to break things up
into much smaller granularity and tools.
And I think to other points that you've made on X,
like, you're going to want to divide things up
so that it's like an expert in this thing.
Yeah.
And then it might be a different, let's just say, body of code, where you go and ask, you know, are you good at this thing?
Let me get your answer on this part of the problem.
Yeah, it's kind of interesting.
I don't know how much you've plotted this, but like the conversation on AGI has sort of evolved very clearly in the past like six months.
And I think that the consensus was, maybe not even consensus.
What some of the view was, let's say two years ago, was this sort of monolithic system that's just super intelligent and it solves all things.
And now if you kind of fast forward to today,
and let's say whatever we agree kind of state of the art is,
it's sort of looking like that's probably not going to work
for a variety of reasons, at least in today's architecture.
So then what do you have is maybe a system of many agents,
and those agents have to become very, very deep experts
in a particular set of tasks,
and then somehow you're orchestrating those agents together,
and then now you have two different types of problems.
One has to go deep.
The other has to be really good at orchestration.
And that maybe is how you end up solving
some of these issues over the long run.
I just think it's very difficult to think
cleanly about this.
Like, I've still yet to see a system
where they perform very well
and you can draw a circle around it
that doesn't have a human being in it somewhere.
Oh, yeah.
So in a sense, like, the G,
the "general" part,
often seems to be coming from the human.
So, like, I just, listen,
these things are tremendously good
at increasing productivity of humans.
At some point, maybe they'll increase
productivity without humans,
but until then, it's just very hard
for me to actually talk cleanly about it.
Well, and it's so important for people
to get past sort of the anthropomorphization
of AI because that's what's holding
everybody back.
Like, AGI is about robot fantasy land,
and that leads to all the nonsense
about destroying jobs and blah, blah, blah.
And none of that is helpful,
because you then have to dig yourself out of that hole
to just explain, wow, it's really, really good
at writing a case study.
Right, right.
Which it writes a better case study
than all the people that do that work.
But it doesn't know who to write it about.
It doesn't know what necessarily you want to emphasize.
It doesn't know what the budget is, what's needed, how many words.
But it also turns out like AGI just does an awful lot of work.
Yeah, yeah.
So, for example, someone asked me recently, they said, well, are you worried that if we have AGI, then you'll no longer be investing in software companies?
I'm like, well, I mean, even with AGI,
I'm still investing in software companies, right?
And so, like, just having AGI says nothing about economic equilibrium or economic feasibility, et cetera.
So, like, just the term AGI does basically infinite work for every kind of fear we have and maybe every hope that we have.
And when we tie it down to, like, not only does it solve a class of problems, but do the economics pencil out, yes or no, we
can actually have a more sensible discussion, which I actually think is finally entering the
discourse. I think we're actually talking a lot more sensibly now than we were a year ago.
And so when people say things, or the AI 2027 paper, when they talk about sort of automated
research, recursive self-improvement, does that feel like fiction or fantasy? Or is the thinking that,
even with those things, we're, you know, sort of nowhere near peak software
and there would just be unlimited sort of demand?
I think you've got to go first on this question.
I don't want to answer this.
I need you to anchor us in reality and then we can deviate.
Well, look, I think that, first, I'm just not a fan right now of buying into anything by year.
Because whatever year you want to buy into, in 2027, we're just going to be having a fight over what we meant by the metrics.
And it just turns into like OKRs for an industry, which is just like a ridiculous place to be.
That's really funny.
But I think that everything takes 10 years, but you can't predict anything in 10 years.
So how do you even reconcile that?
And I think that you just have to recognize that we're on an exponential curve.
So no one's predictive powers work.
Right.
And it's just going to keep happening.
It's not going to plateau.
It's not going to, you know, all of a sudden we're done.
And that's what makes this a different kind of platform shift.
If you just, you look at the progress, and that's the same that went through with storage,
that went through with bandwidth, that went through with productivity on computing,
on connectivity around the world.
Like, because it's exponential, you can't predict it.
And it's just folly to sit around
and try to predict.
Now, you can do science fiction,
and you could say, in the future,
when we all have our personal AI
with all this stuff.
And then that's great,
but then you say it's going to happen in 2029,
you're an idiot.
Yes.
And so...
That sounds totally correct, right?
Because basically, three years ago,
you would not have been able to conceive
of Claude Code or Cursor
or name your background agent writing code.
So it's like, what is the point
of having some date at which you're naming something?
And so we've actually seen probably vastly more progress
in the past just two years
of actual applied AI
than we would have thought.
And yet, does it matter that one or two of the predictions didn't play out?
No.
So I think it's probably more interesting to think about, like, where is the technology
from more of a classic Moore's Law standpoint, like, how much compute do we have,
how much data are we working through, how powerful are these models?
Let me ask you, like, as a semi-old.
Guilty.
Like, nobody, after AI collapsed and machine translation and machine vision failed,
you couldn't find anybody
who thought that those would become solved problems.
Or after neural nets imploded, and, like, literally, you were teaching.
Or expert systems.
But you were teaching, and if you tried to teach neural nets,
like, the students would rebel because you were wasting everybody's time.
In 1989, like Hinton couldn't get funded trying to do neural nets.
In grad school, there was this three-volume history of artificial intelligence thing.
Neural nets was like eight pages.
Oh, you know, ironically, I remember when ML was the cool thing
and neural nets was the old thing.
And now, like, you know, ML is like the old thing
and neural nets are the cool thing.
Right, or NLP.
And so the fact is,
we will return to all of these problems
that couldn't be solved.
Like, even, like, this, everyone's favorite one,
oh, it doesn't understand math.
Right.
Like, okay, that is a solvable problem
because math is solvable.
Like, there's just, no one put the math layer in
to understand what a number was
and to, you know, hard-code it
and just build in an expert system for math,
which is actually a well-understood thing,
because we've had Maxima since, like, 1975.
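[Editor's note: a hedged sketch of the "math layer" idea described above, using SymPy as a modern stand-in for a Maxima-style symbolic engine. The routing function is hypothetical.]

```python
# Instead of trusting a language model's arithmetic, route extracted
# math expressions to a symbolic engine that computes them exactly.
from sympy import simplify, sympify

def solve_math(expression: str) -> str:
    """Evaluate an expression exactly rather than statistically."""
    return str(simplify(sympify(expression)))

print(solve_math("12345 * 6789"))       # 83810205, exactly
print(solve_math("(x + 1)**2 - x**2"))  # 2*x + 1
```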
I think it's important to, like, maybe for us to describe how hard it is to predict anything, right?
So let's take recursive self-improvement.
This is one of my favorite ones.
So the theory of recursive self-improvement is you have a graph where you have a box, which is the thing.
And then there's an arrow that goes back into the box, which says "improve."
And then, of course, you look at that, and you're like, it works.
So I guess, you know, like, from an intuitive lay perspective, every time you have a box with an arrow back into it, you're like, okay, we're done, right?
But, like, if you know anything about nonlinear control theory,
answering that question is one of the most difficult questions
that we know in all of the technical sciences, right?
Like, does it converge?
Does it diverge?
Like, does it asymptote, right?
So, for example, you could recursively self-improve
if you're doing basic search, but you asymptote, right?
And so, like, saying recursive self-improvement
from, like, a deeply technical perspective
says almost nothing.
But unfortunately,
because we tend to anthropomorphize
AI, we say recursive self-improvement,
and all of a sudden, we're like, and then
it, like, overcomes energy boundaries
and human intelligence. Well, that's how it goes
from being a toddler to being, like, an eight-year-old.
It's just because it figured out how to learn.
It recursively self-improves, right? And so, I mean, the reality
is, like, non-linear control systems, which are
feedback loops that are adaptive, we don't even have the math for
a relatively simple system to understand what happens. You have to actually
know the distributions that come out and go into them.
And so these things are going to improve.
They're going to continue to improve.
Maybe they'll improve themselves,
but just because they do improve themselves,
doesn't mean they can continue to do it.
And this is kind of part of this entire journey
as we're learning about these systems.
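[Editor's note: a toy illustration of the convergence question Martin raises. A "box with an arrow back into itself" can converge, slow to a crawl, or blow up depending entirely on the update rule; the three rules below are illustrative, not from the episode.]

```python
# Iterating the same feedback diagram with three different update rules.
def iterate(update, x0=1.0, steps=20):
    x = x0
    for _ in range(steps):
        x = update(x)
    return x

print(iterate(lambda x: 0.5 * x + 1))  # converges to a fixed point (x = 2)
print(iterate(lambda x: x + 1 / x))    # keeps improving, but ever more slowly
print(iterate(lambda x: 1.5 * x))      # diverges exponentially
```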
Again, the good news is I think we're talking
a lot more sensibly now
than we were a year ago
and hopefully that will continue.
Hopefully the discourse
can recursively self-improve,
so we're just more sensible.
Well, the good news is that that involves humans,
so we don't actually have to worry.
But I think that, I mean, you must be seeing this
even with customers.
I mean, like, take the conversation
about, like, hallucinations and things like that,
how dramatically
that's altered in just the past two years, say.
Yeah, on two dimensions, actually.
So on one dimension, the problem of hallucinations has improved.
So as the models get better, as our understanding improves of, you know, whether it's
RAG or whatever, you know, even the efficacy of the context
window has improved.
So you have the technical improvements, you know, kind of across the stack.
And equally, you have a kind of a cultural
understanding, to some degree, within the enterprise as to, like, okay, actually, no, these are non-deterministic
systems, they're probabilistic. So you're starting to see almost a culture shift, which is, okay,
you can actually implement AI in essentially more and more critical use cases because the employees
that are using those systems understand that they do actually have to do the work to verify.
And then the only question is, what is that ratio of time it took to verify versus if I had
done it myself, and how much efficiency gain for whatever that workflow is. But we're going
from probably, like, two and a half years ago, where there was, you know, this instant excitement
as to, oh my God, this is going to be the greatest thing of all time, to a reality check within
three to six months, because everybody was like, hallucination is going to be the massive, you know,
kind of problem, to now, a couple years after that, which is like, okay, we're seeing
the hallucination rates shrink. We're seeing the quality of the outputs increase. And we understand
that you do have to go and review the work
that these AI agents are doing.
And that takes on a different form
depending on the use case.
So in the form of coding,
that means you just have to go review the code,
which you had to do anyway.
People seem to be forgetting:
you had to do it anyway,
but, like, there was probably
at least a little bit of, like, a theory
as to, like, what part you should go review
with an extra level of detail,
because you kind of knew the person you were working with.
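[Editor's note: a back-of-envelope framing of the verify-versus-do-it-yourself ratio Aaron describes. All numbers are hypothetical.]

```python
# Delegating to an agent pays off when prompting plus verification time
# beats doing the task yourself.
def efficiency_gain(hours_diy: float, hours_verify: float,
                    hours_prompting: float = 0.1) -> float:
    """Speedup factor from delegating a task and reviewing the output."""
    return hours_diy / (hours_prompting + hours_verify)

# A 4-hour task whose output takes 30 minutes to review:
print(efficiency_gain(hours_diy=4.0, hours_verify=0.5))  # ~6.7x
```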
It also implicitly limits the value of AI
which people are uncomfortable with.
Right, right.
It just basically says it helps people
that know more than the AI does,
and it does the job of people it knows more than, you know,
like it starts to actually kind of bisect the utility.
Yeah, yeah, basically it's super interesting,
which is the experts are now becoming,
the productivity of an expert is outpacing everything else,
which, you know,
I think we could have probably predicted based on historical events.
And I think you've got some good theories about how, you know,
the type of skills that you have, you know,
kind of make you the right user for these models for the kind of use case.
So we're seeing that, you know,
where the expert engineers
are like, I don't mind that it's a slot machine
where I'm pulling it and I see what comes out,
because I know I can still get 10x productivity,
and I get it good enough
that it's worth that productivity gain.
Whereas if you were, like, not an expert engineer
and you do this slot machine,
you probably would try and go and deploy
all the ones that were wrong,
and you actually don't know which lever to pull.
Which is a big thing: it's, like, literally knowing
what to ask for and what language to use
that will get you a better response. I think that this
is just an incredibly important point
that you're making, and it really gets to the
heart of what it means to use a tool.
Like, you know, you put me in front of like a 12-inch chop saw
and say, like, go fix the fence.
Really, really bad idea.
I mean, I could go buy one.
I could cruise The Home Depot
and be like, oh, dang, man, I don't have one of those.
And I could buy it, but it's really not a particularly good idea.
Right.
And I think that how these platform shifts happen,
and why there's so much excitement over coding,
is that, well, the best way for a platform shift to take hold
is that it's the experts,
the closest you have to an expert in the new platform,
who become the most enthusiastic
and the biggest users overall.
Like, I've been practicing yoga over at the Cubberley Community Center
in Palo Alto because the studio closed.
But what's neat is that was like the OG place for computer clubs.
Oh, nice.
Like, in the early 1990s and the late '80s,
like, if you ever wanted to meet the computer club,
you would go there.
And, like, this is, like, Halt and Catch Fire.
Like, it's, like, a bunch of people with soldering irons and shit.
And, like, that's who was there.
And, you know, when it didn't work, when something was broken, that wasn't like, oh, man, these things are terrible,
I'm wasting my time.
That was, like, the whole meeting.
Right.
It was, like, who could get, like, one of these new discrete graphics cards to actually work and debug the driver.
Can anyone print?
Is there anyone in this room who can print in this new thing called PostScript?
And I think that's what's really happening right now.
And so first, it's obvious it should happen with development
and coding first because they're the most forgiving
and the most understanding of, like, what's a bug,
what's a thing that can never get fixed?
And the thing to watch for is no one is saying
that coding can't get fixed.
Right, like whatever it's been generating
that's bad for like a 2x coder rather than a 10x coder,
no one is saying, well, that'll never be fixed.
And then the next thing that's going to happen
is going to be what I think is just going to be like the creation of words,
like the marketing document, the positioning document,
all of this long-form stuff where if you're really good at that job,
you can, you know the right questions to ask, you know what looks good,
and then you can get really domain specific, like on the next,
you know, the next level is like, oh, I need to understand like a competitor,
which then is using real information from the internet in real time,
not just statistical.
And then you're like, well, they already know what the competitor does.
Right.
And then my favorite scenario, the one that just kind of
has these aha moments, is: attack this thing I just wrote.
Yeah.
I'm not interested in you adding em-dashes and making it a little bit better.
I just want to know what did I miss?
You did one, I think, recently, on this last one, about, like, here's my earnings statement.
Yeah.
For people, that's the thing you read out to the analysts.
Now, like, attack it like an analyst.
And there's, like, 6,000 hours per company of analyst questions.
It knows what they're going to ask.
They only ask three questions anyway.
Yeah, expense line, you know.
And I feel like this is the thing that...
Do not watch this if you're an analyst.
This is not any advice about being an analyst or it is.
But this is what's really going to happen with writing,
and then it's going to happen with PowerPoint and slides,
and then it's going to happen with video.
But it's really important to call out,
which is you're getting the consensus mean response,
and so, in the limit, it's offloading
a lot of kind of busy work if you're a professional.
Like, you're a professional, you actually know all of these things.
You just don't have the time to go through all of it,
and you may not remember it.
So in a way, it's helpful for productivity,
but it's not solving the problems where you are the particular expert.
And this is maybe why, for those that are non-expert,
it's a little bit more threatening because it can do that job.
Yeah, well, maybe to bridge your view and probably throw in a different tangent:
so, Steven, you were asking, like, so where is the enterprise now?
So that was the coding piece.
I think what you're seeing is this kind of clear understanding,
which is, okay, what I'm going to get out
will be correlated to what I put in,
so how precisely I put the prompt.
Like, I think prompting doesn't go away anytime soon,
simply because the leverage you get on the set of instructions
you're going to give the AI at the start
is still going to be massive.
So we see...
Wait, wait, what would...
If prompting went away, what would you end up with?
Well, I mean, two years ago, I think that, like,
people were like, like, you'll just tell the AGI
what you want it to produce.
Oh, I'm going to get my...
There's just one prompt.
Like, you unbox it and you say, go do something.
My agent.
Be a software engineer.
Right.
No, literally, that was, like, that was like an open debate.
And it was like, no, you're probably missing the fact that what is in my head is going
to be unbelievably germane to the thing that I'm trying to produce.
And, like, I have to somehow give you that context.
Like, there's no world where you have that context without me telling it to you.
And now you're seeing it.
Like, you're seeing these incredibly unhinged prompts, which are, like, pages long.
And the output you're getting from that is actually, like, way
better than if you didn't give it that context.
So I think there's a clear understanding of that
side on the enterprise use cases and then a clear
understanding that you've got to go and review it.
And then on this point about, like, well,
you know, what is...
We forget that formal languages came out of natural
languages for a reason. We didn't
start with, like, formal language. It wasn't like, oh, it's much
easier to speak in English, so just speak in English. It's the opposite.
It's like we had this natural language. We're like,
it's very tough to convey the information that I want
to. You and I are both experts. We
understand the solution space, so let's communicate more efficiently, right?
So to think that this somehow wouldn't happen.
And that's what jargon is.
Of course, jargon is just a formalized way that people who have domain expertise talk to each other.
That's exactly right, yeah.
So the thing that is kind of the most, like, fun to think about right now, at least, is, and maybe you can give us a little history lesson on this and some interesting parallels:
when does the style of work change because of the tool, versus
the tool sort of adapting to the style of work?
And so we're like only in day one of this,
but I'm starting to see kind of some patterns emerge,
which is we thought agents would go and learn how we work
and then automate that.
And so basically agents conform to how we work.
The question is when is the moment
when we conform to how agents are best used?
And you're seeing this in a couple areas.
So you're seeing this in engineering to start with,
which is like people are saying,
okay, I'm going to have agents
and then subagents for parts of the code base
and then I'm going to give them kind of read-me files
that the agents read,
and then I'm going to actually optimize my code base
for the agent as opposed to the other way around
in other forms of knowledge work.
So within how we use Box, with our AI product,
like, you're starting to see people, like, basically give the agent,
like, its complete, you know, job.
And the workflow is now starting to be almost like,
the agent is almost dictating the workflow in the future,
as opposed to it just mapping to the existing workflow.
So I don't know, like, what the history is on this, of, like,
when does the work
pattern itself shift because of what the technology is capable of.
But I think probably where this goes has to be some version of that,
which is it's not going to just be that agents just plop into how we currently do our work
and then just automate everything.
I do think you start to change what the work is itself.
And then agents actually go in and accelerate that.
Well, as important as that is, it's actually more important.
Because what happens is, there's, to reuse the word in a different way,
this anthropomorphization of work:
what happens is that the first tools
actually anthropomorphize the work.
And so, like, if you go back,
this is every single evolution of computing.
I mean, like, how long did it take for Steve Jobs
to get rid of the number buttons on a smartphone?
Like, they still had number buttons.
Or, like, you look at cars,
and until Elon got rid of all the controls,
everybody kept all of the controls.
I don't want to get in that fight.
But, like, what happened with every technology
shift is, you know, if you were to look at what accounting software looked like in the '60s,
before IBM said, stop, we all use double-entry, but we need to have people skilled in how
computers can do the accounting, not how people can, because we're never going to figure out
how to close the books.
Right.
If we have to automate this whole room of people in green eyeshades that have a manual
process based on how far apart the desks were.
Right.
And everything that happened with the rise of PCs and personal
productivity started off, and I always use this example because I've watched it happen like five
times now, which is, the first PCs that did word processing, the biggest request was how do I fill
in, like, expense reports? And so this whole world grew up of tractor-fed paper that was pre-printed
with the expense report. And so then in software, we wrote all of this code, like, are you using
an Avery 2942 expense report, or is it a New England Business Systems A397?
And, like, you know, and then you had, like, these adjustments in the print dialogue, like, 0.208 inches, and you moved little things.
And then you would print out, like, "dinner, $22," and that was all you printed.
And then someone said, you know, we could use the computer to actually print the whole thing.
Right.
And then, like, fast forward, and finally Concur said, you know, why not just take a picture of the receipt?
And then we could do all of it.
And so then the whole thing gets inverted.
And every single business process ended up being like that.
And then there are things that really, really do change the tools.
Like when email came along, you know,
it used to be, to prepare an agenda for a meeting,
somebody would open up Word and type in all the things and then print it out,
and everybody would show up at the meeting with this very well-formatted thing.
And then, like, email came out.
And that whole use case for Word just evaporated.
And then an email agenda became no format,
nothing, just like, here are the eight things we're going to talk about.
And you show up and everybody's like, did you get the agenda?
You know, what's interesting about the AI one is it's kind of, it's like,
we're seeing the same thing, but vis-a-vis AI.
So nobody really predicted the generative stuff.
And we've had AI for a very long time.
So we had chat bots, we've had, you know.
And so you had these kind of like AI-shaped holes in the enterprise for a long time.
And a lot of the mistakes that we see today is people are taking the generative stuff
and trying to kind of cram it into the old models.
Whereas really there's a new behavior that's emerging,
that's very much more, like, it used to be you'd centrally sell, you know,
AI to some platform team, and then they would kind of try to get the NLP thing to work
or the voice to work for, like, talking to people on the phone for support,
and it was this kind of very central thing.
A lot of the adoption that we see is, like, much more individual, for example.
And so I just think that there is a bit of a mismatch that we're seeing now
that's getting ironed out, too.
Well, and so I think the question is, yeah, are we in the phase
where we're trying to graft the agents onto, basically,
what we've been doing for 30, 40 years of software?
And is this going to be actually, like,
the first real step-function shift we've seen
in what the workflow itself should look like?
Oh, we are.
I mean, like, if you, you know, remember, people, like,
I tried to jam the internet into Office.
Right.
And it was fun to watch.
But, I mean, you were, you were not watching.
But, like, everybody around
was, like me, trying to jam the internet
into their product because that's the only way
you could envision it.
And it didn't really, like, you were like,
well, where else would the internet go?
Like, there's no word processor on the internet.
Like, there's no spreadsheet on the internet.
And then other people would be like,
well, let me just try to implement Excel
using these seven HTML tags with no script.
That turned out to not be a really good idea either.
The best was like, let's do PowerPoint.
Well, how do you do it?
You give them five edit controls, they type in
their bullet points, and then we'll generate a GIF on the back end
and send it back to you as the slide.
Yeah.
Okay, that was not it.
And so there was that whole, like that.
I think actually maybe the main point
is just the durability of office.
It transcends all disruptions.
I like to think it pretty much rises above everything.
Yeah, exactly.
But the thing is, that's where we are now,
with everybody, you know, like...
but do you think, I mean, just to dig a little bit,
so do you think this is similar to the Internet
and that is a consumption layer change?
So I always viewed the Internet as very much a consumption-layer change.
Like I go to a, you know, instead of going to my computer,
I go to the Internet.
But otherwise things kind of are the same.
AI has got this weird quirk,
which is, for the first time I can recall,
programs are abdicating logic to a third party.
Like, we've always abdicated resources.
Yeah, yeah.
So we'd be like, okay, I'll use your disks or whatever,
but, like, I'm writing the logic.
But this time it feels like we're changing the consumption layer.
So, like, you know, when my son, you know,
talks to an AI character,
you know, he's not going to Wells Fargo.com,
he's going to an AI character.
And so, like, that's changing kind of how we're interacting
with the computer.
But also, these programs are no longer kind of
written by a human in the same way.
So I feel like the change is maybe a bit more sophisticated.
Oh, I think, but this is the,
this is why it's a platform shift,
and not just an application shift.
Like where each platform shift
changes the abstraction layer
with which you interact with computing.
But what that also does is it changes
what you write the programs to.
Do you remember ever abdicating logic?
Oh, here's a great,
here's an example of how disruptive this can be.
The first word processors in the DOS era,
the character mode era, they all implemented their own print drivers
and clipboard.
So if you were Lotus and you wanted to put a chart into a memo,
you couldn't because you didn't have a word processor.
You didn't sell a word processor.
So you actually made a separate program
to make something that the leading word processor could consume.
And if you were WordPerfect, your ads said,
we support 1,700 printers.
And you won reviews because you had 1,700,
and Microsoft had 1,200.
And so then along comes-
That's a great one, actually.
And then Windows comes along.
And if you were trying to enter the word processing business,
step one, I need to hire a team of 17 people
to build device drivers for Epson and Okidata and Canon printers.
Because you can't get them anywhere.
Microsoft came along and for Windows built print drivers and a clipboard.
And all of a sudden, and also Macintosh did it,
all of a sudden, there was a way that two applications
that had no a priori knowledge of each other could exchange data.
But, of course, if you were WordPerfect or Lotus, that's the disruption: you got creamed by that,
because your ability to control your information went away.
And so, and what happened was a bunch of developers were like, wow, this is cool.
Because, when we did C++ for Windows, like, the demo,
in fact, at that Cubberley Community Center, I would go and I would show brand new Windows programmers
in 1990, like, hey, you don't have to write print drivers, and you can use the clipboard.
And, like, literally, standing ovation of, you know, 10 people at the thing.
And they were, like, more than happy to let data interchange between products.
Because they were like, that's nothing but opportunity for me.
They probably, from an emotional standpoint, felt exactly the same way as, like, a vibe coder does today.
Which is like, you've just given me the platform, and it was just a print driver.
The writing-code-for-Windows book was like this big.
But the writing-a-device-driver-for-an-Epson-printer book was this big.
Writing it for a Canon printer was this big.
But I'm just actually trying to think of, like, that,
but the paradigm shift is the same,
which is there's been many times
where we've reduced the amount of work a developer takes.
But I just don't remember ever where the programmer abdicates logic.
Like, so for example, SDN didn't?
Not logic.
Like, I would always say what is correct and what's not correct, right?
I think you undersold it, though.
No, this is the thing, by the way, everybody,
that Martin invented and worked on.
Oh, yeah.
But it's a big deal.
Maybe we should post more of your pitch at the time.
If you're not pitching this, yes.
Well, no, look, like logic specifically, which is I am writing an app.
My app is, whatever, some vertical SaaS app
for a certain customer base.
The answer the app gives is based on logic
that I've written historically, right?
Like, if I run it on the cloud,
the cloud is not producing an answer, it's providing resources.
If I'm using your device driver, it's, you know,
providing access to a device's resources.
But if I'm like, hey, large model, tell me the answer here.
You're actually abdicating application logic.
Maybe you're right.
Maybe this is not going to feel the same.
I think you're almost playing, like, incumbent, in the sense of trying to,
no, trying to, like, decide this is abdicating the logic and this isn't.
When in fact, like, it really was, like, a huge competitive advantage for WordPerfect.
And they didn't want to give it up.
And they fought against it.
And there were a number of people who didn't want to do, like,
a great clipboard.
And the next example, of course, is the browser,
where people literally gave up,
like you, in Windows or in Mac,
you could rasterize anything you wanted.
You wanted a button that you pushed
and it spun and animated like a rainbow.
You could do that in your product.
But then the web came along,
and you're like, wow, I have to use a gray button
that says, submit.
And that was like...
Yeah, I guess it's a point.
We do use a bunch of third-party things.
Well, but it took a long time for those to show up.
And so early in the Internet,
magazines in particular and the printed media were the ones who absolutely wouldn't go to the
internet because they would not give up their ability to format.
And this is another part about the tooling and where what's going to happen with AI is
that a huge amount of the productivity software space today is like the preparation of output.
Like, Office is basically a format debugger.
Right.
Like all it is is like 7,000 commands for how to do kerning and bold and italic.
And, like, it turns out AI not only doesn't care,
you can ask it to make whatever you want.
Like, you could just say,
I'd like this to be a double-index part chart thing.
That's not a thing, I just...
But you can do that, and it will just figure out something
that looks like that, and you'll go, ooh, cool.
And this is where it gets to this disempowering of experts
and who's not an expert.
When productivity software arose,
the big thing about it was that there were people
who figured out how to, like, make, like, killer charts.
Like, Benedict Evans, like, killer chart guy.
And there were people who were like, every meeting started with,
how did you make that chart?
Like, I could be on an airplane and somebody would be like making a shady chart.
So, like, in this case, the abdication is like,
actually, what's the way to visually represent the data?
Right.
And it turns out, well, because like 90% of the people
never really got to be expert at doing that task,
even though 90% of the tool is about, like to even,
so what happens is each generation.
But the programmer didn't have to keep the logic in this case.
This is the user.
But, you know, what's the user, what's the programmer
in that.
And in fact, what the programmer was doing
was like, we would invent a thing
called wizards or whatever, you know,
and that would make a whole bunch of choices
for you, style sheets or whatever.
And so in a sense, we were making a bunch of choices
for the user, which to the experts
looked like disempowering the experts
who were tweaking it all.
And so this is all like,
there's some Steve Jobs quote
that he loved, about Schopenhauer,
how if you've seen the conjurer...
The grand continuum of:
if you've seen the conjurer,
it's not a trick anymore.
And I really feel like this is like the third or fourth time
that this has happened just in my lifetime of watching this.
So something that's really caught my attention
because it's from the most senior people I know
is that a lot of very senior developers
are spinning up a lot of background agents,
like code agents, and they're interfacing at like the GitHub PR level.
Right.
And so it's not obvious to me why you do a bunch
as opposed to one, and it's not obvious to me
why you wouldn't interact directly.
So it feels like something's going on here,
but I'm not quite sure what,
and I would love your thoughts.
Well, my read on it, and I guess I would also kind of throw out what then happens next as a result of this, because to me it's actually a little bit of an epiphany on what future work design could look like in this world, because engineers, back to the prior conversation, are just the first to experience this.
But my read, from talking to kind of similar folks that are, like, all in on this, is it's this mix of basically, effectively, the context rot
problem, which is, you know, the more that we put in the context window, the more it gets confused,
and the worse the answers get.
And so you have to have some kind of way to partition what an agent should work on.
And we see this in building agents internally, which is, you know, the panacea that I think
we maybe would have hoped for is like, well, you just put a million tokens into the context
window and then obviously...
Oh, so you're seeing this as almost like a counter-trend to the AGI thing. It's almost like the,
it's like the opposite.
It's the opposite, but it's, it's...
It only works because the models are so good.
Yeah, but you're giving more things, more specific tasks
rather than one thing, less specific tasks.
Right, and so, but like, I think this is why it's happening.
So basically, the craziest version of this is I was talking to somebody
who is in startup land and they have, to your point,
they have all these sub-agents, but what's amazing is it maps one-to-one
to each microservice in their code base.
And so they have an agent per microservice.
They have effectively a read-me for the agent,
and that agent owns the microservice.
And they, I don't know the specific number,
but let's just say you could have, you know,
dozens or hundreds of these things going on.
And you're effectively mitigating this issue,
which is if you just said,
here's my entire code base, you know,
go run wild, you know, to one agent,
it will just, you know, produce worse and worse code over time
because it's going to have context rot.
It's not going to know exactly what you're trying to do
in that one area of the microservice,
but the sub-agent model seems to be working for that paradigm.
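[Editor's note: a minimal sketch of the pattern Aaron describes, one scoped agent per microservice, each seeded with its own README so no single context window has to hold the whole code base. The service names, file layout, and `run_agent` call are hypothetical stand-ins.]

```python
# Dispatch a task to the agent that "owns" one microservice, scoping the
# context to that service's README to mitigate context rot.
from pathlib import Path

SERVICES = ["billing", "auth", "notifications"]  # hypothetical services

def dispatch(run_agent, task: str, service: str) -> str:
    readme = Path(f"services/{service}/AGENT_README.md").read_text()
    return run_agent(context=readme, instructions=task)

# Fan the same task out; each agent works only its own slice, e.g.:
# results = {svc: dispatch(run_agent, "fix flaky tests", svc) for svc in SERVICES}
```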
I love this counter pattern.
Because everybody's like, you know,
models will get, you know, smarter, and you'll give them higher-level tasks,
and they'll do things longer.
Yes.
This is a counter one.
I want to tweet that, but you have more Twitter followers.
Oh, we can collectively do it.
But then, so then the question is, okay, so let's just assume this works in engineering.
You have this interesting dynamic, which is, well, then, that means that, like, some of the coding practices
will be pretty different in the future.
We've talked about this idea of, you know, the individual engineer becomes the manager of agent,
so that was already kind of, I think, a well-understood path.
This is, like, a supercharger of that concept.
And then the question is like, how does that translate to almost every form of work?
Because if I am now, you know, the lawyer and working on cases and I can have 20 sub-agents that all, you know, do a different case and then basically, you know, come back in some kind of task queue that I'm going through, like, obviously, one, just the sheer leverage now you get is going to be insane.
But I do think the way that you, you know, might even organize the work, and what the, you know, workflows within
an organization are, are, you know, inevitably going to change as a result of that.
Oh, but, I mean, I think this just, right, gets to the, you know, essentially that the flow
in the workflow has been serialized or linearized based sometimes on knowledge, but other times
on tooling.
And so what happens when the tooling changes is you just get this realignment of what's truly
serial and what's not.
Like, if you're planning an event for a company, which is still going to
keep happening, you know, like, oh, I have to book the venue, I have to invite all these people,
we have to create all these materials. Well, they're actually not particularly gated on each other.
Right. But if you have an events person, they're gated. Right. And so now an events person
can start spinning up all of these different elements. And then they're going to come back. Like,
I've gotten as far as I can on collateral until I get a logo for this event. Right. Like,
I've gotten as far as I can on invites until I get the date and the time and the venue. Right. And
And I think there's no reason why you can't spin up all of those in parallel, because, of course,
how does that happen today?
Well, if you're a company and you use Box and you've done, this is your 58th event, you know,
you have a folder called Event, and people take the folder and go Event 59, and they make a copy
of it and all the stuff in it.
And, well, if you think about that workflow, that's exactly what a series of different
background tasks or agents could go do.
And so I think the reason that you could be
doing all that in coding is, well, there's a natural way to
break that up, because there's a bunch of programming.
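[Editor's note: a toy version of the event-planning example above. Tasks gated only by real dependencies can fan out to agents in parallel; the task names are hypothetical.]

```python
# Which tasks are unblocked right now? Everything unblocked can go to a
# separate agent in parallel; gated tasks wait on their dependencies.
DEPENDENCIES = {
    "book_venue": [],
    "pick_date": [],
    "design_logo": [],
    "create_collateral": ["design_logo"],         # gated on the logo
    "send_invites": ["pick_date", "book_venue"],  # gated on date and venue
}

def ready_tasks(done: set[str]) -> list[str]:
    return [t for t, deps in DEPENDENCIES.items()
            if t not in done and all(d in done for d in deps)]

print(ready_tasks(set()))            # ['book_venue', 'pick_date', 'design_logo']
print(ready_tasks({"design_logo"}))  # 'create_collateral' is now unblocked too
```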
Right, but there's the other side. There's also a bit of an indictment on the ability
of you to give it a high-level task, you know; it kind of suggests that the human being needs
to be, you know, giving them more granular orders. Otherwise, you know, to start a company,
you'd issue one prompt, you'd go to the beach for six months, come right back, and you'd have
a full company. Which is this almost re-anthropomorphizing effect, which is, like,
it turns out we did kind of figure out division of labor. We figured it out in the
context of a lot of physical, you know, kind of analog limits that we clearly had,
that agents won't have. But now, you know, there's no kind of, you know, total free lunch, so you
have this context rot issue, which is that you do actually have to subdivide the tasks
at some point. So then the question is, like, what are the right... I mean, it may not be a context thing.
Like, the Occam's razor here is you need to give them specific instructions for specific tasks.
And if you give them higher level instructions, independent of context, they just don't know what you want.
And this gets to the formal language part.
Like, at some point, if you try to use, like, the Uber Frontier to get the whole thing done, you have to tell it the whole thing.
Yeah, exactly.
And that just seems like a lot of work.
Whereas if you have to tell it less, because the part of the model you're using knows more, it's basically
a different way of thinking about templates
or a different way of thinking about starting artifacts
or scoping the context in a generic world.
But then there's this, I mean, it might though be the right
architecture in general if you assume that, you know,
we're never going to get to a point where the model is just 100% perfect, right?
And so it might also be the right kind of architecture design
because at some point you're going to have,
you don't want an agent or a set of agents to go so far down a path
when there was a step that it needed to check in with you on
because there's just the compounding effect of that.
So you do need to kind of subdivide the work also
because if you do have gating moments
that are going to have a bunch of dependencies,
the agent does need to know, like, at what point
should I roll that back up to the user?
Yeah, against the common narrative,
now that I think about it,
it seems that the trend is prompts are getting more complex,
not less,
and we're seeing more agents, not less,
doing more narrow tasks,
which is almost this kind of counter-AGI narrative.
It's almost like these are much more specialized
and much deeper, working with much more specific instructions.
And there's like sort of a history of this, wow,
maybe we can actually solve it if we're specialized a little bit more.
Like if you take expert systems, at first they thought
expert systems would just be experts, and they would just know.
And then like by the time you got to the actual published research,
like at Stanford, it was like, this is an expert system
in deciding on what type of infectious disease you have,
as long as you have one of these seven.
No, literally. There was a paper that was, like, there's just one digestive disorder that actually has a medical expert system.
I do want to, though, because you wouldn't want, like, there is one big difference, which is somehow the model itself is packing in the inherent intelligence or capability to solve all of these.
Like, we are benefiting from the fact that at least you can build these all on Claude 4 and GPT-5.
And that gives you a lot of leverage.
Let me try to, like, demonstrate this one with an old-
person example, which was, like, early in the PC era, there were word processors
and spreadsheets and graphics and databases. And a lot of people were like, why are there these
four programs? There should only be one program. And my answer to that, like, which often involved
screaming, was, have you been to an office supply store? Because if you go to an office supply
store, there's, like, paper with numbers, and then there's blank rectangles of paper, and then
there's transparency paper.
And, like, this has been around a really long time.
There's some reason that these are different.
Human context for...
How many minutes did it take you to know
Google Wave wasn't going to work?
Zero.
Okay, okay.
It was instant.
It was instant.
No, I mean, but this was a thing.
There was a product, ancient Mac product
that was lauded by the industry called ClarisWorks,
which was like, oh, it does...
You could have a spreadsheet inside a word processor.
And my first reaction is,
have you seen a person use a spreadsheet?
Because their monitor can't be big enough.
So they just want as many cells as you could possibly have.
And you're sitting there saying it has to fit
on an 8.5 by 11 sheet of paper on a Mac.
And I think that one of the things that happens
is that these lenses that humans bring to specialization
really, really matter.
And if you think about the medical profession
and you think about going from a GP to the radiologist
to a specialist to a nurse practitioner
through the whole series,
they're each going to look at,
and use AI in a different way.
So then the only thing would be, okay, so that
level of specialization and division of labor emerged over a hundred-year period,
you know, alongside tools, but also driven by a lot of the physical
constraints and realities of how organizations emerged.
So the only question would be in a post-agent world in 10 years from now,
do those divisions of labor look exactly the same, or do those shift also because the agents
collapse some of the functions? And is there some blurring, and then is there just a new set of
roles? Like, clearly there's a role emerging in a bunch of organizations, which is like, no, I'm just,
like, my role is, like, I'm the AI productivity person, and, like, I just, like, have a way of, you know,
creating all new forms of productivity in the organization with AI. So, like, clearly we'll have a bunch of
new roles. But is our current division of labor going to also collapse in some interesting ways because
of AI?
Well, I think that, like, if you actually stick with the medical example, we're just going
to wake up and there's going to be way more people
with way more specialties.
Right.
And AI will have created more jobs.
And in the interim,
do you think AI causes more specialization over time?
Absolutely.
Because everyone's getting, every human is going to be way better.
Right. And more knowledge will accumulate.
And I think this is a thing that has really happened
with computing that people forget.
Like, there used to just be like this morass of marketing.
Right. And R&D.
Right.
And all of a sudden, like,
there used to just be coding.
And then there was coding and testing and design
and product management and program management
and usability and research and all of these specialties.
And all of those had their own tools.
Go to a construction site.
I remember growing up, our neighbors built a house.
We lived in an apartment and they built a house.
And there was Clem, the carpenter.
And you built a house with a guy named Clem
who used all the tools and everything.
And now, like, you build a house.
And it's like this 20-person list of subcontractors,
all who have whole companies that do nothing but put in pavers.
You know, and that's what it's going to be.
I mean, there's been a long disaggregation in the history of IT, right?
Like, everything was in the same sheet metal; then, you know, you disaggregate the OS and the hardware.
Then you disaggregate the apps.
Right.
And then it was kind of interesting.
Like, in the last 15 years, we saw the app, and, like, independent functions got disaggregated, right?
It's, like, almost everything became, like, an API would become a company, right?
You'd have, like, the Twilios; like, auth became a company, et cetera.
And so it may very well be the case
that every agent becomes like
a whole new vertical
and a whole new specialization.
And then you can actually build a company
around it. Like it may be the case that today,
just like with APIs, one company
will have a whole bunch of agents. It may be the case in the future
that a third party will provide that agent
as an independent company. Right.
The opportunity, to your point,
is really there for that
because, like,
it used to be like the impedance
to creating a company and distributing was infinite.
It used to be ridiculous to think that a single API
like auth could become a company, but then, you know, of course it became...
Or it used to be ridiculous to think you could build a whole company
out of signing documents.
And like not just a whole company,
but then all of a sudden you realize, wow,
the addressable market for that is huge.
And it's way bigger than signing
because of all the stuff that got done
that was just baked into a company causing headcount and waste
and fraud and abuse.
Well, I think you can kind of underwrite thousands of these companies emerging.
So Jared Friedman had a tweet about basically, like, go deep on a workflow, you know, basically do the job of some part of the economy, like payroll specialists, and then build an agent for that.
And it's not obvious that there's not literally like a thousand of those.
So, by every vertical and every department.
I just love this because this is, like, literally the anti-AGI view.
It's basically following, like, the long arc of computer science,
where as the market grows, the granularity
at which you can create a company gets finer.
Well, it's also economic growth.
Like, take that payroll example.
Like, today, just like Salesforce,
which is always my favorite example,
like the idea of having a productive sales force
used to just be a consultancy.
Right.
And the only way you could ever fix it
was hiring a consultancy to show up
and analyze whatever it does
and then do a report that says,
this is how you need to reorg
and it usually meant go the opposite of whatever you have.
And then they would leave.
And then, you know,
people tried, but there was no cloud.
So to build, like CRM, you had to do all that consulting work
and then roll it out.
And then it was static and you couldn't maintain it.
And then all of a sudden, there's like,
oh, here's Mark Benioff, and here's a whole way to do all this.
And not only that, the people actually like it,
and they think they're better at selling
because they're using their phone
and they're putting in a few notes about this client,
which helps everybody.
And I think that's what's really going to happen with all this.
And so suddenly, something that looks really, really small
becomes like a whole thing,
because there's no problem with distribution,
there's no problem with customization.
You know, we'll actually have ways to solve security
and privacy, just like we solved reliability and things like that.
And I think it's just, I mean, look at, you know,
the stuff that you're a world expert in,
the stack of internet technology,
of networking technologies. I mean, if you would have asked me 15 years ago,
would CDNs be companies?
I never would have. I'm like, that doesn't make any sense.
Like, how could you have a company
that's a cache?
Yeah.
I think people are probably way too afraid
of the model providers kind of eating them.
And I think it was basically a phenomenon
in the first wave,
which was, if you were just doing basic, like,
if you had figured out that you could do something
on GPT, you know, 2 and 3,
where it was a text interface that produced more text,
like, yes, ChatGPT ate you.
Like, that clearly happened.
Yeah.
But basically since then,
most enterprises want kind of
applied use cases for AI and AI agents.
And so it's not obvious that the current crop of companies gets eaten,
if you're doing AI for healthcare,
if you're doing AI for life sciences,
if you're doing AI for financial services,
if you're doing AI for coding at the right parts of the stack.
AI for coding may be the one asterisk area,
which will be hyper-competitive,
simply because the model companies, like,
don't want to use somebody else's product
to build their own models.
And so that kind of almost forces them
to get really good at AI for coding.
But with that as the one kind of, you know,
exception, I think basically we're just in a five-year period right now where you're going to have
to build agents for every vertical, every domain, and there's a playbook that's starting to
emerge of what that needs to look like. I mean, so I think there was kind of a technical headfake
that happened early on, which was pre-training. So pre-training really was a 10 out of 10
technical innovation. Like, two years ago, I had a friend that
was building, like, their own aging model, a post-trained aging model, like, we're going to make it so
good at aging. You know, this was a text-to-image model, and they wanted to make it
so, like, old people looked really good in it. And then, of course, the next version of, like,
Midjourney or whatever comes out, and it does a better job of it. And the thing with pre-training
was you're just kind of consuming all of the world's existing data. You're training on all of that,
and it perfectly generalized, right? But it feels like technically that's past, and now we're
more in post-training and RL, which is a lot more domain-specific. And so...
Well, the moment that you have access to some set of data that is only...
Exactly.
Just for that enterprise.
And so who gets permission to access that data?
Who gets permission to do the workflow on it?
It's going to be applied companies.
Yeah.
So, yeah, if we had an infinite number of tokens,
then the models would just continue to generalize.
But it's pretty clear that that's not happening.
And so now we're going into, which we all understand very well,
which is now companies have to choose which domains they go into,
and they've got to solve the long tail of problems there,
get access to the data, et cetera.
And I also think that the shadow cast by large companies,
and I've been that shadow,
the we're-going-to-put-you-out-of-business-and-stomp-you shadow,
it's ridiculous.
And it has never in any technology wave lived up to the fear that people have.
Look, if you built a new word processor in 1995, you were an idiot.
Like, that was not the thing to go build.
Yeah.
You know, and, but, you know, there was a time just 10 years earlier
where, like, companies built standalone spell checkers.
Like, it was just a thing.
You went to the store and you bought a spell checker.
And, like, it had more words than the other spell checker.
And so the thing that's not being said now,
which we should do a whole episode on, is,
like, what is the actual platform?
Yeah, this is a great topic.
Because, like, it's all well and good
to say that the large models will go subsume every application.
The thing is, the minute they start doing that,
no one will be on their platform.
Right.
Because, like, no developer is going to sit around
and build for you if you're going to subsume them.
And there's a phrase for this
in the Mac and Apple world: Sherlocking.
And so it does, it has a real chilling effect.
And that's one of the things all the model people
are going to learn very, very quickly.
There's a chilling effect, but there's also,
I think, just a real problem of, like,
it's hard to go deep in 50 categories.
Like, you just can't.
Modulo pre-training.
I think everybody's scared because pre-training was actually the one thing
that was good at that.
And now they actually have to go deep, too.
Yeah, I agree.
You do have to, like, at some way,
becomes purely just an execution issue,
which is like, I don't know how anybody would set up a company
to be able to beat 50 startups
across 50 different domains.
No, it's ridiculous.
And in fact, like, it's only good
because what happens is that the big company
raises the awareness of a whole category.
And then you just swoop in and you go,
to them, I'm just a feature.
Right, yeah.
But to me, this is my whole life.
Right.
And you're going to win.
I just, I always come back.
There's a whole company that just signs things.
Right.
Like, I can't, I cannot believe there's a whole company
that just signs things.
I have so much to say about this topic.
I mean, even minimally, if you graph,
like, the cost to produce,
so, the willingness to pay for an inference
versus the cost to serve it,
something like, for most companies,
for most cases,
20% of the inferences are 80% of the cost.
Actually, the job of the application
is just to choose those ones,
which tend to be more domain-specific.
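As a rough, hypothetical illustration of that 80/20 point, not anything shown in the episode: if per-request serving costs are heavy-tailed, a small share of inferences really can dominate total spend. A minimal Python sketch, with made-up numbers:

    import random

    random.seed(0)

    # Simulate 10,000 requests with heavy-tailed (Pareto-like) serving costs.
    # The tail index of 1.2 is an arbitrary assumption for illustration.
    costs = sorted((random.paretovariate(1.2) for _ in range(10_000)), reverse=True)

    total = sum(costs)
    running, n = 0.0, 0
    for c in costs:  # walk down from the most expensive request
        running += c
        n += 1
        if running >= 0.8 * total:  # smallest set of requests covering 80% of cost
            break

    print(f"{n / len(costs):.1%} of requests account for 80% of total cost")

With a tail that heavy, the printed share typically lands well under half of the requests, which is the shape of the claim: an applied company can focus on the small, expensive, domain-specific slice.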
This is the problem of inviting the three of us
on here, which is like...
We just opened up like the next 10 hours.
Getting us to shut up is the trick.
Guys, thank you so much for coming on. This is fantastic.
Thanks for listening to the A16Z podcast.
If you enjoyed the episode, let us know by leaving a review at ratethispodcast.com slash A16Z.
We've got more great conversations coming your way.
See you next time.
As a reminder, the content here is for informational purposes only.
It should not be taken as legal, business, tax, or investment advice, or be used to
evaluate any investment or security, and is not directed at any investors
or potential investors in any A16Z fund.
Please note that A16Z and its affiliates
may also maintain investments
in the companies discussed in this podcast.
For more details, including a link to our investments,
please see A16Z.com forward slash disclosures.