a16z Podcast - Where We Are in the AI Cycle
Episode Date: June 27, 2025

In this episode of 'This Week in Consumer', a16z General Partners Anish Acharya and Erik Torenberg are joined by Steven Sinofsky - Board Partner at a16z and former President of Microsoft's Windows division - for a deep dive on how today's AI moment mirrors (and diverges from) past computing transitions. They explore whether we're at the "Windows 3.1" stage of AI or still in the earliest innings, why consumer adoption is outpacing developer readiness, and how frameworks like partial autonomy, jagged intelligence, and "vibe coding" are shaping what gets built next. They also dig into where the real bottlenecks lie: not in the tech, but in how companies, products, and people work.

Resources:
Find Anish on X: https://x.com/illscience
Find Steven on X: https://x.com/stevesi
Watch Andrej Karpathy's talk: https://www.youtube.com/watch?v=LCEmiRjPEtQ

Stay Updated:
Let us know what you think: https://ratethispodcast.com/a16z
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Transcript
People are still trying to figure out how everything works.
What I love about the vibe writing concept actually is it's a place in which full autonomy can be fulfilled today.
You're prompting, although it's English-like, it turns out you're just programming.
And you're just programming and prompt.
I've had so many conversations with product managers over the last two years about the death of product management.
It's the end of the field, why do we need PMs?
It was extreme in 1990 and it's extreme today.
Where are we really in the AI computing shift?
Is this the Windows 3.1 moment, or more like the 64K IBM PC?
In this episode, part of our This Week in Consumer series,
I'm joined by a16z general partner Anish Acharya and board partner Steven Sinofsky,
former Microsoft president and one of the most influential product thinkers in tech,
to unpack where we are in the AI platform cycle and what's coming next.
We dig into the framework shaping this moment,
partial autonomy, jagged intelligence, vibe coding,
versus vibe writing, what builders get wrong about agents, what Google I/O signals about
platform strategy, and why the future might be less about killer apps and more about control
sliders. We begin by discussing this week's talk from Andrej Karpathy on why software is changing
again. Let's get into it. As a reminder, the content here is for informational purposes
only. It should not be taken as legal, business, tax, or investment advice, or be used to evaluate any
investment or security and is not directed at any investors or potential investors in any A16Z fund.
Please note that A16Z and its affiliates may also maintain investments in the companies discussed in
this podcast. For more details, including a link to our investments, please see a16z.com/disclosures.
Anish, Steven, we were having such a good conversation offline that I wanted to get this on
the podcast. There are a few topics we wanted to discuss.
First, we were all fascinated by Andrej Karpathy's talk at Startup School.
Steven, what did you find so interesting?
What were your takeaways or reactions from it?
Well, I totally love the talk.
He did an unbelievable, like a philosopher king version of where we are.
And I just found his metaphors really compelling.
In fact, what I might do is even take it further back and just say,
since he used an analogy of, like, where we are in computing,
him talking about the Windows 3 era and stuff like that,
and having lived through all of them,
I tend to think we're at the 64K IBM PC era of the microcomputer.
And the reason I think that is actually a technical one,
which is that we're at the point where people are still trying to figure out
how everything works.
And all the coding and all of the energy
is working around like these very basic working problems.
Like with the PC, it was like, okay, we have 64K of memory,
and our programs are all too big,
and we have no display and all these problems.
And with AI, people are like,
it's going to replace search.
It's going to replace Excel,
and it's going to replace all these things.
But it doesn't add very well.
It gives you a lot of errors.
Like, the thing that you say it's going to do,
it just doesn't even do yet.
So I feel like we're at a point that is just so, so early.
And he did a fantastic job of sort of making that argument.
You know, the thing that struck me the most was he talked a lot about our relationship with this new tool.
You know, and in a sense, we want to use it in the same way that we've used all the other computing tools and technologies we've used in the past.
But he really talked about this kind of inversion of the relationship of LLMs as people, spirits, the fact that they have jagged intelligence.
So to me, that sort of meta point he made was one of the most interesting.
We have to relearn how to use this type of tool before we know how to be productive with it.
I think tools is a super interesting point
because the talk is anchored in tools
but the world itself is anchored in tools
and the early stages of a platform
are always about tools
and so you kind of get a little confused
like right now, of course,
he was talking about vibe coding,
clearly because he pioneered the term,
invented the concept, and is living it.
And it's very interesting
because I actually think
coding is one domain
that always works best early in a platform
because, well, all the customers
of the platform are developers
and they're going to make their tool
kind of work and come along.
But I really think that the most interesting thing for me,
what's being underestimated in the near term,
is sort of vibed writing.
I mean, it seems weird to say anything with AI is underestimated
because Lord knows that's not what we are.
But the thing is, vibe writing is so here.
Like, if you're in college, you're already vibe writing.
And businesses are still working through the,
well, can we use this? It doesn't seem appropriate.
And that's a thing I've definitely lived through with word processors.
You know, I had to get permission from the dean in college to use a computer to write papers.
But this vibe writing is absolutely a thing.
And it is really, really no different than when calculators showed up.
And all of a sudden, just doing math homework involved using a calculator.
And people like, well, you're not going to know how to do math in the future.
And it's like, I won't have to know how to do math.
That's like the whole point of a tool.
Like, I have a power drill.
So I do not know how to use, like, one of those Amish drill things, you know, and the world moves up the stack. And so that's where we are,
and it's just super exciting. What I love about the vibe writing concept actually is it's a place
in which full autonomy can be fulfilled today. So you can ask the model to vibe write something,
you know, really detailed and compelling, and it'll do a great job. Whereas with vibe coding,
I think there's a ton of constraints as to what the model can actually do versus what it can
conceptually do. And understanding those boundaries and constraints is going to define a lot of the
text-to-code stuff for the next two years.
Well, I'd push back a little bit on that because, of course, I agree on the coding side.
And I think one of the things developers do early in a platform is they love to tell you that
they're doing something every day and it's working, but it actually just isn't.
And that's just what happens early in a platform.
They tell you all these things that they say are easy and they're actually not and they
spent 18 hours struggling with something that didn't work.
But on the vibe writing side, it also hits a point that I just think is so, so important,
which is, yeah, you can prompt it to spew out a bunch of stuff.
But if you have a job and your salary depends on you submitting that
or you're a student and your grade depends on you submitting that,
it actually better be right.
And you can't just say, look, the AI wrote this, and here you go.
And I think people don't get confused when it comes to, like, math.
Like, everybody knows you have to go check the math
if you ask it to do a table and then add a column that does math.
But we're going to just see endless, endless human-wasn't-in-the-loop vibe writing things.
And it's just that with programs,
you can't really see that right away
because in order to actually distribute it
or get someone to use it, you had to at least fix
the initial bugs. We'll only see them
later when there are security bugs,
authentication bugs, passwords stored
in plain text, or a zillion other problems
that are going to happen from vibe coding.
In a sense, we've seen this already, right? We saw a bunch of
lawsuits that were citing case precedent
from cases that don't exist. So
maybe this is actually the operative point, which is
there's full autonomy, there's partial autonomy,
Maybe partial autonomy in writing is moving us from writer to editor, but you still have to be the editor.
Yeah, we should also give him credit.
Many people have talked about this, but he did a fantastic job using the Ironman analogy
of how we're going to have autonomy, partial autonomy, and a slider to control what you want.
I actually think that's a fantastic analogy and a way of thinking that gives you a very clear picture from the movies.
But at the same time, people are very, very aggressive on their timeline of agents, and there's
there's a very, very long history in trying to automate things that turn out to be very,
very difficult to automate. And he did a fantastic job. He said, people are talking about like
the year of agents. Yeah, that's a good consultant phrase. Just like he said, we're in the
decade of agents. And it's going to take a decade for things to be anywhere near living up
to agentification as a meme. It's an interesting point. I think a lot about agents as
apply to financial services. And I think there's a set of problems in financial services that are
high friction, low judgment. So for example, when I want to go refinance my personal loan,
I don't really feel attached to any specific brand of a personal loan provider. I just want the
cheapest rate. So it's actually a very low judgment decision, but going and researching and
applying for a personal loan is a high friction process. That's something I would love to delegate
to an agent. I think it can do a nice job. Whereas doing my taxes, wow, like Steven, how much risk do you
want to take on your taxes? How many things do you want to report or not report? That requires an
enormous amount of judgment. And of course, it also is high friction. So when I think of the two by two
of where is automation going to come first, I think a lot about high friction, low judgment.
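The two-by-two described here can be sketched in a few lines. Only the loan-refinance and taxes examples come from the conversation; the other task labels and the scoring scheme are illustrative assumptions:

```python
# Toy sketch of the "high friction, low judgment" two-by-two for deciding
# which tasks are good early candidates for agent delegation.
# The loan and taxes entries are from the discussion; the rest are invented.

tasks = {
    "refinance a personal loan": {"friction": "high", "judgment": "low"},
    "do my taxes":               {"friction": "high", "judgment": "high"},
    "reorder paper towels":      {"friction": "low",  "judgment": "low"},
    "pick a wedding venue":      {"friction": "low",  "judgment": "high"},
}

def good_agent_candidate(task):
    """High friction (worth automating) plus low judgment (safe to delegate)."""
    return task["friction"] == "high" and task["judgment"] == "low"

candidates = [name for name, t in tasks.items() if good_agent_candidate(t)]
print(candidates)  # ['refinance a personal loan']
```

The point of the framing is that only one quadrant is a clear near-term win; the other three either aren't worth automating or can't safely be delegated.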
I want to build on that because I actually think it's super important to also consider that for anyone
to offer the alternatives to the market, there has to be an ability to differentiate, to explain.
And so you end up with this kind of thing where I just want the cheapest flight. And of course, for 20 years,
all of the flight searches and stuff
has worked on the cheapest,
but it turns out that's not actually what you want.
That's right.
Plus, a lot of people want to intervene
in presenting your choices to you.
And so this idea that all choice in life
is going to be reduced to some headless API.
Right.
I don't understand.
People have to go build that
and make a living building those things.
So to your example of refinancing a home,
like the only reason that it can exist
as a search problem today
is because the different people
who want to refinance you can target you,
with an ad and attract you as a customer and differentiate themselves on that offering.
Right.
And if you can't do that, then your ability to actually automate that task isn't going
to exist because there's no economic incentive to just be, hi, I'm the headless, faceless,
nameless, low-price mortgage leader is not really a business.
There's nothing there.
Just headless, faceless, nameless food isn't a thing.
It doesn't show up in a white can labeled food.
And then you consume it and you're, okay, all good.
I have food now.
Well, maybe Soylent, but yes.
In the future, in the dystopian future of Repo Man, that's where we end up.
But that's not going to happen.
I want the cheapest flight as long as it's not on Spirit Airlines.
Right.
I want the cheapest flight, but I'm traveling with the family of three.
I don't want to leave at 5 a.m.
No red eye.
Yeah, like, I want miles on this airline.
A lot of things don't add up to that.
This is a real thing in business.
It's a thing on the producer and the consumer side.
Consumers really, really want much more choice than they often think they do.
and anyone who's bought anything on Amazon knows they complain about the choice,
but they really don't want just, like, phone case to show up as the thing because it was $6.
I think this is a real through line through the talk, which is partial autonomy, jagged intelligence.
Karpathy is just talking a ton about the constraints of the technology,
which I think is the right thing for us to be thinking through tradeoffs around as builders.
And he does a great job, very much as this philosopher that I love.
His delivery, his tone.
Don't just go read the summaries. Don't read a post. Go just watch the video
immersively. Well, I want to get to automation and employment, particularly on the entry
level side. But first, I just want to ask the broader question of there was this idea of
AI plus human, I think it was chess, could beat AI for some period of time. And that was
kind of the co-pilot view of the world. Human plus AI will have a better product. And then
it turned out, I think it was chess, maybe it was Go, or maybe it was both, that actually,
that was a temporary thing and AI is just better. And then there's the question.
as to how much of the world is like chess,
where a human plus AI is only better
for a certain period of time
and then the models get better
or how much of the world is like something else,
where we're always going to want human plus AI because it's just better,
or we're just always going to want humans to do it.
Look, my view is that in a domain
in which you have a formal definition of correctness,
the path will be no autonomy, partial autonomy,
full autonomy, in domains where you don't have a formal definition of correctness
or where a ton of human judgment is necessary
and human choice and sort of a human direction.
We're just the right product design
is not to go all the way to full autonomy.
I would argue that Chess and Go do have a formal definition
for correctness.
So it makes sense that those were fully automated over time.
We're back to the early stages of where things are,
which means that a bunch of programmers
are sort of defining what success looks like.
And programmers are very good at either works or it doesn't work,
or I just want to automate this,
or I'm going to reduce your job to a tiny shell script kind of mentality.
And I just look at the world as everything is gray.
And everything is much harder than it looks when you don't actually have to do it.
Ages and ages ago, I visited a really giant hospital in Minnesota to help them figure out how to use Excel within the medical profession.
And the doctor just looked at me and he's like, I don't think you understand.
He was like, my job is all uncertain.
Every aspect of what I do is uncertain.
So adding something that pretends to be certain, like a spreadsheet, to my uncertainty,
doesn't actually help me.
And so fast forward, I've spent 25 years since that doctor, but that's a different story,
but there was a story this week about radiologists.
And so very early, actually, if you go back to ImageNet, everybody was immediately saying
radiology is doomed.
Oh, like you never need to get a skin cancer biopsy.
You'll just take pictures of your mole, and it will just tell you.
And then you find out, wow, there's judgment there.
And there's even judgment in doing the biopsy,
how to do the biopsy, and then what to biopsy, and all this.
But it turns out the radiologists have, like, fully embraced AI.
But they embraced it no different than they embraced the latest MRI technology or the
latest software update from GE for a CAT scan.
Like, I just think there are so many things like that, and so many jobs are either
very, very uncertain, or most of the job is basically exception handling.
Right.
And, like, people are like, oh, we're going to automate our taxes.
Okay, taxes are literally a giant cascade of if and switch statements of exceptions.
Yes.
And so the idea that you will just automate that, well, you have to know the answer to all the exceptions.
And if you're going to prompt it with the answer to all the exceptions, then you're doing your taxes manually.
It's sort of like once you reach a certain income, you have to get help from an accountant to do your taxes.
And the first thing the accountant does is ask you for your tax planner.
And as a software person, I look at it, I'm like, the tax planner really, really looks like the input fields of the software you're using.
So maybe I could just buy that software and then type it in.
And I said that, and he's like, well, you're welcome to, but you will go to jail.
And he explains, because every time I give him a number, there's a whole decision about where to apply it and whether it works.
And I'm like, well, you're not really a farmer, so don't fill anything in on that form and stuff like that.
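The cascading-exceptions picture above can be made concrete with a toy sketch. Every bracket, rate, and rule here is invented for illustration and is not real tax law:

```python
def toy_tax(income, status, is_farmer=False):
    """Toy illustration of how tax logic cascades through exceptions.
    All brackets, rates, and rules below are invented, not real tax law."""
    if is_farmer:
        # An exception that needs its own schedule entirely; a generic
        # automation cannot just guess its way through this branch.
        raise NotImplementedError("farm income needs its own schedule")
    if status == "single":
        if income <= 10_000:
            return 0.0
        if income <= 50_000:
            return (income - 10_000) * 0.10
        return 4_000 + (income - 50_000) * 0.20
    if status == "married":
        if income <= 20_000:
            return 0.0
        if income <= 50_000:
            return (income - 20_000) * 0.10
        return 3_000 + (income - 50_000) * 0.20
    # Every status, credit, and deduction in real life is another branch.
    raise ValueError(f"unhandled filing status: {status}")
```

Even this toy version is nothing but branches, and every real-world wrinkle (the farmer, the unhandled status) surfaces as yet another exception someone has to know the answer to, which is the point being made here.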
Automation is extremely difficult
and it's exception bound,
it's judgment bound, and it's all uncertain.
You know, a field in which this question comes up a ton
is product management.
I've had so many conversations with product managers
over the last two years about the death of product management,
it's the end of the field, why do we need PMs?
And I think our sort of developer generation
has developed a real resentment towards product managers,
which is a different conversation.
With that said, I think that the product management job
is the job of addressing ambiguity.
And it's ambiguity that prevents
progress from being made.
Sometimes it's execution,
decision-making, product design.
That will not change.
The nature of business and human interaction
and companies is these complex adaptive systems
where there will always be ambiguity.
I think you'll always need judgment
and you'll always need somebody
who looks like a product manager.
Yeah, I think that really gets to the vibe coding challenge
we're dealing with, which is like,
how fast can we go text to app?
And I think here, what's so interesting in the long arc of platform transitions is that we're also having this platform transition happen, not just out in the open.
We've had that before, like back in the earliest days of computing, these platform transitions happened in user group meetings, like at the Cubberley Community Center down the street, or in magazines or in newsletters, and then with newsgroups, then the internet, and so on.
The whole internet was all ICQ, and it was all in the open.
But now it's like happening on CNN, on the nightly news.
everyone knows about the platform transition that's happening, in particular on social and in Discord.
And so what's happening is you're getting a lot of like vibe coding for clout.
And so you're getting a lot of this, I had an idea, I prompted it, and it worked.
And here I am.
At some point I just go, I'm calling BS on that.
That's like not a thing.
And then I sound like an old person.
And because some people think I am, I don't, but some people think I am.
I don't either.
It looks like, hey, you're just being old.
Yes.
But then you dig in and you find out like, wow, you're prompting, although it's English-like,
it turns out you're just programming.
Yes.
And you're just programming in prompts.
Yes.
And people are like, oh, what we're going to do is just get the model to require a little bit more structure.
And I'm like, you're writing a new programming language.
Yes.
And this path of text to app and vibe coding is just developing a new language, which is super cool.
Yes.
Lord knows the world is built on programming languages.
In the 80s, if you drove slowly past the computer science department, the people trying to get a PhD
would just invent a new programming language right then and there if you stood outside
the building for too long. But we can't lose sight of the fact that the arc of programming
has been one of basically over-promise and under-deliver. When I was in college, like the theory
was the market was going to need so many programmers that the whole employment force, the whole
workforce would just be software people. And that never happened. And now here we are, we're not going to
need any. They're all just going to go away. And I think it was extreme in 1990 and it's extreme
today. And I think that the big thing is this overpromising at each transition, even just most
recently, low code. No one even says that word anymore. Like we're not allowed to even mention it.
And it's because it's always the same thing, which is, yes, if all you're doing is a very
straightforward app that looks like all the other straightforward apps with a domain spin or a branding logo
or something. We see this with Wix and with website templates like it's possible. But you're not
going to run a company on any of those. I totally agree. Where I disagree actually is I think that
the language, the language model in this case, but the language in your metaphor is improving at a
dramatic rate underneath these things. So while I think almost all these products today,
they're good at prototyping, they're trying to push into refinement, they're not really usable
as things that you can actually deploy to production at all. In fact, most of the cool demos
you see on Twitter don't work three days later.
So they're very much in the prototyping phase,
but the programming language and the metaphor
is improving dramatically.
I think we'll get there,
or at least make more progress than we think,
versus a traditional programming language like object-oriented,
which didn't feel like it 100x'd the number of programmers
or cut to one-hundredth the amount of time
to ship something to production.
We just got new tools and sort of new problems to solve.
Well, of course, you're benefiting from hindsight.
Yes.
And that's a key thing.
First, I agree.
We're in an exponential improvement cycle with the models.
So any predictive power goes out the window.
Correct.
And anyone who says something negative,
you're going to be the next person who says the internet is going to be a fad like the fax machine.
And that's bad.
You just don't want to be there.
And it turns out also that's a case where having lived through them,
you get very shy about making predictions because you see how foolish people look for a long time.
But take something like object oriented.
I mean, this thing was hyped to the moon.
This was a wave of programming languages, just to give you an idea, again, of how fast things move.
They started around 1980, and by 1990, they finally reached, like, peak hype.
So it was like 10 years of incremental improvement.
And by then, they were also over.
Like, any programmer would have kind of said, it's sort of just changing the old programming paradigms of abstraction and polymorphism and stuff like that.
But meanwhile, the magazines, which were the key measure of success at the time, there was one
magazine that had a picture of a baby in diapers, a photo, not a drawing, on the cover, about
how programming would be made easy. I remember seeing it at the newsstand, and I was working on the
C++ compiler at the time. C++ was a brand new language in 1990, and it didn't work yet.
And here was a baby who was going to make programming possible for other babies at day care
or something. And whether it was that or all the database programming languages like Delphi or
PowerBuilder, in an algorithmic sense,
they were all constant improvements.
They just added a constant factor, like plus seven, onto programming.
None of them changed the mathematical order of magnitude.
And what I believe is with writing right now, it's changing order of magnitude.
And so it's here.
It's happening.
Accuracy isn't there.
But one of the things about writing is, like, actually, when you read it, most of it in business
is not really accurate already.
It's very much like autocorrect.
Like, autocorrect fixed all the cost.
common typos like TEH to THE in English, and just replaced them with these wild new
autocorrects that just replaced what you typed with a word that has no meaning in the context of
the sentence, which is what we face on phones all the time. So what we're going to see is a whole
different set of errors in business writing or academic writing in schools that just replace
other errors that have always crept in. Totally. I remember Smalltalk was the hot language, right?
Well, Smalltalk was the start of it, and it was called Smalltalk-80. Yes. And then it really
didn't ever achieve any momentum outside of Palo Alto. But then C++ came along. And there were
50 languages in the middle that people don't talk about, like Objective-C being one of them.
That was the iPhone language, which was really all Steve Jobs. There was Object Pascal, and Pascal,
and Pascal with a relational database attached. But this was my master's degree. Then I quit grad school.
I could go on about this one for far too long. So I'll just stop now. Do you think there will be
bestselling novels that are entirely AI generated, or nearly entirely, in the next few years?
100%. I don't think Stephen King is going to do that, but I think there'll be some new writer
who will probably write it under a pseudonym. And a year after the novel is written and has been
made into a movie, they'll say, oh, by the way, I got the plot idea from a prompt. And then I
just started having it write, and I was editing it along the way. Absolutely. And the copyright suit
that follows from training models and stuff, that's a different issue. I think there's two things
on this, actually, that are really interesting. So one is these language models are these averaging
machines. And with art, you almost definitely don't want the average of all the novels or
all the writing or all the authors. You want something that's at the edge. So how do we actually
point them in a direction such that they can be at the edge of culture, which I think is important
for making great art? I think the other thing is a lot of the artists don't yet know how to use
any new tools. And we're going to see artists that are native in the technology. Instead,
what we're seeing a lot of out there, what's called slop, has just been a lot of this
low barrier to entry art that's being created, which is great because it gives people the
sort of fulfillment of creative generation. I think what we're talking less about is,
hey, how is the ceiling being raised for artists because they have access to these technologies?
Without going all philosophical on what is art, I mean, bad sitcoms are part of society too.
But I think it's important. We tend to focus on like the very, very best of things,
but most everything isn't only the very best. In business writing, it's all slop.
I mean, this is why, look, I've written a lot of business writing,
so I can say this confidently about what I've written and what gets written.
But take something completely mundane that a lot of people in Silicon Valley
spend a lot of time working on, like the enterprise software case study.
I'm telling you, GPT generates better enterprise case studies faster
than the typical marketing associate does at a company, with like one-millionth the effort.
Yes.
And does the content need to exist? It actually does. It's just an important part of the selling process. Yes. And so at the extreme,
like with something like medical diagnosis, we tend to think about the most obscure diseases,
the most difficult to understand problems with the finest hospitals, with the most resources.
But you have to remember, like 80% of the world has no access to anything. Right. Yes.
So wherever you think medical LLMs are on the slop scale, most people don't have access
to anything average.
So we have to just make sure
that the whole debate
does not center around
like what is Francis Ford Coppola
using as the book
and who are the actors
and who is the cinematographer
because that corporate case study,
well, they often go and interview
the person and film it.
Well, like all of a sudden,
we see it today:
those things are done over Zoom.
Yes.
So suddenly, instead of flying in
or getting a satellite booking,
we've changed our view of excellence
because we wanted more access.
And I think that's absolutely going to happen.
Should you get graded on slop in school?
That's a different problem.
But most stuff is pretty average.
The world needs more slop, says Stephen.
I feel like this is a press interview
where you can put words in my mouth like that.
The world needs more slop.
That'll be the title.
So actually, Marc makes this point, I think it's a really good one,
which is, is the bar for success perfection?
Is the bar for success what people can do today?
Or is the bar for success just something that's better
than the alternative?
and in your case of 80% of the world
that has access to no medical knowledge,
no medical services, no medical opinion,
of course this is dramatically better.
Yeah, I look at it, like when I had to get permission
to use a word processor in college,
one of the stumbling blocks was that my printer
was like an Epson MX-80 dot matrix printer.
And it looked like a printer,
like a computer printer, and the rules for the papers
were that they had to be written on a typewriter.
Yes.
And then the Macintosh came out in the spring,
and it only had an ImageWriter,
which is another dot matrix printer.
So all of a sudden,
the standard changed
because the value of being able to revise
and edit and update
and copy, paste, and use fonts
was just so much higher
than the fidelity of the teacher
reading it on bond paper with Courier.
And that's going to happen with content as well.
What I would love to talk to you about,
Steven, is actually just hearing your take on I/O,
and how you felt about Google I/O.
Oh, yeah, yeah.
Yeah. So essentially, there was a lot of conversation around Google and how Google had sort of fallen behind and lost their ability to make new things.
They released a ton of new software at every part of the stack at I/O. What do you think that says about Google? Do you think the demise of Google is overstated?
Well, of course, I think the demise of Google is an absurd proposition. The demise of a giant company is a crazy thing to say.
Driving in, I was listening on CNBC, some investor or whatever talking their book, saying IBM is the one to buy.
I almost wanted to pull over to the side of the road and think, what universe am I in, where this company that has died like nine times in my career is the one to buy?
Right.
And so the death-of story is just such a done thing.
Losing a position of influence, however, is a very real thing.
In these platform transitions, big companies have an enormous asset, which is the shock and awe asset.
And so they have the ability to tell the story called
we're pivoting our whole company around this
and we're a zillion-dollar company.
And here is like a full assault across the board
for every single asset we have
and every single category the world is talking about
that matters.
And that's what you could do.
Someone was asking me on Twitter yesterday
about this event Microsoft held in 2000
called Forum 2000.
And it was when we announced
like a whole bunch of internet stuff
and the early cloud stuff.
It wasn't called cloud, but early cloud stuff.
And nobody in that room understood
what we were talking about, not a person.
But they all left like, oh my God,
there is so much stuff here,
which was a repeat of five years earlier
when we did what was called Internet Strategy Day
and like the headlines were literally
sleeping giant awoke.
and so it was totally predictable
that Google would show up with
like literally the B-2 bombers of software
but the question is really
much deeper than that
and it's really
will they alter their context
of how they build products
and their go-to-market
because that's really
what undermines the big technology companies
and so with Microsoft
the interesting thing was
all those products that got announced
over that five-year span or 10-year span,
none of them are around today.
I should be very careful every time I say something like this,
I get assaulted.
But, like, the big announcement at Forum 2000
was the .NET Framework and C#,
which by almost any measure,
one would call a legacy platform today.
So it, like, came and went in six or seven years.
And everything was about virtual machines
and clustering and all this stuff that VMware was doing.
And that's not where anything was.
And on top of all that, the economic model became SaaS.
And so what I'm looking at with Google is not can they present all the technologies in the context of Google search and ads.
But can they transform the way they think to something new?
Because that's really where the disruption is going to happen.
I love that point.
Cool.
Awesome.
Anish, Steven, thanks so much for this week in consumer.
Sure thing.
Thank you.
Super fun.
Thanks for listening to the a16z podcast. If you enjoyed the episode, let us know by leaving a review at
ratethispodcast.com/a16z. We've got more great conversations coming your way. See you next time.