a16z Podcast - Marc Andreessen: Why Perfect Products Become Obsolete
Episode Date: August 8, 2025

In this episode, Marc Andreessen joins TBPN for an unfiltered conversation spanning everything from ads in LLMs to why Apple's AI strategy may be risky for anyone not named Apple. Marc breaks down the current state of AI: why open source is resurging, how foundational research is (or isn't) turning into product, and whether we've hit the moment when phones start to fade as dominant platforms. He also shares his candid thoughts on Meta's wearable wins, Vision Pro's imperfections, and how humor and deep research are his two favorite use cases for AI today.

Timecodes:
0:00 Intro
2:41 The Pace of AI and Technology Cycles
4:03 Research vs. Productization in AI Companies
5:15 Apple's Strategy: Last Mover Advantage
7:09 The Future Beyond Smartphones
10:23 Open Source AI: Progress and Challenges
13:49 Ads in AI: Business Models and User Experience
15:52 Legal Frameworks for AI and Data
17:53 Lightning Round: How Marc Uses AI
19:01 Breaking into Venture Capital in 2025
20:34 M&A, Survivorship Bias, and Company Resilience

Resources:
Watch TBPN: https://www.tbpn.com/
Marc on X: https://x.com/pmarca
Marc's Substack: https://pmarca.substack.com/
Transcript
Are ads purely destructive or negative to the user experience,
or are they actually, if done properly,
are they actually either neutral or even positive?
You know, if you're not Apple,
do you really want to be a company that basically sits there
and says, yeah, the world's moving
and we're very deliberately not going to lean as far as we can into it?
I think there's a lot of survivorship bias
in these kinds of strategy discussions
where people look at the one company that's able to pull this off,
and they don't look at the 50 other companies that are in the graveyard
because they didn't adapt.
Marc Andreessen went live on TBPN this week,
and today we're dropping that full
conversation here on the pod. Marc gets into it all: what's really happening in AI right now,
how Apple is playing its hand, the return of open source, and why perfect products can signal the
end, not the peak. He also shares his take on how to break into venture capital in 2025 and when
he's actually using AI for day to day. Let's get into it. This information is for educational
purposes only and is not a recommendation to buy, hold, or sell any investment or financial
product. This podcast has been produced by a third party and may include pay promotional
advertisements, other company references, and individuals unaffiliated with A16Z. Such
advertisements, companies, and individuals are not endorsed by AH Capital Management LLC,
A16Z, or any of its affiliates. Information is from sources deemed reliable on the date of
publication, but A16Z does not guarantee its accuracy. We have Marc Andreessen
joining us. He's live from the TBPN Ultradome. Welcome to the stream. How you doing, Marc?
Hey, what's happening?
Great to see you.
Yeah, you too.
A lot.
It's a little bit of a slow news day, but exciting stuff with GPT open source.
It's not a slow August.
It's not a slow August.
We're glad.
We were just reflecting that we've taken exactly one day off this summer.
That was July 4th.
And we're showing the Europeans how American companies work.
American work.
We're setting an example.
And we have proof of work because we exist on the Internet.
and you can see us live every day.
So we're setting an example.
How are you doing?
How's your summer going?
Fantastic, going really well.
So how long is it going to be
until you guys put up avatars that make claims
that you're working hard all through the summer
when it turns out you're on the beach?
You might have caught us.
I think you'll know better than us
as to when the technology gets there.
We've been demoing some of the stuff.
People have been doing a lot of deep fakes of us,
and fortunately, all of them have been clockable,
so it doesn't feel like a brand risk,
but they're getting closer and closer.
And I know that there's going to be a moment where we have to say, hey, that's actually using our name and likeness to endorse something that we don't necessarily endorse.
Can you please take that down?
So we're approaching the Turing test, the Uncanny Valley.
We're escaping the Uncanny Valley.
I think a question, looking back over the, you know, maybe 10 or 15 years, was what moments did you feel like there just was not a lot of action happening?
Because this summer is just the pace from so many different teams has been absolutely insane.
Everybody's trying to keep up, and it didn't used to feel that way, at least from my point
of view. So my view is, there are always these disconnected, you know, kind
of patterns or trends. There's the sort of day-to-day phenomenon where engineers
show up every day and they make things a little bit better, and then every once in a while
you get a technical breakthrough or a new platform. And that process, you know,
that kind of sawtooth, up-and-to-the-right kind of process, plays out over time
kind of regardless of what else is happening in the world.
And so it keeps happening through recessions and depressions and wars
and all kinds of crazy, crazy stuff that's happening.
But basically, you know, the technology keeps getting better.
So there's kind of that curve.
And then there's the sort of enthusiasm curve and then the adoption curve,
you know, which is basically, when do these things actually show up in the world?
And then, by the way, when are people actually ready, you know, for the new thing?
Like, if you talk to people who worked on language models, I'm sure you guys have talked to people
who work on language models, they will tell you that they were surprised that ChatGPT was the
breakthrough moment, because they thought everybody already knew what these models could do for,
you know, three years before that. And so they were, you know, they were shocked that it was the
chatbot interface that made the thing go. And so there's somewhat of a sort of arbitrary
disconnection between what's actually happening in the substance and then what people are
seeing and feeling. And so it's just, it's really hard to predict when these things pop. But also,
if you're in this day to day, it's really hard to tell, you know, when things are going to be hot or not,
because it doesn't necessarily map to how much the technology is improving. Yeah, we were just
talking about that in the context of
Google's new world
model. It's this like generative
video game that you can kind of move around in
and it feels like DeepMind is just
absolutely crushing at the AI
research frontier. They have
the best world model simulator that
you can walk around in. The question is like
if they let another lab
do the ChatGPT thing and just get it out
into the consumer three months earlier
they might wind up
kind of chasing and trying to catch up
if somebody actually figures out how to make it like a dominant consumer product.
Now, in the enterprise, it's more oligopolistic, but consumer seems to be winner take all.
I guess the question is, like, how much value do you place right now, in the AI race, on just, like, moving fast, breaking things, you know, having, like, the thick skin to deal with, like, the safety constraints and all of the different stuff, obviously not being irresponsible, but just speeding up the organization as much as possible?
it feels like now is the time to really push on that.
Yeah, well, first of all, I need to correct you.
It's moving fast and making things.
Making things, that's right.
I don't even know where that's coming from.
I have no idea what you're talking about.
Never heard of it.
I mean, ChatGPT didn't really break anything.
I think that's a good point.
It really did just move fast and make things.
The first things it made were weird, but that was fine.
And it failed and it hallucinated a ton,
but it didn't really break anything.
I don't know.
Yeah, I believe in this case, total deaths attributable to ChatGPT are still zero,
notwithstanding all of the caterwauling.
But, yeah, so look, I think the AI industry in particular
has a very acute version of the sort of challenge
that you identified.
And, you know, and I don't say this negatively,
just an observation, which is that they're, you know,
like in sort of a normal technology company,
you've kind of got engineers who make products
and then you've got, you know, kind of salespeople
or marketing people who sell them.
You know, in the AI companies, you have this third tier
of, you know, the quote-unquote researchers.
Yeah.
Right.
And so, you know, which has worked out
incredibly well. I mean, the researchers have done, you know, they've just done like amazing
breakthroughs at these companies. But, you know, the handoff, you know, there's not necessarily
clean handoff from the researchers to the market. And so it kind of raises this question of like,
okay, are these companies therefore kind of three-segment
companies, where they have research, and then they have product development, and then they
have go-to-market? And I think that's a really open issue. I mean, Google's kind
of a case study of this, you know, you alluded to DeepMind, but even more broadly, Google, you know,
Google developed the Transformer in 2017.
And then they basically let it sit on the shelf, right?
Because it was a research project.
They didn't productize it.
From people I've talked to,
they were very worried about, you know, brand issues and safety issues,
kind of all these, they had all these reasons to not productize it.
I talked to somebody senior who was there at the time, and I asked them, you know,
when could you have had ChatGPT with GPT-4-level output if you had just
gone flat out starting in 2017.
And they said by 2019, you know, they already knew how to do it.
And then, you know, they've now caught up, but it took an extra five years to catch up.
And so I think a lot of these companies kind of have that challenge.
Elon, as usual, of course, is provoking this question, is, I'm sure you guys talked about.
But, you know, he has now, you know, within XAI, he's now collapsed, you know, he's eliminated the distinction between research and product.
And so, you know, of course, he's pushing this as hard as he can.
And I think it's a good question for a lot of the other companies, kind of how hard they want to push on actually getting these things in fully productized form out to the market.
Yeah, yeah. On Elon's, like, distinction, it feels like there is more research to be done, but it feels like we're entering, like, a new cycle of, you know, just focus on the engineering, focus on the deployment, the applications, let's get all this technology out into the world, let's reap all that benefit. And yes, there will be a different track of fundamental research that's happening somewhere, but it's really, really hard to predict. And so if you have something that's working, just double down and just go really aggressive on
it. I'm wondering more on that, but also on Apple's strategy. It feels like Apple's been,
kind of, like, you know, people have been maligning them for missing the AI opportunity.
And Tim Cook's just there on the earnings call being like, look, we acquired a couple of small
companies, seven this year. Seven companies. But then it seems like they're taking more
of, like, an American dynamism approach. Like, there was news today in the Journal that
they're investing $100 million in American manufacturing. They're certainly doing
stuff, they're just not chasing, you know, the shiny tennis ball.
The headline: $100 billion of capex.
So I'm wondering about your thoughts on when you have a, you know, when you have a platform,
how hard is it to resist chasing the new shiny object?
Is that the right move?
Or are there any other things that you think Apple should be, you know, changing their strategy
on?
Yeah, so look, Apple's always had this, you know, very clearly defined strategy that, you know,
Steve and Tim, you know, working together, figured out a long time ago, which is, I forget the exact term, but it's something like, basically, they invest deeply into the core of what they do. They'll basically work internally on things for many years, and they only actually release things when they feel like they're kind of fully baked. Right. And so as a consequence, and Tim says this, right, they're rarely first to market with new technologies. They're more often in a category of what Peter Thiel calls last to market. You know, they'll come out whatever, three years later, five years later.
there, you know, there were tablets for years before the iPad.
There were, you know, smartphones for years before the iPhone.
Folding phones.
They're about to do a folding phone.
It's like 10 years into that technology.
I'm sure if they do it, they'll hit it.
The last mover.
Yeah, yeah, yeah.
The last mover.
And I guess, yeah, what I would say is like, look, that clearly works if you're Apple, right?
And so it clearly works if you're Apple.
But I would say there's a fine line between that strategy and simply becoming
obsolete, right?
And so the problem is, like, if you're not Apple and you don't have all the other kind
of super strengths and, you know, kind of
the market position that Apple has, you know, do you really want to be a company,
you know, if you're not Apple, do you really want to be a company that basically sits there
and says, yeah, the world's moving and we're very deliberately not going to lean as hard as we can
into it. And so I think there's a lot of survivorship bias in these kinds of strategy discussions
where people look at the one company that's able to pull this off and they don't look at
the 50 other companies that are in the graveyard, you know, because they, you know, because
they didn't adapt. I mean, you know, all the other smartphone companies when the iPhone came
out, they were like, oh, yeah, well, we could do touch too, right? You know, we'll just, you know,
We'll get to it, right?
And, you know, they're gone.
What do you think?
The BlackBerry Bold, I remember.
It was like an iPhone knockoff.
What do you think? You know, right now, a
variety of, you know, shareholders are annoyed at Apple
around their reaction to AI and LLMs.
John's annoyed around just like transcription generally,
just like super basic stuff.
But it doesn't feel like the core business is immediately threatened today.
It feels like it's still on the horizon, around these sort of, like, you know,
eyewear-based computing, you know, potentially net new devices
that we'll see from, you know, companies like OpenAI over time.
But where do you, like, like, how real is the threat, you know, this year versus 10 years from today?
And kind of what's your framework?
Yeah, well, look, I mean, I think the biggest ultimate danger,
I mean, the biggest ultimate danger is very clear,
which is just, like, at what point do you not carry around a pane of glass in your hand,
you know, called a phone, because other things have superseded it. And, you know,
like, everything becomes obsolete at some point. So there will
come some time in the future when we're not, you know, carrying phones around, and we'll watch movies
where people have phones and we'll be like, yeah, look at how primitive they were, right? Because
we'll have moved on to other things and whether those things are eye-based or, you know,
other kinds of wearables or whether it's just kind of, you know, computing happening in the
environment or just, you know, entirely voice-based, or, you know, who knows what it is. But,
you know, there will come a time when that happens, you know,
Is that time three years from now because there's like some, you know,
huge breakthrough, you know, from some company that figures out the product
that obsoletes the phone right away, or is that 20 years from now
because the phone is just, you know, such a standard platform for everything that we do in our lives
and everything else, you know, kind of remains a peripheral to the phone.
I mean, that, you know, that's, you know, that's the game of elephants that's playing out there.
You know, obviously, I think, you know, I think it's highly likely that we'll have a phone
for a very long time.
Having said that, it is exciting that there are companies that are going directly at that challenge.
and, you know, whoever cracks the code on that will be the next Apple. And, by the way, that
may, in the fullness of time, be Apple itself. You know, they may be the company that figures that out.
Yeah, I remember being at a board meeting at Andreessen Horowitz, maybe a decade ago or something,
and Chris Dixon showed me the HoloLens, and I was like, okay, we're one year away from this
being everywhere. And I feel like today I'm still in the, like, yeah, VR is definitely one year
away, the next Quest I'm going to be wearing daily.
It feels like we're always there. But it does feel like Apple did a lot of work
on the fundamental, you know, pixel density and resolution of the display, and then
Meta's been doing a ton of work on just getting it light and affordable.
Like, it feels closer than ever, but, you know, you always got to wait until you see the
churn numbers until you really call the game, right?
Well, you say that, and, you know, I think that's true.
But I'd also say, you know, I'm on the Meta board, so I kind of have a dog in this fight.
But, like, the Meta Ray-Ban glasses are a big hit.
Oh, totally.
Like, they're big, you know. So I think we now have a form factor that we know works, you know, for eye-based
wearables.
You know, they're not VR, and there's no AR, you know, on top of that.
But, you know, just the glasses, and then the glasses have a camera, you know,
sort of integrated camera, integrated microphone, integrated speaker.
Yep.
You know, that's a very interesting platform.
You know, the watch clearly works, by the way, which Apple, of course, you know, has played a significant
role in making happen.
You know, that now sells in huge volume.
Yeah.
You know, so that's the second data point.
And then, you know, look, I think these, you know, these, I think some form of AI pin is going
to work. I also think, you know, headphones are going to get a lot more sophisticated,
which is already happening. And so, you know, you do have these, you know, kind of data points
coming out. And then, yeah, look, the, the trillion dollar question ultimately is, are these,
are these peripherals to the phone, you know, which is what they are today, or are these
replacements for the phone? And, yeah, I would say, you know, I think
we have a lot of invention coming, both from new companies and from the incumbents, who are going
to try to figure that out. Yeah, I always think about the value of, like, narrowing the
aperture on these new technologies. Like, with
the Meta Ray-Bans, I feel like the fact that they aren't also trying to be a screen is actually
a feature, not a bug. And I always go back to the iPhone. Like, it was first and foremost a phone.
And people bought it because it could make calls, and then it could send text messages. And then it was
an iPod. But, do you disagree with that? Please.
Well, you guys, I don't know, you guys might be too young. The first iPhone actually was a bad
phone. How so?
For the first two years, it couldn't reliably make phone calls.
I had, like, the third one, and a
friend had the first one, but I feel like people were still carrying cell phones, and that was
at least the expectation. But yeah, I mean, I guess you're right. So for the
first two years, the thing couldn't reliably make phone calls. And then it turned
out, there was an issue with the antenna and with how you held it. And there was a, I remember
that email. Yeah, you held it, and you would disconnect it. You could basically
brick the device. Yeah. Based on how you held it. And somebody emailed, this is when Steve would respond
to emails from random people. And somebody emailed Steve saying, if I, you know, hold the phone this way,
it doesn't make phone calls. And he's like, well, don't hold it
that way.
Yeah.
Right.
Yeah.
Yeah.
So even there, it was like, yeah, okay, and people, you know, people forget, it took like
five years for the iPhone to find its footing.
It took like two years to get the... And remember, also, the original iPhone
didn't have broadband data.
It was on the old 2G, what was called the AT&T EDGE network.
So it didn't have broadband data.
And then, of course, it didn't have an app store, right?
It was completely locked out.
So the challenge for Apple now is that people are so used to perfection
with the device that launching a product that isn't
perfect is embarrassing, right? Like, you look at the Vision Pro, and it's like, well, the battery's
big. Steve would have hated this, right? Like, he never would have shipped this. And so that,
being constrained and not being able to innovate because you're tied to this, like, impossible standard
of being on whatever, generation 17 of the iPhone, and perfecting every element, is a real challenge.
So I would say there's a corollary to that. One of the things I've observed over the years is, I think,
technology products become obsolete at the precise moment
that they become perfect.
And to your point,
what I mean by perfect, basically, is, like,
yeah,
it's like the perfect, idealized, complete product.
Like, it does everything you could possibly ever imagine.
Everything a customer could imagine,
everything you as the technology developer can imagine,
it's absolutely perfect.
And there's been tons of examples of this
over the last 50 years
where it's like the absolute perfect,
seemingly permanent version of that product.
And then it just turns out that's actually the point of obsolescence
because it means creativity is no longer being applied
right into that platform.
You're just like, there's just nothing else to do.
You're just like, you're done, right?
The product has been realized.
And then, to your point, the cycle is other people come in
with completely different approaches, completely different kinds of products that are broken
and weird in all kinds of ways, you know, but are fundamentally different.
And so, you know, that is one of the time-honored traditions.
And, you know, one of the, you know, things you could say about, you know, Tim is, you know,
his willingness to kind of break the mold of Apple only shipping perfect products, you know,
being willing to ship the, you know, the Vision Pro, you know, shows a level of determination
to kind of stay on the innovation game,
which I think is very positive.
Yeah, yeah, yeah, yeah, that's great.
Updated thinking on open source,
since we last talked, there's a lot that's been
happening.
Open AI is an open-source company again.
Yes, OpenAI. Yeah, look, very encouraging.
You know, a year ago, I was getting very distressed about, you know,
whether open-source AI was going to be allowed, right?
Whether it was even going to be legal.
And so I think, you know, we're basically through that at this point.
Or I should say, we're through that in the U.S.
You know, we'll see about the rest of the world.
You know, we'll see about the rest of the world.
And then, look, you know, the U.S.-China thing is obviously a big deal.
But, you know, I think it's been net positive for the world that China has been so enthusiastic about open-source AI,
yeah, the stuff coming out of China, which has been great.
And then, yeah, look, OpenAI leaning hard into this, you know,
and releasing what, you know, what they did is, I think, fantastic,
both because of what they released, which is great, but also just the fact that they are now,
you know, willing to do that.
And then Elon reconfirmed overnight that he's going to, you know,
start open-sourcing previous versions of Grok.
And so, yeah, we seem to be in the timeline where open-source AI is going to happen.
You know, right now, I think what you would say is it kind of lags the leading-edge
proprietary implementations by, you know, six months or something like that.
But I think that, you know, that's a good, if that's the status quo that continues,
I think that would be a very good status quo.
What are the rough edges that we need to kind of sand down when we're thinking about
Chinese open-source model specifically?
Is it that we need to do some fine-tuning on top of them to add back free speech, or do we need to
watch for backdoors, say, it phones home if it runs into some specific thing?
Like the Chinese open source thing, it was remarkable because I feel like it really does
accelerate the pace of innovation because everyone gets to see, oh, this is how reasoning works.
I think that's great.
At the same time, it made me much more appreciative of AI safety research
and capability research and actually being able to interpret what's going on and say definitively,
this model is going to behave weird in this weird way, like the Manchurian candidate problem.
We haven't found any of that, but it certainly seems like something we'd want
to keep an eye on. But from your perspective, like, what are the risks that we need to be
aware of going into a world where China is really pushing hard into open source?
Yeah, there's two, and you identified them, but let's talk about both of them. So the phone-home
thing is the easy one, which is, you know, you can packet sniff
a network and you can tell when the thing is doing that. And plus, you can go in the code
and you can see what it's doing. And so you can validate that that's
either happening or not happening. And I think that, you know, that's important.
But, you know, I think people are going to figure that out.
You can kind of gate that problem practically.
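As a rough sketch of the verification described here: capture the outbound connections a model process makes (with tcpdump, Wireshark, or a firewall log) and compare the destinations against an explicit allowlist. The hostnames and the captured-connection list below are hypothetical placeholders, not real telemetry:

```python
# Toy egress audit: given outbound connections observed while running a
# local model, flag any destination not on an explicit allowlist.
# All destinations below are hypothetical examples.

ALLOWLIST = {
    "127.0.0.1",        # local inference server
    "huggingface.co",   # expected: one-time weight download
}

observed = [
    ("127.0.0.1", 8080),
    ("huggingface.co", 443),
    ("telemetry.example.cn", 443),  # unexpected destination
]

def audit(connections, allowlist):
    """Return the connections whose destination host is not allowlisted."""
    return [(host, port) for host, port in connections if host not in allowlist]

suspicious = audit(observed, ALLOWLIST)
for host, port in suspicious:
    print(f"unexpected outbound connection: {host}:{port}")
```

In practice the observed list would come from a real capture rather than being hard-coded; the allowlist comparison is the part that generalizes.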
The bigger issue is we have this term in the field right now called open weights.
And open weights is a loaded term.
It uses the open term from open source.
But, of course, with open source, the thing is you can actually read the code.
You know, with open weights, you have, you know, just a giant file full of numbers
that, as you said, you can't really interpret.
And then what you don't have, what most of the
open-source, open-weights models don't have, including, you know, DeepSeek specifically, is
open data, right, or open corpus. So you can't actually see the
training data that went into them. And of course, you know, most of the people building models are
kind of obscuring what that training data is, in various ways. And so when you
get an open-weights model, you know, the good news is the software source is open; the good news is
you can run it on your own machine and you can verify that it doesn't phone home. But you don't actually know
what's happening inside the weights. And so I think that that is going to be a bigger
and bigger issue, which is, like, okay, how the thing behaves: like, yeah, what has it
actually been trained to do?
And what restrictions or directives has it been given in the training, you know, that are
embedded in the weights, that you need to be able to see?
You know, this is coming up as, I would say, sort of a global issue,
you know, which, you know, we worry about when these models come from China.
Other countries worry when these models come from the U.S., right?
Which is, right, so one of the phrases you'll hear when you talk to people kind of outside
the U.S. is kind of this phrase people are kicking around, which is: not my weights, not my culture.
Right, right. Or, by the way, for that matter: not my weights, not my laws.
Yeah.
Right.
Which is like, okay, like, what actually is this thing going to do, right?
And to your point, the Chinese models, for example, might, you know, never criticize, you know, communism or something.
And I can tell you, the American models have all kinds of constraints also, right, implemented, you know, usually by a very specific kind of person in a very specific location in the U.S.
And so, you know, I think that this is a general issue.
and we're going to have to see, basically, people's tolerance levels
for running open-weights models
where they don't fundamentally have access to the data
and then, correspondingly, I think what we'll see
is more open source developers also doing open corpus, open data
so you can see what's actually in them.
Yeah. Obviously, open source is very important
in terms of just distributing intelligence broadly,
giving people the ability to run their own models
and really fine-tune them and have control.
There's also a big push just to make frontier models
and high capability models free.
One model is you charge for the premium
and give the basic version away free.
It's a freemium model.
That's what we're seeing at most of the labs right now.
There's also this kind of specter on the horizon
of potentially putting ads in LLMs
and what that would do to the world.
Jordy got in a little dust up with Mark Cuban on the timeline
deciding whether or not it would be a net good
to put advertising in LLMs,
what might happen that might be bad there.
Do you have a take?
Yeah, my point broadly was
that ads have been an incredible way to make a variety of products and services online free,
and just saying, like, default, no ads, would potentially, you know, be incredibly destructive.
But, yeah, curious, your framework.
Yeah, so I should start by saying, like, whenever I personally use an Internet service,
I always try to buy the premium version of it that doesn't have ads.
Right?
And so if I can, like, live personally inside an ad-free universe and pay for it, like, that's
great. And I'll freely admit, you know, whatever level of, you know, hypocrisy or incongruity,
you know, kind of results from that.
No, the point is choice. The point is choice.
Well, the point is exactly what you said. It's affordability. So the problem is, if you
want to get to a billion and then five billion people,
you can't do that with a paid offering, like, at any sort of reasonable
price point. It's just not possible. The, you know, global per capita GDP is not high enough
for that. People don't have enough income for that, at least today. And so if you want to
get to, you know, if you want the Google search engine or the Facebook social app or the whatever
AI, you know, frontier AI model to be available to 5 billion people for free, you need to have a
business model, you need to have an indirect business model, and ads is the obvious one. And so I do think,
if, you know, you take some principled stand against ads, I think you unfortunately are also
taking a stand against broad access just in the way the world works today. And then look, the other
really salient question is, you know, the same question that the companies like Google
Facebook have been dealing with for a long time, which is, are ads purely destructive or
negative to the user experience, or are they actually, if done properly, are they actually either
neutral or even positive? Right. And this was something that, you know, Google, I think, to their
credit figured out very early, which is, you know, a well-targeted ad at a specifically relevant
point in time is actually content. Like, it actually enhances the experience, right? Because
it's the obvious case. You're searching on a product. There's an ad. You can buy the product.
You click you buy the product. That was actually a useful piece of functionality.
And so can you, can you have ads, or other things that are like ads or look like ads, you know, different kinds of referral mechanisms or whatever, can you have them in such a way that they're actually additive to the product experience? And you can imagine, with social networking, you can imagine lots of examples of that. People will, you know, people will, you know, they'll whine about it in lots of different ways, but I think, you know, I think that hasn't been a bad outcome overall. And I think it's entirely possible that that's what happens with these models as well.
So kind of similar kind of question, what should be legal, kind of trying to create legal frameworks on a number of issues with AI.
There's been a number of IP cases that have been working their way through the courts, what can labs use to train models, et cetera.
There's been some good outcomes recently.
Sam also was talking about how a lot of people are using AI as, like, a confidant, like a, you know, a friend, things like that.
And he mentioned that currently your chats are not privileged.
They can be used in a lawsuit or other situations.
How optimistic are you that our sort of legal system in the U.S.
can get some of these issues right where maybe it can't just be, you know,
total free markets, kind of lawless, whatever goes?
Yeah.
So in the case of training data, I think that there, I mean,
there's a bunch of these copyright, you know,
kind of lawsuits happening right now.
There's, you know, the big New York Times OpenAI one,
and there's, you know, been a bunch of others.
I think in that, for that particular problem,
my guess is that problem ultimately has to be solved through legislation.
It's ultimately a legislative question.
The reason is because it goes to the nature of copyright law itself, you know,
which is legislation.
And, of course, you know, the content industry is already claiming that, of course,
you know, using copyrighted data to train, you know,
without permission and without paying is sort of, you know,
they believe, illegal on its face, you know,
due to violation of copyright law.
the counter argument to that, which we believe is, well, it's not copying, right?
There's a distinction between training and copying, just like in the real world, there's
a distinction between reading a book and copying the book, you know, as a person.
And so there's going to need, I think, you know, the courts are trying to grapple with that.
There's a whole bunch of cases.
There's jurisdictional questions, you know, probably ultimately Congress is going to have to
figure out a, you know, figure out an answer on that.
And by the way, the president has kind of, you know, thrown down that gauntlet in, I think,
the speech he gave last week or two weeks ago, you know, where he sort of said, you know,
Washington probably needs to deal with that as an issue.
So that's one. Then on the privacy thing,
I think that one feels like it's a Supreme Court thing to me.
It feels like that's the kind of issue suited to the Supreme Court.
And in other words, like whether for example your transcripts are considered your property
and whether they're protected against, you know, warrantless search and seizure.
And the observation I would make there is, look at the march of technology over time.
So the Constitution has, like, very clear, you know, Fourth and Fifth Amendment,
you know, very specific rights around the, you know,
the things that are yours, you know, such as, you know,
your home, you know, being in your home, you know,
by the way, the thoughts in your head, right?
You know, that the government can't just, like, come in and take.
They can't, you know, they can't just come in and search your house without a warrant.
You know, they can't, like, you know, put you in a jail cell and beat you until you fess up.
Like, you know, there are, you know, we have constitutional protections
against the government being able to basically, you know,
take information, you know, fundamentally, you know, as well as possessions.
And then basically what happens is, every time
there's a new technology that creates a new kind of, sort of, you know, thing that's yours,
thing that you would consider it to be private, thing that you wouldn't want the government to be
able to take without a warrant.
You know, out of the gate, law enforcement agencies just naturally go try to get those things,
because they're ways to solve crimes and, you know, it feels like that's a legal thing
to do.
And then basically the courts come in later and they, you know, rule one way or the other and basically
say, no, that actually is also a thing that is protected against, you know, warrantless
search, for example, you know, warrantless wiretapping.
And so I feel like that, you know, this is the latest of probably, I don't know,
20 of those over the last 100 years.
And, you know, I don't know which way it'll go, but I think it's going to be a key thing
because, as you know, people are already telling these models, you know, lots of things,
you know, that are very personal.
Okay. Lightning round, quick questions.
We're letting you get out of here in a couple minutes.
We're in this age of spiky intelligence.
Models are great at some things and then terrible at others.
Where are you actually getting value out of AI right now?
Where is it falling down for you?
Where are you, how are you using AI day to day?
Yeah, so I have two kind of, I don't know, a barbell approach.
One is for serious stuff.
I love the deep research capabilities.
And so, and I'm doing this in a bunch of models, but, like, the ability to basically say, I'm interested in this topic,
and I just, I just tell it, like, write me a book.
And I, you know, I'm kind of hoping for the longest book I can get.
I always tell it, like, go longer, go longer, more sophisticated.
You know, but the leading edge models now, they're getting
up to, like, 30-page PDFs, you know, that are, like, completely well-formed, you know, basically
long-form essays, you know, with just, like, incredible richness and depth. And, you know, if it's 30
pages today, I'm sort of crossing my fingers, it'll get to, you know, 300 pages coming up here
in the next few years. And so I, you know, I'm able to basically have the thing generate enormous
amounts of reading material with just like, I think, incredible richness and depth and complexity.
And then on the other side of the barbell is humor. And I've posted some of these to my X feed
over the last couple of years.
But I think these models are already much funnier
than people give them credit for.
Really?
Yeah.
I think they're actually quite highly entertaining.
Is it specific formats?
Like, we know the, be me,
chatting back and forth.
Be me, be Marc Andreessen, you know, that format.
Take a dip in my pool, in my office.
They're really good.
So they're really good at green text.
That works really well.
But for some reason, the ones I find hysterical
are the, I have it write screenplays,
you know, for, like, TV shows
or plays or movies.
And I posted one. I had it write
a new season of the HBO Silicon Valley,
you know, set 10 years later.
Yep.
And I had it write, like, an entire,
I had it write, like, 10 scripts
for a complete season.
And of course, I just said,
you know, make it like Silicon Valley,
except, you know, it's happening
in 2021, kind of peak woke.
And I thought it was just,
you know, I'll sit there at two in the morning,
just, like, laughing my ass off
at how funny this thing is.
And so I think these things are actually,
are actually already like extremely funny.
They're extremely entertaining
when they're used in that way
and I do enjoy that a lot,
and I generate a lot of those
that I don't post.
Staying in the group chats
is probably a good idea.
They're your property.
Yeah, hopefully the Fourth Amendment holds on these.
yeah that's great
I have one last question
Go for it, and then I've got one more.
How do you get a job as a venture capitalist
in 2025?
So, I mean, look, the best way,
the best way to do it is to have a track record
early as somebody who is, like, in the loop
specifically on product development.
And so somebody who would, you know, be, like, deeply in the trenches at one of these new companies in one of these spaces, you know, participate in the creation of a great new product and a great new company and, you know, really demonstrate that you know how to do that.
You know, there's, you know, there are great VCs who have not done that.
But, you know, I think that is sort of a foundational skill set, you know, for working with the kinds of founders that you want to work with, who are going to want you to have, you know, kind of very interesting things to say on that. That is, I think, you know, still the best way to do it.
Yeah, like feel the growth.
Be, immerse yourself in the growth, the aggressive growth environment,
and then you'll be able to identify it when you see it from afar.
Yeah, that's right.
Last question for me, state of M&A in your mind,
how are you advising, you know, companies where you're on the board
or just the portfolio broadly around what they should expect now and in the near future?
You mean in terms of whether you can get things approved?
Basically, yeah.
Yeah, yes.
So, look, approval is not a slam dunk.
You know, I just saw there was a medical device company this morning, you know, where the acquisition was not allowed by the FTC.
So, you know, there is still scrutiny.
It's, you know, it's obviously a very different political regime in Washington.
But, you know, this is not an, you know, by their own statements, this is not an administration that believes in total laissez-faire.
And it definitely wants to, you know, in their view, maintain a very healthy level of market competition.
Yeah.
Do you expect certain companies to be negatively
impacted by the Figma story, right?
You have this deal that gets blocked, then a successful, you know, IPO.
Lina Khan is taking a victory lap.
You know, many people are responding and joking, saying, you know,
Lina cuts off the arm of a pianist, and they endure and can create a masterpiece.
And then you look at the example with, you know, Roomba, I think it was,
where Roomba had a deal with Amazon.
It was blocked,
and the company has just been in shambles ever since.
So my concern is that people look at Figma and say,
you should be independent, you just figure it out.
Nothing can go wrong.
Yes.
Yeah, and that kind of taking a victory lap was very disconcerting,
and for exactly the reason you said, which is survivorship bias, right,
which is you pick the one that worked out.
And then, you know, it's the airplane, the airplane with the red dots.
You know, you ignore the 50 that are in the ground that you've never heard of.
And so that was very disconcerting, because that's, you know,
sort of the central planning fallacy,
which is, like, we make centrally planned economic decisions.
We have one example.
You know, it's like in Europe.
It's like, yeah, well, the bottle caps actually don't fall off the bottle, right?
Like, you know, it works.
Right?
It's like, okay.
But do you want to live in an economic regime in which, you know,
the government dictates bottle cap design?
The answer is clearly no,
because of the downside consequences.
Or even looking at, you know, the Chinese model,
which is, you know, people can say they're picking winners,
but to get to maybe picking a winner,
you have this intense bloodbath of competition
where, you know,
teams need to rise to the top
and sort of prove themselves
before they get any of that real,
like, you know, meaningful state benefit.
Yeah, that's right.
And so you just, yeah, you just have this adverse selection,
survivorship bias thing where you just,
you don't pay attention to all the collateral damage.
So I do think that mentality is like super, super dangerous.
And so, yeah, look, I think companies just have to be very thoughtful
about this, both acquirers and the acquires, you know, and the big thing is if you're selling a
company, like, you just need to anticipate that you might not get it through. And if you don't,
it's sort of like, okay, number one, is there, like, a big enough breakup fee, right? Are you going to
get, you know, paid for the, you know, the damage that you're going
through, you know, and how is that structured, on the one hand? And then two is, yeah, look,
do you have the kind of company culture that's going to be able to withstand that? And is your
business, you know, strong enough to be able to get through that. And it is a real risk and
something worth, you know, taking very seriously. Yeah, and that's,
that's why it felt emotional. We were at the NYSE last week. It felt emotional that the Figma team was
able to, like, effectively just, like, restart the business and say, like, we're taking this all
the way. So if you talk, if you talk to any really successful company, what they'll tell you is, yeah,
over the years we have these, like, crucible moments in which, like, we almost died, right? But we, like,
pulled together and we pulled it off, and then that became, like, you know, one of these central kind
of mythical events in the history of the company that we always refer to. And, like, my God, we got
through that, and we're so strong and tough, and we've been forged in fire, and now we can do
anything. And it's like, yeah, that's great. And then there are 50 other companies where
those crucible moments blew up and they died.
And so, yeah, all of the, quote, lessons learned on this stuff, they're all conditional
on, like, survival. And so these things need to be taken incredibly seriously, you know,
which the great CEOs do. Yeah. Well, thanks so much for joining. We'll let you get back to your day.
We are always five minutes over. Next time we have to book five hours because this is fantastic.
I think I got 10% of the way through.
We'd be the first 24-hour TBPN.
Yeah, we would love to have you again.
Marathon.
Enjoy the rest of your day.
We'll talk to you soon, Marc.
Have a good day.
Bye.
Cheers.
Thank you, guys.
Thank you.
Thanks for listening to the A16Z podcast.
If you enjoyed the episode, let us know by leaving a review at rate this podcast.com
slash A16Z.
We've got more great conversations coming your way.
See you next time.