a16z Podcast - Reid Hoffman on AI, Consciousness, and the Future of Humanity
Episode Date: October 20, 2025

Reid Hoffman has been at the center of every major tech shift, from co-founding LinkedIn and helping build PayPal to investing early in OpenAI. In this conversation, he looks ahead to the next transformation: how artificial intelligence will reshape work, science, and what it means to be human.

In this episode, Reid joins Erik Torenberg and Alex Rampell to talk about what AI means for human progress, where Silicon Valley's blind spots lie, and why the biggest breakthroughs will come from outside the obvious productivity apps. They discuss why reasoning still limits today's AI, whether consciousness is required for true intelligence, and how to design systems that augment, not replace, people. Reid also reflects on LinkedIn's durability, the next generation of AI-native companies, and what friendship and purpose mean in an era where machines can simulate almost anything. This is a sweeping, high-level conversation at the intersection of technology, philosophy, and humanity.

Resources:
Follow Reid on X: x.com/reidhoffman
Follow Alex on X: x.com/arampell
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg

Stay Updated: If you enjoyed this episode, be sure to like, subscribe, and share with your friends!

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Transcript
This is actually one of the things that I think people don't realize about Silicon Valley.
You start with what's the amazing thing that you can suddenly create?
Lots of these companies.
And you go, what's your business model?
I don't know.
They're like, yeah, we're going to try to work it out.
I can create something amazing here.
And that's actually one of the fundamental, call it the religion of Silicon Valley
and the knowledge of Silicon Valley that I so much love and admire and embody.
Reid Hoffman has spent decades helping shape how we connect, work, and build online, from PayPal and LinkedIn to OpenAI and beyond. In this episode, I'm joined by Reid and a16z general partner Alex Rampell to talk about how AI is reshaping not just work but what it means to be human.
We discuss how far current AI models can go, what's holding them back, and why the next
breakthroughs will likely come from places Silicon Valley isn't even looking. We also talk about
friendship, meaning, and how to stay grounded in an era of exponential change when our tools
might soon think, reason, and even feel alongside us. Let's get into it.
Reid, welcome to the a16z podcast. It's great to be here. So, Reid, you're one of the most successful Web 2.0 investors of that era: Facebook, LinkedIn, obviously, which you co-created, Airbnb, many, many others. And you had several frameworks, one of which was the seven deadly sins, which we talk about often and love. As you're thinking about AI investing, what's a
framework worldview that you take to your AI investing? So obviously, we're all
looking through a glass darkly, looking through a fog with strobe lights, where, you know, it's hard to understand what's going on.
So we're all navigating this new universe.
So I don't know if I have as crisp a framework. But the seven deadly sins still work, because that's a question of what is
psychological infrastructure across all 8 billion plus human beings.
But I'd say there's a couple things.
So first is there is going to be a set of things that are the kind of the obvious line of sight: a bunch of stuff with chatbots,
a bunch of productivity, coding assistance, da-da-da-da-da.
And by the way, that's still worth investing in,
but obviously, obvious line of sight means it's obvious to everybody in the line of sight.
And so doing a differential investment is harder.
The second area is, well, what does this mean?
Because too often people say in an area of disruption
that everything changes as opposed to significant things change.
So, like you were mentioning Web 2.0 and LinkedIn. And obviously, part of this with a platform change, you go, okay, well, are there now new LinkedIns
that are possible because of AI or something like that?
And obviously, given my own heritage, I would love LinkedIn to be that.
But, you know, I'm always pro-innovation entrepreneurship, the best possible thing for humanity.
But what are the kind of more traditional kind of things that haven't changed?
Network effects, enterprise integration, other kinds of things that the new platform
upsets the apple cart, but you're still going to be putting that apple cart kind of back together in some way, and what is that?
And then the third, which is probably where I've been putting most of my time,
has been what I think of as Silicon Valley blind spots,
because Silicon Valley is one of the most amazing places in the world.
There's a network of intense co-opetition, learning, invention,
building new things, et cetera, which is just great.
But we also have our canons.
We have our kind of blind spots.
And a classic one for us tends to be, well, everything should be done in CS, everything should be done in software, everything should be done in bits.
And that's the most relevant thing because, by the way, it's a great area to invest.
But it was like, okay, what are the areas where the AI revolution will be magical but that fall within the Silicon Valley blind spots?
And that's probably where I've been putting the majority of my co-founding time, invention time, kind of investment time, et cetera,
Because I think usually a blind spot on something that's very, very big is precisely the kinds of things that you go, okay, you have a long runway to create something that could be like another one of the iconic companies.
Yeah.
Let's go deeper on that, because we were also talking just before this about how people focus so much on the productivity, sort of the workflow sides, but they're missing other elements.
So say more about other things that you find more interesting there.
So one of the things I kind of told my partners back at Greylock in 2015, so it's 10 years ago, was I said, look, there's going to be a bunch of different things on productivity around AI.
I'll help.
You have companies you want me to work with that you're doing.
Great.
That's awesome.
Enterprise productivity, et cetera, things that Greylock tends to specialize on.
But I said, actually, in fact, what I think is here, getting at the blind spots, is also going to be some things like, you know, as you guys both know, Manas AI, which is: how do we create a drug discovery factory that works at the speed of software? Right. Now, obviously, there's regulatory, obviously there's biological bits, obviously, da-da-da. And so it won't be purely at the speed of software. But how do we do this? And they said, oh, well, what do you know about biology? And the answer's zero. Well, maybe not quite zero: I've been on the board of Biohub for 10 years, I'm on the board of Arc, et cetera. Like, I've been thinking about the intersection of the world of atoms and the world of bits, and you have biological bits, which are kind of halfway between atoms and bits in various ways.
I've been thinking about this a lot and kind of what the things are, not so much with a specific
company focus, as much as a what are things that elevate human life kind of focus?
Part of the reason I'm at Biohub, part of the reason I'm at Arc.
But then I was like, well, wait a minute.
Actually, now with AI, and you have the acceleration.
Because, like, for example, this detour will be fun.
So roughly also around 10 years ago, I was asked to give a talk to the Stanford long-term planning commission. And what I told them was that they should basically divert and put all of their energy into AI tools for every single discipline. And this is a while before ChatGPT and all the rest.
And the metaphor I used was a search metaphor,
because think if you had a custom search productivity tool in every single discipline.
Now, back then, I could imagine it, I could build one for every discipline,
other than theoretical math or theoretical physics.
Today, you might even be able to do theoretical math
and theoretical physics.
Right, exactly.
And so do that, like transform knowledge generation,
knowledge communication, knowledge analysis.
Well, that kind of same thing, now thinking,
well, the biological system is still too complex to simulate.
We've got all these amazing things with LLMs.
But like the classic Silicon Valley blind spot is,
oh, we'll just put it all in simulation and drugs will fall out.
Right. That simulation is difficult. Now, part of the insight that you begin to see from, like, the work with AlphaFold and AlphaZero is... because, like, people just think, ah, physical materials are going to take quantum computing. Now, quantum computing could do really amazing things, but actually simply doing prediction and getting that prediction right... and by the way, it doesn't have to be right 100% of the time. It has to be right like 1% of the time, because you can validate whether the other 99% work, right? And then finding that one thing. And so literally,
it's not a needle in a haystack.
It's like a needle in a solar system.
Right.
But you can possibly do that.
And that's part of what led to,
okay, Silicon Valley will classically go,
we'll put it all in simulation and that will solve it.
Nope, that's not going to work.
Or, oh, no, we're going to have a super intelligent drug researcher
and that will be two years down the road. Look, maybe someday, not soon.
Right.
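(A minimal sketch of the predict-then-validate pattern Reid is describing. Everything here is an invented stand-in, not any real drug-discovery pipeline: a cheap, noisy predictor ranks a huge candidate space, and only the top slice goes to expensive validation.)

```python
import random

# Toy version of the pattern: the predictor doesn't need to be right
# every time; it only needs to enrich the shortlist enough that the
# expensive validation step finds the needle.

def actually_works(candidate: int) -> bool:
    """Hypothetical ground truth (stand-in for a slow wet-lab assay)."""
    return candidate % 9973 == 0

def predicted_score(candidate: int) -> float:
    """Hypothetical noisy model: a little signal, a lot of noise."""
    signal = 1.0 if actually_works(candidate) else 0.0
    return 0.3 * signal + 0.7 * random.random()

random.seed(0)
candidates = range(1_000_000)

# Rank the whole space cheaply, then validate only the top 1%.
shortlist = sorted(candidates, key=predicted_score, reverse=True)[:10_000]
hits = sum(actually_works(c) for c in shortlist)
print(f"ran {len(shortlist):,} validations instead of 1,000,000; hits: {hits}")
```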
So anyway, that was the kind of thing that was in other different areas.
Now, part of it's also kind of what a lot of people don't realize.
Actually, if I'm not going too long, I'll go to the other example that I gave because you'll love this.
This will echo some of our conversations from 10, 15 years ago.
So I am prepping for a debate on Sunday this week on whether or not AIs will replace all doctors in a small number of years.
Now, the pro case is very easy, which is we have massively increasing capabilities. If you look at ChatGPT today, you'd go... for example, advice to everyone who's listening to this: if you're not using ChatGPT or equivalent as a second opinion, you're out of your mind. You're ignorant. You get a serious result, check it with a second opinion. And by the way, if it diverges, then go get a third. And so the diagnostic capabilities: these are much better knowledge stores than any human being on the planet. So you go, well, if a doctor is just a knowledge store? Yeah, that's going away. However, the question is, I actually think there are things that really do make a doctor, and it's not like, oh, someone who holds your hand and says,
oh, it's okay, et cetera. I actually think there will be a position for a doctor 10 years from now,
20 years from now. It won't be as the knowledge store. It will be as an expert user of the knowledge store. But it's not going to be, oh, because I went to med school for 10 years and I
memorized things intensely. That's why I'm a doctor. That's all going away. Great. But there's a lot
of other parts to being a doctor. Now, so I went to ChatGPT Pro using deep research. I went to Claude Opus 4.5 deep research. I went to Gemini Ultra. I went to Copilot deep research. And all of these
things, I was doing everything I knew about prompting to give me the best possible arguments
for my position because I thought, well, I'm about to debate on AI.
Of course I should be using AI to debate about AI. The answers were B-minus or B, despite absolutely top-tier prompting. And it's not like... maybe there are better prompters in the world, but I've been doing this since I got access to GPT-4 six months before the public did.
Right.
So I've got some experience in the whole prompting thing.
It's not like I'm an amateur prompter.
And so I looked at this and I went,
oh, this is very interesting
in a telling of where current LLMs are limited
in their reasoning capabilities
because what it did is it basically did
10 to 15 minutes of 32-GPU compute clusters doing inference, bringing it all in. Amazing work: what an analyst would have produced in three days was produced in 10 minutes.
And of course, I set it up all in parallel
with different browser tabs
all going into the different systems
and then ran the comparisons across them, everything.
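(What Reid describes doing by hand, the same prompt fanned out across browser tabs and then compared, can be sketched as a small harness. The stdlib concurrency below is real; `ask_provider` and the provider labels are hypothetical stand-ins for each vendor's actual client.)

```python
from concurrent.futures import ThreadPoolExecutor

PROMPT = "Give me the best possible arguments that AI will NOT replace doctors."
PROVIDERS = ["chatgpt-deep-research", "claude-deep-research",
             "gemini", "copilot-deep-research"]  # hypothetical labels

def ask_provider(provider: str, prompt: str) -> str:
    # Hypothetical stub: each vendor's real API client would go here.
    return f"[{provider}] answer would appear here"

# Fan the same prompt out in parallel (the "different browser tabs" step),
# then collect the answers side by side for a cross-comparison.
with ThreadPoolExecutor(max_workers=len(PROVIDERS)) as pool:
    futures = {p: pool.submit(ask_provider, p, PROMPT) for p in PROVIDERS}
    answers = {p: f.result() for p, f in futures.items()}

for provider, answer in answers.items():
    print(f"--- {provider} ---\n{answer}\n")
```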
But its flaw was that it was giving me a consensus opinion
about how articles in good magazines, good things,
are arguing for that position today.
And all of that was weak
because it was kind of like, oh, you need to have humans
cross-check the diagnosis, right?
It was a common theme across this.
And, well, by the way, very clearly we know as technologists
that human cross-checking the diagnosis,
we're going to have AIs cross-checking the diagnosis.
We're going to have AIs cross-checking the AIs or cross-checking the diagnosis.
And sure, there'll be humans around here somewhere,
but that's not going to be the central place to say,
in 20 years, doctors are going to be cross-checking the diagnosis.
Because, by the way, what doctors should be learning very quickly is,
if you believe something different than the consensus opinion that an AI gives you,
you'd better have a very good reason
and you're going to go do some investigation.
It doesn't mean the AI is always right.
That's actually part of what, like, we're going to need in all of our professions, which
is more sideways thinking, more lateral thinking.
The, okay, this is good consensus opinion.
Now, what if it's not consensus opinion?
That's what doctors need to be doing.
That's what lawyers will need to be doing.
That's what it is.
And LLMs are still pretty structurally limited there.
That's funny.
My favorite saying is by Richard Feynman,
science is the belief in the ignorance of experts.
Yes.
And there are so many professions
where the credentialism is the expertness.
It's like if-this-then-that.
And it's like, I have MD, therefore I know.
I have JD, therefore I know.
And that's why coding is actually a little bit ahead of it, because it's like, I don't care where you got your degree. It's kind of ahead of the rest of society.
Now, it's funny, Milton Friedman one time got asked, because he was a famous libertarian: don't you think that brain surgeons should be credentialed?
And it's like, yeah, the market will figure that out.
Seems kind of crazy, right?
But that's how we now do coding when you're in the world of bits.
But it feels like a lot of the reasons why you have this not very advanced thinking is because so much of it is built upon layers of credentialism.
And that's a very good heuristic.
Historically, it has been.
If you have a doctor that graduated at the top of their class from Harvard Medical School, it's like probably a good doctor.
By the way, you critically wanted that three years ago.
Right.
Right.
It's like, no, no, I need someone who has the knowledge base.
You have it?
Great.
Right.
But now we have a knowledge base.
Yeah.
I totally agree.
That was the reason I was saying you would love this, because it echoes our earlier conversations.
I thought you were going to get into bits versus atoms, which is kind of interesting right now, where it's like all this high-value work, like a Goldman Sachs sell-side analyst, that's deep research, right? Whereas fold my laundry, that's $100,000 of CapEx. So it doesn't work as well as somebody that you could pay $10 an hour to. And it's like the atoms stuff is so hard to actually disrupt. And we're going to get there eventually, but that's where Silicon Valley certainly has a blind spot. But it's like CapEx versus OpEx, or bits versus atoms.
The atoms is another part. But that's also the reason why bio, because bio is the bitty atoms.
Yes, yes, yes, right.
And what's the best explanation for why it's so hard to figure out folding laundry, but so easy to figure out... well, it's actually not that hard to figure out. Or why it's taken so much longer and been so much more expensive, because it would have been hard to foresee that in advance.
Well, I remember I talked to Ilya about this a few years ago,
and it's like, why is it that if you read an Asimov novel
where it talked about, like, how, you know,
people will cook for you and fold your laundry?
Like, why have none of these things happened?
And it's like, well, you just never had a brain that was smart enough.
This was part of the problem, is that you could, I mean, yes, you have things like, you know, how do you actually
pick up this water bottle?
And it turns out your hands are very, very well, like, why are humans more advanced than every other species?
So there are two reasons.
Number one is we have opposable thumbs.
And then number two is we've come up with the language system that we could pass down from
generation to generation, which is writing.
Dolphins are very smart.
Like, there was actually a whole theory, which is it wasn't just brain size.
It was brain to body size.
So humans were the highest. Nope, not true. And now that we've actually measured every single animal, there are a lot of animals that have a higher brain-to-body-size ratio. Like, that ratio tilts toward an elephant or a dolphin or... I forget the numbers, but there are a bunch that are actually more advanced than humans,
but they don't have opposable thumbs. And because of that, they never developed writing,
so they can't actually iterate from generation to generation. And humans did. And then, of course,
like the human condition was like it was this, and then the Industrial Revolution,
then it went like that, and now it's continued like this.
But this is the reason why, in the last four or five years, one of the things I realized is, you know, because the classic classification of human beings is Homo sapiens... I actually think we're Homo techne, because it's that iteration through technology.
Yes, yes, exactly.
Whatever version: writing, typing, you know. But we iterate through technology. That's the actual thing that goes to future generations, builds on science, you know, all the rest of it.
And that's what I think is really key.
Yeah.
A couple other explanations could be that we have more training data on white-collar work than, sort of, you know, picking things up. Or some people make this evolutionary argument that we've been using our opposable thumbs for way longer than we've been, say, you know, reading. Well, yeah, it's the lizard
brain. Like, most of your brain is not the neocortex.
And, like, that's the, like, drawing and painting and everything else, which is actually very, very hard. You can't find a dolphin that can draw or paint. And that's probably because they don't have opposable thumbs, but it's also, like, maybe that part
of the brain hasn't developed, but you have, like, you have
billions of years of evolution for
these somewhat autonomous responses like fight or flight that's been around for a long, long time
well before drawing and painting. But I think the main issue is just like you have battery chemistry
problems. Like I can't, like it turns out like a lithium ion battery is pretty cool, but the
energy density of that is terrible relative to ATP with cells, right? Like you have all of these
reasons why robotics don't work, but first and foremost is the brain was never very good. So you had
robotics like Fanuc, which makes assembly-line robots. Those work really well, but it's like very
deterministic or highly deterministic. But once you go into like, you know, multiple degrees of
freedom, you have to get so many things to work. And the CapEx... it's like, I need $100,000 to have a robot fold my laundry. And we have so many extra people that will do that work. The economics never made sense. But this is why Japan is a leader in robotics: because they can't hire anybody.
So therefore, I might as well build... true story: I went bowling in Japan, and they had a robot, a vending-machine robot, that would give you your bowling shoes, and then it would clean the bowling shoes.
And it's like, you would never build that here.
Because you'd hire some guy from the local high school, and he'd go do that.
Yeah, and much cheaper and actually more effective.
But it's this CapEx line and the OpEx line: when they cross,
then it's like, ooh, I should build robots.
So that's the other thing that you probably need.
But if the cost goes down, then, of course, it tilts in favor of CapEx versus OpEx.
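(Alex's point about the CapEx line crossing the OpEx line is a one-line break-even. A toy calculation, with every number invented for illustration:)

```python
# Toy break-even between buying a robot (CapEx, amortized over its life)
# and paying a person by the hour (OpEx). Every number is invented.
robot_capex = 100_000           # purchase price, dollars
robot_lifetime_hours = 20_000   # useful life
robot_opex_per_hour = 2.0       # power + maintenance, dollars/hour
human_wage_per_hour = 10.0      # dollars/hour

robot_cost_per_hour = robot_capex / robot_lifetime_hours + robot_opex_per_hour
print(f"robot: ${robot_cost_per_hour:.2f}/hr vs human: ${human_wage_per_hour:.2f}/hr")

# The lines cross when amortized CapEx falls below the wage gap:
#   capex / lifetime < wage - opex  =>  capex < lifetime * (wage - opex)
breakeven_capex = robot_lifetime_hours * (human_wage_per_hour - robot_opex_per_hour)
print(f"the robot wins once its price drops below ${breakeven_capex:,.0f}")
```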
I think there's a couple things to go deeper on the robot side.
So one is the density, the bits to value, right?
So like in language, when we encapsulated all these things, even into like romance novels, there's a high bits to value.
Whereas when you're going out into the whole world, there's a lot of, like, how do we abstract from all those bits, and how do you abstract them?
There's another part of it, which is kind of common sense awareness.
Like, this is one of the things that, like, when I look at, you know, GPT-2, 3, 4, 5, it's a progression of savants, right? And the savants are amazing. But, like, when it makes mistakes... as a classic thing, Microsoft has had agents talking to each other running for years now, like, let's just go for a year and do that and see what happens. And so often they get into, like: oh, thank you. No, thank you. No, thank you. One month later: thank you. No, thank you. Which human beings are like, stop, right? And that's a simple way of putting the context-awareness thing: like, no, no, no, let's stay very context aware.
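(The failure mode Reid describes, two agents thanking each other for a month, is the kind of loop an agent harness has to detect from outside, since the models themselves don't. A minimal, hypothetical guard, not Microsoft's actual setup:)

```python
from collections import deque

def is_stuck(recent_turns, distinct_limit=2):
    """Flag a conversation whose recent turns are just a few repeated lines."""
    return len(recent_turns) == recent_turns.maxlen and \
        len(set(recent_turns)) <= distinct_limit

history = deque(maxlen=6)  # sliding window over the last six turns
transcript = ["thank you", "no, thank you", "thank you",
              "no, thank you", "thank you", "no, thank you"]
for turn in transcript:
    history.append(turn.strip().lower())
    if is_stuck(history):
        print("loop detected -- halting the agents")
        break
```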
And even as magical as the progression has been, like, much, much better data, much, much better reasoning, much, much better personalization, et cetera, et cetera, context awareness only improves as a proxy of that.
Yeah.
Yeah.
I want to go deeper on your question about doctors, Reid.
Because, Alex, we just released one of your talks around, you know, software eating labor.
And I'm curious what sort of frameworks you have for thinking about which spaces are going to have more of this co-pilot model versus which spaces it's going to be sort of replacing the work entirely.
I wish I could. I'm going to use an LLM to go predict the future, but I'm going to get a B-minus, apparently; maybe I'll answer when I get a B-plus.
I think a lot of it is, like, the natural... like, there's this skeuomorphic version, which is, okay, well, I trust the doctor. Everybody trusts the doctor. The heuristic is, where did you go to medical school? Apparently two-thirds of doctors now use OpenEvidence, which is like ChatGPT, but it ingested the New England Journal of Medicine and has, like, a license to that. So, Daniel Nadler, good guy. Kensho, right? So, yeah, so that seems like there's no reason not to do that.
Like, my seven deadly sins version, I'll simplify it, which is like everybody wants to be lazier and richer.
So if this is a way that I can, like, get more patients and do less work, of course people are going to use this.
There's no reason not to.
But does it replace that particular thing?
And actually, most of, like, the software-eats-labor thing... it doesn't actually eat labor. Right now the thing that's working the best is not, like, hey, I have a product where everybody's going to lose their job. Nobody's going to buy that product; it's very, very hard to get that distributed. As opposed to: I will give you this magic product that allows you to be lazier. Obviously it's not framed this way, like lazy and rich, that sounds kind of, uh, you know, not great. But I'm going to let you work fewer hours and make more money, and that's a very killer combo. And if you have a product like that, and it's delivered by somebody that already has that heuristic of expertise,
these are just going to go one after another
and get adopted, adopted, adopted.
And then eventually you're going to have cases
like the one that you mentioned
where if you don't use ChatGPT
when you get a medical diagnosis, you're insane.
But that has not fully diffused
across the population.
Well, it's barely diffused.
No, I know.
Yes, yes.
But you were saying not fully... I mean, part of the reason... Everyone, start doing it. Yes, 100%.
Well, it's because it's the fastest growing product of all time.
Yeah, it's barely, you know.
Well, that's why I'm convinced
that AI is massively underhyped.
Yeah.
Because in Silicon Valley, you might not make that claim. Maybe it's overhyped, maybe valuations, whatever. We all don't think it's overhyped.
But I think once I meet somebody in the real world and I show them this stuff, they have no idea.
And part of it is like they see the IBM Watson commercials and like, oh, that's AI.
No, that's not AI.
Right.
Or they see the fake AI.
They've seen chat GPT two years ago.
It didn't solve a problem.
And it's funny.
I made this blog post.
Back when you were my investor at TrialPay, I called it "never judge people on the present."
And this is a mistake.
It's a category error that a lot of big-company people make.
But I mean that almost metaphorically.
And the way that I wrote this blog post
was I found a video of Tiger Woods.
He was two and a half years old.
He hit a perfectly straight drive.
And he was on, you know, not the,
I think the Tonight Show or something.
And there are two ways of watching that video.
You could say, well, I'm 44.
I can hit a drive much further than that kid,
which is correct.
Or you could say, wow,
if that two and a half-year-old kid keeps that up,
he could be really, really good.
And most people judge things on the present.
Yes.
And that's why it's underhyped.
Because it's like they tried it at some point in time.
There's a distribution of when they tried it.
Like, probabilistically, it's in the past.
And they're like, oh, that didn't work for my use case.
It doesn't work.
And that's, that's bad.
But so I think it's going to diffuse largely around this, like, lazy, rich, like concept.
And that's where a lot of these things have taken off.
And I see it less at the very, very big companies because you have a principal agent problem at the very big company.
It's like, okay, my company made money or saved money.
I'm a director of XYZ.
Like, all I know is that I want to leave earlier and get promoted.
Yeah.
And how does that actually help me?
It helps the ethereal being of the corporation,
whereas at a smaller business or a sole proprietor
or an individual doctor, where I run a dermatology clinic
and somehow I can have five times as many patients
or I'm a plaintiff's attorney,
I can have five times as many settlements.
It's like, of course I'm going to use that
because I get to be lazier and richer.
Yeah, 100%.
That seems like a great model.
By the way, the other one, you're reminding me: Ethan Mollick has a quote here that I use often.
He's great.
Yes.
The worst AI you're ever going to use
is the AI you're using today.
Correct.
Because it's to remind you,
use it tomorrow.
Yeah.
And a lot of the skeptics
is exactly this.
It's like,
well,
I tried it two months ago
and it didn't solve this problem,
therefore it's bad.
It's the same error: judging it on the present. Like, you have to extrapolate.
Yes.
And you don't want to get like too extrapolatory.
I'm like, you know,
oh, LLMs have this.
Like you actually have,
I feel like the two types of people
that are under hyping AI
are people that know nothing
and people that know everything.
It's really interesting.
It's like the meme where it's like, you know, the idiot meme, right? It's like the people at this part of the distribution are correct. Normally the meme is the opposite: it's like, these people are smart even though they're dumb; these people are smart even though they're smart. But everybody here, like, this part of the curve is actually correct.
Because they're the ones that are using it to get richer and be lazier.
The other thing I also tell people is if you haven't found a use of AI that helps you on something serious today,
not just write a sonnet for your kid's birthday or, you know, I've got these ingredients in my fridge.
What should I make?
Do those too.
But if you haven't for something like work, for something that's serious about what you're doing, you're not trying hard enough.
Yeah, yeah.
It's not that it works for everything.
Like, for example, I still think if I put in,
like, how should Reid Hoffman make money investing in AI?
And I'll go try that again.
I suspect I will still get what I think is the Bozo
business professor answer versus the actual name of the game.
But everyone should be trying.
And I, you know, like, for example,
we put, when we get decks,
So we put them in and say, give me a due diligence plan, right?
If not everybody here doing that, that's a mistake.
Because five minutes, you get one and you go, oh, no, not two, not five.
Oh, but three is good.
And it would have taken me a day to getting to about three.
Yeah.
Yeah.
In terms of... let's go back to extrapolation. Obviously the last few years have had incredible growth. You were involved, of course, with OpenAI since the beginning. When we look to the next few years, there's this broader question as to whether scaling laws will hold, whether there are sort of limitations, or how far we can get with LLMs.
Do we need another breakthrough of a different kind?
What is your view on some of these questions?
So, one of the things, you know, is we all swim in this universe of extrapolating the future. That's one of the things that's great about Silicon Valley.
And so you get such things as, you know,
theories of singularity,
theories of superintelligence,
theory of exponential getting to superintelligence soon.
And what I find is usually the mistake in that is not the fact that it's extrapolating the future; that's smart, and people need to do that, and far too few people do. I think I remember liking your post and helping promote it, if I recall. But it's the notion of, well, what curve is that? Like, if it's a savant curve, that's different than, oh my gosh, it's an apotheosis and now it's God. You know, it's like, no, no, it'll be an even more amazing savant than we have. But by the way, if it's only savants, there's always room for us.
There's always room for the generalists and the cross-checkers and the context awareness
and all the rest of that. Now, maybe it'll cross over a threshold or not. Maybe it won't.
You know, like, I think there's a bunch of different questions there, but that extrapolation
too often goes, well, it's exponential. So in two and a half years, magic. And you're like,
well, look, it is magic, but it's not all magic, is the kind of way I'd put it. Now, so my own
personal belief is that... so the critics of LLMs make a mistake in that, and, you know, we can go through all the different critiques: oh, no knowledge representation, it screws up on, you know, prime numbers and, you know, blah, blah, blah, we've all heard it. How many R's in strawberry?
Yes, exactly, yeah, you know, and they go, wow, see, it's broken.
And you're like, you're missing the magic, right?
Like, yes, maybe there are some structural things that, over time, even in three to five years, will continue to be a difficult problem for LLMs, but AI is not just the one LLM to rule them all. It's a combination of models. We already have combinations of models. We use diffusion
models for various image and video tasks. Now, by the way, they wouldn't work without also
having the LLMs in order to have the ontology to say, create me and Erik Torenberg as a Star Trek
captain, you know, going out to, you know, explore the universe and meeting and making first
contact with the Vulcans and so forth, which now with our phone, we could do that, right?
And it would be there, courtesy of OpenAI, and, you know, Veo, because Google's model is also very good.
But it needs the LLMs for that. But the thing that people don't track is it's going to be LLMs and diffusion models and, I think, other things, with a fabric across them. Now, one of the interesting questions is: is the fabric fundamentally LLMs? Is that the fabric of the thing? I think that's a TBD. And the degree to which it gets to intelligence is an interesting question.
Now, one of the things I think is a, you know, like I talk to all the critics intensely,
not because I necessarily agree with the criticism, but I'm trying to get to the,
what's the kernel of insight?
Yeah.
And like one of the things that I loved about, you know, kind of a set of recent conversations
with Stuart Russell was saying, hey, if we could actually get the fabric of these models
to be more predictable, that would greatly allay the, you know,
the fears of what happens if something goes amok.
Well, okay, let's try to do that.
Now, I don't think the whole verification of outputs, like logical,
like we can't even do verification of coding, right?
Like verification, this strikes me as very hard.
Now, brilliant man, maybe we'll figure it out.
But the, but on the other hand, the, hey, this is a good goal.
Can we make that more programmable, reliable?
I think that is a good goal that very smart people should be working on. And, by the way, smart AIs too.
Well, that's on the math side. It's like, if you think about the foundation of the world... I mean, philosophy is the basis of everything. Actually, math came from philosophy. It's called the Cartesian plane after Descartes. You know, you're a philosopher, you know this, right?
So you have philosophy, math, physics. Like, why did Newton build calculus to understand the
real world? So math, physics gets you chemistry. Chemistry gets you biology, and then biology gets
you psychology. So that's kind of the stack. So if you solve math, that's actually quite
interesting because there's a professor at Rutgers, Kontorovich, who's written about this a lot.
And I find this part fascinating, just as a former mathematician, because there are some very,
very hard problems. There's a rumor that the Navier-Stokes equations are going to be solved by DeepMind, which would be huge. That's one of the Clay math problems. But, you know, the Riemann hypothesis... like, there's no eval, right? This is why, if you look at the progression of AI, there is the AIME, the American Invitational Mathematics Examination, where the answers are all just integers; it's like, zero to 999 is the answer. And then, of course, you can keep trying different things. Then you either get the right answer or you don't, and it's very, very easy to check that. Whereas once you get to proofs: very, very hard. Yes. And if you solve that, I mean, is that AGI? No, because the goalposts keep changing on AGI. Yes. But math is just so interesting.
AGI is the AI we haven't invented.
Exactly. Exactly. It's the corollary to it. It's like, if the worst AI you're ever going to use is the one you're using today, well, AGI is what you're going to have tomorrow. It's the same kind of thing. But math is a very, very interesting one as well. Because again, you have these things... it's not like solving high school math. This is, like, if you're able to actually logically construct a proof for something and then validate it. There's a whole programming language called Lean, which is for that. That stuff is also fascinating. So there are so many different vectors of attack, which is the other way of thinking about it.
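(Lean, which Alex mentions, makes "construct a proof and then validate it" concrete: the kernel machine-checks every step, so a generated proof is either accepted or rejected, with no partial credit. A tiny Lean 4 sketch:)

```lean
-- A small theorem stated and proved in Lean 4. The kernel checks every
-- inference step, so there is a hard eval: the proof compiles or it doesn't.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- Proofs can also be found by tactic automation, which is the interface
-- AI provers typically drive.
example (n : Nat) : n + 0 = n := by simp
```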
It's fascinating. So as you just mentioned, Alex, Reid, you're a philosophy major, but you're also very interested, deeply, in neuroscience. And some people say that, hey, we'll never create AI with its own consciousness because we don't even understand our own consciousness. We don't understand how our own brain works. And then there's a broader question: oh, will AI have its own goals, or will it have its own agency? What is sort of your view on some of these questions surrounding consciousness as it relates to AI?
Well, consciousness is its own tarball, which I will say a few things about.
I think agency and goals is almost certain.
There is a question.
I think this is one of the areas where we want to have some clarity and control.
That was a little bit like the kind of question of what kind of compute fabric holds it together
because you can't get complex problem solving without it being able to set its own intermediate subgoals and other kinds of things.
And so goal setting and behavior and inference from it, and that's where you get the classic, kind of like, well, you tell it to maximize paper clips,
and it tries to convert the entire planet into paper clips.
And there's one thing that's definitely old-computer thinking, which is no context awareness, something I even worry about with modern AI systems.
But on the other hand, it's like, look,
if you're actually creating an intelligence,
they don't go, oh, let me just go try to convert everything
into paper clips.
It's like it's actually, in fact, not that simple
in terms of how it plays.
Now, consciousness is an interesting question
because you've got some very smart people,
Roger Penrose, who I actually interviewed way back when
on Emperor's New Mind, speaking of mathematicians,
and, you know, who are like, look, actually, in fact,
there's some thing about our form of intelligence,
our form of computational intelligence that's quantum-based,
that has to do with how our physics work,
that has to do with things like microtubules and so forth.
And by the way, it's not impossible.
Like, it's a coherent theory from a very smart mathematician, like one of the world's smartest, right? It's kind of in the category of: there are other people as smart, but no one's smarter, right, in that kind of vector.
And so, so that's possible.
I don't think you need consciousness for goal setting or reasoning.
I'm not even sure you need consciousness for certain forms of self-awareness.
There may be some forms of self-awareness
that consciousness is necessary for.
It's a tricky thing.
Philosophers have been trying to address this
not very well
for as long as we've got records of philosophy.
And philosophers agree. The philosophers wouldn't think I was throwing them under the bus with this.
They're like, yeah, this is a hard problem.
Because it ties to agency and free will
and a bunch of other things.
And I think that the right thing to do is keep an open mind.
Now, part of keeping an open mind,
I think Mustafa Suleiman wrote a very good piece
in the last month or two
on, like, seemingly conscious AI, which is: we make too many mistakes off of the Turing test, a piece of brilliance, which is, well, it talks to us, so therefore it's fully intelligent
and all the rest. And so similarly, you had that kind of, you know, kind of nutty event from that
Google engineer who said, I asked this earlier model, was it conscious? And it said, yes,
so therefore it is.
QED. Yes, QED. Yes, QED. No, no, no. Like, you have to be not misled by that kind of thing.
And, like, for example, you know, the kind of thing that, you know, what I actually think most people obsess about the wrong things when it comes to AI.
They're obsessed about the climate change stuff because actually, in fact, if you apply intelligence at the scale and availability of electricity, you're going to help climate change.
You're going to solve grids and appliances and a bunch of other stuff.
It's just like, no, this will be net super positive.
And by the way, you already see elements of it.
Google applied its algorithms to its own data centers, which are some of the best-tuned grid systems in the world: 40% energy savings.
I mean, just, you know, just that, da, da, da, da, and just applying it.
So that's the mistake.
But one of the areas, I think, is this question around, like, what is the way that we want children growing up with AIs?
What is their epistemology?
What is their learning curves?
You know, what are the things that kind of play to this?
Because that kind of question is something that we want to be very intentional about in terms of how we're doing it.
And I think that's, like, if you want to go ask a good question that you should be trying to get good answers, that you could do something again in contributing good answers to, that's a good one.
Yeah.
Well, the most cogent argument that I've heard against free will is just that we are biochemical machines.
So if you want to test somebody's free will, get them very hungry, very angry, like all of these things where it's just there's a hormone.
It's like norepinephrine.
It's like that makes you act a particular way.
It's like an override.
So you have this, like, free-will thing, but then you just insert a certain chemical, and then, like, boom, it changes.
Are you saying you're not a Cartesian?
You don't have a little pineal gland that connects the two substances?
No, I don't know.
So, but it's true.
I mean, just, like, hanger is... yeah, I'm hangry.
Like, that's a thing.
Yes.
And, you know, what is the, like, do you actually want, if you're developing superintelligence,
do you want to have this, like, kind of silly override?
I mean, the reason why people go to jail sometimes that are perfectly normal is they get very
angry.
They do things that are kind of, like, out of character.
But it's actually not out of character if you think about this free will.
override of just like chemicals going through your bloodstream, which is kind of crazy to think about.
Look, since we're on a geeky, nerdy podcast, I'm going to say two geeky, nerdy things. One:
The classic one is people say, yes, we are biochemical machines, but let's not be overly simplistic on what a biochemical machine is.
That's like the Penrose, quantum computing, et cetera.
And you get to this weird stuff in quantum, which is, well, it's in a probabilistic superposition until it's measured.
Why is there magic in measurement?
And is that magic in measurement, something that's conscious, you know, blah, blah, blah.
So there's a bunch of stuff there.
The other thing that I think is interesting that we're seeing as a resurgence in philosophy a little bit is idealism.
Like, we would have thought, as physical materialists, that we'd go: no, no, the idealists were disproven, they're gone.
But actually beginning to say, no, actually, in fact, what exists is thinking and that all of the physical things around us come from that thinking.
And obviously we see versions of this because, you know, I find myself entertained frequently here in Silicon Valley by people saying we're living in a simulation. I know it. You know it. And you're like, well, your simulation theory is very much like Christian intelligent design theory. It's the I have things that I can't explain. So therefore, creator. No, therefore, simulation. No, therefore creator of simulation. You're like, no, no, no. But I, you know, so clearly I'm not an idealist. But that's why I see some resurgence of.
idealism happening.
I suspect we'll solve for various definitions of AGI before we solve the hard problem of consciousness.
Yes.
I want to return to LinkedIn, how we began the conversation, because we were lucky to... or I was lucky to work many years with you, and we would get pitches every week about a LinkedIn disruptor for the last 20 years, right? And nothing's come even close.
And so it's fascinating.
I'm curious why people sort of underrated how hard it was.
And people have this about Twitter, too,
or other things that kind of look simple, perhaps,
but are actually very, very difficult to unseat
and have a lot of staying power.
And it's interesting.
You know, OpenAI, they said they're coming out with a jobs service to, quote,
use AI to help find the perfect matches
between what companies need and what workers can offer.
I'm curious how you think about sort of LinkedIn's durability.
So, look, I obviously think LinkedIn is durable,
but first and foremost, I kind of look at this
as humanity, society,
industry. So first and foremost is what are the things that are good for humanity, then what's
good for society, then what's good for industry. And by the way, we do industry to be good for
society and humanity. It's not, it's not oppositional. It's just a, you know, how you're making
these decisions and what you're thinking about. So I would be delighted if there were new amazing
things that helped people, you know, kind of make productive work, find productive work.
And we need them; we're going to have all this job transition coming from technological disruption with AI, like, it would be awesome. It, of course, would be extra awesome if it was LinkedIn bringing it, just given my own personal craft of my hands and pride at what we built
and all the rest. Now, the thing with LinkedIn, and, you know, Alex was with me on a lot of
this journey, you know, as I sought his advice on various things. The, the, LinkedIn was one of
those things where it's where the turtle eventually actually, in fact, like, grows into something huge.
Because for many, many years, the general scuttlebutt in Silicon Valley was LinkedIn was the dull, boring, useless thing, et cetera, and it was going to be Friendster. Probably most of the people listening to this don't know what Friendster is. Then MySpace, maybe a few people have heard of that, right? You know, and then, of course, we got, you know, Facebook and Meta and, you know, TikTok and all the rest.
And part of the thing for LinkedIn is it's built a network that's hard to build, right?
Because it doesn't have the same sizzle and pizzazz that photo sharing has.
It doesn't have the same sizzle and pizzazz that... you know, like, one of the things... you were referencing the seven deadly sins comment. And back when I started doing that, 2002, yes, I left my walker at the door. The thing that I used to say was Twitter was identity.
I actually mistook it.
It's wrath, right?
And so it doesn't have the wrath, you know, kind of component of it.
And so the thing that you said with LinkedIn: LinkedIn's greed, great, you know, in the seven deadly sins, because that's, you know, a motivation that's very common across a lot of human beings.
Rich and lazy.
Yes, exactly.
And so, you know, you're putting it in the punchy way, but simply being productive, more value creation, and accruing some of that value to yourself.
value to yourself. And so I think the reason why it's been difficult to create a disruptor to
LinkedIn is it's a very hard network to build. It's actually not easy. And by staying really true
to it, you end up getting a lot of people going, well, this is where I am for that. And now I have a
network of people that are and we are here together, collaborating and doing stuff together.
And that's the thing that a new thing would have to be.
And, you know, when I saw GPT-4 and knew that Microsoft had access to this,
I called the LinkedIn people and said, you guys have got to get in the room to see this,
because you need to start thinking about what are the ways we have.
help people more with that because you start with this is actually one of the things
that I think people don't realize but Silicon Valley because, you know, the general discussion
is, oh, you're trying to make all this money through equity and all this revenue. Of course,
you know, business people are trying to do that. But they don't realize as you start with,
what's the amazing thing that you can suddenly create? And part of it is like lots of these
companies, like it started with and you go, what's your business model? And you go, I don't know.
Like, yeah, we're going to try to work it out. But I can create something amazing here.
And that's actually one of the fundamental, like, pieces of, you know, call it the religion of Silicon Valley and the knowledge of Silicon Valley, that I so much, you know, love and admire and embody.
That's actually a question that I have.
So I'll say one thing is a huge compliment to LinkedIn.
It's anti-fragile.
Yes.
And that, like, Facebook... oh, nobody goes there anymore. It's like the Yogi Berra line: it's too crowded, and nobody goes there anymore.
It's, oh, there are too many parents there.
And there's always been a new one.
Like, how did Snap start?
Like, all these other networks started because people didn't want to hang out with their boomer parents. My kid won't let me follow him on Instagram, right? It's like he doesn't want to use
Facebook. So LinkedIn has survived through all of that. But you referenced something that I think
is a very interesting point, which is back in like Web 2, it was like get lots of traffic.
Yes. Get amazing retention, you know, smile curve. And then you will figure out monetization.
Yes. And, like, that isn't happening right now. Yeah. It's not like, get a lot of users first... yes, it happened with ChatGPT. It was like, it's $20. Yes. Right. Like, the monetization was kind of built in: a very, very clear subscription, versus, like, become giant, build a giant network.
Like, do you think there will be new ones of those with AI?
Yes, and there will be new kind of freemium.
It's part of our tool chest.
Now, part of the reason why it's more tricky, especially when you're doing OpenAI, is because, like, the COGS are a little different.
Yes, right?
For now.
Yes.
No, no, but, like... so you just can't. This is one of the reasons why at PayPal, as you know, because you were close to us there, we had to change to a paid model. Because we're like, oh, look, we have exponentiating costs. Which means an exponentiating cost curve, which means despite having raised hundreds of millions of dollars, we could literally point to the hour that we'd go out of business, right? Because, you know, you can't have an exponentiating cost curve. So I think that's one of the reasons why some of it has been different in AI: you can't have an exponential cost curve without at least a following revenue curve. Right. But it's almost no fun. It's like Pinterest: how are they going to make money, now a big public company? There were a lot of these just during that era. And now it's like...
They're burning lots of money.
They're raising lots of money,
but the subscription revenue is baked in from day zero.
And that's the fundamental.
But they have to because of the cost.
They have to, exactly.
Yeah.
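(Reid's PayPal line, being able to point to the hour you'd go out of business, is computable once costs exponentiate. A toy runway model, all numbers invented for illustration:)

```python
# Toy runway model: daily costs grow exponentially (the PayPal problem)
# while revenue lags. Finds the first hour cumulative burn exceeds cash.
def hours_until_broke(cash, daily_cost, daily_cost_growth, daily_revenue):
    hour = 0
    while cash > 0:
        cash -= (daily_cost - daily_revenue) / 24.0          # net burn this hour
        daily_cost *= (1 + daily_cost_growth) ** (1 / 24.0)  # hourly compounding
        hour += 1
        if hour > 24 * 3650:  # bail out after ten simulated years
            return None
    return hour

# Hypothetical: $200M raised, $1M/day in costs growing 5% per day, no revenue.
h = hours_until_broke(cash=200e6, daily_cost=1e6, daily_cost_growth=0.05,
                      daily_revenue=0.0)
print(f"out of business in {h:,} hours (about {h / 24:.0f} days)")
```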
So I'm waiting for, like, one of these, like,
you know, net new companies
that appeals to probably one of the seven deadly sins.
That is the new counterpart.
Yeah.
Well, I'd be happy to work on with you.
Yes.
Well, it is fascinating.
Some people,
many people have tried sort of different angles on LinkedIn.
One that I was curious about a few years ago
was sort of this idea of... what's on LinkedIn is resumes, but not necessarily references. But in the same way that resumes are viral, references are, like, anti-viral or antimemetic, and people don't want them
on the Internet. If there was a data set that people wanted on the Internet, LinkedIn would have done it
to some degree. But, yeah, I think most people who try these attempts don't kind of appreciate sort of the subtleties of it... And actually, I mean, we do have the equivalent of book-blurb references
on it. Yes, endorsements. But you don't have a negative reference. Well, but by the way,
part of the reason why no negative references is you have complexity in social relationships. That's the
negative virality point that you were just making.
And then you also have complexity on, like, you know, kind of not just legal liability,
but social relationships and a bunch of other stuff.
Now, LinkedIn is still the best way to find a negative reference.
I mean, that's actually one of the things that I use LinkedIn to figure out who might know a person.
And I have a standard email.
You've probably gotten a bunch of these from me, where I email people saying, could you rate this person for me from one to 10, or reply "call me."
It's a negative what?
Yes.
Yes.
Right.
And when you get to call me, you're like, okay.
Don't even need to take the call.
Yeah.
Yeah.
I understand.
Right.
And by the way, sometimes, when a person writes back 10, you're like, really? Like, best person you know, right? But what you're looking for is, like, a set of eights and nines. And if you're getting eights and nines, you might still call and get some information. But you're like, okay, I got quick referential information. Whereas, by the way, more often than not, you know, when you're checking someone, you know, you get a couple call-mes.
Yeah.
Because it's just that quick. Because email, one-sentence thing, get back "call me." You're like, okay, I understand.
Yeah.
Do we have 10 minutes left? Just a logistics check. A couple of last things we'll get into. Is there anything you wanted to make sure we cover?
But we can do this again.
This is always fun.
Yeah, that's great.
I'm curious, Reid, as you've sort of continued to up-level in your career and have more opportunities, and they seem to compound, especially, you know, post-selling LinkedIn: how have you decided
where is the highest leverage use for
your time? Where can you have the
biggest impact? What's your mental framework? So, I mean, one of the things that I'm sure I speak for all three of us on is that it's an amazing time to be alive. I mean, this AI and the transformation of what it means for evolving Homo techne, and what is possible in life and in society and work and all the rest, is just amazing. And so I stay as involved with that as I possibly can.
Like, it would have to be something that's so important that I would stop doing that.
Now, within that, you know, part of that was, you know, co-founding Manas AI with Siddhartha Mukherjee, who's CEO; author of The Emperor of All Maladies, inventor of some T-cell therapies. So it was like, for example, getting an instruction from him on the FDA process, you know... that's the kind of thing that makes us all run screaming for the hills, right, as an instance.
And so, you know, that kind of stuff.
But also, you know, like one of the things I think is really important is as technology drives more and more of everything that's going on in society, how do we make government more intelligent on technology?
And so, you know, with every kind of, you know, well-ordered Western democracy... I've been doing this for at least 20 to 25 years.
If a minister, you know, or kind of senior person from a democracy comes and asks for advice, I give it to them.
So, you know, just last week I was in France talking with Macron because he's trying to figure out, like, how do I help French industry, French society, French people, what are the things I need to be doing?
You know, if all the frontier models are going to be built in the U.S. and maybe China, what does that mean for how I help, you know, our people and so forth?
and he's doing the exact right thing,
which is I understand that I have a potential challenge.
What do I do to help my people?
How do I reach out?
How do you talk?
Sure, they've got Mistral. They've got some other things.
But like, how do I maximally help what I'm doing?
And so putting a bunch of time into that as well.
Yeah.
I remember seeing your calendar and it was what seemed like seven days a week,
meetings absolutely stacked.
And one of the ways in which...
I've gone to six and a half days.
Okay.
I'm glad you've calmed down.
Yeah.
One of the ways in which you were able to do that,
One, it's important problems, but two, you work on projects with friends, sometimes over decades.
And maybe we'll close here.
You've thought a lot about friendship.
You've written about it.
You've spoken about it.
I'm curious what you've found most remarkable or most surprising about friendship or where you think more people should appreciate it,
especially as we enter this AI era, where people are sort of questioning, you know, for the next generation, what's their relationship to friends going to be?
I actually am going to write a bunch about this specifically because AI is now bringing some very important things
that people need to understand, which is friendship is a joint relationship.
It's not a, oh, you're just loyal to me or you just do things for me.
Oh, this person does things for me.
Well, there's a lot of people who do things for you.
Your bus driver does things for you, you know, like, but that doesn't mean that you're friends.
Friends... like, for example, a classic way of putting it is like, oh, I had a really bad day, and I show up at my friend Alex's and I want to talk to him.
And then Alex's like, oh, my God, here's my day.
I'm like, oh, your day is much worse.
We're going to talk about your day versus my day.
You know, that's the kind of thing that happens, because what I think fundamentally happens with friends is two people agree to
help each other become the best possible versions of themselves. And by the way, sometimes that
leads to friendship conversations that are tough love. They're like, yeah, you're fucking this up
and I need to talk to you about it. Right. It's not, you know, the whole sycophancy-in-AI thing. It's not that. It's the how-do-I-help-you. But as part of that... I gave the commencement speech at Vanderbilt a few years back, and it was on friendship, and part of it was to say: look, part of friends is not just, does Alex help me, but Alex allows me to help him, right? And as part of that, that's part of how I become a deeper friend. I learn things, right? It's not just helping Alex. That joint relationship's really important. And you're going to see all kinds of nutty people saying, oh, I have your AI friend right here. And it's like, no, you don't. It's not
a bidirectional relationship, maybe an awesome companion, like just spectacular, but it's not
a friend. And you need to understand, like, part of a friend is part of when we begin to realize
that life's not just about us, that we, it's a team sport, we go into it together, that
sometimes, you know, friendship conversations are wonderful and difficult, you know, and that
kind of thing. And I think that's what's really important. And now that, you know, we've got this
blurriness that AI is created, it's like, shoot, I have to go write some of this very soon.
So that people understand how to navigate it
and why they should not think about AI
anytime soon as threats.
Well, one thing I've always appreciated about you as well
is you're able to be friends with people with whom you have disagreements, or people who, you know, you are not close to for a few years, but you can reconnect, and sort of, yeah,
that ability is...
Yeah, it's about us making each other
the better versions of ourselves.
And sometimes that, you know,
sometimes those go through rough patches.
Yeah, I think it's a great place to close.
Thank you so much for coming on the podcast.
My pleasure, and I hope we do this again.
Yeah.
Thanks for listening to this episode of the A16Z podcast.
If you like this episode, be sure to like, comment, subscribe, leave us a rating or review, and share it with your friends and family.
For more episodes, go to YouTube, Apple Podcast, and Spotify.
Follow us on X at A16Z and subscribe to our Substack at A16Z.com.
Thanks again for listening, and I'll see you in the next episode.
As a reminder, the content here is for informational purposes only, should not be taken as legal business, tax, or investment advice, or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any A16Z fund.
Please note that A16Z and its affiliates may also maintain investments in the companies discussed in this podcast.
For more details, including a link to our investments, please see a16z.com/disclosures.
Thank you.
