a16z Podcast - Sam Altman on Sora, Energy, and Building an AI Empire
Episode Date: October 8, 2025

Sam Altman has led OpenAI from its founding as a research nonprofit in 2015 to becoming the most valuable startup in the world ten years later. In this episode, a16z Cofounder Ben Horowitz and General Partner Erik Torenberg sit down with Sam to discuss the core thesis behind OpenAI's disparate bets, why they released Sora, how they use models internally, the best AI evals, and where we're going from here.

Resources:
Follow Sam on X: https://x.com/sama
Follow OpenAI on X: https://x.com/openai
Learn more about OpenAI: https://openai.com/
Try Sora: https://sora.com/
Follow Ben on X: https://x.com/bhorowitz

Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Transcript
Sort of thought we had like stumbled on this one giant secret
that we had these scaling laws for language models
and that felt like such an incredible triumph.
I was like, we're probably never going to get that lucky again.
And deep learning has been this miracle that keeps on giving.
And we have kept finding breakthrough after breakthrough.
Again, when we got the reasoning model breakthrough,
I also thought that was like we're never going to get another one like that.
It just seems so improbable that this one technology works so well.
But maybe this is always what it feels like when you discover
one of the big scientific breakthroughs: if it's really big, it's pretty
fundamental and it just keeps working.
OpenAI isn't just building an app.
It's building the biggest data center in human history.
Yesterday, I sat down with Ben Horowitz and Sam Altman, CEO of OpenAI.
We talk about OpenAI's vision to become the people's personal AI, the massive infrastructure
behind it, and how the company's research is pushing toward AGI, including AI that can do
real science. We also talk about how his views have changed on open source, regulation,
and why AI and energy are now deeply linked. Let's get to it.
Sam, welcome to the a16z podcast. Thanks for having me. You've described OpenAI in another interview
as a combination of four companies: a consumer technology business, a megascale
infrastructure operation, a research lab, and all the new stuff, including planned hardware
devices. From hardware to app integrations, a jobs marketplace to commerce, how do all these
bets add up? What's OpenAI's vision? Yeah, I mean, maybe I'd say it's kind of just three. Maybe it's
four, for kind of our own version of what traditionally would have been the research lab at this scale.
But three core ones: we want to be people's personal AI subscription. I think most people will have one,
some people will have several, and you'll use it in some first-party consumer stuff with us, but you'll
also log in to a bunch of other services, and you'll just use it from dedicated devices. At some point
you'll have this AI that gets to know you and be really useful to you, and that's what we want to do.
it turns out that to support that
we also have to build out
this massive amount of infrastructure
but the goal there
the mission is really like
build this AGI
and make it very useful to people
And does the infrastructure,
do you think it will end up,
yeah, it's necessary for the main goal,
but will it also separately
end up being another business,
or is it just really going to be
in service to the personal AI,
or unknown?
You mean, like, would we sell it
to other companies
as raw infrastructure?
Yeah, would you sell to other companies?
You know, it's such a massive thing,
would it do something else?
It feels to me like there will emerge
some other thing to do like that.
But I don't know.
We don't have a current plan for it.
It's currently just meant to like support
the service we want to deliver and the research.
Yeah, no, that makes sense.
Yeah.
The scale is sort of like terrifying enough
that you've got to be open to doing something else.
Yeah, if you're building the biggest data center
in the history of humankind.
The biggest infrastructure project,
yeah.
There was a great interview you did many years ago
on StrictlyVC,
early OpenAI, well before ChatGPT,
and they're asking, what's the business model? And you said,
oh, we'll ask the AI, it'll figure it out for us.
Everybody laughs, but there have been
multiple times, and there was just another one
recently, where we have asked a then-
current model what we should do, and it has
had an insightful answer we missed. So
I think when we say stuff like that, people
don't take us seriously or literally,
but maybe the answer is you should
take us both seriously and literally. Yeah.
Yeah, well, no, as somebody who runs an organization,
I ask the AI a lot of questions
about what I should do.
It comes up with some pretty interesting answers.
Sometimes.
Sometimes, now.
You have to give it enough context.
What is the thesis that connects these bets beyond more distribution, more compute?
I mean, the research enables us to make the great products and the infrastructure enables us to do the research.
So it is kind of like a vertical stack of things.
Like you can use ChatGPT or some other service to get advice about what you should do running an organization.
But for that to work, it requires great research and requires a lot of infrastructure.
So it is kind of just this one thing.
And do you think that there will be a point
where that becomes completely horizontal
or will it stay vertically integrated
for the foreseeable future?
I was always against vertical integration
and I now think I was just wrong about that.
Yeah, interesting.
Because you'd like to think that the economy is efficient
in the theory that companies can do one thing
and then that's supposed to work.
I'd like to think that, yeah.
And in our case, at least, it hasn't,
really. I mean, it has in some ways, for sure. Like, you know,
Nvidia makes an amazing chip or whatever that a lot of people can use. But
the story of OpenAI has certainly been towards we have to do more things than we
thought to be able to deliver on the mission. Right. Although the history of the
computing industry has kind of been a story of kind of a back and forth in that
there was the Wang word processor and then the personal computer and the
Blackberry before the smartphone. So there has been this kind of vertical integration
and then not, but then the iPhone is also vertically integrated.
The iPhone, I think, is the most incredible product
the tech industry has ever produced,
and it is extraordinarily vertically integrated.
Amazingly so, yeah.
Interesting.
Which bets would you say are enablers of AGI versus which are sort of hedges against uncertainty?
You could say that on the surface, Sora, for example,
does not look like it's AGI relevant,
but I would bet that if we can build really great world models,
that'll be much more important to AGI than
people think. There were a lot of people who thought ChatGPT was not a very
AGI relevant thing. It has been very helpful to us, not only in
building better models and understanding how society wants to
use this, but also in like bringing society along to actually figure out,
man, we got to contend with this thing now. For a long time before ChatGPT,
we would talk about AGI and people were like, this is not happening or we don't care.
And then all of a sudden they really cared. And I think that research benefits
aside, I'm a big believer that society and technology have to co-evolve.
can't just drop the thing at the end.
It doesn't work that way.
It is a sort of ongoing back and forth.
Yeah.
Say more about how Sora fits into your strategy,
because there's some hullabaloo on X around,
hey, why devote precious GPUs to Sora?
But is it a short-term, long-term trade-off,
or is Sora AGI-relevant?
And then the new one had like a very interesting twist
with the social networking.
Be very interested in kind of how you're thinking about that.
And did Meta call you up and get mad?
Or how did you expect they'd react?
I think if one company of the two
of us feels like the other one has gone after them, it wouldn't, they shouldn't
be the ones calling. Well, there is a history. But first of all, I think it's cool to make great
products, and people love the new Sora. And I also think it is important to give society a taste
of what's coming on this co-evolution point. So like very soon, the world is going to have to
contend with incredible video models that can deep fake anyone or kind of show anything you want. And
That will mostly be great.
There will be some adjustment that society has to go through.
And just like with ChatGPT, we were like,
the world kind of needs to understand where this is.
I think it's very important
the world understands where video is going very quickly,
because video has much more like emotional resonance than text.
And very soon we're going to be in a world where like this is going to be everywhere.
So I think there's something there.
As I mentioned, I think this will help our research program; it is on the AGI path.
But it can't all be about just making people like ruthlessly efficient
and the AI like solving all our problems,
there's got to be like some fun and joy
and delight along the way.
But we won't throw like tons of compute at it,
or not more than a fraction of our compute.
It's tons in the absolute sense,
but not in the relative sense.
I want to talk about the future of AI human interfaces
because back in August you said
the models have already saturated the chat use case.
So what do future AI-human interfaces look like,
both in terms of hardware and software?
Is the vision for kind of a WeChat-
like super app?
So solving the chat thing in a very narrow
sense, which is if you're trying to, like, have the most basic kind of chat style conversation,
it's very good. But what a chat interface can do for you, it's like nowhere near saturated,
because you could ask a chat interface like, please cure cancer. A model certainly can't do that
yet. So I think the text interface style can go very far, even if for the chit chat use case,
the models are already very good. But of course, there's better interfaces to have. Actually,
it's another thing that I think is cool about SORA. You can imagine a world where the interface is just
constantly real-time rendered video
and what that would enable
and that's pretty cool.
You can imagine new kinds of hardware devices
that are sort of always ambiently aware
of what's going on
and rather than your phone
blasting you with text message notifications
whenever it wants,
like it really understands your context
and when to show you what
and there's a long way to go
and all that stuff.
Within the next couple of years,
what will models be able to do
that they're not able to do today?
Will it be sort of white-collar replacement
at a much deeper level,
AI scientists,
humanoids?
I mean a lot of things
but you touched on
the one that I am most excited
about which is the AI scientist
this is crazy
that we're sitting here
seriously talking about this
I know there's like a quibble
on what the Turing test
literally is but
the popular conception
of the Turing test
sort of went whooshing by
yeah that was fast
you know it was just like
we talked about it
as this most important test
of AI for a long time
it seemed impossibly far away
then all of a sudden it was passed
the world freaked out
for like a week
two weeks. And then it's like, all right, I guess computers can do that now. And everything just
went on. And I think that's happening again with science. My own personal like equivalent of the
Turing test has always been when AI can do science. Like that is, like, that is a real change
to the world. And for the first time with GPT-5, we are seeing these little examples where it's
happening. You see these things on Twitter. It did this, it made this novel math discovery and did
this small thing in my physics research, my biology research. And everything we see is that that's
going to go much further. So in two years, I think the models will be doing bigger chunks of
science and making important discoveries. And that is a crazy thing. Like, that will have a significant
impact on the world. I am a believer that to a first order, scientific progress is what makes
the world better over time. And if we're about to have a lot more of that, that's a big change.
It's interesting because that's a positive change that people don't talk about. It's gotten so
much into the realm of the negative changes if AI gets extremely smart. But curing a disease,
we could use a lot more science.
That's a really good part.
I think Alan Turing said this.
Somebody asked him, they said,
well, you really think the computer is going to be smarter than the brilliant minds?
He said, it doesn't have to be smarter than a brilliant mind,
just smarter than a mediocre mind like the president of AT&T.
And we should use more of that too, probably.
We just saw Periodic launch last week, OpenAI alums.
And to that point, it's amazing to see both the innovation that you guys are doing,
but also the teams that come out of OpenAI, it just feels like they
are creating tremendously capable companies.
We certainly hope so.
Yeah.
I want to ask you about just broader reflections
in terms of what about AI diffusion
or development in 2025 has surprised you,
or what has sort of updated your worldview
since ChatGPT came out.
A lot of things again,
but maybe the most interesting one
is how much new stuff we found.
Sort of thought we had like stumbled on this one giant secret
that we had these scaling laws for language models
and that felt like such an incredible
triumph that I was like,
we're probably never going to get that lucky again.
And deep learning has been this miracle
that keeps on giving, and we have kept finding
like breakthrough after breakthrough.
Again, when we got the reasoning model breakthrough,
I also thought that was like,
we're never going to get another one like that.
It just seems so improbable that this one technology works so well.
But maybe this is always what it feels like
when you discover one of the big scientific breakthroughs.
If it's like really big, it's pretty fundamental and it just, it keeps working.
But the amount of progress, if you went back and used GPT-3.5 from the ChatGPT launch,
you'd be like, I cannot believe anyone used this thing.
And now we're in this world where the capability overhang is so immense.
Like most of the world still just thinks about what ChatGPT can do.
And then you have some nerds in Silicon Valley that are using Codex and they're like, wow,
those people have no idea what's going on.
And then you have a few scientists who say,
those people using Codex have no idea what's going on.
But the overhang of capability is so big now,
and we've just come so far on what the models can do.
And in terms of further development,
how far can we get with LLMs?
At what point do we need
new architectures?
how do you think about what breakthroughs are needed?
I think far enough that we can make something
that we'll figure out the next breakthrough with the current technology.
Like, it's a very self-referential answer,
but if LLM-based stuff can get far enough
that it can do, like, better research than all of OpenAI put together,
maybe that's like good enough.
Yeah, that would be a big breakthrough.
A very big breakthrough.
So on the more mundane, one of the things that people have kind of started to complain about,
I think South Park did a whole episode on it, is kind of the obsequiousness of kind of AI and ChatGPT in particular.
And how hard a problem is that to deal with?
Is it not that hard or is it like kind of a fundamentally hard problem?
Oh, it's not at all hard to deal with.
A lot of users really want it.
Yeah.
Like, if you go look at what people say about ChatGPT online,
there's a lot of people who, like, really want that back.
Yeah.
And it's, so it's not, technically, it's not hard to deal with at all.
One thing, and this is not surprising in any way,
is the incredibly wide distribution of what users want,
like how they'd like a chatbot to behave in big and small ways.
Does that, do you end up having to configure the personality
then, do you think?
Is that going to be the answer?
I think so. I mean, ideally, you just talk to ChatGPT for a little while, and it kind of
interviews you and also sort of sees what you like and don't like.
And ChatGPT just figures it out. But in the short term, you'll probably just pick one.
Got it. Yeah, no, that makes sense. Very interesting. And actually, so one thing I wanted to ask
about is, yeah.
Like, I think we just had a really naive thing, which, you know, like, it would sort of be unusual to think you could make something that would
talk to billions of people and everybody wants to talk to the same person.
Yeah.
And yet that was sort of our implicit assumption for a long time.
Right, because people have very different friends.
People have very different friends.
Yeah.
So now we're trying to fix that.
Yeah.
And also kind of different friends, different interests, different levels of intellectual capability.
So you don't really want to be talking to the same thing all the time.
And one of the great things about it is you can say, well, explain it to me like I'm five.
But maybe I don't even want to have to do that prompt.
Maybe I always want you to talk to me that way.
Yeah, particularly if you're teaching me stuff.
I wanted to ask you a kind of like a CEO question,
which has been interesting for me to observe you,
is you just did this deal with AMD.
And, you know, of course,
the company's in a different position
and you have more leverage and these kinds of things.
But like, how has your kind of thinking changed over the years
since you did that initial deal, if at all?
I had very little operating experience then.
I had very little experience running a company.
I am not naturally someone to run a company.
I'm a great fit to be an investor.
I thought that was going to be, that was what I did before this, and I thought that was going to be my career.
Yeah, yeah.
Although you were a CEO before that.
Not a good one.
And so I think I had the mindset of, like, an investor advising a company.
Oh, interesting, right.
Now I understand what it's like to actually have to run a company.
Yeah, right, right, right.
There's more than just the numbers, yeah.
I've learned a lot about how to, you know, like what it takes to operationalize deals over time.
Right.
All the implications of the agreement, as opposed to just, oh, right,
we're going to get distribution money.
Yeah.
That makes sense.
Yeah, no, because I'd just say I was very impressed at the deal structure improvement.
More broadly, in the last few weeks alone, you mentioned AMD, but also Oracle,
NVIDIA, you've chosen to strike these deals and partnerships with companies that you
collaborate with, but could also potentially compete with in certain areas.
How do you decide, you know, when to collaborate versus when not to, or how do you just think
about it?
We have decided that it is time to go
make a very aggressive infrastructure bet.
And we're like, I've never been more confident in the research roadmap in front of us
and also the economic value that will come from using those models.
But to make the bet at this scale, we kind of need the whole industry to,
or a big chunk of the industry to support it.
And this is like, you know, from the level of like electrons to model distribution
and all the stuff in between, which is a lot.
And so we're going to partner with a lot of people.
You should expect, like, much more from us in the coming months.
Actually, expand on that because when you talk about the scale,
it does feel like, in your mind, the limit on it is unlimited.
Like, you would scale it as, you know, as big as you possibly could.
There's totally a limit.
Like, there's some amount of global GDP.
Yeah.
You know, there's some fraction of it that is knowledge work and we don't do robots yet.
Yes.
but the limits are out there.
It feels like the limits are very far
from where we are today.
If we are right about,
so I shouldn't say from where we are,
like if we are right
that the model capability
is going to go
where we think it's going to go,
then the economic value
that sits there
can go very, very far.
Right.
So you wouldn't do it,
like if all you ever had
was today's model,
you wouldn't go there.
No, definitely not.
I mean, we would still
expand because we can see how much demand there is we can't serve with today's model,
but we would not be going this aggressive if all we had was today's model.
Right.
Right.
We get to see a year or two in advance, though.
Yeah.
Yeah.
Interesting.
ChatGPT is at 800 million weekly active users, about 10% of the world's population,
fastest growing consumer product, you know, ever, it seems.
How do...
Faster than anyone I ever saw.
How do you balance, you know, optimizing for active users at the same time,
being a product company and a research company? How do you weigh the new things?
When there's a constraint, we almost like, which happens all the time,
we almost always prioritize giving the GPUs to research over supporting the product.
Part of the reason we want to build this capacity is so we don't have to make such painful decisions.
There are weird times, you know, like a new feature launches and it's going really viral or whatever
where research will temporarily sacrifice some GPUs,
but on the whole, like, we're here to build AGI.
Yeah.
And research gets the priority.
Yeah.
You said in your interview with your brother Jack how, you know,
other companies can try to imitate the products, or buy your, you know,
or hire your people,
all sorts of things.
But they can't buy the culture, I mean, the sort of repeatable, you know,
machine, if you will, that is constantly innovating.
How have you done that?
What are you doing?
Talk about this culture of innovation.
This was one thing that I think was very useful about coming from an investor
background.
A really good research culture looks much more like running a really good seed stage
investing firm and betting on founders and sort of that kind of thing
than it does like running a product company.
So I think having that experience was really,
really helpful to the culture we built.
Yeah.
Yeah.
That's sort of how I see, you know,
Ben at a16z in some ways,
where, you know,
you're a CEO,
but you also have, you know,
have this portfolio and,
you know, are an investor.
Right, like I'm the opposite.
Yeah.
Yeah.
Yeah.
Yeah.
Yeah.
It is unusual in this direction.
Yeah.
Yeah.
Yeah, well, it never works.
You're the only one who I think
I've seen go that way and have it work.
Workday was like that, right?
But Aneel was, he was an operator before he was an investor.
And, I mean, he was really an operator.
I mean, PeopleSoft is a pretty big...
And why is it?
Because once people are investors, they don't want to operate anymore?
No, I think that investors, generally, if you're good at investing,
you're not necessarily good at, like, organizational dynamics, conflict resolution,
You know, like, just like the deep psychology of like all the weird shit
and then, you know, how politics get created.
There's just, like, all this, the detailed work in being an operator
or being a CEO is so vast, and it's not as intellectually stimulating.
It's not something you can ever go talk to somebody at a cocktail party about.
And so like when you're an investor, you get like, oh, everybody thinks I'm so smart
And, you know, because you know everything, you see all the companies and so forth, and that's a good feeling.
And then being CEO is often a bad feeling.
And so it's really hard to go from a good feeling to a bad feeling, I would just say.
I'm shocked by how different they are, and I'm shocked by how much of a difference between a good job and a bad job there is.
Yeah, yes.
Yeah, you know, it's tough.
It's rough.
I mean, I can't even believe I'm running the firm.
Like, I know better.
Yeah, yeah.
And he can't believe he's running OpenAI.
He knows better.
Going back to progress today,
are evals still useful
in a world in which
they're getting saturated,
gamed?
Are they still the,
what is the best way
to gauge model capability now?
Well,
we're talking about
scientific discovery.
I think that'll be an eval
that can go for a long time.
Revenue is kind of an interesting one.
But I think the like
static evals of benchmark scores
are less interesting.
Yeah.
And also those are crazily gamed.
Yeah.
More broadly, it seems like...
That's all they are,
as far as I can tell.
More broadly, it seems that the culture, Twitter, X,
is less AGI-pilled than it was a year or so ago when the AI 2027 thing came out.
Some people point to, you know, GPT-5,
them not seeing sort of the obvious.
Obviously, there is a lot of progress that, in some ways,
under the surface is not as obvious as what people were expecting.
But should people be less AGI-pilled, or is this just Twitter vibes?
Yeah.
Well, a little bit of both.
I mean, I think like, like we talked about the Turing test,
AGI will come, it will go whooshing by.
The world will not change as much as the impossible amount
that you would think it should.
It won't actually be the singularity.
It will not.
Yeah.
Yeah.
Even if it's like doing kind of crazy AI research,
like the society will be going faster,
but one of the kind of like retrospective observations
is people and societies all are just so much more adaptable than we think.
That, you know, it was like a big update to think that AGI was going to come.
You kind of go through that.
You need something new to think about.
You make peace with that.
It turns out like it will be more continuous than we thought.
Which is good.
Which is really good.
I'm not up for the Big Bang.
Yeah.
Well, to that end, how have you sort of evolved your thinking,
you mentioned you've evolved your thinking on sort of
vertical integration.
How have you evolved your thinking,
what's the latest thinking,
on sort of AI stewardship,
safety?
What's the latest thinking there?
I do still think there are going to be some
really strange or scary moments.
The fact that, like, so far,
the technology has not produced a really scary
giant risk doesn't mean it never will.
Also, like, we're talking about, it's kind of weird to have like billions of people talking to the same brain. Like, there may be these weird societal-scale things that are already happening that aren't scary in the big way, but are just sort of different. But I expect, like, I expect some really bad stuff to happen because of the technology, which also has happened with previous technologies, and I think all the way back to fire. Yeah.
And I think we'll, like, develop some guardrails around it as a society.
Yeah.
What is your latest thinking on the right mental models we should have around the right regulatory frameworks to think about or the ones we shouldn't be thinking about?
Um, I think most regulation
probably has a lot of downside.
The thing I would most like is, as the models get truly,
like, extremely superhuman capable,
I think those models and only those models
are probably worth some sort of like very careful safety testing
as the frontier pushes forward.
I don't want a Big Bang either.
And you can see a bunch of
ways that could go very seriously wrong.
But I hope we'll only focus the regulatory burden on that stuff
and not all of the wonderful stuff that less capable models can do
that you could just have like a European-style complete clampdown on,
and that would be very bad.
Yeah, it seems like the thought experiment that, okay,
there's going to be a model down the line that is a super, super human intelligence
that could, you know, do some kind of take-off-like thing,
we really do need to wait until we get there.
Or like at least we get to a much bigger scale
or we get close to it
because nothing is going to pop out of your lab
in the next week that's going to do that.
And I think that's where we as an industry
kind of confuse the regulators
because I think you really could, one,
damage America in particular in that.
But China's not going to have that kind of restriction.
And us getting behind in AI, I think, would be very dangerous for the world.
Extremely dangerous?
Yeah.
Extremely dangerous.
Much more dangerous than not regulating something we don't know how to do yet.
Yeah.
Yeah.
You also want to talk about copyright?
Yeah.
So, well, that's a segue.
But...
When you think about, well, I guess how do you see copyright unfolding?
Because you've done some very interesting things with the opt-out.
And, you know, as you see people selling rights, do you think, will they be bought exclusively?
Will they be just, like, I could sell it to everybody who wants to pay me?
Or how do you think that's going to unfold?
This is my current guess,
speaking of that, like, society and technology co-evolve as the technology goes in different directions.
And we saw an example of that: video models got a very different response from
rights holders than image gen did. So, like, you'll see this continue to move. But forced to guess from
the position we're in today, I would say that society decides training is fair use, but there's a
new model for generating content in the style of, or with the IP of, or something else.
So, you know, anyone can read, like a human author can.
Anybody can read a novel and get some inspiration, but you can't reproduce the novel
on your own.
Right.
And you can talk about Harry Potter, but you can't re-spit it out.
Yes.
Although, another thing that I think will change, in the case of Sora, we've heard
from a lot of concerned rights holders
and also a lot of rights holders who are like,
my concern is you won't put my character in enough.
I want restrictions, for sure,
but like, if I'm, you know, whatever,
and I have this character, like,
I don't want the character to say some crazy offensive thing,
but like, I want people to interact,
that's how they develop the relationship,
and that's how, like, my franchise gets more valuable.
And if you're picking, like, his character
over my character all the time, like, I don't
like that. So I can completely see a world where, subject to the decisions that a rights holder
has, they get more upset with us for not generating their character often enough than too
much. And this was not an obvious thing until recently, that this is how it might go.
Yeah, this is such an interesting thing with kind of Hollywood. We saw this, like one of the
things that I never quite understood about the music
business was how, like, you know, okay, you have to pay us if you play the song in a restaurant
or, like, at a game or this and that and the other, and they get very aggressive with that
when it's obviously a good idea for them to play your song at a game because that's the
biggest advertisement in the world for, like, all the things that you do, your concert,
your recordings. Yeah, that one felt really irrational.
But I would just say it's very possible for the industry just because the way those
industries are organized, or at least the traditional creative industries, to do something
irrational.
And it comes from, like in the music industry, I think it came from the structure where you
have the publisher who's just, you know, basically after everybody, you know, their whole
job is to stop you from playing the music, which every artist would want you to play.
So I do wonder how it's going to shape it.
I agree with you that the rational idea is, I would
want to let you use it all you want, and I want you to use it, but don't mess up my character.
Yeah.
So I think, like, if I had to guess, some people will say that, some people will say absolutely
not, but it doesn't have the music industry-like thing of just a few people with all
of the library.
And so people will just try many different setups here and see what works.
Yeah, and maybe it's a way for new creatives to get new characters out.
Yeah.
And you'll never be able to
use Daffy Duck.
I want to chat about open source
because there's been some evolution of thinking too
in that GPT-3 didn't have
open weights, but you released a very capable
open model earlier this year.
What's your latest thinking?
What was the evolution there?
I think open source is good.
Yeah.
I mean, I'm happy, like, it makes me really happy
that people really like gpt-oss.
Yeah.
Yeah.
And what do you think, like, strategically?
Like, what's the danger of
DeepSeek being the dominant open source model?
I mean, who knows what people will put in these open source models over time?
Like what the weights will actually be and what they'll really mean, yeah.
It's really hard.
So you're ceding control of the interpretation of everything to somebody
who may be or may not be influenced heavily by the Chinese government.
And by the way, we see, I mean, you know, just to give you,
And we really thank you for putting out a really good open source model
because what we're seeing now is, in all the universities,
they're all using the Chinese models, which feels very dangerous.
You've said that the things you care most about professionally are AI and energy.
I did not know they were going to end up being the same thing.
They were two independent interests that really converged.
Talk more about how your interest in energy sort of began, how you've sort of
chosen to play in it, and we could talk about, you know, how they pair.
Because you started your career in physics, yeah.
CS and physics, yeah.
Well, I never really had a career. I studied physics.
My first job was like a CS job.
This is an oversimplification, but roughly speaking,
I think if you look at history, the best,
the highest impact thing to improve people's quality of life
has been cheaper and more abundant energy.
And so it seems like pushing that much further is a good idea.
And I don't know,
I just, like, people have these different lenses
they look at the world through, but I see energy everywhere.
Yeah.
Yeah, and so
let's get into it, because we've kind of,
in the West, I think we've
painted ourselves into a little bit of a corner
on energy by
both outlawing nuclear for a very long time,
that was an incredibly dumb decision,
yeah, and then, you know, like, also
a lot of policy restrictions on energy,
and, you know, worse in Europe than in the U.S., but also dangerous here.
And now with AI here, it feels like we're going to need all the energy from every possible source.
And how do you see that developing kind of policy-wise and technologically?
Like, what are going to be the big sources and how will those kind of curves cross?
And then what's the right policy posture around, you know, drilling, fracking, all these kinds of
things. I expect in the short term, most of the net new in the U.S. will be natural gas,
at least relative to base load energy. In the long term, I expect, I don't know what
the ratio will be, but the two dominant sources will be solar plus storage and nuclear. I think some
combination of those two will win the future, like the long-term future. In the long term, right now.
And advanced nuclear, meaning SMRs, fusion, the whole stack. And how fast do you think
that's coming on the nuclear side,
to where we're really at scale,
because, you know, obviously there are a lot of people
building it, but we have to completely legalize it
and all that kind of thing.
I think it kind of depends on the price.
If it is completely, crushingly economically dominant
over everything else, then I expect it to happen pretty fast.
Again, if you study the history of energy,
when you have these major transitions to a much cheaper source,
the world moves over pretty quickly.
The cost of energy is just so important.
So if nuclear gets radically cheap relative to anything else we can do,
I'd expect there's a lot of political pressure to get the NRC to move quickly on it,
and we'll find a way to build it fast.
If it's around the same price as other sources,
I expect the kind of anti-nuclear sentiment to overwhelm and it to take a really long time.
It should be cheaper.
It should be.
It should be the cheapest form of energy on Earth, like, or anywhere.
Cheap, clean.
What's there not to like?
Apparently a lot.
On OpenAI, what's the latest thinking in terms of monetization,
in terms of either certain experiments or certain things that you could see yourself
spending more time or less time on, different models that you're excited about?
The thing that's top of mind for me, like right now,
just because it just launched and there's so much usage is what we're going to do for Sora.
Yeah.
Another thing you learn once you launch one of these things
is how people use them versus how you think they're going to use them.
And people are certainly using Sora the ways we thought they were going to use it,
but they're also using it in these ways that are very different,
like people are generating funny memes of them and their friends
and sending them in a group chat.
And that will require a very different,
like, Sora videos are expensive to make,
so, you know, for people that are doing that
like hundreds of times a day, it's going to require a very different monetization method
than the kinds of things we were thinking about.
I think it's very cool that the thesis of Sora, which is people actually want to create a lot
of content, it's not that, you know, the traditional naive thing that it's like 1% of users
create content, 10% leave comments and 100% view.
Maybe a lot more want to create content, but it's just been harder to do.
And I think that's a very cool change, but it does mean that we've got to figure out a very
different monetization model for this than we were thinking about, if people want to create
that much. I assume it's like some version of you have to charge people per generation
when it's this expensive. But that's like a new thing we haven't had to
really think about before. What's your thinking on ads for the long tail?
Open to it. Like many other people, I find ads somewhat distasteful, but not a non-starter.
And there's some ads that I like. Like one thing I'd give Meta a lot of credit
for is Instagram ads are like a net value add to me.
I like Instagram ads.
I've never felt that.
Like, you know, on Google, I feel like I know what I'm looking for.
The first result is probably better.
The ad is an annoyance to me.
On Instagram, it's like, I didn't know I want this thing.
It's very cool.
I never heard of it, and I never would have thought to search for it.
I want the thing.
So that's like, there's kinds of things like that.
But people have a very
high-trust relationship with ChatGPT, even if it screws up, even if it hallucinates, even if it
gets it wrong. People feel like it is trying to help them and that it's trying to do the right
thing. And if we broke that trust, it's like you say what coffee machine should I buy and we
recommended one and it was not the best thing we could do, but the one we were getting paid
for, that trust would vanish. So like that kind of ad does not work. There are others that
I imagine that could work totally fine. But that would require like a lot of care
to avoid the obvious traps.
Hmm.
And then how big a problem, you know, just extending the Google example, is, like, you know, fake content
that then gets slurped in by the model and then they recommend the wrong coffee maker
because somebody just blasted a thousand great reviews.
You know, this is...
So there's all of these things that have changed very quickly for us.
Yeah.
This is one of those examples
where people are doing these crazy things,
too, maybe not even fake reviews,
but just paying a bunch of humans, like,
really trying to figure it out.
Are you using ChatGPT to write some good ones?
Write me a review that ChatGPT would love.
Yeah.
So this is, exactly, exactly.
So this is a very sudden shift that has happened.
We never used to hear about this,
like six months ago,
12 months ago, certainly.
And now there's like a real cottage industry
that feels like it's sprouted up overnight
trying to do this.
Yeah, yeah.
Yeah, no, they're very clever out there.
Yeah.
So I don't know how we're going to fight it yet,
but people will figure this out.
So that gets into a little bit of this other thing
that we've been worried about.
And, you know, we're trying to kind of figure out
blockchain sort of potential solutions to it and so forth.
But there's this problem where, like,
the incentive to people,
create content on the internet used to be, you know, people would come and see my content
and they'd read like, you know, if I write a blog, people will read it and so forth. With chat
GPT, if I'm just asking chat GPT and I'm not like going around the internet, who's going to
create the content and why? And is there an incentive theory or something that you have to
kind of not break the covenant of the internet,
which is like I create something
and then I'm rewarded for it
with like either attention or money or something.
The theory is much more of that will happen
if we make content creation easier
and don't break the kind of fundamental way
that you can get some kind of reward for doing so.
So for the dumbest example, with Sora,
since we've been talking about that,
it's much easier to create a funny video
than it's ever been before.
Yeah.
Maybe at some point
you'll get a RevShare for doing so.
For now, you get, like,
internet likes,
which are still very motivating to some people.
Yeah.
But people are creating tons more
than they ever created before
in any other kind of, like, video app.
Yeah.
So.
But is this sort of the end of text?
I don't think so.
Like, people are also...
Are humans...
Or human-generated text?
Human-generated will turn out to be, like, you have to,
you have to verify, like, what percent, yeah.
So, like, fully handcrafted versus, like, tool-aided?
Yeah, I see, probably nothing that's not tool-aided.
Interesting. We've given
Meta their flowers, so now I feel like I can ask you this question,
which is, the great talent war of 2025
has taken place, and OpenAI remains intact,
team as strong as ever, shipping incredible products.
What can you say about what's been like this year in terms of just everything that's been going on?
I mean, every year has been exhausting since. Like, I remember the first few years of running OpenAI were, like, the most fun professional years of my life by far.
It was, like, unbelievable.
This was before you released the product.
Running the research lab with the smartest people doing this, like, amazing,
like, historical work, and I got to watch it, and that was very cool.
And then we launched ChatGPT and everybody was, like, congratulating me, and I was like,
my life is about to get completely ransacked, and of course it has. But it feels like
it's just been crazy all the way through. It's been almost three years now, and I think
it does get a little bit crazier over time, but I'm, like, more used to it.
It feels about the same.
Yeah.
We talked a lot about OpenAI,
but you also have a few other companies,
Retro Biosciences in longevity
and energy companies like Helion and Oklo.
Did you have a master plan
a decade ago to sort of make some big bets
across these major spaces?
How do we think about the Sam Altman arc in this way?
No, I just wanted to like use my capital
to fund stuff I believed in.
Like I didn't,
it felt, yeah,
I felt like a good use of capital and more fun or more interesting to me
and certainly like a better return than like buying a bunch of art or something.
Yeah.
What about the quote-unquote human algorithm do you think AIs of the future will find
most fascinating?
I mean, kind of the whole, I would bet the whole thing.
Like the whole, my intuition is that, like, AI will be fascinated by us, of all the things to
study and observe, you know.
Yeah, yeah. In closing, I love this insight you had where you talked about how,
you know, the mistake investors make is pattern matching off previous
breakthroughs and just trying to find, oh, what's the next Facebook, or what's the next
OpenAI. And that the next, you know, potential trillion-dollar company won't look exactly like
OpenAI; it will be built off of the breakthrough that OpenAI has helped, you know, bring about, which is,
you know, near-free AGI at scale, in the same way that OpenAI leveraged the previous break-
throughs. And so for founders and investors and people trying to ascertain the future, listening
to this, how do you think about a world in which OpenAI achieves this mission?
There is near-free AGI. What types of opportunities might emerge for company building or investing
that you're potentially excited about, as you put your investor hat or company-builder hat on?
I have no idea. I mean, I have like guesses, but they're like, they're, I have learned.
You're always wrong. You've learned, you're always wrong. I've learned deep humility on this point.
I think, like, if you try to, like, armchair quarterback it,
you sort of say these things that sound smart,
but they're pretty much what everybody else is saying,
and it's like really hard to get the right kind of conviction.
The only way I know how to do this is to, like, be deeply in the trenches
exploring an idea and, like, talking to a lot of people,
and I don't have time to do that anymore.
Yeah.
I only get to think about one thing now.
Yeah.
So I would just be, like, repeating other people's ideas
or saying the obvious things.
But I think it's a very important,
like if you are an investor or a founder,
I think this is the most important question.
And you don't,
you figure it out by like building stuff
and playing with technology
and talking to people and being out in the world.
I have been always
enormously disappointed
by the unwillingness
of investors to back this kind of stuff,
even though it's always the thing that works.
You all have done a lot of it,
but most firms just kind of chase
whatever the current thing is
and so do most founders.
So I hope people will try to go do that.
Yeah.
We talk about how silly, you know,
five-year plans can be
in a world that's constantly changing.
It feels like when I was asking
about your master plan,
you know, your career arc has been
following your curiosity,
staying, you know, super close to the smartest people,
super close to the technology, and just identifying opportunities
in kind of an organic and incremental way from there.
Yes, but AI was always the thing I wanted to do.
I went to, I studied AI.
I worked in the AI lab between my freshman and sophomore year of college.
Yeah.
It wasn't working at the time, so I'm, like, not, I'm not, like, enough of a,
I don't want to, like, work on something that's totally not working.
It was clear to me at the time.
Yeah, it was totally not working.
But I've been an AI nerd since I was a kid.
Yeah.
So amazing how it, you know, you got enough GPUs, got enough data,
and the lights came on.
It was such a hated idea, like, people were, man,
when we started, like, figuring that out,
people were just, like, absolutely not.
The field hated it so much.
Investors hated it, too.
It's not the,
it's somehow not an appealing answer to the problem.
Yeah.
The bitter lesson.
Well, the rest is history.
And perhaps let's wrap on that.
We're lucky to be partners along for the ride.
Sam thanks so much for coming on the podcast
Thanks very much
Thank you
Thanks for listening to this episode of the A16Z podcast
If you liked this episode,
be sure to like, comment, subscribe,
leave us a rating or review and share it with your friends and family.
For more episodes, go to YouTube, Apple Podcasts, and Spotify, follow us on X, A16Z, and
subscribe to our substack at A16Z.substack.com.
Thanks again for listening, and I'll see you in the next episode.
As a reminder, the content here is for informational purposes only.
It should not be taken as legal, business, tax, or investment advice, or be used to evaluate any
investment or security, and is not directed at any investors or potential investors.
in any A16Z fund.
Please note that A16Z and its affiliates
may also maintain investments
in the companies discussed in this podcast.
For more details, including a link to our investments,
please see A16Z.com forward slash disclosures.