In Good Company with Nicolai Tangen - Sam Altman CEO of OpenAI: ChatGPT, Future of AI, AGI, and China
Episode Date: September 6, 2023
How does Sam envision a world where humans and AI coexist? How far into the future can he see, and when will AI completely change the way we live? Sam Altman, Founder and CEO of OpenAI, provides unique insights into how AI will impact the world going forward.
Sam's book recommendation: The Beginning of Infinity by David Deutsch
The production team on this episode were PLAN-B's Nikolai Ovenberg and Niklas Figenschau Johansen. Background research was done by Sigurd Brekke.
Links:
Watch the episode on YouTube: Norges Bank Investment Management - YouTube
Want to learn more about the fund? The fund | Norges Bank Investment Management (nbim.no)
Follow Nicolai Tangen on LinkedIn: Nicolai Tangen | LinkedIn
Follow NBIM on LinkedIn: Norges Bank Investment Management | LinkedIn
Follow NBIM on Instagram: Explore Norges Bank Investment Management on Instagram
Transcript
Hi, everyone, and welcome to our podcast, In Good Company.
I'm Nicolai Tangen, the CEO of the Norwegian Wealth Fund.
Today, we are releasing a very special bonus episode with the one and only Sam Altman,
CEO of OpenAI.
Now, this conversation just blew my mind away.
You have to listen in.
Enjoy.
OpenAI shocked the world last November with ChatGPT.
And OpenAI is not only creating models, it's creating the future.
So Sam, it's an honor to have you on the podcast.
Thanks a lot for having me. It's great to be here.
How does it feel to spearhead this revolution?
It's definitely a little surreal.
It's like a very exciting moment in the history of technology.
And to get to work with the people who are creating this is a great honor.
And I can't imagine anything more exciting to be doing.
No, I can't imagine.
It's definitely a lot. I can see that. Now, big picture,
what's the vision of the world where humans and AI coexist?
Well, one thing that we believe is that you have to answer that question empirically.
There's been a lot of philosophizing about it for a long time. Very smart people have had very
strong opinions. I think they've all been wrong, and it's just a question of how wrong.
The course that a technology takes is difficult to predict in advance.
I love that Alan Kay quote, that the best way to predict the future is to invent it.
What we're trying to do is see where the technology takes us,
deploy it into the world to actually understand how people are using it, where the risks are,
where the benefits are, what people want, how they'd like it to evolve, and then sort of co-evolve
the technology with society. And, you know, I think if you asked people five or ten years ago
what the deployment of powerful AI into the world was going to look like, they wouldn't have guessed that it would look like this.
People had very different ideas at the time, but this was what turned out to be where the technology leads and where the science leads.
And so we try to follow that.
And how far into the future can you see now?
The next few years seem pretty clear to us.
We kind of know where these models are going to go.
We have a roadmap we're very excited about.
We can imagine both the science and technology,
but also the product a few years out.
And beyond that, we're going to learn a lot.
We'll be a lot smarter in two years than we are today.
Yeah. And what kind of holy shit moments have you had lately?
Well, remember that we've been, you know, we've been thinking about this and playing around with
this technology for a long time. So the world has had to catch up very quickly. But we have less
holy shit moments because, you know, we've been expecting this
and we've been building it for a while.
And, you know,
it doesn't feel as discontinuous to us.
And what kind of big things have you seen
since ChatGPT?
Well, we...
The biggest ones have not been about new technology or new models, but about the breadth
of use cases the world is finding for this. So the holy shit moments have not been like, oh, now
the model can do this, because, again, you know, we somewhat expected that.
But seeing how much people are coming to rely on these models to do their work
in their current form, which is very imperfect and broken.
We're the first to say these models are still not very good.
They hallucinate a lot. They're not very smart.
They have all these problems.
And yet people are using their human ingenuity
to figure out how to work around that and still leverage these tools.
And so watching people that are remaking their workflows
for a world with LLMs has been big.
And some examples of new things you've seen,
new use cases, applications?
A common one is around how developers are changing their workflow
to spend half their time in ChatGPT.
You hear people say they feel like two or three or sometimes more times productive than before.
An uncommon one is I met a guy who runs a laundromat business as a one-person thing
and uses ChatGPT for coming up with marketing copy, dealing with, like, customer
service, helping review legal documents. He had a long list of things, and he's like, I got a virtual
employee in every category. That was pretty cool.
And what about things like brain implants and getting it to help with speech and so on, which we just saw recently?
I'm very excited about neural interfaces,
but I am not currently super excited about brain implants.
I don't feel ready to want one of those.
I would love a device that could read my mind,
but I'd like it to do that without having to put a hole in my skull,
and I think that's possible.
How?
Oh, there's many technologies depending on what you'd want.
But, you know, there's a whole bunch of companies
working on trying to sort of like read out the words you're thinking
without requiring a physical implant.
Now, a few years ago, nobody had heard about OpenAI.
Now, everybody's heard about it.
You are, you know, one of the most famous people on earth.
But so how many people are you at OpenAI now?
500 or so.
And what do these 500 people actually do?
It's a mix.
So there's a large crew that's just doing the research,
like trying to figure out how we get from the model we have today,
which is very far from an AGI, to an AGI,
and all of the pieces that have to come together there.
So scaling the models up, coming up with new methods,
that whole process.
There's a team that makes the product and figures out also how to scale it.
There's a sort of traditional Silicon Valley tech company
go-to-market team.
There's a very complex legal and policy team that does all of the work you'd imagine there.
Yeah.
And so your priorities as a CEO now, how do you spend your time?
I kind of think about the buckets of what we have to do in
research, product, and compute on the technical side.
That's sort of the work that I think I enjoy the most and where I can contribute the most.
And then I spend some of my time on policy and sort of social impact issues, for lack
of a better word.
And then the other things I spend less time on, but we have great people that run the
other functions.
Now, your mission has been to ensure that artificial general intelligence
benefits all of humanity. What's the biggest challenge to this, you think?
I, a couple of thoughts there. One, I'm reasonably optimistic about solving the technical alignment
problem. We still have a lot of work to do, but, you know, I feel like I feel better and better
over time, not worse and worse. The social part of
that problem, how do we decide whose values we align to? Who gets to set the rules for this?
How much flexibility are we going to give to each individual user and each individual country?
We think the answer is quite a lot, but that comes with some other challenges in terms of
how they're going to use these systems. That's all going to be difficult, to put it lightly, for society to
agree on. And then how we share the benefits of this, what we use these systems for, that's also
going to be difficult to agree on. The buckets I think about here are we've got to
decide what global governance over these
systems as they get super powerful is going to look like, and everybody's got
to play a role in that. We've got to decide how we're going to share the
access to these systems, and we've got to decide how we're going to share the benefits of them.
There's a lot of people who are excited about things like UBI and I'm one of them,
but I have no delusion that UBI is a full solution or even the most important part of
the solution. People don't just want handouts of money from an AGI. They want increased agency.
They want to be able to be architects of the future. They want to be able to do more than they could before. And figuring out how to do that while addressing
all of the sort of, let's call them disruptive challenges. I think that's going to be very
important, but very difficult.
How far out is true AGI?
I don't know how to put a number on it.
I also think we're getting close enough that the definition really matters, and people
mean very different things when they say it.
But I would say that I expect, by the end of this decade, for us to have extremely powerful
systems that change the way we currently think about the world.
And you say we've got different definitions.
What is your definition of general intelligence?
So, one that matters... you know,
there's kind of the official OpenAI definition,
and then there's one that's very important to me personally.
When we have a system that can figure out
new scientific knowledge
that humans on their own could not,
I would call that an AGI.
And that you think we may have by the end of this decade?
Well, I kind of tried to soften that a little bit
just by saying we'll have systems that really change the way the world works.
The new science may take a little bit longer, or maybe not.
What's the end game here? Are we all just going to work a lot less?
I think we'll all work differently. Many of us will still work very hard, but differently. Every technological revolution,
people say that we're just going to do less work in the future.
And we just find that we want a higher standard of living and new and
different things.
And also, we find new kinds of work we really enjoy.
You know, neither you nor I have to work,
and I bet we both work pretty hard.
I love my job.
I love my job, and I feel very blessed.
So the definition of work, what we work on, why we work, the reasons for it, I expect that all to change.
What we do, I expect to change.
But I love what I do, and I expect people in the future to love even more what they do, because there will be new, amazing things to work on that we can hardly imagine right now.
And less boring stuff?
Yeah, I'm all for getting rid of the boring stuff. Like, I think everybody should love that. That's maybe one thing we could say about the future: everybody will do things that they love. You won't have to do things you don't.
And I think most people probably don't love their jobs right now.
I believe you just traveled the world and met with a lot of people and users.
What was your main takeaway?
uh the level of excitement about the future and what this technology is going to do for people around the world in super different cultures and super different contexts was just very, very different than I expected.
Like, it was overwhelming in the best way.
Any difference between geographies or industries?
There's definitely...
Yeah, like, you know, in the developing world,
people are just focused on what this can do economically right now.
And in the more developed world, there's much more of a conversation about what the downsides are going to be and how this is going to disrupt things.
And there's still excitement, but it's tempered more by fear.
That was a striking difference.
Do you think it will lift up the poor part of the world?
Yeah, I really do.
I think it's going to make everybody richer, but I think it positively impacts poor people the most.
And I think this is true for most kinds of technology.
But it should be particularly true for the democratization of intelligence.
You or I can afford to pay a super highly compensated expert if we need help, but a lot of people can't.
And to the degree that we can make, say, great medical advice available to everyone,
you and I benefit from that too, but less than people who just can't afford it at all right now.
And what would potentially prevent this from happening?
Well, we could be wrong about the trajectory that technology is on.
I think we are on a very smooth exponential curve that has much, much further to go.
But, you know, we could be like missing something.
We could be drinking our own Kool-Aid.
We could hit a brick wall soon.
I don't think we're going to.
I think we have some remarkable progress ahead of us in the next few years.
But we could somehow be wrong for a reason we don't understand yet.
What is it doing to the global balance of power?
I don't know how that's going to shift.
I'm not sure anyone does,
but I certainly don't think that's something that I'm particularly well-qualified to weigh in on.
But it just seems like it's so key now to the weapons race,
the medical race, the self-driving vehicle race, just
all these races.
But it's also available pretty broadly.
One of the things that we think is important is that we make GPT-4 extremely widely available.
Even if that means people are going to use it for things that we might not always feel
are the best things to do with it.
But we have a goal of globally democratizing this technology.
And as far as we know, GPT-4 is the most capable model in the world right now.
And it is available to anyone who wants to pay
what I think are very cheap API rates. Now, "anyone" is not quite right. We block a handful of
countries that the US has embargoes with or whatever, but it's pretty available to the world.
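For a sense of what that API access looks like in practice, here is a minimal sketch using the official openai Python package (v1 or later); the model name, prompt, and environment-variable setup are illustrative assumptions, not details from the conversation:

```python
# Minimal sketch: one chat completion against GPT-4 via the public API.
# Assumes `pip install openai` (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # illustrative; any model the account can access works
    messages=[{"role": "user", "content": "Write a one-line tagline for a laundromat."}],
)
print(response.choices[0].message.content)
```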
But in order to develop it further, you need the right chips, right? And they are not available.
You need the right chips.
You also need the right people.
And I would argue that's even more of a constraint.
There are people who can go copy GPT-4, right?
We did it, fine.
Once people know how to do it and know that it's possible, a lot of people will replicate it.
But what matters is how you're going to get to like GPT-6 and 7.
And also, even more than that, how you're going to get the next set of very different ideas that take you on a different trajectory.
Like everyone knows how to climb this one hill and we're going to go figure out the next hill to climb.
And there's not a lot of people in the world that can do that.
But we're committed to making that as widely available as we can.
Do we know where China is here?
We don't. Maybe someone does; I don't.
Do you think there's a chance that, well, like they did with weapons, just suddenly, bang,
they had the supersonic rockets and we didn't even know they existed, right? Could that happen?
Yeah, totally, it could. I mean, we're going to work as hard as we can
to make sure that we stay in the lead, but we're a little in the dark.
So Marc Andreessen, for instance, he thinks we should stuff it into everything, you know, as part of the geopolitical fight.
What do you think?
Stuff it into everything means just put it everywhere?
Yeah.
That's happening, and I think that's great.
Without revealing something I shouldn't,
the amount of GPT-4 usage and the number of people, companies,
that are integrating it into different ways
is staggering.
It's awesome.
Some examples, if you had to reveal something.
I mean, like, you know, car makers are putting it into cars, and I was like, all right,
that sounds like a gimmick.
And then I got to try a demo of it.
And I was like, wow, being able to just talk to
my car and control it in a sophisticated way, entirely by voice, actually totally changes my
experience of how I use a car, in a way that I would not have believed was so powerful.
So, for instance, you sit in a car. What do you say?
This is probably where I don't want to reveal a partner's
plans, but you can imagine a lot of things that you might say. Like, the basic stuff is easy:
you know, I need to go here, and I'd like to listen to this music, and also, can you make it
colder?
Sounds good. Do you depend on newer and even more powerful chips
than what we have now?
I mean, how much more complex do chips need to be
than the H100 or the latest things from NVIDIA?
Yeah, of course.
Like, the ways that we can keep
making these models better
are: we can come up with better algorithms,
or just more efficient implementations, or both;
we can have better chips; and we can have more of them.
And we plan to do all three things, and they multiply together.
And do you think it's the chip makers
who will end up with the profits there?
They will end up with profits. I wouldn't say the profits. I think there are many people who
are going to share this massive economic boon.
How much does it cost to train these models? I mean, how much have you spent on pre-training models?
We don't really talk about exact numbers,
but quite a lot.
Yeah.
And what's the challenge of spending so much money
pre-training and then it lasts
for a relatively short period of time?
In a way, you have to depreciate the whole investment
because you need to invest more in the next generation.
I mean, how do you think about this?
That's true.
I don't think there are going to be as many massive pre-trained models
in the world as people think.
I think there will be a handful,
and then a lot of people are going to fine-tune on top of that or whatever.
The part of it that I think is
important is, like, you know, when we did GPT-4, we produced this artifact, and people use it,
and it generates all this economic value. And you're right, that does depreciate fast. But in the process of that, we learn so much. We push the frontier of research so far
forward, and we learn so much that'll be critical to us being able to go do GPT-5 someday or whatever.
So you're not just depreciating the CapEx one time for the model. You have
generated a huge amount of new IP
to help you keep making better models.
So the way you read the competitive landscape now,
what does it look like?
I mean, there are going to be many people
making great models.
We'll be one of them.
We'll contribute our AGI to the world,
to society, among others.
And I think that's fine.
And, you know, we'll all run different experiments.
We'll have different features,
different capabilities.
We'll have different opinions about what the rules of a model should be.
And through the magic of competition and users deciding what they want, we'll get to a very good place.
How far ahead do you think you are of competition?
I don't know. I don't think about it that much, to be honest. Our customers are very happy.
They are desperate for more features and more capacity,
and for us to be able to deliver our service
in all of these little better ways.
And we're very focused on that.
And we're very focused on that.
I'm sure Google will have something good here
at some point,
but I think they're racing to catch up
with where we are
and we're thinking very far ahead of that.
So normally in the software business,
you have something which is very cheap,
where you ship a lot of it,
or something which is very expensive
and you don't ship so much.
Here you could potentially ship something, and I can see you smiling here,
here you can potentially ship a lot of something very expensive.
Exactly.
So tell us, how is this going to work?
You know, I'll tell you, one of the most fun things about this job is we are past the point, as a company,
and I am past the point, as, like, a CEO running this company, where there's a roadmap to follow.
We're just doing a bunch of things that are outside of the standard Silicon Valley received wisdom. And so we get to just say, well, we're going to figure it out and we're
going to try things. And if we get it wrong, like, who cares? It's not like we
screwed up something that was already figured out. I mean, back to our very founding, like,
most big tech companies are a,
they start as a product company and eventually they bolt on a research lab
that doesn't work very well.
And we started as a research lab
and then bolted on a product company
that didn't work very well.
And now we're making that better and better.
But to like-
I'm sorry, product company, you mean Microsoft?
No, no.
I mean, like, having to figure out
how to ship the API and ChatGPT.
Yeah.
We really did just start as a research lab,
and then one day we're like, we're going to make a product,
and then we're going to make another product,
and now that product is the fastest-growing product in history or whatever.
And we weren't set up for that.
Is the usage of ChatGPT decelerating?
No.
I think it maybe took a little bit of a flatline during the summer,
which happens for lots of products, but it is going up.
Tell us about the relationship with Microsoft.
How does that work?
I mean, at a high level, they build us computers, we train models, and then we both use them.
And it's a pretty clear and great partnership.
Are your goals aligned?
Yeah, they really are. I mean, there are of course areas where we are not
perfectly aligned, like any partnership in life or business or whatever.
I won't pretend it's perfect, but it is very good, and we are aligned at the highest levels, which is really important.
And the misalignments that come up at the sort of lower levels once in a while, you know,
no contract in the world is what makes a partnership good. What makes a partnership
good is that when those things happen, you know, Satya and Kevin and I talk and we figure it out.
And there's like a good spirit of compromise over a long time.
Now, they've been one of the initiators, together with you, in terms of self-regulating this space.
Can this type of thing be self-regulated?
Not entirely. I think it needs to start that way. And I think that's also kind of how you figure out a better answer. But,
like, governments are going to have to do their own thing here.
And, you know, we can provide input to that, but
we're not, like, the elected decision makers of society. And
we're very aware of that.
And what can governments do?
Anything they want.
And I think people forget this.
Governments have quite a lot of power.
They just have to decide to use it.
Yeah, but I mean, so let's say now Europe decides that they're going to regulate you really harshly.
I mean, are you just going to say goodbye, Europe? No?
Possibly. I don't think that's what's going to happen. I think we'll have
a very productive conversation. I think Europe will regulate AI, but reasonably, not very harshly.
And what is a reasonable regulation? What is that level?
I think there's many ways that it could go that would all be reasonable. But to give one specific example, and I'm surprised this is controversial at all,
but a regulatory thing that's coming up a lot in Europe and elsewhere is that if you're using an AI, you've got to disclose it.
So if you're talking to like a bot and not a person, you need to know that.
That seems like a super reasonable and important thing to do to me for a bunch of reasons, given what's starting to happen.
To my surprise, there's some people who really hate that idea.
But I'd say that's like a very, very reasonable regulation.
I agree. I agree.
Do you think we'll get global regulation?
Is there any shape or form that can happen?
I think we're going to get it for only the most powerful systems.
So I think individual countries or blocks of countries are not going to give up their right to self-determine
for what can a model say and not say,
and how do we think about the free speech rules and whatever.
But for technology that is capable of causing grievous harm
to the entire world, like we have done before with nuclear weapons or a small number of other examples, yeah, I think we are going to come together and get good global regulation.
But sitting in your car, and it's, like, super cool, and it's cold and hot and music and this and that, and, you know, you're a Chinese car company and you want to compete with
the Americans. Why do you want to have a regulation on this?
Well, GPT-4, I don't think, needs global regulation, nor should it have
it. I'm talking about, like, what happens when you get to GPT-10, and it is, you know, say, smarter than
all of humans put together.
And that's why you think we'll get it?
That's when I think we'll get it.
When you have the cost of intelligence coming down so dramatically,
like it is now, what is it going to do to productivity in the world?
I mean, it's supposed to go up a lot, right? That's what theory tells us.
That's what I think.
So I've told everybody in our company that, hey, we should improve our productivity by 10% over the next 12 months.
All of us.
And you know how I got the number?
Did you ask ChatGPT?
No.
I just took it straight out of the air.
What do you think about that number? Is it low, high, under-ambitious?
Under-ambitious.
What should productivity increase by?
How do you measure?
The stuff we do. That's not a very good measurement,
but just the kind of stuff that I produce.
How much of your company writes code?
Well, people in technology,
probably 15, 20% of us.
More, actually.
Okay, let's say that's 20% writing code.
I think an overall goal of 20% productivity increase
in a 12-month period is appropriately ambitious,
given the tool.
And given the tools that we will launch
over the next 12 months.
Okay, sounds like I should up the game here a bit.
I think so.
Yeah, I'll just tell everybody you told me to, so that's fine.
It's better to set a goal that is slightly too ambitious
than significantly under-ambitious, in my opinion.
Yeah.
Now, is there an inherent limitation to what AI can achieve?
I mean, is there a point of no further progress?
I couldn't come up with any reasonable explanation of why that should be the case.
You say that most people overestimate risk and underestimate reward.
What do you mean by that?
There's a lot of people that don't go start the company or take the job they want to take or try a product idea because they think it's too risky.
And then if you really ask them, like, all right, can we unpack that?
Can you explain what the risk is and what's going to go wrong?
It's like, well, the company might fail.
Okay, and then what?
You know, well, then I have to go back to my old job.
All right, that seems reasonable.
And they're like, well, you know, but I'll be a little embarrassed.
And I'm like, oh, is that, you know, what's the call?
I think, like, people view that as a super risky thing and they view staying in a job where they're not really
progressing or learning more or doing new things for 20 years as not risky at all. And to me,
that seems catastrophically risky, you know, to like miss out on 20 years of your very limited life and energy to try to do the thing you actually want to do.
That seems really risky.
But it's not thought of that way.
Talking about staying in your job, so the leaders and the CEOs, so how is AI going to change the way leaders need to act and behave?
Well, hopefully it's going to do my job.
Hopefully the first thing we do with AGI is let it run OpenAI
and I can go sit on the beach.
That'd be great.
We don't want to do that for long, but right now it sounds really nice.
How do you develop the people in your company? How do you develop
your leaders?
Um, I think developing leaders... leaders tend to fail at the same
set of things most of the time. You know, they don't
spend enough of their time hiring talent and developing
their own teams. They don't spend enough of their time articulating and communicating the vision of their team. They don't spend enough
of their time thinking strategically because they get bogged down in the details. And so when I like
put a new person in a very senior role, which I always try to do with promotions, I mean,
I'm willing to hire externally, but I'd always, always rather promote internally. I have them over for dinner or go for a walk or sit down
or something and say like, here are the ways you're going to screw up. I'm going to tell you
all of them right now. You're going to totally ignore me on this and not believe me, or at least
not do them because you're going to think, you know, better, you know, not make these mistakes,
but I'm going to put this in writing and hand it
to you, and we're going to talk about it in three months and in six months, and, you know,
eventually I think you'll come around. And they always ignore me, and always come around.
And I think just letting people recognize that for themselves, but telling them up front
so that it's at least in their mind, is very important.
What's the most common way leaders screw up?
Failing to recruit slash promote, and then failing to build a good delegation process.
And then as a consequence of those, not having enough time to set strategy
because they're too bogged down in the day-to-day
and they can't get out of that downward spiral.
What does your delegation process look like?
Two things.
Number one, high quality people.
Number two, setting the training wheels at the right height
and increasing them over time as people learn more
and I build up more trust.
Is that the way to manage geniuses?
Researchers, that's a different thing. I was, like, talking about executives that run the thing.
Okay, what about researchers? The prima donnas.
Well, pick really great people.
Explain the general direction of travel
and the resources that we have available,
and, kind of at a high level, where we
need to get to to get to the next level.
So, you know, we have to achieve this to go get the next ten-times-bigger
computer or whatever. And, you know, provide, like, the most mild input:
it would be really great if we could pursue this research direction, and this would be really
helpful. And then step back. So we kind of, you know, set a very high-level vision for the company and what we want to achieve.
And beyond that,
researchers get just a huge amount of freedom.
Do you think companies generally are too detailed in the remit they give the
teams?
Yes.
I mean, at least for our kind of thing.
We talked earlier about having to rediscover a bunch of things. I say this realizing it's going to come across as arrogant, and I don't mean it that way, but I think it's an important point: there used to be great research that happened in companies in Silicon Valley.
Xerox PARC being the obvious example.
There have not been for a long time.
And we really had to rediscover that.
We made many screw-ups along the way to learn how to run a research effort well
and how you balance letting people go off and do whatever
towards trying to get the company to point in the same
direction. And then over time, how to get to a culture where people will try lots of things,
but realize where the promising directions are, and on their own, want to come together to say,
let's put all of our firepower behind this one idea, because it seems like it's really working.
You know, I'd love to tell you, we always knew language models were going to work.
That was absolutely not the case. We had a lot of other ideas
about what might work. But when we realized
the language models were going to work,
we were able to get the entire
research brain trust, or almost the entire research
brain trust, to get behind it.
I'm slightly surprised you say
that there was no innovation culture in Silicon Valley
because that's a bit contrary
to what I thought.
So could you explain this?
Yeah, there's a product
innovation culture for sure, a good one.
But like,
I mean, again, I hate to say this because
it sounds so arrogant, but like
before OpenAI, what was the last really
great scientific breakthrough that came out of a
Silicon Valley company?
And why did that happen?
Why?
What happened there?
Well, we got a little lucky.
No, I don't mean you.
I'm saying why did this culture disappear in Silicon Valley, you think?
I have spent so much time reflecting on that question.
I don't fully understand it.
I think it got so easy to make a super valuable company
and people got so impatient on timelines and return horizons
that a lot of the capital went to these things that could
just, you know, fairly reliably multiply money in a short period of time, by just saying, like, we're
going to take the magic of the technology we have now, the internet, mobile phones, whatever, and apply
it to every industry. That sucked up a lot of talent, very understandably.
Now, you had some, what should we say, your co-founders are pretty into big, hairy goals, right?
Yeah. I mean, we're trying to make AGI. I think that's the biggest, hairiest goal in the world.
So not so many companies have those kinds of co-founders,
people with that kind of track record and,
you know, that type of talent magnet, funding capabilities, and so on.
How important was that?
You mean Elon by this, right?
Yeah. And, you know,
some of the other people you worked with in the beginning.
Well, there were six co-founders: Elon and me, Greg and Ilya, and John and Wojciech.
Elon was definitely a talent magnet, an attention magnet for sure,
and also just has some real superpowers that were super helpful to us in those early days
aside from all of those things
and contributed in ways that we're very grateful for.
But the rest of us were like pretty unknown.
And I mean, maybe I was like somewhat known
in technology circles
because I was running Y Combinator,
but not in a major way.
And so we just had to, like,
you know,
grind it out.
But that was a good and valuable process.
What is your superpower?
I think I'm good at thinking very long-term and not being sort of constrained in like common wisdom.
I also think I'm good at evaluating talent. That was, like, a really helpful thing to learn from Y Combinator.
You said in 2016 that long-term thinking is a
competitive advantage because almost no one does it. Yeah. I mean, when we started OpenAI and said
we're going to build AGI, everybody was like, that's insane. A, it's 50 years away, and B,
it's like, you know, the wrong thing to even be thinking about. You should be thinking about this,
how to improve this one thing this year. And, you know, also this is like unethical
to even say you're working on it
because it's like such a science fiction
and you're going to lead to another AI winter
because it's too much hype.
And we just said, it's going to take us a while,
but we're going to go figure out how to do it.
You said you're also good at assessing talent.
How do you do it?
I don't know.
I have a lot of practice,
but I don't have, like, words for it.
I can't tell you, like, here's the five questions I ask, or here's the one thing I always look for.
But, you know, assessing if someone is smart, and if they have a track record of getting things done, and if they have novel ideas that they're passionate about: I think you can learn how to do that through thousands of conversations, even if it's hard to explain.
Why is Europe so behind generally when it comes to innovation and innovative culture?
I'd ask you that. I don't know. Why is it? Well, is it, first of all? Like, well, I guess it is.
Look at where the big tech companies are, where the big innovations
come from.
It's certainly very behind in
hyperscale software companies.
There's no question there.
Big fear of failure. It's a cultural
thing.
There are a lot of things going into
that cocktail, I think.
The fear-of-failure thing
and the kind of, like, cultural
environment or backdrop there is huge, no doubt. You know, we funded a lot of European people
at YC, and a thing they would always say is, like, they cannot get used to the fact that in Silicon Valley, failure is tolerated.
I've failed at stuff big time.
And I'm sure I'll fail at stuff in the future.
What's the biggest failure so far?
Well, I mean, monetarily-wise, I've made a lot of big investments that have gone to just zero, crater in the ground.
But in terms of like
time and psychological impact on me, I did a startup from when I was, like, 19 to 26,
worked unbelievably hard, it consumed my life, and failed at that. And that was quite painful
and quite demoralizing. And, you know, you learn to get back up after stuff like that,
but it's hard.
How do you get back up?
I mean, one of the key insights for me was realizing that although I thought this was
terribly embarrassing and shameful, no one but me spent much time thinking about it.
Who do you ask for advice personally?
My strategy is not to just have one person that I go to with everything.
And a lot of people do that.
They have one mentor that they go to for every big decision.
But my strategy is to talk to a ton of different people
when I'm facing a big decision
and try to synthesize the input from all of that.
So if I'm facing a real major strategic challenge
for OpenAI,
kind of one of these bet-the-company things,
I would bet that, counting people
internal and external to the company,
I'd talk to 50 people about it.
And probably out of 30 of those conversations,
I would hear something interesting or learn something that updates my thinking.
And that's my strategy.
So now, outside AI, what are you the most excited about?
Fusion.
I think we're going to get fusion to work very soon.
And I think my model,
if you boil everything down to get to abundance in the world,
the two biggest, most important things
are bringing the cost of intelligence way down and bringing the
cost and amount of energy way down. And I think AI is the best way to do the former and fusion is
the best way to do the latter. And, you know, a world where we have energy that's, like, less
than a penny per kilowatt-hour, and, more importantly, we can have as much as we want and
it's totally clean. That's a big deal.
Do you think it's going to solve the climate problem?
Yes.
We'll have to use it to do other things. We'll have to use some of it
to capture carbon because we've already done so much damage.
But yes, I do.
What about crypto?
I am excited
for the vision of
crypto. And it has so
far failed to deliver on that promise.
But you have plans.
It's not something I'm spending that much time on.
OpenAI has taken over my whole life.
So I can have a lot of plans about OpenAI.
And there's other projects that I've invested in or helped start
that I feel bad because I don't have much time to offer them anymore.
But they're all run by super capable people, and I assume they'll figure it out.
What do you read?
The thing that has unfortunately gone the most by the wayside for me recently has been free time, and thus reading. So I don't
get to read much these days. I used to be a voracious reader, and there was, like,
one year where I read, you know, not fully, but, like, more than a skim, 50 textbooks,
and that was an unbelievable experience.
But this last year,
I have not read many books.
What's the one book young people should read?
That's a great question.
Picking one is really hard.
I don't think... Man, that's such a good question.
I don't think it's the same for every young person. And coming up with a generic singular recommendation here is super hard.
I don't think I can give a faithful answer on this one.
Good.
Now we are fast-forwarding here.
Oh, you know what? Can I?
Actually, I do have one.
This is not the one for every young person,
but I wish a lot more people would read The Beginning of Infinity early on in their career
or their lives.
The Beginning of Infinity.
The Beginning of Infinity.
Why?
Why does it matter?
We'll find it.
I think it's the most inspiring.
You can do anything.
You can solve any problem.
And it's important to go off and do that.
It's very... I felt it was a very expansive book in the way I thought about the world.
Well, Sam, I think that's a very suitable place to
go in for landing. Now, last one. So, fast forward a couple of decades, people sit down and reflect
on Sam Altman's impact on the tech world and society. What do you hope they'll say?
What do you hope your legacy will be?
You know, I'll think about that when I'm, like, at the end of my career. Like, right now, my days are
spent trying to figure out why this executive is mad at that one, and why this product is delayed,
and, like, why the network on our, you know, big new training computer is not working, and who screwed
that up and how to fix it. And it's very caught up in the annoying tactical problems.
There is no room to think about legacy.
We're just trying to go off and build this thing.
Fantastic.
Well, good luck with that.
It's been an absolutely fantastic conversation
and all the best of luck.
And go get them.
Yeah.
Great talking to you.
Thank you for having me.
Very cool. Wow, that was great.