What Now? with Trevor Noah - Sam Altman [VIDEO]
Episode Date: December 7, 2023
Trevor has a candid and revealing conversation with Sam Altman, who was ousted and then reinstated as CEO of OpenAI just 12 days ago. Sam recounts where he was when he received the brutal phone call, how it all really went down, and its emotional toll. Trevor and Sam also discuss ChatGPT's explosive release last year, Sam's hopes for AI to better humankind, as well as its potential for evil if not governed properly.
Transcript
Sam Altman is out as CEO of OpenAI.
A superstar CEO on one side, a disgruntled board on the other.
747 of 770 employees sent a scathing open letter to the board.
Five days after he was unexpectedly fired, Sam Altman is back.
Does this even count as a firing?
This was a brutal...
I guess I'm not really supposed to talk about this right now.
This is What Now? with Trevor Noah.
This episode is brought to you by Peloton.
Forget the pressure to be crushing your workout on day one.
Just start moving with the Peloton Bike, Bike Plus, Tread, Row, Guide, or App.
There are thousands of classes and over 50 Peloton instructors ready to support you from the beginning.
Remember, doing something is everything.
Rent the Peloton Bike or Bike Plus today at onepeloton.ca/bike/rentals.
All access memberships separate. Terms apply.
What day of the week do you look forward to most?
Well, it should be Wednesday.
Ahem, Wednesday.
Why, you wonder?
Whopper Wednesday, of course.
When you can get a great deal on a whopper.
Flame grilled and made your way.
And you won't want to miss it. So make every Wednesday a Whopper Wednesday,
only at Burger King, where you rule.
Hey, what's going on?
Nice to meet you.
How you doing? Good, how are you?
Absolute pleasure, man.
Mine too.
Thanks for taking the time.
Thank you.
At what I feel like is a crazy time, right?
Feels like the craziest time I have lived through.
Yeah, I mean, you're at the center of it all.
So I wonder what that feels like.
Because I'm just an avid watcher of everything in this space and this world, you know.
And I feel like you're somebody who's been affected by it all.
I mean, just right now, we get the news.
Sam Altman was on the shortlist for Time Magazine Person of the Year.
Thrilled not to get that.
Thrilled not to get it?
Yeah, of course.
Why?
I have had more attention this year than I would have liked to have in my entire life.
And that is a big one.
And I'm happy for Taylor Swift.
Okay.
Okay.
So you don't like the attention?
You don't want the attention?
No, it's been brutal.
I mean, it's like fun in some ways and it's like useful in some ways.
But from like a personal life, like quality of life trade-off?
Yeah.
Yeah, definitely not.
Yeah.
But you know, this is it now.
Like this is what I signed up for.
Right.
It's the infamy now.
Yeah.
Do people recognize you in the streets?
That's the kind of trade-off that's really bad.
Yeah.
I just feel like you never,
I'm sure it happens to you too,
but I never get to be anonymous anymore.
Yeah.
But people don't ask me about the future.
They don't ask if you're going to destroy the world.
Yeah, I believe that.
Exactly.
There's a slight difference.
People might want a selfie from me.
That's the extent of it.
A lot of selfies.
Well, congratulations.
You are Time Magazine's CEO of the Year.
Yeah.
That's probably one of the strangest moments, right?
Because I guess Time Magazine is making this decision.
A few weeks ago, you might not have been CEO of the Year.
I don't know if they would have still been able to give you the award.
I guess it was for your work before. Yeah, I don't know.
I don't know how it works.
How does it feel to be back as CEO?
I'm still like recompiling reality,
to be honest.
Yeah.
I mean, it feels great in some sense
because one of the things I learned
through this whole thing
is how much I love the company and the mission and the people.
And, you know, I had a couple of experiences in that whole thing where I went through, like, the full range of human emotion, it felt like, in short periods of time.
But a very clarifying moment for me: it all happened on a Friday afternoon, Friday at noon.
And then the next morning, a Saturday morning, a couple of the board members called me and said, you know, would you like to talk about coming back?
And I had really complicated feelings about that.
But it was very clarifying at the end of it to be like, yes, I do.
I really love this place and what we're doing, and I think this is important to the world, and it's the thing I care most about.
It feels like in the world of tech, hiring and firing is something that everybody has to get used to.
I mean, I know back in the day, you know, you were at Y Combinator, right?
And you were fired from that position.
And everyone has a story in – wait, wait, what is – I don't want to debate that.
No, no, no.
Tell me.
Tell me.
These are things.
You know, you get the research and then you go from there.
Oh, I mean, I had, like, decided, like, a year earlier that I wanted to just come to OpenAI.
Right.
And it was, like, a complicated transition to get here.
But like I had been working on both OpenAI and YC and like very much decided that I wanted to go do OpenAI.
Okay.
And I've never regretted that one.
All right.
So then you've never been fired then.
This is a tough place to be in as a person.
Not.
Does this even count as a firing?
Like if you get fired and then immediately hired back.
Oh, no.
What I was going to say is not only, like this was a brutal,
I guess I'm not really supposed to talk about this right now.
This was a very painful thing.
Well, it felt to me personally, just as a human, like super unfair,
the way it was handled.
Yeah, yeah, I can imagine.
You know, a lot of people will talk about, you know,
getting fired from their jobs.
It became a trend, I guess, during COVID especially,
people would talk about getting an email or a mass video that would go out and then thousands of employees would be let go.
You seldom think it would be possible for that to happen to a CEO of a company. And then I think
even more so, you don't think of it happening to a CEO who many people have termed like the Steve
Jobs of this generation and the future.
And you don't say that about yourself, by the way.
Certainly not.
No, I think a lot of people say that about you, you know,
because, I mean, I was thinking about this and I was going,
I think calling you the Steve Jobs of this generation is unfair.
In my opinion, I think you're the Prometheus of this generation.
No, you really are.
You really are.
It seems like to me you have stolen fire from the gods
and you are at the forefront
of this movement
and this time
that we are now living through
where once AI was only the stuff
of sci-fi and legend.
You are now the face
at the forefront
of what could change
civilization forever.
Do you think it will change everything going forward?
I do.
I mean, I could totally be wrong about what I'm about to say, but my sense is we will
build something that almost everybody agrees is AGI.
The definition is hard, but we will build something that people will look at and say,
all right, you all did that.
That's artificial general intelligence.
Yeah, yeah, yeah. Like, you know, a human level or beyond human level system.
Before you go into the details on that, like, what would you say is the biggest difference
between what people think AI is and what artificial general intelligence is?
We're getting close enough that the way people define it is important and there are differences
in it. So for some people, they mean a system that can like do some significant fraction of
current human work. Of course, we'll find new jobs, we'll find new things to do. But for other
people, they mean something more like a system that can help discover new scientific knowledge.
Okay. And those are obviously very different milestones, have very different impact on the world.
But the reason I don't like the term anymore,
even though I'm so stuck with it,
I can't stop myself from using it.
You don't like which term?
AGI.
Okay.
All that I think it really means
to most people now
is like really smart AI.
But it's become super fuzzy
in what it is other than that.
And I think largely
just because we're getting closer.
But the point I was going to try and make was we're going to make AGI, whatever you want to
call that. And then at least in the short and medium term, it's going to change the world
much less than people think. Much less than people think.
In the short term. I think society has a lot of inertia.
The economy has a lot of inertia.
The way people live their lives has a lot of inertia.
And this is probably healthy.
This is probably good for us to manage this transition.
But we all kind of do things in certain ways and we're used to it.
And society is the super organism, does things in a certain way and is kind of used to it.
So watching what happened with GPT-4 as an example, I think was instructive. People had this like real freak out moment when we first launched
it. Yeah. I said, wow, I didn't think this was going to happen. Here it is. And then they went
on with their lives and it definitely changed things. People definitely use it. It's a better
technology to have in the world than not. And of course, you know, GPT-4 is not very good, and 5, 6, 7, whatever, are going to be way, way better. But 4, in ChatGPT's interface, I think, was the moment where a lot of people went from not taking it seriously to taking it very seriously.
And yet life goes on.
Is that something you think is good for us as humanity and society?
Is life supposed to just go on?
I think.
Or as one of the fathers of this product, one of the parents of this idea, do you wish that we all stopped and took a moment to, I guess, take stock of where we are?
I think the resilience of humans individually and humanity as a whole is fantastic.
Okay.
And I'm very happy that we have this ability to absorb and adapt to new technology, to changes and have it become just like, you know, part of the world.
It really is wonderful.
I think COVID was a recent example where we watched this.
You know, like the world kind of just adapted pretty quickly
and then it felt pretty normal pretty quickly.
I mean, another example in a sort of non-serious way,
but instructive was when all of that UFO stuff came out.
This was a couple of years ago now.
Yeah.
A lot of my friends would say things like, hmm, maybe those are real UFOs or real aliens or whatever.
People who are real skeptics.
And yet everyone, yeah.
And yet they just kind of like went to work and played with their kids the next day.
Yeah, because I mean what are you going to do?
What are you going to do?
If they're flying by, they're flying by.
So do I wish that we had taken more of a moment to take stock?
We are doing that as a world.
And I think that's great.
I'm a huge believer that iterative deployment of these technologies is really important.
We don't want to go build AGI off in secret in a lab, have no one know it's coming,
and then drop it on the world all at once and have people have to say like, huh, here we are.
You think we have to get used to it gradually and sort of grow with it as a technology.
And so this conversation now, that society, that our leaders, our institutions are having, where people actually use the technology, have a feel for it, what it does, what it can't do, where the risks are, where the benefits are, I think that's awesome.
And I think maybe in some sense the best thing we ever did for our mission, so far, was to adopt the strategy of iterative deployment. Like we could have built this in secret and then built it up for years longer and then just deployed it.
And that would have been bad.
You know, it's interesting. Today we walked into the OpenAI buildings. It's like a little bit of a fortress, and it feels like the home of the future. And I saw a post of yours... Did you come in as a guest today?
Not anymore, I'm back now. I did, during the middle.
Okay, all right. I saw you had a post where you came in as a guest, and I was like, damn, that's a weird one. You know, it's like coming home, but then it's...
Yeah. And it is home. It felt like it should have been somehow a very strange moment to put on a guest badge here. But everyone was so tired, so exhausted, on so much adrenaline. It really did not feel momentous in the way that I guess I could say I had hoped it would.
It should have been like a funny, you know, moment to reflect on and tell stories.
There were moments that day that were like that. Like one of my proudest moments of that day is, I was very tired, very distracted. And, you know, we thought the board was going to put me back as CEO that day, but in case they didn't, I got interviewed for L3,
which is like our lowest level
software engineering job
by one of our best people.
And he gave me a yes.
That was like a very proud moment.
Okay, okay.
So you still got the skills.
But the badge was not as poignant
as I would have hoped.
Right.
I'd love to know what you think
you've done right as CEO
to have the level of support that we've publicly seen
from the people who work at OpenAI.
You know, when the story broke,
and I won't ask you for the details because I know you can't comment
about the internal investigation stuff.
I know what I want.
Yeah, I mean that stuff.
But what I mean is, you know,
I know you can sort of speak about just the feelings
and what's been happening in the company as a whole.
It's rare that we'll see a situation unfold the way it did with OpenAI.
You know, you have this company and this idea that for one minute doesn't exist for most people on the globe.
The next minute, you release ChatGPT, this simple prompt, just a little chat box that changes the world.
I think you go to 1 million users in the fastest time of any tech product.
I think five days.
Yeah, five days.
And then it shoots to 100 million people.
And it very quickly, I know on an anecdotal level, for me, it went from nobody in the world knew what this thing was.
I was explaining it to people, trying to get them to understand it.
I have to show them like poetry and simple things they would get.
And then people are telling me about it.
And now it just becomes this ubiquitous idea where people are trying to come to grips with what it is and what it means.
But on the other side, you have this company that's trying to in some way, shape or form harness and shape the future.
And the people are behind you.
You know, we see the story.
Sam Altman is out.
No longer CEO.
And then rumors are swirling everywhere.
I mean, I don't know if you saw some of the rumors.
They were crazy. One of the craziest things I saw, it was wild and funny.
Someone said, I have it from good sources that Sam was fired for trying to have sex with the AI.
That's what someone said.
I mean, I don't even know how I'm supposed to react to that.
I saw that and I was like...
I guess given the moment, I should officially deny that,
which did not happen.
Yeah, and I don't think it could happen
because I don't think people understand
the combination of the two things.
But what got me was how the salaciousness of the event seemed to bring OpenAI into a different spotlight and a different moment.
And one of the big things was the support you had from your team.
Like people coming out and saying, we're with Sam no matter what happens.
And that doesn't normally happen in companies.
CEOs and their employees are generally in some way, shape, or form disconnected.
But it feels like this is more than just a team.
What I'm about to say is not false modesty at all.
There's plenty of places I willingly take a lot of credit.
I think this one, though, was not about me, other than me as sort of a figurehead representation.
But I think one thing that we have done well here is a mission that people really believe in the importance of.
And I think what happened there was people realized that the mission and the organization and the team, that we have all worked so hard on and made such progress toward but have so much more to do,
Like that was under real threat.
And I think that was what got the reaction.
It was really not about me personally, although, you know, hopefully people like me and think I do a good job.
It was about the shared loyalty we all feel and the sense of duty to completing the mission and wanting to maximize
our chances at the ability to do that. At the top level, what do you think the mission is? Is it to
get to artificial general intelligence? Get the benefits of AGI distributed as broadly as possible
and successfully confront all of the safety challenges along the way.
Okay, that's an interesting second line.
I would love to chat to you about that later, getting into the safety of it all.
When you look at OpenAI as an organization, the very genesis of OpenAI was really strange.
And you'll correct me at any point when I'm wrong.
But it seems like it was started very much with safety in mind, you know, where you brought this team of people together and you said,
we want to start an organization, a company, a collective
that is trying to create the most ethical AI possible that will benefit society.
And you see that even in the, I guess, the profits,
the way the company defines how its investors could receive the profit, etc.
But even that changed at some point in OpenAI.
Do you think you can withstand the forces of capitalism?
I mean, there's so much money in this.
Do you think that you can truly maintain a world where money doesn't define what you're
trying to do and why you're trying to do it?
It has to be some factor.
Like just if you think about the costs of training these systems alone, we have to find some ways to play on the field of capitalism, for lack of a better phrase.
But I don't think it will ever be our primary motivator.
And by the way, I like capitalism. I think it has huge flaws, but relative to any other system the world has tried, I think it is still the best thing we've
come up with. But that doesn't mean we shouldn't strive to do better. And I think we will find ways to spend the enormous record-setting amounts of capital that we will need to be able to continue to advance the forefront of this technology.
That was one of our learnings early on.
This stuff is way more expensive than we ever thought.
We knew.
We kind of knew.
We had this idea of scaling systems, but we just didn't know how far it was going to go.
You've always been a big fan of scaling.
That's something I've read about you.
Yeah.
Even when one of your mentors, and I think one of the people you invest with now in fusion power,
they said, whenever you bring an issue to Sam, the first thing he thinks about is, how can we fix this?
How can we solve it?
And the second thing he says immediately is, how do we scale the solutions?
I don't remember.
I'm terrible with names.
But I know it was somebody you work with.
Interesting.
No, it is right,
but I haven't heard someone say that about me before.
Oh, yeah, yeah.
But it is, I think, it's been sort of one of my life observations, across many different facets of companies and also just fields, that scale often yields surprising results.
So like scaling up these AI models led to very surprising results.
Scaling up the fusion generator makes it much better in all of these obvious but some non-obvious ways too.
Scaling up companies has non-obvious benefits, right? Scaling up groups of companies, like Y Combinator, has non-obvious benefits, right?
And I think there's just something about this that is not taken seriously enough.
And in our own case, you know, in the early days we knew scale was going to be important. If we had been smarter
or more courageous thinkers or whatever, we would have like swung bigger out of the gate. But it's
like really hard to say, I want to go build a $10 billion bigger computer. So we didn't. Right.
And we learned it more slowly than we should have, but we did. But now we see how much scale we're
going to need. And again, capitalism is cool. I have nothing against it as a system.
Well, no, that's not true. I have a lot of things against it as a system, but I have no pushback that it's better than anything else we have yet discovered.
Have you asked ChatGPT if it could design a system?
I have... a different... maybe not to design a new system,
but like, you know, I've asked a lot of questions about like how AI and capitalism are going to intersect
and what that means.
One of the things that we,
so we were right about the most important of our initial assumptions: that AI was going to happen, that deep learning was possible.
Which a lot of people laughed at, by the way.
Totally. Oh, man, we got ruthlessly laughed at.
Yeah.
But even some of our thoughts about how to get there,
we were right about it.
But we were wrong about a lot of the details,
which, of course, happens with science, and that's fine.
You know, we had a very different approach
for how we thought we were going to build this
before the language model stuff started to work.
We also, I think, had a very different conception
of what it was going to be like to build an AGI. And we didn't understand this idea that it was going to be like
iterative tools that got better and better, that you kind of just talk to like a human.
And so our thinking was very confused about, well, when you build an AGI, what happens?
And we sort of thought about it as there was this moment before AGI and then this moment of AGI. And, you know, then you needed to like give that over to some other system and some
other governance. I now think it can be, and I'm really happy about this because I think it's much
easier to navigate. I think it can be, I don't want to say like just another tool because it's
different in all these ways, but in some other sense, we have made a new tool for humanity.
We've added something to the tool chest.
People are going to use that to do all sorts of incredible things.
But people remain the architects of the future, not one AGI in the sky.
It's: you can do things you couldn't do before.
I can do things I couldn't do before.
We'll be able to do a lot more.
And in that sense, I can imagine a world where part of
the way we fulfill our mission is we just make really great tools that massively impact human
ability and everything else. And I'm pretty excited about that.
Like, I love that we offer free ChatGPT with no ads, because I personally really think ads have been a problem for the internet. But we just put this tool out there.
That's the downside of capitalism, right?
Yeah. One of them. I think there's much bigger ones, personally.
But we put this tool out there and people get to use it for free, and we're not trying to turn them into the product.
We're not trying to make them use it more.
And I think that shows like an interesting path
that we can do more on.
So let's do this.
In our time together in this conversation,
there's so many things I would like us to get to, hopefully.
We won't be able to answer all questions, obviously,
but there are a few ideas, a few headings
and a few spaces I wanted us to live within.
I guess the first and most timely is what happens now for the future of the company?
Where do you see it going?
One of the things I found particularly interesting was what the new board was,
how the new board was comprised for OpenAI,
where previously you had women on the board, now you don't.
Where previously you had people who had no financial incentive on the board, now you do.
And I wonder if you worry that that guardrail you were part of is now not focused on protecting people or defining a safer future, as opposed to making money and getting this thing to be as good or as big as it can be?
Well, I think our current, our previous governance structure and board didn't work in some important sense, so I'm all for figuring out how to improve that, and I'll support the board in their work to do that.
Obviously the board needs to grow and diversify, and that'll be something that I think happens quickly.
And voices of people who are going to advocate for people who are traditionally not advocated for, and be really thoughtful about not only AI safety but just the lessons we can take from the past about how to make these very complex systems that interact with society in all of these ways as good as possible, which is both mitigating the bad and sharing the upside, that all needs to be represented.
So I'm excited to have a second chance at getting all these things right,
and we clearly got them wrong before.
But yeah, like diversifying the board,
making sure that we represent all of the major classes of stakeholders
that need to be represented here,
figuring out how we make this
a more democratic thing, continuing to push for governments to make some decisions,
governing this technology, which I know is imperfect, but I think better than
any other method of doing this that we can think of so far, engaging with our user base more to
let them help set the limits on how this works. That's all super important.
That'll be one major thing going forward: the board, expanding the board, and governance. And again, I know our current board is small, but I think
they're so committed to all the things you were just talking about. Then there's another big class
of problems. If you asked me a week ago, I would have said stabilizing the company was my top thing.
But internally, at least, I feel pretty good. We did not lose a single customer. We did not lose
a single employee. We continued to grow. We continued to ship new products. Our key partnerships
feel strengthened, not hampered, by this. And things are on pace there.
And the sort of research and product plan for the first half of next year, I think, feels better and more focused than ever.
But clearly there's a lot of external stabilization we still have to do.
And then beyond that, we're really confronting the possibility that we just have not been planning ambitiously enough for success.
You know, we've had like ChatGPT Plus.
If you want to subscribe to ChatGPT Plus right now, you are not able to.
You've not been able to.
You just have too many people.
And so given how good we think the future systems we create are going to be
and how much people seem to want to use these,
we have been like behind the airplane all year long and we'd like to finally
get caught up.
FanDuel Casino's exclusive live dealer studio has your chance at the number one feeling, winning, which beats even the 27th best feeling, saying "I do."
Who wants this last parachute?
I do.
Enjoy the number one feeling, winning,
in an exciting live dealer studio,
exclusively on FanDuel Casino,
where winning is undefeated.
19 plus and physically located in Ontario.
Gambling problem?
Call 1-866-531-2600
or visit connectsontario.ca.
Please play responsibly.
I found myself constantly thinking about you as a person,
you know, when the whole board saga was taking place.
And whenever there's a storm,
I'm always interested in what's happening in the eye of the storm. Yeah.
And I wondered, like, where were you when this all broke? What were you doing? What was going on in your world on a personal level?
The reason I laughed is, the thing people say about me is I am good at sitting in the eye of the hurricane while it turns around me and staying super calm. And this time, turns out, not.
But this was the experience of being in the eye of the storm and having it not be calm. I was in Las Vegas at F1.
Oh, okay. You an F1 fan?
I am, yeah.
All right, who's your team? Do you have one?
Honestly, I mean, Verstappen is so good, it's hard to say. But I feel like that's the answer everyone would say.
I still think he's just, like, unbelievably –
No, no, no.
Actually, it depends on when they joined the sport.
Like, I was a Schumacher fan because that's when I started watching.
Schumacher, obviously.
Well, I mean, it started with Nigel Mansell, then, like, Ayrton Senna,
and you know what I mean.
But, yeah, okay.
But he's – I like –
No, Verstappen, he's precise with it.
I see why.
And just, like, it almost gets boring watching him win so often, but it's incredible.
So I was, like, so excited for that.
I got in, like, late on a Thursday.
That first night, they forgot to weld down a manhole cover.
So someone drove over it in the first lap,
blew up like one of Ferrari's engines
and stopped the practice.
So I didn't get to watch it.
I never got to watch any race that whole weekend.
I was in my hotel room, took this call. I had no idea what it was going to be
and got fired by the board.
And it was just this, like, it felt like a dream. I was confused. It was chaotic. It did not feel real.
It was obviously upsetting and painful, but confusion was just the dominant emotion at that point. I was just in a fog and a haze. I didn't understand what was happening.
It happened in this, in my opinion, unprecedentedly crazy way.
And then in the next half hour, I got so many messages that iMessage, like, broke on my phone.
Wow. And who is this from? Employees?
Everyone. My phone was just unusable because it was notifications non-stop.
And iMessage hit this thing where it stopped working for a while. Messages got delivered late. Then it marked everything as read.
So I couldn't even tell who I had.
Who you had spoken to, who you hadn't.
And I was talking to the team here, trying to figure out what was going on.
Microsoft is calling everybody else.
And it was just like, it really was like unsettling and didn't feel real.
And then I kind of got a little bit collected and was like, you know what?
I really want to go work on AGI somehow.
If I can't do that at OpenAI, I'm still going to do it.
And I was thinking about the best way to do that.
Greg quit. Some other people quit. I started just getting, like, tons of messages from people saying, we want to come work with you, however it's going to be.
And at that point, going back to OpenAI was not on my mind at all.
Yeah, I can imagine.
I was just thinking about whatever the future was going to be. But I kind of didn't have a sense of what an industry event this was
because I like wasn't really reading the news.
All I could tell was I was getting like crazy numbers of messages.
Right, because you're actually in the storm.
Yeah.
And I was just trying to like, you know, be supportive of open AI,
figure out what I want to do next, try to understand what was happening.
And then flew back to California,
met with some people
and kind of was just like very focused
on going forward at that point.
But, you know, also like wishing the best for OpenAI.
And then I stayed up most of that first night,
couldn't really sleep.
Also, it was just like tons of conversations happening.
And then it was sort of a crazy weekend from there. But I'm still a little bit in shock and a little bit just trying to pick up the pieces.
You know, I'm sure as I have time to sit and process this, I'll have a lot more feelings about it.
Right.
Do you feel like you just had to jump straight back into everything?
Because, you know, to your point, you're on this mission.
You can see in your eyes you're very driven, you know,
and the world has now tipped over a precipice that it can never return from, you know.
So you're moving towards
something. All of a sudden, it doesn't seem like you'll be able to achieve it in the sphere that
you're in. But as you say, Microsoft steps in, Satya Nadella says, hey, come and work with us,
we'll rebuild this team. If there's one thing people say about Sam Altman, if they've worked
with him, is they say he is tenacious. He is, he's unrelenting. He does not believe in letting life stop you
if you have a goal and if you believe in something.
And it seems like you're moving towards that.
You said nothing publicly about OpenAI.
You weren't disparaging in any way.
But it feels like it took a toll on you.
For sure.
I mean, I don't think it's anything
I won't like bounce back from,
but I think it'd be impossible to go through this and not have it take a toll on you.
That'd be really strange.
Did it feel like you were losing a piece of yourself?
Yeah.
I mean, like, this...
We started OpenAI, like, very end of 2015.
Like, first day of work was really in 2016.
And then I've been... I was working on this at YC for a while, but I've been full time on this since the beginning, since early 2019.
And AGI and my family are, like, the two main things I care about.
So losing one of those is... and again, maybe in some sense I should say, oh, you know, I get to work on AGI and care more about the mission. But of course, I also care about this org, these people, our users, our shareholders, everything we built up here.
So yeah, I mean, it was just, like, unbelievably painful.
The only comparable set of life experience I had, and that one was, of course, much worse, was when my dad died.
Wow.
And that was, like, a very sudden thing.
But the sense of confusion and loss... in that case, I felt like I had a little bit of time to just really feel it all. But then there was so much to do.
It was so unexpected that I just ended up having to pick up the pieces of his life for a little while. And it wasn't until about a week after that that I really got a moment to just catch my breath and be like, holy shit, I can't believe this happened.
So yeah, that was much worse, but there's echoes of that same thing here.
I can only imagine.
When you look towards the future of the company and your role
in it, how do you now find a balance between moving OpenAI forward, continuously propelling
yourselves in the direction you believe, but then also, you know, do you have, do you still have an emergency break?
Is there some system within the company where you say, if we feel like we're creating something
that's going to adversely affect society, we will step in, we will stop this. Do you have that
ability? And is it baked in?
Yeah, of course. We've had it in the past; like, we've created systems that we've chosen not to deploy.
Oh, interesting.
And I'm sure we will again in the future.
Or we've created a system and just said,
hey, we need much longer to make this safe before we can deploy it.
Like with GPT-4,
it took us almost eight months after we finished training before we were ready to release it
to do all of the alignment and safety testing that we wanted.
I remember talking to some of the team about that.
And that's not a board decision.
That's just the people in here doing their jobs
and being committed to the mission.
So that will continue on.
And one of the things I'm really proud of about this team
is the ability to operate well in chaos, crisis, uncertainty, stress.
I give them, like, an A-plus on that.
They did such a good job.
And as we get closer to more powerful, very powerful systems,
I think that ability of the culture and the team we have built
is maybe the most important element, you know, to like keep
your head cool in a crisis and make good, thoughtful decisions. I think the team here
really proved that they can do that. And that's super important.
I saw this thing where someone was like, you know, the thing we learned about OpenAI is that
Sam can run the company without any job there.
And I think that's totally wrong.
I think that's not at all what happened.
I think what happened is the right learning is the company can totally run
without me.
And it's a culture where the team is ready. The culture is ready.
I'm just super proud of that.
Really happy to be back and doing it.
But I sleep better at night having watched the team manage through this,
given the challenges ahead.
And there will be bigger challenges than this that will come up.
But I think in some subjective sense, I hope and believe this is the hardest one because we were so unprepared.
And now we kind of like realize the stakes and that we're not just – in some like important sense, we're just not a regular company.
Oh, yeah.
Far from it.
Far from it.
Let's talk a little bit about that.
ChatGPT, OpenAI, you know, whatever you may end up calling it.
Because, I mean, you've got DALL-E.
You've got Whisper.
You've got all these amazing products.
Do you have any name ideas, brand architecture ideas for us? I would love it.
I feel like ChatGPT has done it, you know. I feel like it is now ubiquitous.
Yeah, it's a horrible name, but it may be too ubiquitous.
You think you can change it at this point? I mean, could we drop it down to just, like, GPT, or just Chat? I don't know. Maybe not.
Maybe sometimes I feel like a product or a name or an idea grows beyond the marketer's dream space.
And then people just have it.
Yeah, no marketer ever would have picked ChatGPT as the name for this.
But we may be stuck with it.
And that might be all right.
Yeah.
And it's now, I mean, just the multimodal aspects of it fascinate me.
I remember when I first saw DALL-E come out.
And it was just an idea
and seeing how it worked
and seeing this program
that could create a picture from nothing but noise.
And I was trying to explain it to people
and they were going,
but where did it get the picture from?
And I was like, there was no picture.
There was no source image.
And they're like, but that's not possible.
It saw something.
And I was like,
and it's so hard to explain some of this.
Sometimes it's even hard to understand for myself.
But when we look at
this world that we're currently living in, we talk about them as numbers: GPT-3.5, GPT-4, GPT-5, 6, 7, whatever it may be. I like to remove the technical term in that way and talk more about the actual use cases of the products.
One thing we saw between products, between GPT-3, 3.5, and 4, was what we would call reasoning on a much higher level.
A little bit of it, yeah.
Like creativity in some way.
The first sparks of it.
Yes, yes, yes, exactly.
And when I look at this product and this world that you're creating now,
you know, with general large language models
and now the specialized large language models,
I wonder, do you think that the use case is going to change dramatically?
Do you think that what might right now just be like a little chatbot that people are,
like, do you think this will be the way the product remains? Or do you think it will become
a world where everything becomes the specialized GPTs? You know, a world where, you know, Trevor
has his GPT that's trying to do things for him, or this company has their GPT that's doing things
for them. Like, where do you see it? Obviously, it's hard to predict the future, but where do
you see it going from where we are right now?
I think it'll be a mix of those.
It is hard to predict the future.
Probably I'll be wrong here, but I'll try anyway.
I think it'll be a mix of the two things that you just said.
One, the base model is going to get so good that I have a hard time with conviction saying,
here's what it won't be able to do.
That's going to take a long time, but I think that's where we're heading.
What's a long time on your horizon?
Like what's when you measure it?
Like not in the next few years.
Okay.
It will get much better every year
in the next few years,
But, I was going to say I'm certain, I think it's highly likely there will still be plenty of things that the model, what is this, in 2026, can't do.
But doesn't the model always surprise you?
When I talk to engineers who work in this space, when I talk to anyone who's involved in AI or adjacent to AI, the number one thing people say, the number one word is surprised.
People keep saying that.
They go, we were surprised that we were teaching or we thought chat GPT was learning about this field.
And all of a sudden it started speaking a language.
Or we thought we were teaching it about this.
And all of a sudden it knew how to build bridges or something like that.
So for what it's worth, that was most people's subjective experience here of maybe between, like, 2019 and 2022, or something like that.
Okay.
But now I think we have learned not to be surprised, and now we trust the exponential, most of us.
So GPT-5, or whatever we call it, will be great in a bunch of ways. We will be surprised about specific things it can do and that it can't do. But no one will be surprised that it's awesome.
Huh.
Like at this point, I think we've really internalized that in a deep way.
The second thing you touched on though, is these custom GPTs. And more importantly than that,
you also touched on like the personal GPTs, like the Trevor GPT. And that I think is going to be
a giant thing of the next couple of years, where if you want, these models will get to know
you, access your personal data, answer things in the way you want, work really effectively in your
context. And I think a lot of people are going to want that. Yeah, I mean, I can see a lot of
people wanting that. It almost made me wonder if, you know, the new workforce becomes one where
your GPT is almost your resume.
Your GPT is almost more valuable
than you are in a strange way.
Do you know what I mean?
It's like a combination of everything you think
and everything you've thought
and the way you synthesize ideas
combined with your own personal GPT becomes,
and I mean, this is me just like thinking
of a crazy future where you go,
you literally get to a job and they go,
what's your GPT?
And you say, well, here's mine.
You know, we always think of these agents, these personalized agents, as: I'm going to have this thing go do things for me.
Uh-huh.
But it'd be interesting, with what you're saying, if instead this is how other people interact with you. Like this is your impression, your avatar, your echo, whatever. I can see getting to that.
Because, I mean, what are we if not a culmination or a combination of all of our…
It's a strange thought, but I could believe it.
I'm constantly fascinated by where it could go and what it could do.
You know why?
When ChatGPT first blew up, right, in those first few weeks,
I will never forget how people quickly realized
that the
robot revolution, I know it's not robots,
but just for people, they're like, oh, the robot
revolution, the machine
wasn't replacing the jobs
that they thought it would.
People thought it would replace
truck drivers, etc. And yet
we've come to find that no, those
jobs are actually harder to replace.
And it's, in fact, all the jobs that have been, quote, unquote, like thinky jobs.
It's like your white collar.
Oh, you're a lawyer?
They might not need as many lawyers when you have GPT-5, 6, 7, whatever.
You're an engineer.
The human body is really an amazing thing.
It really is, right?
Yeah, it really is.
Do you see any advancements where you think it could replace the human body? Or are we still in, like, mind land?
No, I think we will get robots to work eventually, like humanoid robots.
Yeah.
To work eventually. But, you know, we worked on that in the early days of OpenAI. We had a robotics program.
Oh, I didn't know that.
We did. We made this thing, a robotic hand that could do a Rubik's Cube with one hand.
It takes a lot of dexterity.
I think there's like a bunch of different insights rolled into that. But one is that it's just much easier to make progress in the world of bits than the world of atoms.
Like the robot was hard for all the wrong reasons.
It wasn't hard because it was like helping us advance hard research problems.
It was hard because the robot kept breaking and that it wasn't that accurate and the simulator was bad.
And whereas like a language model, it's just like you can do all that virtually.
You can make way faster progress.
So focusing on the cognitive stuff helped us push on more productive problems faster.
But also, in a very important way, I think solving the cognitive tasks is the more important problem.
Like if you make a robot,
it can't necessarily figure out how to like go help you make a system to do
the cognitive tasks.
Yeah.
But if you make a system that does the cognitive tasks,
it can help you figure out how to make a better robot.
Oh yeah.
That makes sense.
And so I think cognition was the core of the thing that we wanted to thrust at.
And I think that was the right decision.
But I hope we'll get back to robots.
Do you have an idea of when you will consider artificial general intelligence achieved?
Like how do we know?
Me personally?
Like when I feel like mission accomplished?
Yeah, like what?
Because everyone talks about artificial general intelligence.
But then I go, how do we know what that is?
So this comes back to that point earlier where everyone's got a different definition.
I'll tell you personally when I'll be thrilled.
When we have a system that can help discover novel physics, I'll be very thrilled.
But that feels like it's way beyond general intelligence.
That seems like you.
Do you know what I mean?
It's beyond, I think, what most people would count. Maybe because this is what I think of sometimes, as I go: how do we define that general intelligence?
Are we defining it as brilliance in a certain field? Or are we defining it as... like, a child is artificially generally intelligent, for sure, but you have to keep programming it.
You know, they come out, they don't speak, they don't know how to walk, they don't know how to... And you're constantly programming this, you know, AGI to get to where it needs to go.
So how will you, like, if you get to the point where you have a four-year-old child version of AGI.
A system that can just, like, just figure it out, you know, can just go autonomously with some help from its parents.
Yeah.
Figure out the world in the way that a four-year-old kid does,
yeah, we can call that an AGI.
If we can really address that truly generalized ability to be confronted with a new problem
and just figure it out, not perfectly.
A four-year-old doesn't always figure it out perfectly either.
But then we're clearly going to get it.
Are we able to get there if we don't
fundamentally understand thinking and the mind? It seems like it. You think we can get there?
I think so. Or can we get to a place where, so I'm sure you know about this. One of my favorite
stories in the world of AI is, I think it was actually a project that Microsoft was working on,
but they had this AI that was trying to learn how to discern between male and female
faces, right? And it was pretty accurate at some point. It was like at 99.9% accuracy. However,
it kept failing with black people and black women in particular. It kept on mischaracterizing them
as men. And the researchers kept working,
and they were like, what is it? What is it? What is it? What is it? What is it? At some point,
and this is, I mean, I tell the story this way, and I mean, it could be a little bit wrong,
but I found it funny. At some point, they sent, quote unquote, they sent the AI to, I think it
was Kenya, right? So they sent the AI to Africa. And then they told the research team in Kenya,
can you work with this for a while and try and figure it out?
And then while the AI was running on that side of the world with their data sets and African faces,
it became more and more and more accurate
with specifically black women.
But in the end of it, they found that the AI
never knew the difference between a male face
and a female face.
All it had been drawing was a correlation between makeup.
And so the AI was going, people who have red lips and who have like rosy cheeks and maybe
blue on their eyelids, those are women.
And then the other ones are men.
And because the researchers said, yes, you're correct.
Yes, you're correct.
It just found like a quote unquote cheat code, you know, and you know how this works way
beyond like what I understand.
But it just figured out a cheat code.
It's like, oh, I understand what you think a man is and what a woman is.
And it gave it to them.
And then they realized because black women are generally underserved when it comes to
makeup and they don't wear makeup, you know, the system just didn't know.
But we didn't know that it didn't know.
And so I wonder, how will we know that the AGI doesn't know or does know something?
Or will we know that it's just cheating to get there?
Like, how do we know?
And what is the cost of us not knowing when it's intertwined with so many aspects of our lives, you know?
One of the things that I believe we will make progress on is the ability to understand what these systems are doing.
So right now, interpretability has made some progress.
That's the field of looking at one of these models.
And there's different levels you can do this at.
You can try to understand what every artificial neuron in a system is doing,
or you can look at as the system is thinking step by step,
which of these do I not agree with?
And there will be even more we'll discover.
But the ability to understand what these systems are doing, hopefully have them explain to us
why they're coming to certain conclusions and do it accurately and robustly. I think we're
going to make progress there before I think we're going to truly understand how these systems are
capable of doing what they do and also
how our own brains are capable of doing what they do. So I think we will eventually get to
understand that. I'm so curious. I'm sure you are too. I am. But it seems to me that we'll have
more progress in doing what we know works to make these systems better and better and having them help us with the interpretability challenges.
And also, I think as these systems get smarter, they will just be fooled less often.
So a more sophisticated system might not have made that makeup distinction.
It may have learned at a deeper level.
And I think we see evidence of stuff like that happening.
You know, there's two things you actually make me think of when you say not get fooled
that easily.
One is the safety side.
One is the accuracy side.
One of the first things, and I mean, the press ate this up.
You remember, they were like, oh, the AI hallucinates and it thinks that it is going to kill me.
And it thinks, and people love using the word think, by the way, with large language models,
which I find particularly funny.
Because I always think like journalists, you know, they should be trying to understand what it's doing before they report it.
But they've done, I think, the general public a disservice in using the word think quite a lot.
I mean, I have empathy for it.
Like we need to use familiar terms and we need to anthropomorphize.
But I agree with you that it is a disservice.
Yeah, because if you're saying it thinks,
then people go, well, will it think about killing me?
And it's like, no, but it's not thinking.
It's really just using this magical transformer
to figure out where words most likely fit
in relation to each other.
What do you think you're doing?
What am I doing?
Yeah, that's an interesting one.
And now this is what I was getting to, right?
The ideas that we put together.
We talk about hallucinating.
Let me start with the first part.
Do you think we can get to a place where AI doesn't hallucinate?
Well, I think the better version of that question is: can we get to an AI that doesn't hallucinate when we don't want it to, at a similar rate to humans not doing it? And on that one, I would say yes.
Okay.
But actually, a big part of why people like these systems is that they do novel things. And if it only ever...
Yeah, hallucination is this sort of feature and bug.
Well, that's what I was about to ask.
I was like, isn't hallucinating part of being an intelligent being?
Totally.
If you think about the way...
Like if we think about the way an AI researcher does work.
Okay.
They look at a bunch of data.
They come up with ideas.
They read a bunch of stuff.
And then they start thinking, well, maybe this or this.
Maybe I should try this experiment. And now I got this data back, so that didn't quite work.
Now I'll come up with this new idea. Ideas that never existed before, most of which are wrong, but then have a process and a feedback loop to go figure out which ones might make sense, and then do make sense. That's like a key element of human progress.
How do we prevent the AI from that garbage in, garbage out scenario?
Right now, the AI is working off of information that humans have created in some way, shape, or form.
It is learning from what we've considered learnable material.
With everything that's popped up now, you know, the OpenAIs, the Anthropics, the LaMDAs, you name them,
it feels like we could get to a world where now AI
is pumping out more information than humans are
pumping out, and it may not be vetted as much as
it should be. How do we then,
is the AI going to get better when
it is learning from itself in a way that it might not be vetted?
Do you get what I'm saying?
Totally.
How do we figure that out?
So it comes back to this issue of knowing how to behave in different contexts.
Like you want hallucinations in the creative process, but you don't want hallucinations when you're trying to like report accurate facts on a situation.
when you're trying to report accurate facts on a situation.
And right now, you have these systems that can generate these beautiful new images that are hallucinations in some important sense, but good ones.
But then you have a system that when you want it to be only factual,
again, it's gotten much better, but it's still a long way to go on there.
And it's fine.
I think it's good if these systems are being trained on their own generated
data, as long as there is a process where the systems are learning what data is good and what is bad.
Which, again, it's not enough to say hallucinated or not, because if it's coming up with new scientific ideas, those may start off as hallucinations.
Which is valuable.
But, you know, what is good, what is bad.
And then also that there is enough human oversight of that process
that we are all still collectively in control of where these things are going.
But with those constraints, I think it's great that future systems are going to be trained on generated data.
And then you reminded me of something else, which is, I've been wondering, I don't know quite how to calculate this, but I would like to know when there are more words generated
by, say, GPT-5 or 6 or whatever than all of humanity at a given time.
That feels like an important milestone.
Actually, now that I'm saying that out loud, maybe it doesn't.
Generated in what way?
Oh, like where the model is producing more words than all of humanity in a given year.
Huh.
So there's 8 billion humans or whatever.
Yeah, that does seem interesting.
They speak however many words per year on average. You can figure out what that is.
I mean, yeah.
What does it give us, though, is the question on the other side.
That's why I was taking it back after I said it.
For some reason,
it feels like an important milestone to me,
but I can't think of...
It feels like an important milestone
in like a monkey typewriter kind of way
because maybe humans are, you know,
we're all monkey typewriting the whole time
and that's where things...
I think it's worthwhile.
Yeah, the amount of... I don't want to use the word thinking, because I think you're right not to use it, but just the amount of words generated by AI versus all of humanity.
Yeah.
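The back-of-envelope estimate they're circling here is easy to sketch. A minimal illustration in Python, using assumed round numbers (8 billion people, roughly 16,000 spoken words per person per day, and a purely hypothetical model serving load), none of which come from the conversation itself:

```python
# Back-of-envelope sketch of the "more words than humanity" milestone
# discussed above. All numbers below are illustrative assumptions,
# not figures from the conversation.

HUMANS = 8e9                        # world population (assumed)
WORDS_PER_PERSON_PER_DAY = 16_000   # rough spoken-word estimate (assumed)

human_words_per_year = HUMANS * WORDS_PER_PERSON_PER_DAY * 365
print(f"Humanity speaks roughly {human_words_per_year:.2e} words/year")
# ~4.7e16, i.e. tens of quadrillions of words per year

def model_words_per_year(requests_per_day: float, words_per_reply: float) -> float:
    """Yearly word output for an assumed request volume and reply length."""
    return requests_per_day * words_per_reply * 365

# e.g. a hypothetical 1 billion replies/day at 500 words each:
print(f"Model output: {model_words_per_year(1e9, 500):.2e} words/year")
# ~1.8e14 -- still a couple of orders of magnitude short of humanity
```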
I'm going to lose you soon.
So I want to jump into a few questions that I think are really, people will kill me if I don't ask these of you.
Okay. So one of the main ones, this is from my side personally, we always talk about
AI learning from the data, right? They're fed data sets. And we talk about this. That's why
you need these mega computers that cost billions and billions of dollars so that the computers can
learn. How do we teach an AI to think better than the humans that have given it the data that is clearly flawed.
So, for instance, how does an AI learn beyond the limited data that we've put out there?
You know, when it comes to race, when it comes to economics, when it comes to the ideas, because we're limited.
How do we teach it to not be as limited as we are if we're feeding it data that's limited?
We don't know yet, but that's like one of our biggest research thrusts in front of us: how do we surpass human data?
And I hope that if we can do this again a year from now, I'll be able to tell you.
But I don't know yet.
We don't know yet.
It's really important.
However, a thing that I do believe is this is going to be a force to combat injustice in the world in a super important way.
I think these systems will be, they won't have the same deep flaws that all humans do.
They will be able to be made to be far less racist, far less sexist, far less biased.
They'll be a force for economic justice
in the world. I think, you know, if you make a great AI tutor or a great AI medical advisor
available, that helps the poorest half of the world more than the richest half, even though
it helps lift everybody up. So I don't have an answer to the scientific question you asked, but I do at this point feel confident that these systems can be, of course we have to do some hard societal work to make them in fact be, but can be great for sort of increasing justice in the world.
Okay. Maybe that leads then perfectly
to a second question, which is, what are you doing? What is OpenAI doing? Are you even
considering doing anything to try and mitigate how much this new technology once again creates, you know, the haves and the have-nots?
Every new technology that's come out has been amazing for society as a whole, if we call it that.
But you can't deny it creates a moment in time where if you have it, you've got it all. And if you don't, you're out of the game.
I think that we'll learn a lot more as we go. But currently, I think one really important thing we do is offer a truly free service, meaning not ad-supported, just a free service, to more than 100 million people who are using it every week. It's not fair to say anyone, because in some countries we are still blocked, but we're trying to get closer and closer to a point where anyone can access really high-quality, easy-to-use, free AI. That is important to all of us personally. And I think that there's other things that we'd like to do with the
technology, like if we can help cure diseases with AI and make those cures available to the world, that's clearly beneficial.
But putting this tool in the hands of as many people as we can and letting them use it to architect the future, that is super important.
And I think we can push this much, much further.
Okay.
Two more questions.
Can I add one more thing to that?
Oh, yeah.
Yeah, go.
It's all your time.
Go ahead.
Yeah.
The other thing that I think is important to that is who gets to make the decisions about what these systems say and not say or do and not do?
Like who gets to set the limits?
Yeah.
And like right now, it is basically the people who work at OpenAI deciding, and no one would say that's like a fair representation of the world.
So figuring out not just how we spread access of this technology, but how we democratize the governance of it.
That's like a huge challenge for us in the coming year.
Well, that sort of goes to what I was about to ask you.
The safety side of it all.
We spoke about this
right in the beginning
of the conversation.
When designing something
that can change the world,
there always has to be
an acknowledgement of the fact
that it can change the world
in the worst way
or for the worst.
With each leap of technology,
there's been an outsized ability
for one person to do more damage.
Is it possible, the first part, to make AI completely safe?
And then the second part of it is, what is your nightmare scenario?
What is the thing that you think of that would make you press a red button
that shuts OpenAI and all AI down?
When you go, you know what, if this can happen,
we have to shut it all down.
What are you afraid of?
And so the first one is, can you make it safe?
And the second part is, what is your nightmare scenario?
The way I think about it, so first of all,
I think the insight that you started with,
which is that the number of people it takes to cause catastrophic harm
goes down every decade, or roughly every decade.
That seems to me to be a deeply true thing that we as a society have to confront.
Second, about making a system safe: I don't think of it as quite a binary thing. We say airplanes are safe, but airplanes do still crash, very infrequently, like amazingly infrequently to me. We say that drugs are safe, but the FDA will still certify a drug that can cause some people to die sometimes. So safety is not binary; it's society deciding something is acceptably safe given the risk-reward trade-offs. And that I think we can get to.
But it doesn't mean things aren't going to go really wrong.
I think things will go really wrong with AI.
What we have to prevent, and I think what you were touching on there, is the kind of catastrophic risks. I think society actually has a fairly good, messy but good, process for collectively determining what safety thresholds should be. That is a complex negotiation with a lot of stakeholders that we as a society have gotten better and better at over time. So nuclear is the example everyone gives,
you know, nuclear war had this very global impact.
And so the world treated it differently and has done what I think is a remarkable job the last almost 80 years.
And I think there will be things with AI that are like that.
Certainly one example people talk about a lot is AI being used to design and
manufacture synthetic pathogens that can cause a huge problem. Another thing people talk a lot
about is computer security issues and AI that can just like go hack beyond what any human could do
and certainly at any scale. And then there's another category of things that I think are just new, which is if the model gets capable enough that it can help design the own way to like exfiltrate the weights off of a server and make a lot of copies and modify its behavior.
More of like the sci-fi scenario.
I think we do as a world need to stare that in the face. Maybe not that specific case, but this idea that there is catastrophic or potentially even existential risk, and just because we can't precisely define it doesn't mean we get to ignore it.
And so we're doing a lot of work here to try to forecast and measure what those issues might be, when they might come, and how we would detect them early. And I think all the people who say you shouldn't talk about this at all, you should just talk about the issues of misinformation and bias, the issues of today, they are wrong. We have to talk about both. We have to be safe at every step of the way.
Okay, that's terrifying, as I thought it would be. So then I go to... By the way, are you actually thinking about running for governor?
Was that a real thing?
No, no, no.
I thought about it very briefly in like 2017 or 16 even, something like that.
I thought so.
That seemed like a...
So like a couple of weeks of vaguely entertaining the idea.
Okay, okay.
I guess my final question for you then is the what now of it all.
What is your dream?
If Sam Altman could wave a magic wand and have AI be exactly what you hope it will be, what will it do for the future?
What are all the good sides?
What are all the upsides for everybody out there?
This is like a nice positive thing.
Thank you for asking this.
I think you should always end on the positives. Yeah. Look, I think we are heading into
the greatest period of abundance that humanity has ever, ever seen. And I think the two main
drivers of that are AI and energy, but there are going to be others too. But those two things,
the ability to like, come up with any idea, the ability to make it happen and do this at mass scale, where
the limits to what people can have are going to be sort of like what they can imagine and what we
can collectively negotiate as a society. I think this is going to be amazing. We were talking
earlier: what does it mean if every student gets a better educational experience than the richest student with the best access can get today? What does it mean if we all have better health care than the richest person with the best access can get today? What does it mean if people are, generally speaking, freed up to work on whatever they find most personally fulfilling, even if it means there have to be new kinds of job categories? What does it mean if everybody can, you know, presumably you
and I both like really love our jobs. Yeah. But I don't think that's true for everybody. Yeah.
Clearly. I agree. What does it mean if everybody gets to have a job that they love, and that they have the resources of a large company or a large team at their disposal?
So maybe instead of the 800 people at OpenAI, everybody gets 800 even smarter AI systems that can do all these things.
And people just get to create and make all these.
I think this is remarkable.
And I think this is a world that we are heading to.
And it'll require a lot of work in addition to the technology to make it happen,
like society is going to have to make some changes. But the fact that we are heading into this
age of abundance, I'm very happy about. I'll leave you with this from my side.
I'm a huge fan, huge, huge fan of the potential upsides of AI.
You know, like I work in education in South Africa.
My dream has always been to have every kid have access to the best tutor possible.
You know what I mean?
Literally no child left behind because they can learn at their pace.
By the way, what's happening with children who are using ChatGPT to learn things?
The stories, like I get emails every day.
It's phenomenal.
It really is. You know, it's like, I'm this 14-year-old in whatever country and I learned all of calculus on my own.
Uh-huh.
It really is. It really is phenomenal. And especially as it becomes even more multimodal, when you have video and all of that, it's going to be amazing.
I dream about that. To your point, health care, I dream about it. I dream about all of it.
the one existential question i don't think we're asking enough, and I hope you will,
and maybe you have been asking it, though, is how do we redefine the purpose of humankind
once AI has effectively supplanted all of these things?
Because whether you like it or not, throughout history you realize our purpose has often defined our progress. You know, there's a time when our purpose was just religion. And so, for good and bad, if you think about it, religion was really great at getting people to think and move in a certain direction beyond themselves. And they went, this is my purpose. I wake up to serve God, whichever God you were thinking of. I wake up to serve God. I wake up to please God. I wake up.
And it makes humans, I think, one, it makes them feel like they're moving towards something.
And two, it gives them a sense of community and belonging.
And I feel like as we move into a world where AI removes this, the one thing I hope we don't forget is how many people have tied their complete identities
to what they do versus who they are. And once we take that away, when you don't have a clerk,
when you don't have a secretary, when you don't have a switchboard operator, when you don't have
an assistant, when you don't have a factory worker, when you don't have all of these things,
you know, we've seen what happens in history oftentimes. It's like radicalism pops up.
There's a mass backlash. Like, have you
thought about that? Is there a way you can intercept that before it happens?
How would you describe what our purpose is right now?
I think right now, our purpose is it's survival tied to the generation of income in some way,
shape or form, because that's how we've been told survival works, right? You have to make money
in order to survive. But we've seen that there have been pockets in time
where that has been redefined.
France has a great example where they had
and I think they still have a version of it, but the artist's
fund where they went, we'll pay you as an artist
to just make things. Just make France
look beautiful. And that
was beautiful. I know you're a fan of
UBI, for instance. Yeah, we shouldn't let you go before you talk about that.
Well, I just don't think people's survival should be tied to their willingness or ability to work. I think that's like a waste of human potential.
Yeah, I agree with you completely.
I think,
wait, let me ask you this
before you go.
It's like,
why do you think
universal basic income
is so important?
Because you don't,
you don't waste your time
or your money
on things you don't believe in.
And you spend a lot of time
and money
on universal basic income.
I mean, the last I saw
was like, there's like a $40 million project that you're a part of.
$60 million.
I don't think universal basic income is a complete solution, of course, to the challenges in front of us.
But I do think that like eliminating poverty is just inarguably a good thing to do.
I think better redistribution of resources will lead to a better society for everyone.
But I don't think giving away money is the key part of this. Giving away tools and giving away governance, I think, is more important. People want to be architects of
the future. I think as much as I could say there's been a consistent thread of meaning
or of like a mission for humanity.
I think it is like,
you know,
survive and thrive for sure on an individual basis.
But collectively we do have an emergent collective desire to make the future better.
Yes.
Now we get off track lots of times,
but the human story is like,
let's make the future better. And that is technology,
that is governance, that's the way we treat each other, that's like going off to explore the stars,
that's understanding the way the universe works, whatever it is. And I have so much confidence
that that is so deep in us, no matter what tools we get, that base desire, that mission of humanity
to thrive as a species and as individuals, that's not going to
go anywhere. So I'm super optimistic about what the world looks like two generations from now.
But what you got at is really important, which is people who are already in their careers and
actually pretty happy and don't want change and change is coming. One thing we've seen with previous technological revolutions is in about two generations,
it seems like society and people can adapt to any amount of job turnover.
Right.
But not in 10 years.
Certainly not in five years.
And we're going to go face that, I think, to some degree.
As we said earlier, I think it'll be slower than people think, but
still faster than society has had to deal with in the past. And what that's going to mean and how
we have to adapt through that, I'm definitely a little afraid of. We're going to have to confront
it and I assume we'll figure it out. I'm confident we'll figure it out. And I'm also confident that
we'll give our children and grandchildren better tools than we had. And they are just going to do things that absolutely astonish us. And I hope they feel horrible about how bad we all had it. Like, I hope the
future is just so amazing, and this human spirit and desire to go off and figure it out and express ourselves and design a better and better world, and worlds way beyond this world. I think that's wonderful. I'm really happy about that. And I think in some sense we shouldn't make too much of this one little thing. You know that scene in Star Wars where one of the bad guys, I think it's Vader, is like, don't be too impressed with this technological terror you've created? It's like nothing compared to the power of the Force.
Yes.
I do feel that way about AI in some important sense, which is like, we shouldn't be too impressed with this.
Like the human spirit will see us through and is much bigger than any one technological
revolution.
I mean, it's a beautiful message of hope.
I hope you're right, because I love the technology.
The one thing-
But it will be choppy.
No, you know what? The one thing I would leave with you, Sam Altman, as TIME's CEO of the Year and one of the people of the year, and I think you'll continue to be that, especially in this role, because of how much impact OpenAI and AI itself are going to have on us. One thing I would implore you to do is continue to remember that feeling you had when you were fired, as you're creating a technology that's going to put many people in a similar position.
Because I see you have that humanity in you, and I hope as you create, you'll constantly be thinking about that.
You know what I did Saturday morning, like early Saturday morning when I couldn't sleep?
I wrote down what can I learn about this that will help me be better when other people go through a similar thing and blame me like I'm blaming the board right now.
Hmm.
And have you figured it out?
I mean, there's a lot of useful single lessons, but the empathy I gained out of this whole experience and my recompilation of values for sure was a blessing in disguise. Like it was at a painful cost,
but I'm happy to have had the experience in that sense.
Well, Sam, thank you for the time.
Thank you.
Really appreciate it.
I hope we do chat in a year about, you know,
all the new advancements.
That'll be, you should definitely come do that.
That'll be a fun one.
I will, definitely, man.
All right. All right, man. All right.
All right, cool.
Thank you.
What Now with Trevor Noah
is produced by Spotify Studios
in partnership with
Day Zero Productions,
Fulwell 73,
and Audacy's
Pineapple Street Studios.
The show is executive produced by Trevor Noah, Ben Winston,
Jenna Weiss-Berman, and Barry Finkel.
Produced by Emmanuel Hapsis and Marina Henke.
Music, mixing, and mastering by Hannis Brown.
Thank you so much for taking the time and tuning in.
Thank you for listening.
I hope you enjoyed the conversation.
I hope we left you with something.
Hopefully we'll see you again next week.
Same time, which is whenever you listen.
Same place, which is wherever you listened.
Next Thursday, all new episode.
What now?