Tetragrammaton with Rick Rubin - Greg Brockman (Part 1)
Episode Date: February 25, 2026

Greg Brockman is a cofounder and president of OpenAI, the company behind ChatGPT. ------ Thank you to the sponsors that fuel our podcast and our team: AG1 https://DrinkAG1.com/tetra ------ ... Athletic Nicotine https://www.AthleticNicotine.com/tetra Use code 'TETRA' ------ LMNT Electrolytes https://DrinkLMNT.com/tetra Use code 'TETRA' ------ Sign up to receive Tetragrammaton Transmissions https://www.tetragrammaton.com/join-newsletter
Transcript
tetragrammaton.
One thing that's really changed over the course of 2025
was people started to use ChatGPT for much more personal,
very intimate applications.
For example, so my wife has complex medical conditions,
including hypermobile Ehlers-Danlos syndrome,
which took many years for us to get diagnosed.
And as we put those symptoms into ChatGPT,
it would be able to figure it out pretty immediately.
But the thing is that every doctor has their own specialty rather than one doctor who can see across everything.
And she uses ChatGPT to manage her health all the time.
Great.
For me, the funny thing is I'm actually a late adopter of our own technologies usually.
And I usually test them and I stress test them in all sorts of ways.
And it's funny for early versions of our models, I would usually like try to break some of their filters.
And so I would swear at them and yell at them.
And my wife is always, like, "He's just kidding,"
telling me to be nice to the AI.
And I think that for me,
I actually have been someone who's almost very set in my ways.
And the first time this has changed is very recently with Codex.
And really starting in December, like I've been a curmudgeon.
I've got my way of doing things.
I use my terminal.
I use my Emacs, like all these tools that I grew up with.
And I've just abandoned all of that now.
Wow.
I'm just using codex.
That's revolutionary.
It really is.
You sound like you were set in your ways.
I really was, yes.
What changes with each new model?
Everything.
And the way to think about it is that, from the outside, the perception is,
oh, you're just scaling up the models, you're just doing this kind of dumb thing.
On the inside, every single part of the process, we are always up-leveling.
Like, the thing that works in machine learning and that machine learning rewards is attention to detail.
And so you really want to make sure that all of the things,
the scaling, are right. You want to make sure the systems handle, for example, GPUs failing; that
happens. And so how do you detect, if you have a run of 100,000 GPUs, how do you detect which
GPU is the broken one? It's not easy, right? You can't just be like, that one, right? So there's
almost a physical process to it. There's the software process to it. There's understanding
if your data is any good and that there's so much reward to just actually looking at the data to
understand what's in there and to make sure that you format it correctly and that's tokenized
properly. And so there's just every single part of the input we are constantly improving.
And I pay attention a lot to the output too. We're trying to see how does improvement here
connect to the evals shifting. And one observation I have across many years of OpenAI is that
if we have some signs of life, some application that kind of works right now, one year from now,
you should expect it to be excellent. And so we are on this exponential and able to make these
like very, very sophisticated improvements over time.
Tell me about your co-founder, Sam.
He seems to generate strong reactions out in the world.
I think Sam is very misunderstood by the world.
And I think that the way I think about Sam is that he is a very good person.
And that that goodness is something that gets held against him.
And it's very much no good deed goes unpunished.
And for some of the critics, I run them through my mental filter of every
accusation is a confession, right? I think that people project onto him
things that they see in themselves, that they're insecure about, or that they want for themselves.
And I think he's very resilient.
He's been through a lot, and I'm very grateful for him, honestly, for being at the center
of so much attention and continuing to go, because he's been very critical to OpenAI
becoming what it is today.
And I think that there's no one who could have kind of filled that role
as well as he has to date.
Yeah, it's also nice that he can take all the arrows
so you could continue doing the work.
I wasn't going to say it that way,
but it is true,
and it's something I'm deeply grateful for.
Tell me the story of the firing.
It was a big story.
It was a much bigger story than I would have ever expected.
What led to it and what happened?
I would say for the year,
maybe year and a half leading up to the firing,
I think that one thing we did wrong
was that we let conflict fester.
And one of the lessons that I have
throughout the course of OpenAI is that having the hard conversation and actually really
addressing conflicts matters. People disagree, that's normal, that happens. But if you let it
fester, it is always going to be more painful down the road. And that's something that time and time
again, I think that we've seen and that I think that the firing was the most salient one of those
examples. And that's one lesson that I really take away from the whole experience.
anytime anything grows so quickly, there's a lot of confusion and complication.
It comes with the territory.
I think that's true, yes.
So I imagine in the explosion that was going on at the company, the rate that you were growing,
they're bound to be difficulties.
Yeah, I think that there were, I think at multiple levels, right, just conflict between
different people, different ways of operating, just decisions.
Maybe there were some fundamental ones around was ChatGPT a good idea or not, you know, that two of our board members have spoken about.
And I think that this led ultimately to that event in November.
Describe it.
Tell me what the event was.
How did it happen?
I was coding away.
I was preparing this change that I'd been working on for some time.
I was very excited to get it merged and shipped.
And I got a message asking me to hop
on a video call. So I go onto the video call, and the Google Meet preview screen shows who's in there.
And I noticed it was the board except for Sam. I was very surprised. So I click join. And then they tell
me the news. What do they tell you? Well, they tell me essentially the same information that was
in the public press release, saying that Sam has been fired, saying that Mira is the interim CEO, and saying
that I've been removed from the board.
And they tell me that I am very important to the company
that I am someone who can get things done
and that they want me to stay at the company.
You're off the board, but they want you to stay.
That's accurate, yeah.
And it wasn't even really framed in a way of,
will you stay or not stay?
It was just saying, this is your new role.
And for me, I knew in that moment it wasn't right.
Did it come out of the blue?
Yes.
Wow.
Were you shocked?
Yes.
I would imagine.
Yes.
But it was one of those moments where because I know all the people, I know all the dynamics,
I know the conflicts that have been brewing, for me it was, okay, I see what happened.
It's sad that it came to this, but I understand.
Yeah.
It could have been handled differently earlier if somebody was focused on that.
It could have been handled differently earlier, for certain.
I would expect everyone involved would say the same at this moment.
And I remember asking for more information.
They wouldn't share it at the time.
And again, I understood, okay, this is the way that things are going.
And after hanging up the call, I told my wife and said, we have to leave.
And she said, yes, we do.
And I said, just so you know, we should assume that all of the equity that we have to date,
which we haven't sold any of, will go away,
that the board will be adversarial in some way.
We just need to be prepared for that.
I told her the amount that it was valued at.
And I said, you just assume all of this goes to zero.
Yeah.
Do you still want to do it?
And she said yes.
And this is something that you co-founded from the beginning and had spent, at that time,
how many years?
Eight years?
Eight years.
Eight years.
Totally devoted to.
Totally devoted to it.
Yes.
We had put off children so I could really focus on this.
It was like really, I think that OpenAI had been such a big part of our life.
Yeah.
And that I think we both just believe in the mission so much.
And it's something, the potential for AI to help humans and, like, just every living being,
like, that's something we really care about.
My wife loves animals and wants to really help them.
And just realizing that it wasn't right, what had happened. And you could see a different world
where you try to take advantage of it in some way, but that didn't even occur to me. The
thing that I just felt was, this was wrong. Yeah. And so I called Sam, asked him what he was
planning on doing. And he said, well, I'm going to go start another company, I guess. I said,
no, Sam. We are going to start a new company. And so that day, I quit. And I think,
that in the narrative that people remember,
people forget that moment.
It all kind of blurs together.
I didn't know that part of the story.
The thing that happened was prior to then,
I think that it looks like there's a firing,
something terrible must have happened,
and the thing that changed it was when I quit.
I wrote a very short message saying that
when I heard the news today,
Were they not expecting you to quit?
I do not think they expected it.
Was the decision made by the board?
Yes.
How many people were on the board?
So at the time,
so one of the things that we had let fester
was that we had been losing board members that year
for reasonable reasons, each of them.
One wanted to run for president,
other reasons for each.
And so left on the board
were six members, including Sam and myself.
And I think that for me, just really trying to operate through this.
And this is part of what the board said to me during that call, that I was someone who could really get things done within OpenAI, that I was uniquely capable there.
And so a lot of what I always focused on is what do we need to do?
How do we do it?
How do we organize the people?
How do we bring them together?
And that motion sometimes could cause conflict itself, right?
But it's always because you're doing what you need to do to push it forward.
Makes sense.
And then what happened next?
Well, so we started to get a lot of calls from people.
And it was truly surprising to me, the number of people who reached out saying, I don't know if you're planning on doing something next, but I want to go with you.
Is that something in the company or outside of the company?
Within the company.
Really?
Yeah.
It was truly humbling.
It was not something I ever would have asked or expected.
Yeah.
But it was something that really happened.
And I remember saying that day, there is a 10% chance that we get the company back.
Really?
Not zero, but not more than 10%.
Yeah, yeah.
Because the board has all the cards.
They need to decide that that's the direction they think is right for the company for the mission.
And part of why I left is because to me, the mission is bigger than the corporate entity.
Yes.
The mission is something I'm pursuing that I care about.
and I want to pursue in the form that I believe can most accomplish it.
So that day, some of my close collaborators, Jakub, Szymon, Aleksander, quit as well.
So we had a small band.
It's like a mutiny.
A little bit.
It was the people who were just like, all right, like we see what happened.
This is not right.
We need to go do something different.
Yeah.
And then we just started thinking about, okay, well, what's the new company?
It was great energy, honestly.
Yeah.
Because we had this blank slate where we had a whole vision of what we wanted to accomplish.
And we were charting out on a whiteboard and thinking about all the ways that could fit together.
And there were all these people reaching out.
And so I set up a meeting the next day at Sam's house where I said, come, let's all meet.
We'll show everyone the vision.
We'll talk through things.
We'll brainstorm how it's all going to work.
It's amazing that it's the next day.
It's like amazing.
There's no time.
We all had energy.
We were so jazzed about this because it was an opportunity.
to really rethink anything that we wanted.
Now was the moment.
And we had all these reflections on the conflicts we let brew.
So one of the values that I actually came back with was have the hard conversation.
Yeah.
And that was one very clear takeaway from the whole thing.
So next day, and by the way, no one's really sleeping.
Like people are staying up super late.
It's just a lot of caffeine energy, so to speak.
Yeah.
And next day we have the meeting around 1 p.m.
That day was, again, amazing energy.
We spent a lot of time on charting out what the new company would be.
That night, the executive team came over, and so it was a very nice reunion,
as we all kind of felt like we're in this together,
that we're all going to try to get to a good outcome together.
Next day, more conversations at the board,
more trying to figure out how to get through.
Wow.
I cannot capture the intensity of this time,
because it was this company that we had built,
that we had poured our heart and soul into,
that there was this crucible moment of,
is it going to survive?
And it was right before Thanksgiving.
People packed the office.
People canceled their flights home, came back,
were just there.
It wasn't that they felt they could do anything active,
but they just wanted to be there showing support
and to try to just, like, just be together.
It's a wild story.
It's absolutely wild.
And so we were all up on the top floor
and every so often someone would go down the stairs
to go get a water or something
and everyone would be like,
any news, any news?
You know, there'd be applause.
It was just like really like everyone just really wanted to be a part of it.
And I think that is something that no one's really talked about
just like how much the support of the people mattered in this time.
And that night,
that Sunday night,
the board decided that,
hey,
we're going to replace Mira as interim CEO with a different interim CEO.
And the company went wild and just rejected it and said,
this is absolutely wrong. And at this point, it was chaos. And the people started to just leave the office,
just streaming out of the office. And it was just like a thing. It was almost like, you know,
the new CEO is coming. Everyone's like, we just need to get out of here and be gone. Yeah,
there are all these stories of people went over to different, there were a couple of different houses,
and there were like hundreds of people packed into backyards and people were making speeches.
And there were a bunch of crazy things that happened that night. Everyone was like trying to
figure out is this company somewhere I want to be? Like, there were lots of the competitors swirling
around, making offers, saying that we'll make offers to anyone at OpenAI, all these things.
Yeah. And then meanwhile, we were trying to set up a lifeboat for people. And we already had this
plan for a company. It was like a small life raft. We rapidly extended it to be a full, like, you know,
full-size one for all 770 people. In a day. In a day. And this was in partnership with Microsoft.
Microsoft says, yes, we will take everyone.
Microsoft was there ultimately to support.
If that's what we wanted to do, they were there to help.
Sam and I and Jakub and the others who had quit
were trying to figure out, yeah, what is the structure we want?
Do we want an independent company?
Do we want to be funded by Microsoft?
Do we want to be actually part of that corporate entity, those kinds of things?
And so it was really our decision to figure out how to carry the mission forward.
And I remember, I went to sleep very late because I was
comforting all the people who were trying to figure out what was going on and saying we have a plan,
here's how it looks. And that night, there was a petition to the board saying, hey, please undo this.
And in the end, 95% of employees signed. And it was all self-organized, people sending it around.
So many people were editing that Google Doc for the petition that it actually crashed Google Docs.
And so then they had to have some people who were designated as the person to add to the Google Doc.
It was a whole crazy thing.
It sounds like a revolution.
It really was.
That's what you're describing.
That's how it felt.
That's 100% how it felt.
And that morning, I remember waking up 5 a.m. or so, and I check Twitter and I see a post from Ilya.
And for me, I feel emotional even just talking about this.
Yeah.
I remember when the firing happened, I just felt this distance, right?
I just like, it was like this emotional thing and you just feel this like.
How can you do this to me?
Exactly.
100%.
Yeah.
And in that moment, I felt forgiveness, right?
I felt like, okay.
Beautiful.
Yeah, it was.
So it's 48 hours, the whole story?
A little bit longer.
That was probably 72 or so.
Still, a tiny amount of time.
Yes.
But it took two more days to figure out how to get back into the company, right?
Because, yeah, and I think that those next two days, we were really trying to figure out what is the right configuration, because clearly
it needed to be different.
It needed to be different in some way.
Because we don't want this to happen again.
And so finding a set of board members that worked for the old board, that worked for us,
that was a process.
And together, with new and old, we were able to move forward.
How would you say the company is different since that event?
Extremely.
In what way?
It took us time to really live by "have the hard conversation" and to really figure out how you get ahead of conflict.
I don't think we're perfect at it.
There's still moments where I see it, but I think that the feeling that we came back with is this feeling of mortality, that this corporate entity is not destined.
It's not preordained that it succeeds.
It does not survive simply because of momentum.
It survives and thrives because people work hard, because people make it be so.
And so I think that that understanding that we need to work hard to achieve this,
and that the thing that is most likely to get in the way of success is tripping over ourselves.
Like those are...
Like the human component.
Yes.
Would you say the discontent was about human things?
It wasn't technical.
It wasn't what the business was doing.
Yes.
It was just human stuff.
Always human stuff.
Like in a marriage.
Yes.
Yes.
And of course, you know, it plays out in all sorts of different ways and forms.
But fundamentally, I think that for me getting to...
an aligned,
forward-leaning leadership team,
a company that's moving in one
direction, that has a vision
and is executing on it. That's been
the most important thing.
And I felt like in that moment that
we had our life flash before
our eyes. And so I think since then
you can never take it for granted.
And do you feel like you have a good
understanding of what happened now?
I do. You do.
Part of the deal for coming back was
to say, let's do an independent review
of all events.
Right.
So there's someone,
some law firm's going to come in,
take a look at everything
and say, like,
is there something
that needs to be different here?
And a law firm did that
and said that we understand
why the board did this.
It was within their right,
but they definitely didn't have to.
And so there are all these onlookers
who then project their own explanation
onto what happened.
I think they've all been
pretty disappointed so far
in terms of what they've seen.
And it's uncomfortable.
It is uncomfortable
because I think that it's a moment
where our internal dynamics spilled out into the outside world.
One of my approaches throughout OpenAI has been,
I had this mantra of keep the band together.
And through various splits that we had,
that was the approach I took is to say,
hey, we've got all these people.
We have different views.
We're having conflict.
We want to push in different directions.
Different people think they should be in charge,
whatever it is.
But they're all smart people.
And I'd love to distill all these views
into a consistent picture of what needs to
happen. And that to me is the core thing that I feel like I do at OpenAI, is try to figure out,
we have this grand mission, but how do we do it? And what is the right decision? And in each of those
instances, and it's really not just the firing, it's, like, looking back to our time with Elon,
to our time with the Anthropic founders. For each of these moments, I have really gone through
each of them trying to say, how can we all work together to get to a great outcome? In the end,
we've ended up with departures, either because the other person decides they don't want to do it anymore.
And there's usually downstream pain.
And that if you look at what happens, normally we let other people tell the narrative.
Like, we keep going.
And partly I think we just don't want to, you know, if we're the ones who are pushing on the company, we want other people to be able to own their story.
Yeah.
But then sometimes people use that to kind of whack us.
Yeah.
Tell me what you remember about our David and
Goliath conversation.
Huh.
Because I feel like this relates.
Yeah.
We did talk about it.
And I'm trying to remember exactly what context it was in.
But I think that the core is that OpenAI, like, if you look at the industry that we're in, right, there's Google, there's Meta.
And these are established companies with very, very huge market caps and many people and all sorts of users and lots of compute.
And we are the challenger.
Like, we are the ones who are just getting started.
Upstart.
That's right.
And somehow, I think that because ChatGPT has been so successful, people then think
of us, well, because we're the category leader, therefore we're the Goliath, we're the establishment.
And from where I sit, I say, the AGI, that's the real prize.
And that this technology is something that I think could play out such that it really benefits
these incumbents. And our ethos has always been, the whole reason we started was to think about
how can we steer this in a better direction for everyone. And that that is something where, I think
we are very much, David, in an industry of Goliaths. And I think the same is true with Elon, right,
that Elon Musk, you know, the richest man in the world, one of the most powerful men in the world,
and that he's suing us, right, saying that we tricked him. And when I look back
at what we did, I spent so much effort,
so much effort, to try to make it work with him.
And we were very transparent.
What was the difference in vision?
So control.
That was the reason that we could not get to a deal with him,
because we all agreed that we're gonna need far more capital
than we can get through philanthropic means.
So we're going to need a for-profit entity.
And we negotiated over the terms.
And he agreed with
that as well. Oh, he agreed with that? Yeah. He told us the time is now to go. He said,
this is the triggering event once we did our first big result. We all got that this was the
only way forward. And then the negotiation over the details began. And he needed majority equity.
He needed absolute initial control. And he needed to be CEO. And we
could not give on the control. I remember Ilya
and I were thinking about it. Do we fold? Do we give this to him? Do we give
him the company? And then you start thinking about, well,
imagine it actually works. Imagine that you actually achieve the mission.
Imagine you actually build an AGI. And there's one person who has absolute control over it.
It doesn't matter who that person is. Do you feel good?
Too much. Too much. And so we pushed hard for an override of
some form, so that if everyone else was voting against... And we just couldn't get it done. And so we said no.
And that... Did it end on bad terms right in that moment when the deal wasn't made?
I thought it would. It was very, very tough. It was a very emotionally draining moment because we'd spent
six weeks, five and a half, six weeks, not doing any work, just really negotiating these terms.
We had these deep emotional conversations, and I thought that it was all blown up. But it wasn't,
and we still tried to find a path forward together.
But the thing that he came back with was to say,
hey, either commit to the nonprofit,
which means giving Elon two more board seats,
but it would be, you know, Ilya, Sam, Greg, Elon, Elon, Elon.
You'd have to commit to a non-solicit
and to not quitting for a year or two or something like that.
So there were very specific terms for the commitment.
And we said we were going to think about that.
The other thing that he came back with was to
say, you know what, why don't you just merge into Tesla? I don't have full control over
Tesla. It solves your concern. Build the AGI here. Do it in secret. You got to do it because
the shareholders won't like it. And that was the big thing he pushed on. So you get your for-profit,
you get the cash. Tesla, the cash cow, was the term that was used. And that one, we had no desire to do.
Like, we really thought about it. We were just like, just doesn't seem right. Just a different business.
It's a different thing. It's a different thing. It doesn't feel
likely to succeed, and it's not pushing on AGI in this transparent way that's going
to benefit people. And so we went through multiple iterations. We talked about could we open up
this for profit negotiations again? And it was kind of like, yeah, maybe we could in the future.
And then we actually came up with an idea. So we pursued a nonprofit fundraise. So let's do this for
now. But we need to see what we can actually raise. And along the way that we had an idea for maybe
we could do an ICO. And we could actually raise capital through
doing some sort of crypto coin,
and we had some ideas for how that would tie to
the future compute or AGI
that we produced and he got
so excited about that. I remember
we had a conversation
where he said, you've solved it,
it's all great, we're going to raise
$10 billion. He said that he
was on board. And
we spent a lot of time thinking about, okay, let's
really practically chart this out. Is this a good idea?
Do we want to do it? We kind of started to get
iffy on it and then he sent an email saying
I decided I will not support the ICO.
This all kind of played out in January of 2018.
And then at that point, he said, we really need a path to raising more money.
And we told him that, hey, we have another idea that doesn't involve an ICO.
And it would still be, you know, we were going to talk about this like for profit entity.
Again, all these ideas were just swirling around and different, different things we could try.
And he said that he just didn't believe it was going to work.
So he said, he's out.
His plan was to build AGI at Tesla.
and do that thing.
He actually tried to recruit a bunch of our people.
No one went with him.
But on his way out, he did this all hands with the team.
And in that all-hands, he said that, hey, Greg, Sam, and Ilya see a path to raising
billions of dollars.
I don't think it's going to work.
But if you see a path, you've got to follow it.
And so I support them.
He said he'd even keep advising.
He'd stay involved.
He supports all this.
But he's just like, you know, this requires billions of dollars per year.
He also then went on to talk about how he's going to do the Tesla AGI,
and that he wasn't going to work on safety because, in his words,
if the sheep are legislating safety and the wolves aren't,
then there's no point, referring to Google.
So, you know, it's like that kind of tone.
And so then we began over the next year to figure out, yeah,
how do we actually raise these large quantities of capital?
Was it all cool with Elon at that point?
For sure, yes.
I mean, I think it was very explicit.
Everyone understood this was what was happening.
moving forward. Yes. Okay. And what was the next step? Well, we needed to figure out some
structure that would actually let us raise more capital. And we thought about different ideas.
We ended up implementing this entity called OpenAI LP. And in mid-2019, we actually got
started. And along the way, we kept Elon apprised of what was happening. We sent him the term
sheet. There were a bunch of calls where Sam would talk to him about what we were doing. And I remember
getting an email from him in December of 2018. It was devastating. He sent an email saying,
the subject was something like, need I remind you. And the body was something like, OpenAI
has a 0% chance of success relative to Google without a dramatic change in resources and execution.
Not a 1%. I wish it were different, but that's true. This is going to require billions of dollars
per year; even hundreds of millions of dollars, which is what we told him we were planning on
raising, isn't enough. And I remember I was on vacation.
And I just was like, kind of knocked out for that whole day.
Yeah, it was very clear that he thought that we were destined to fail.
Yeah, I don't think he would have sent it if he didn't think you were destined to fail.
Yeah, I agree.
That would have been his theme for that whole year.
And I think that things really changed as we started to be successful.
And that I think that now there's a very different tale that he's trying to tell.
When's the last time you spoke to him?
A couple months ago.
Pleasant?
Very pleasant.
It's always pleasant.
As nutrition science advanced through the mid-20th century,
researchers began to understand that modern eating patterns,
limited variety, processed foods, and time constraints
could leave small but meaningful gaps in daily micronutrient intake.
Today, large population studies confirm that approximately 90% of adults
fall short on one or more essential vitamins or minerals.
A.G1 was formulated in response to those findings.
Each daily serving provides more than 75 vitamins, minerals, and whole food source nutrients,
including vitamins A, C, E, B6, B12, magnesium, zinc, selenium, and iodine.
A scientifically developed formula backed by nutrition research and delivered in forms the body can readily access.
Because utilization depends on absorption.
AG1 also contains probiotics to support gut health and digestive function,
helping the body make effective use of what it receives.
AG1 contains no gluten or dairy,
uses no genetically modified ingredients,
or artificial sweeteners,
and is suitable for a wide range of dietary approaches,
including plant-based, paleo, keto, and low-carbohydrate diets.
It's available in single-serving packets for when you're on the go.
Manufactured under strict quality controls,
every batch of AG1 is tested to confirm its composition,
nutrient levels, and the absence of contaminants,
ensuring reliability from one serving to the next.
It's no wonder AG1 is trusted by the world's top athletes and experts.
What was once a futuristic concept, daily nutritional support in a glass, is now the result of applied science.
Learn more at drinkag1.com slash tetra.
Do it today.
How did you end up getting into business with Microsoft?
So Microsoft was an early partner.
We got compute donated from them for our Dota project.
That was the very beginning of it.
And over time, I think Sam had kept a good relationship with Satya, with Kevin Scott.
And they really believed in what we were doing.
And I think really saw the vision for AGI and building these powerful systems.
And when did they actually become partners?
Did they end up funding the LLC?
Yeah, so they funded the billion dollars into the LP.
And then they did a $2 billion investment and a $10 billion investment.
So that funding was, in fact, very significant, right?
That no one else was putting in that, you know, that quantum of capital at the time, different now.
But at that time, they were doing it.
So I was grateful for them really helping us further the mission and what we were doing.
And along the way, of course, we had licensing of our technology and we would partner, and, you know, the goal was,
hey, let's build this.
And then if we can also have this technology be deployed within Microsoft and create value there for Microsoft, then, you know, again, I think it is the thing that ultimately unlocks our pursuit of the mission.
How did your business change when you had the influx of cash?
I would say that my ethos is always to remember, we have not earned any of it.
We are not a profitable company.
And so we spend a ton on compute, and that's our goal.
And so I think that the big thing that changed is that we could dream.
It wasn't really about the cash in some ways.
It was really about the supercomputers.
And so we spent a lot of time designing supercomputers and we were dreaming of these like massive clusters and we had all sorts of designs.
And it was very interesting because one of their theses of the deal was, hey, we don't know if these models will be useful.
But at the very least OpenAI can help us figure out
how to build supercomputers better for deep learning.
And we were very eager to build great supercomputers for deep learning.
And so it became not just, how are we going to get a data center?
Because in 2017, it was very much like, how are we going to get a data center?
And the numbers just don't add up.
There's no way to get to the scales that we needed to.
And then it started to become possible because of this partnership.
And it sounds like you were aligned.
Yes.
It made sense.
Yes.
Does ChatGPT believe in God?
I would say ChatGPT does not have a consistent personality.
And the way we think about it is that you should be able to shape it to your own preferences.
And so you can have one that has such a belief and you can have one that does not.
To me, one thing that's important is to recognize that ChatGPT is not an entity, right?
But it's almost this plurality of, you know, sort of agents that can be shaped in different forms.
Does it have emotions?
I think that this is an open question,
and there's a whole idea of what is called model welfare, right?
Are these models, do they have emotions?
Are they having a good time?
And I think that we're going to have to encounter these questions
as the models get smarter and smarter.
And I think that right now we don't have a good theory
of what it means to have emotions.
Can you even tell that another person does?
You believe that they do because you yourself have them,
but we don't have a scientific measurement of it.
And so I think that we're going to have to pay some serious attention as time goes on to figuring out what is it that we're creating, not just functionally, but how is this model actually operating?
Does it sleep?
ChatGPT never sleeps.
Which is part of what makes it great.
It's 24-7, in your pocket.
It can be a doctor.
It can be a teacher.
Would you say that the breakthroughs
are coming faster and faster, or is the rate of change the same as it's been from the
beginning? The exponential continues. And so the doubling period is the same, but it feels like
the rate of breakthroughs has actually been fairly constant. Is there something that you thought
that AI would be good at that it's not? And is there something that you didn't think it would be good at
that it is?
For sure, yes.
I'd say that the models have gotten so good at software,
not just the coding part,
but all of the debugging,
the testing,
the end to end of that.
And it really felt like a transition very quickly.
For us,
between 5.1 and 5.2,
night and day difference.
And that's just in a month,
two months. It went from, it wasn't working,
to it was like kind of okay,
to it's great.
And that was,
That was a real exciting thing to see.
And I think where I'm excited to see the models get great,
but I don't feel they are yet,
is more in call it human judgment.
Like the kind of thing I'm very excited about
is the idea that you can have models
that help us all be better versions of ourselves.
And can you have models that help negotiate people with differences?
You know, maybe if we had this in earlier times at OpenAI,
we would have different outcomes.
But think about achieving outcomes like,
how do you have world peace, right?
and how do you actually have better trade deals between different countries?
Like being able to negotiate things at that level,
I don't really see the models yet adding value.
I don't think there's anything that stops them from getting there,
but I think that we have some ways to go.
Do you feel like they just haven't been trained up or tested enough in those realms?
Is that why?
Or because there's something missing?
I think we just really need to figure out how to train them for those tasks.
Is AI best with current information, historical information, or theoretical information?
I think that AI is best with theoretical information in the sense that what we really want out of it is to be a reasoner.
We want it to be able to think hard, solve new problems.
And so any of the historical information, current information that's in there is incidental.
It's just that the way we train them, it does learn a lot of historical facts.
A model that you've just trained has no access to current information.
You need to hook it up to tools in order to be able to get that real time.
But the whole point of this is you want something that's smart, right?
You want it to be able to apply to a new discipline and be able to accomplish whatever
it is that you care about in that discipline.
Are you still coding?
I am.
It's unusual for someone who is in your position
to continue coding, no?
Yes.
Do you just feel a connection to it?
I feel that the way that I know the right direction to lead
is by leading from the trenches, leading from the front.
And the thing that I have found is that in this field,
because things change so quickly,
figuring out what to do, like what is the right decision.
And sometimes it's a strategic decision for the company,
but sometimes it's just very micro,
like what is the way that we should architect this API
or this particular piece of software,
you need to really feel it.
You need to have tried things
and have a sense of what works and what doesn't.
And so a lot of how I have operated has been,
like, for example, when we wanted to create the API,
which was our first product,
we knew we needed to create a product
in order to raise capital,
but we thought about, well, what are the different things we could build?
How do we actually go from nothing to something?
And it was actually one of the hardest projects
I've ever worked on.
because it felt totally backwards.
The way that you're supposed to build a product
is that you have a problem you want to solve.
No one cares about the technology behind it.
But in our case, we had GPT-3,
which was a technology in search of a problem.
And so we said, let's just make it accessible to businesses
through an API, and they can decide what to do with it.
And it was a six-month grind.
I remember January, I was driving around San Francisco
trying to find companies
that would be willing to try out this technology.
Some of my friends took the meeting just as a favor.
And we did a little bit of that in February, which was very good because in March, the whole world shut down for COVID.
So there's no driving around.
But it was just this like intense grind of building this thing from nothing and getting the first beta users.
And I remember telling our team that we had two goals.
One was to get the first paying customer.
So the first dollar that someone would give to us.
The second was get a use case that we would all use internally every single day.
Yeah.
And that first one we got very quickly.
It actually took another two months or so.
But that second one, we didn't really get until ChatGPT.
So that took like another two years.
So it was really a leap of faith.
But the way that I did it was by building and really moving the technology in a way that I don't think that anyone else could have marshaled.
Do you feel like if you took a year off from the building part, could you come back?
Or is it something that if you're not always...
on top of it? Is it moving so fast that it would run away? Well, I actually did this. So I have an
answer. So 2025 was my first year where I was really full-time management. And at that point,
I think I had like 25 direct reports and was really running a very diverse and complicated
set of problems, including our data centers, including GPU infrastructure, including
large training runs, and there were really, really excellent engineers that I needed to support.
And this year, I've hired some really excellent managers to help me. And so I've actually
been able to get back to writing code. How did it feel? It felt like there was a mix of rustiness
and familiarity, because the problems in this field, in some ways, they actually do not change,
because we're still training neural nets. And the neural nets are still a forward pass, a backward pass, and an
optimizer step. And so the fundamental shape of what we're doing is the same. But in some
ways, everything is different. And this year in particular, the tools are entirely different.
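The forward, backward, optimizer-step loop he describes can be sketched as a toy: a single weight learning a made-up dataset by gradient descent. Everything here (the data, the learning rate, plain SGD) is an editorial illustration, not OpenAI's code.

```python
# Toy sketch of the "forward, backward, optimizer step" loop: one weight
# learning y = 3x by gradient descent. Data and hyperparameters are
# invented for illustration; real training differs in scale and detail.

def train(steps=100, lr=0.1):
    w = 0.0  # the model's single parameter
    data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]
    for _ in range(steps):
        for x, y in data:
            pred = w * x               # forward pass
            grad = 2 * (pred - y) * x  # backward pass: d(error^2)/dw
            w -= lr * grad             # optimizer step (plain SGD)
    return w

print(round(train(), 3))  # 3.0
```

Scaled up by many orders of magnitude, this same shape, forward, backward, step, is what a GPU cluster runs.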
Now, the one nice thing is they're different for everyone else too. And so everyone is learning
together. And I don't feel like everyone has moved on and that I'm discovering something that
is old to everyone. Tell me about when ChatGPT first came out, what was it like seeing
people experiencing talking to it?
It was a totally new thing.
At the time, we'd trained GPT-4.
Yeah.
It was very clear to me that GPT-4 was going to be a huge hit.
And we should turn it into a chat system.
And we had an earlier model, GPT-3.5, that we'd actually been having different beta
testers try out.
We were paying hundreds of contractors to use it.
It just didn't feel like it was great.
I was very supportive of releasing ChatGPT where it's like, let's get the infrastructure out
for doing a chat thing, and then we'll put in the real model, and it'll be great.
And so the real surprise was that we'd gotten so used to this next model that we had forgotten that the previous one was actually something people had never seen before.
And just making it accessible and available, that's something that would then cause it to click.
Because for you, it was already kind of...
Old hat.
Wow.
Yes.
And that's one of the amazing things about working at OpenAI.
You live in the future.
In a world of artificial highs and harsh stimulants, there is something different.
something clean, something precise.
Athletic nicotine.
Not the primitive products found behind convenience store counters.
Not the aggressive buzz that leaves you jittery,
but a careful calibration of clean energy and focused clarity.
Athletic nicotine, the lowest dose tobacco-free nicotine available,
made entirely in the USA.
No artificial sweeteners, just pure, purposeful elevation.
Athletic nicotine is a performance nootropic.
Athletic nicotine is a tool for shifting mindsets.
Athletic nicotine is a partner in pursuit of excellence.
Slow release.
Low dose.
Gradual lift.
Sustained energy.
Soft landing.
Inspired results.
Athletic nicotine.
More focus.
Less static.
Athletic nicotine.
More clarity.
Less noise.
Athletic nicotine.
More accuracy, less anxiety. Athletic nicotine. From top athletes pushing their limits
to artists pursuing their vision. Athletic nicotine offers the lift you've been looking for.
Learn more at athleticnicotine.com slash tetra and experience next level performance with
athletic nicotine. Warning, this product contains nicotine. Nicotine is an addictive chemical.
Tell me about Sora.
Sora is a different branch of the tech tree.
And the thing that's interesting is that humans do not generate videos, right?
We do not directly output pixels.
We can create tools to do it, but we don't have something that works quite like Sora.
But maybe we do in our imagination, right?
So maybe there is something that is deeply tied to human intelligence there.
And there's definitely something very important, which is a world model.
And I think that there are lots of debates about, do the GPT text models have a world model?
And if you ask them things about, well, there's a ball on the table and it rolls off the table.
Where is it now?
They can answer it.
So I think there's much more world knowledge in there than you'd expect.
But for Sora, it's so front and center, right, that it has an understanding of physics, and you really feel it when it makes a mistake.
Like dropping an iPhone: if it doesn't shatter at the right moment, early versions of Sora would get that totally wrong.
And it just breaks the experience entirely.
But I think that what Sora represents in my mind is creative potential.
And we saw this very much last year when we released our image gen and people started to turn photos of themselves and their family into Studio Ghibli portraits.
And it was this moment where the thing that I found most interesting was that people didn't really want to just generate images that were not connected to reality in some way, where there's no humanity in it.
But as soon as there was an element of humanity, it's a picture of you and your family.
It's not just some arbitrary people.
Suddenly people connect with it, and it really matters.
And so I think we're still at the very early days of what these models will be capable of.
I think for applications that require a world model, you think about robotics, you think about
anything else in the physical world.
Having such models can be extremely helpful for that.
How did it learn the physical world?
Same way that our text models learn.
The way that it learns is through observing video, right?
Through observing the world and trying to predict what will happen next.
Do you see a mystical side to AI?
I do.
Tell me about it.
Well, so there's this idea of scaling laws, which are an empirical observation that as we
pour more compute into models, and their sizes increase in certain ways, you get a
relationship between all these parameters, right, the size of the data sets, and there's
some deterministic curve of how good those models are, and you can project it out very precisely.
And there's something about the scaling laws.
And you can go on public forums and see people saying, oh, scaling laws are dead, but they're wrong.
They're absolutely wrong.
Scaling laws continue unabated.
The thing that's harder is the engineering to actually realize that.
That is difficult.
But to me, there is some fundamental scientific discovery that we are making.
This is an exploration into the unknown.
This is a frontier.
And you can see it because we're bringing these deep scientific principles and mathematical approaches,
and you're seeing the empirical results.
And so it's almost like the way that Jakub, our chief scientist, likes to describe it,
is that building a bigger model is like building a bigger rocket,
where the tolerances all become tighter.
And you know, you're still constrained by the same rocket equation and the same gravitational pull and all those things.
But I think that what we're doing as we keep going is that we learn more,
not just about the models, but we learn more about ourselves.
We learn more about what intelligence is.
And I think that that is something that is just very profound.
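The deterministic curve he describes can be made concrete with a small sketch. The power-law form and the constants below are illustrative stand-ins, not OpenAI's actual fits.

```python
import math

# Hedged sketch of the scaling-law idea: loss falls as a smooth power law
# in compute, so small runs let you project big ones. The constants a and
# b are invented for illustration, not real measurements.

def loss(compute, a=10.0, b=0.05):
    """Hypothetical scaling law: loss = a * compute^(-b)."""
    return a * compute ** (-b)

# On a log-log plot a power law is a straight line, which is why the
# curve can be extrapolated "very precisely": the slope recovers -b.
slope = (math.log(loss(1e12)) - math.log(loss(1e6))) / (math.log(1e12) - math.log(1e6))
print(round(slope, 2))  # -0.05
```

Fitting that line on small training runs and reading it out twelve orders of magnitude later is the projection he is pointing at.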
How many employees are there at Open AI now?
We're about 5,000 people.
And what do they do?
A lot of different things.
So we have about maybe 1,000, maybe 1,500 people that are in research.
And then there's my organization, which is called Scaling.
And so together, that part of the organization
is really pushing forward what deep learning can do,
trying to explore its limits,
engineering into Silicon, building the data centers,
that whole apparatus.
We have a number of people,
maybe it's 500 people, maybe another 1,000,
who are focused on deployment.
So this is on applications
and actually bringing products like ChatGPT and Codex to life.
There's deep partnership across all of these.
Like we cannot do this alone.
It really requires the end-to-end of building,
deploying, and learning
from what we've deployed.
That is really how we make progress.
And then there's people who work on sales, finance,
communications, legal, really a whole host of functions
that all come together to make this apparatus work.
How did you learn to lead such a big company?
It is new to me.
It is new.
And it's not what I set out to do either.
Yeah.
Being under Dunbar's number and having a tight connection
across everyone knowing everyone,
like that was something that appealed to many of us.
Yeah.
And I think that what's happened is that the mission is big, right, to really be able to deliver an AGI that's going to benefit everyone.
You don't just do it with a small set of functions.
You really need to figure out how do you build a large business?
How do you get large revenue?
Because you need that to fund computers and to be able to raise the capital and to be able to pay the employees, especially in like the insanely competitive market that we are in.
And so I think that the necessity
of it has forced me to then adapt.
And at every single stage of open AI,
the way that I operate, I've learned,
and I've changed, I've grown.
And I think that there's so much of how I work today
that I look back at last-year me,
I look back at five-years-ago me, and I'm like,
I knew nothing, like really nothing.
And so I think that it's just through constant iteration
and constant, I go back to that very beginning
of having the humility to not believe
that I know everything.
And I think that a lot of how I operate is
I ask a lot of questions and try to drive clarity and try to really understand, why are we doing this?
Does this make sense?
How does this fit together?
Who are the people who are working on this and get into those details?
Of all the things to focus on an AI, how do you decide where your time goes?
This is always a very tough question.
Yeah.
And I think there's a combination of intuition.
And I think that what I have successfully done over the years is kind of have a sense of when we have a breakthrough or the signs of life on a breakthrough, that this is a
significant direction and that we really need to focus on this. Sometimes it means I focus on it
personally and sometimes it means it's just we really need to get the company on it, but I am
dedicated to a different problem. But that is how I operate. And so right now I'm spending
a lot of my effort on the data centers, the ML engineering. Again, I have great leaders helping me
with this and great engineers who make it all possible. And I'm also helping out with Codex
and spending a lot of time really trying to think about what is the right future
shape, like what's the product that people have not yet realized is going to be the product
everyone's going to be using in six months, 12 months, and make sure we're running at that
as fast as we can and as well as we can. And that's usually marshalling people from across
the company. But a lot of it is driven by where I see the models are, where they're going,
and mapping it to where is the value that we're trying to deliver. Would you say every few months
or every year, you're focused on something completely different than what you were before?
Yes.
Do you like that? Is that fun?
I do. Yes. It's never boring.
Where do you see the company in five years and then in 10 years?
Well, in this field, those timelines are so hard to project because I think the world is going
to be massively different in that time. I started saying this a year ago that 2025,
2026, and 27 are going to be these transformative moments. I think they're going to be these
critical moments. And I think you can really see it. This year, we're going to have
agents for knowledge work for every single function.
And that's new.
That's just been in the past year.
That's just new.
I was a little bit early on this prediction.
I thought last year would be the year of agents,
but it is very clearly this year.
And to be fair, I think December basically turned into the agent moment.
And so here we are.
The second thing is scientific discovery.
I think we're going to start seeing scientific discovery
revolutionized in a real way.
And I think that there are going to be organizations that are early adopters
that see this wave coming and really retool around it.
And what we're doing with OpenAI is we're retooling around this.
We want to be the most AI forward organization, both because we think it's helpful for our
operation, but because we want to figure out how do we bring this technology to everyone?
Like, how do you build a company with all of these tools?
They're so accelerative.
And I think that what we're going to see in 2027 is these tools start to be diffused very,
very widely.
And it's very interesting to think about that the shape of what a company is, you know,
is going to change because it's so easy to create companies now. You can run a company that's
very effective and makes lots of revenue with a small number of people. And so we're going to have
a Cambrian explosion of different companies doing different things. And this is one way that I think
that it won't be a technology that just accrues to the incumbents. And that's something I think is very
important. So you fast forward five years, I think that we will be a very transformed company by
AI and that we will deploy lots of compute for very hard problems. AI is
already starting to revolutionize biology. For example, I spent some time working with the
Arc Institute to train DNA models. So you just take the exact same type of models that we train
for language. And instead, you put DNA sequences in. So rather than text, you put in ATGC. And it just
learns to predict what's next. But because it learns the underlying rules, you can actually
apply it to downstream applications. You can ask it to determine if a particular protein sequence is
valid or not, really any biological task that requires learning something about biology. And we're
seeing great strides: there are systems like AlphaFold that have
revolutionized protein folding. And this is just scratching the surface. So I think we're going
to head into a renaissance of how people are actually able to approach different diseases. And then
you can also bring to bear the reasoning technology that we're creating. And we're seeing it already
make massive breakthroughs in physics and strides in mathematics. What is reasoning
as it relates to AI?
I would think of reasoning AI as an AI that expends a lot of compute before coming back with
an answer.
Like if you think about the original ChatGPT, you'd ask it a question.
Exactly.
It would just give you the answer right away.
And sometimes it would make a mistake where it would clearly say something was wrong and
then as it was writing, it would be like say something inconsistent with that.
And so instead you really want an AI that if it's a hard problem, it can pause, it can think,
it can call tools, it can go look at the internet, it can run some experiments, you can
and then come back with an answer. So at the core, that is what the reasoning technology is.
Now, I think all this is going to change in terms of how people use it. I think that what you really want
is you want an always-on AI you can just talk to, and not something where you have to sit around
waiting for answers. And it should be able to say, oh, by the way, you asked me this question five minutes ago
and I gave you an answer, but I realized actually that was the wrong answer. Here's like the correct
thing and here's why I made the mistake. And just like with a human assistant, a human coworker,
you should have a very fluid interaction back and forth.
I think we now have the technology to do this
and that we need to pursue the productization of it.
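The core idea above, spend more compute before answering and the answer improves, can be illustrated with a toy. The "hard problem" here is just estimating sqrt(2) with Newton's method; the mapping to real reasoning models is only a loose analogy.

```python
# Toy analogy for "reasoning" models: extra compute spent before
# answering buys a better answer. Newton's method for sqrt(2) stands in
# for the hard problem; nothing here is how the real models work.

def answer(problem=2.0, thinking_steps=0):
    guess = 1.0                      # the instant, no-thinking answer
    for _ in range(thinking_steps):  # each step is extra compute spent "pausing to think"
        guess = 0.5 * (guess + problem / guess)
    return guess

fast = answer(thinking_steps=0)  # 1.0, well off sqrt(2)
slow = answer(thinking_steps=5)  # ~1.41421, essentially exact
```

The trade-off is the same one he describes: the slow path pauses, iterates, and only then comes back with an answer.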
In the way normal people use ChatGPT,
they ask it a question, they get an answer.
Do you think they want the right answer,
or do you think they just want an answer?
I think it depends.
I think users are more sophisticated than many people think.
And so I think people do really care
that the answer is
accurate and trustworthy and useful.
And so sometimes there is no right answer.
Yeah, often probably.
Often, very often, right?
And that people use ChatGPT as a brainstorming partner to come up with new ideas,
to come up with, you know, where do I want to go on date night, like all of these kinds of things.
So I think what people actually care about is having AI be an amplifier for their intent for their goals.
Or someone to bounce things off of, and a second opinion, in a way.
Yes.
L-M-N-T.
Element electrolytes.
Have you ever felt dehydrated after an intense workout or a long day in the sun?
Do you want to maximize your endurance and feel your best?
Add element electrolytes to your daily routine.
Perform better and sleep deeper.
Improve your cognitive function.
Experience an increase in steady energy with fewer headaches and fewer muscle cramps.
Element electrolytes. Drink it in the sauna. Refreshing flavors include grapefruit, citrus, watermelon, and chocolate salt.
Formulated with the perfect balance of sodium, potassium, and magnesium to keep you hydrated and energized throughout the day.
These minerals help conduct the electricity that powers your nervous system so you can perform at your very best.
Element electrolytes are sugar-free, keto-friendly, and great tasting.
Minerals are the stuff of life.
So visit drinklmnt.com slash tetra.
And stay salty with element electrolytes, LMNT.
Tell me about the financial side of OpenAI.
Is it profitable yet or is still in growth mode?
Still in growth mode.
Definitely not profitable.
Is there a vision to
profitability? There is. But I think the challenge that we have is that we are in a world where
compute is going to be so scarce. Just because it can't be made faster? Is that the issue?
Because it cannot. Yes, exactly. So there's a physical boundary for how much compute can be created
right now. And if you go out into the market and you look at the supply chain, you look at TSMC,
who does most of the actual wafers, so the actual silicon, you look at memory, you look at even
hard drives. These are all basically sold out for sometimes it's 18 months, sometimes it's longer,
but the supply chain is becoming incredibly crunched right now because everyone is trying to build.
Everyone is trying to build compute. And so it's every single piece of that supply chain and it's a
limiter. But the reason it's important is because we're moving to a compute-powered economy.
Like right now, you use ChatGPT to ask some questions.
People are starting to use codex and tools like that to write code.
But one thing I have found is that if I want codex to go do something hard,
like it's going to build a big application, a big website, something like that,
I want 10 parallel codexes that are breaking up the task into 10 subpieces or I want 100 of them.
And that compute goes so fast.
And we have these applications internally.
where there are teams who want 100 GPUs or 1,000 GPUs each to write a very optimized piece of code for them.
And the thing that we've seen within OpenAI is that the battles over compute have been intense.
And I think that we're going to see the exact same thing in the economy,
that people's productivity will be directly tied to how much compute they can harness.
And so I think that the compute, like really becoming a basic human right,
is something I expect to see in the future because that will determine your own quality of
life in many ways.
Will there be some breakthrough in that area where it won't be a problem in the future?
No.
Because the way that this works, there's Jevons paradox.
If you don't know it, it's that if you increase the efficiency of a technology,
the demand for it can increase more than the efficiency gave you.
And I think that's what we've seen is every single year, we make the models way more
efficient.
You want to hit the same level of intelligence.
Yeah, no problem.
You can do it for 10x cheaper, 100x cheaper sometimes.
But you're going to want to use a thousand times more.
And you're going to want, rather than this level of intelligence, you're going to want the 10x or 100x intelligence.
Because rather than solve your small website, why not build a big website?
And so I think our ambitions, the problems we want to solve are endless.
And therefore, the applications of AI have the same property.
Yeah, it sounds like if you would have left at the time that you almost left, focusing on making more GPUs would have been a good thing to do.
That's true. That's true. Yes. Yes. And I think that this, by the way, is something that Sam has put a lot
of effort into over the years to try to figure out how do we get the most compute possible.
It's not just for OpenAI's competitive nature, though I think it's important for OpenAI's competitive
posture for sure.
But I think that the degree to which we're going to find that everyone wants compute and no one
can find it, like that is going to be just a wild thing.
Tell me something you believe now that you didn't believe when you were young.
When I was young, I think I really believed that people would always,
directly carry out, honestly carry out whatever formal position they're in.
For example, my sister was a tour guide at Yale.
And I remember going on her tour.
And she said all of these really interesting stories afterwards, I asked her,
wow, that's so cool.
How do you guys keep track of all these stories?
And she said, oh, we just make them up.
And it had never occurred to me that that would be possible.
You're a tour guide.
Therefore, you will tell the tour and you will tell the true stories of everything
that happened.
And I think I really had this perspective and really a deep value of this, like,
honesty and sort of faithfully carrying out the duties.
And I think one thing I have learned is that you can't just take what people say at face value.
You need to really look at the motivations, the reason that they're doing it.
And that's been a tough realization, right?
That's not how I like to operate.
I like to be straightforward and direct in everything that I'm doing.
I'd be shocked if the tour guide made up the story they were telling.
I know. Yeah. It turns out.
Hard to believe.
It turns out, yes.
It makes you wonder what else is out there that you don't know about.
How does your company function differently than how Stripe functions?
Because we started as a research company, we had a very different founding DNA.
And I think one thing that is very different about a research company like ours is that we were always thinking that everything we've done right now will be obsolete in a year.
It's like being a movie studio.
It's like you have your hit.
That's wonderful,
good for you, but you already need to be thinking about your next hit.
And just because you made this good hit doesn't mean that your next one is really going to be a winner.
And I think that at Stripe, there was much more, in some sense, security in terms of, well,
we had this idea of we want to build a payment processor.
Building this machine.
Exactly.
And you keep building the machine.
And the fact you have the machine, you have all this momentum and users and those kinds of things.
And so I think that learning to operate in that world has been very, very different.
Knowing what you know now, if you were going back to the beginning of OpenAI, how might you have set it up differently, or would you do everything exactly the same way?
I'm very happy with the position that we are in, and I feel that we made principled, reasoned decisions at every step.
And so it's hard for me to think about the counterfactual.
Like, is there anything that I would do differently?
I understand.
And I think in the micro, yes.
Like, I do wish, like, there are so many mistakes that I've made and learned from and grown and all those things.
Yeah.
But I think it's pretty hard for me to be disappointed with where we are.
Do you think setting it up as a nonprofit to start with was the only way to do it?
Well, it's not obvious to me the set of choices we could have made for the beginning.
I think that what I'll say is that people forget that we are still a nonprofit, right?
That the nonprofit has over $100 billion of OpenAI equity.
And that that is something I think is very important.
It's something I think will have a very large impact.
Now, we're not profitable yet, right?
But our operation, our ability to actually bring to bear both a business that is self-sustaining
and also a philanthropic arm that is unprecedented, like that is something I'm extremely excited about.
Do you see the AI landscape changing over the next few years in any way, either with mergers or companies going out of business?
I think it's going to be a very dynamic landscape.
And I think that all sorts of surprising things are going to happen. Like, I think that the world that we're signed up for is one of an unprecedented nature
within AI, and I think more broadly in the industry. And so I expect that there will be way
more companies trying to get involved. I think these tools are going to diffuse widely. I think
there may be surprising mergers, but it's hard for me to predict exactly what.
And do you think there'll be many winners in the AI space? Or do you think there will be
one big winner or two big winners? Well, I hope that there will be many winners. And I think
that the technology deserves it.
My belief is that knowledge work, the economy, lifting up everyone, robots, digital space,
like there's just so much surface area to cover that I think that the distribution of different
companies being able to push on different aspects of this is very important.
One vision that we have for how to make this really good for humanity, for humans, is to focus
on what we call resilience.
And we thought a lot about safety.
How do you make sure that AI is aligned with humans and that there's one vision of this technology
where you say you want the centralized AI that is the most powerful thing and you make sure
it's aligned with humanity's values and then good things will happen.
But I think that story is dissatisfying because, first of all, there is no humanity's values.
You're really talking about someone's values.
And that's a scary thing, right?
That humanity, and really life itself, I think, survives through diversity, through not being a monoculture, through there being the possibility of heresy.
And I think that what we need instead is a system of resilience.
So just like you think about how the steam engine has changed society, right?
We've built a lot of the world around the fact that we have cars.
And so we build roads in a certain way.
We develop safety standards in terms of seatbelts to make them, you know,
we have crash testing, all of these things.
And that we really can reconfigure ourselves around a technology,
because it's so worthwhile and so beneficial.
And that there's a deep resilience that goes into that
so that there's not just one way of doing it.
And this has been our picture all along,
is that if you want everyone to have access,
how do you actually make sure that this idea is stronger as a result
rather than more fragile?
Like, what do you do if someone does want to do something bad
and that you think about when it comes to all the different bad things,
how do you make sure that there are people who can stop it?
For example, if you have AIs that might be able to hack into things,
maybe you should spend a bunch of effort trying to secure things and trying to have AIs that go and find security vulnerabilities and provide patches.
And that's the kind of thing that's happening right now.
I think the diversity and the resilience of the world is an important objective and something that we really aim to support.
Well, it seems like also that in different parts of the world, different things are thought of as acceptable and not.
Yes.
So if it has one point of view, it can't be for everybody.
That's true.
It's a diverse place.
Yes.
What is the unsupervised sentiment neuron?
Ah, the unsupervised sentiment neuron was a paper in 2017 that for me was one of those crucible moments
where you realize this technology is going to work, that there is a moment of transition about to occur.
This was the first result where we trained a language model that learned not just syntax.
So it didn't just learn where the commas go and the nouns and those kinds of things.
It learned semantics, because we trained it on Amazon reviews to predict the next character,
and it learned a state-of-the-art sentiment analysis classifier,
so it could tell you if a review was positive or negative sentiment,
which is harder than it sounds,
because if you say something like this product was great,
but it was totally broken and I hated it.
What's the sentiment?
Exactly, right?
Complicated.
Complicated.
So you need some understanding.
And through this process,
we learned a state-of-the-art sentiment analysis classifier.
It was the best. Just like AlexNet was the best image classifier, this was the best way of doing sentiment analysis.
And so you're just like, we need to scale this up.
And so much of that system changed, but the idea of language modeling leading to
intelligent language systems, like that was the moment that was so clear that it was going
to happen.
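The training objective Greg describes, predicting the next character of Amazon reviews, can be sketched with a toy count-based bigram model. This is only an illustrative stand-in (the actual 2017 work trained a large recurrent network on real reviews; the tiny corpus below is made up):

```python
from collections import Counter, defaultdict

# Tiny illustrative corpus standing in for the Amazon reviews.
reviews = [
    "this product was great and i loved it",
    "totally broken and i hated it",
    "great quality, works great",
]

# Count character bigrams, i.e. how often each character follows another.
counts = defaultdict(Counter)
for text in reviews:
    for cur, nxt in zip(text, text[1:]):
        counts[cur][nxt] += 1

def predict_next(ch):
    """Most likely next character after `ch` under the bigram counts."""
    return counts[ch].most_common(1)[0][0]

print(predict_next("g"))  # every g in this corpus starts "great", so this prints "r"
```

A real language model replaces these counts with learned parameters, but the objective is the same; the surprising finding of the sentiment-neuron paper was that, at scale, a single hidden unit of such a next-character predictor ended up tracking whether the review was positive or negative.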
Have linguistics changed since AI?
Considering the way that it works, it's so much based on understanding of linguistics.
Yes.
And because the thing about it is that linguistics to me has always felt like it's this approximation of how language works.
Because language is this living thing, right?
And you think about how grammar works.
There are all these exceptions, and we all split infinitives and stuff.
That's probably like math in nature, though.
It's probably like that.
Yes, it is like that.
It is like that.
It's a system.
Exactly.
And so it's like we're trying to encode the rules for something that's just too diverse
to encode the rules for.
Whereas these models, they actually capture it.
They capture the true underlying rules that we use to generate language.
Does AI understand sarcasm, double entendre, things like that?
Yes.
You can test it on ChatGPT.
It does a great job.
Can it understand poetry?
I think so.
Yeah, I think it does a great job.
What we're not good at yet is writing jokes.
That's something we need to work harder on.
The way to think about what we can do right now: we build these machines that learn all of the underlying rules that generate language, that really understand a lot, and have a lot of world knowledge within them, know a lot of facts.
And then we do this reinforcement learning process where we give rewards and punishments based on how well it does when actually
trying to do something. So anything you can grade, we can now teach the model. And so we started with
the most simple thing to grade, which is math problems, things like that that have a deterministic
answer. We can just check, is it right or wrong? Something like a joke, how do you grade how good that is?
But we've been expanding the space of what we can grade because the models themselves can be good
graders, right? They can be smart. And so the way to think about how progress has been made is just
back to that original plan of gradually learning more complicated things. It's exactly what we're doing.
We're expanding what can be graded. And I think that jokes are something that we probably could do.
We haven't focused on it because we're focused on knowledge work. But I think that one day we'll
get there too. I'd like to see it. Me too. Yeah. The point is we want something to generate something new.
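The "anything you can grade, we can teach" loop starts from exactly the deterministic case Greg mentions, like checking a math answer. A minimal sketch of such a grader (the function name, example question, and reward values are invented for illustration; real pipelines are far more elaborate):

```python
def grade_math(answer: str, expected: str) -> float:
    """Deterministic grader: reward 1.0 for the exact right answer, else 0.0."""
    return 1.0 if answer.strip() == expected.strip() else 0.0

# Candidate answers a model might propose for "What is 12 * 7?".
samples = [" 84 ", "74", "84"]
rewards = [grade_math(s, "84") for s in samples]
print(rewards)  # [1.0, 0.0, 1.0]
```

In reinforcement learning, rewards like these nudge the model toward answers that grade well; the hard part, as Greg notes, is extending grading to fuzzy targets like jokes, where capable models can themselves serve as the graders.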
And so what's going on is that when these models learn, again, they're learning the underlying
rules, the underlying compression of just like what is the underlying reason that this thing existed.
And so if you think about jokes, if you think about a scientific paper, it's really trying to just, like,
okay, actually it's very similar to when I was writing my chemistry textbook.
My whole goal was to not give you, here's all of the facts, here's all of the different chemicals.
My goal was to say: how do you think for yourself? What are the underlying things you kind of need to know to then derive everything else?
And at a deep level, that is what these models are doing, is they are machines that deeply,
deeply understand whatever is put in front of them.
And if you think about how you as a human learn, it's not that different, right?
You don't really remember most of the things that you've been shown.
You don't remember most of the things you learned in school.
But you know the processes, you know the lessons, you know the reasons that things were done.
And sometimes even when you get something wrong, a mismemory of the past sometimes leads you to a deeper truth.
Yes.
We're not so accurate.
Yes.
I think this is the key.
We're not storing machines.
Exactly.
And you're like a connection machine, right?
You're trying to figure out how is everything connected?
Exactly.
And you're constantly trying to figure that out.
And that is exactly how these models operate.
They're constantly trying to figure out, I've got these parameters.
I'm trying to do a better job on this data coming through, predicting what's next, which I've never seen before.
And so the only way to do that
is to build better and better connections
between all these different concepts
and to form those somewhere.
It's one of these things that
if you had said this was how it's going to work
10 years ago, most people would have said
that's impossible.
To really see it implemented and working,
like that is something that is unprecedented.
So if you keep asking it the same question,
might you get different answers?
You do.
Because what these machines are trying to do
is not give you one particular answer.
They want to give you the whole range of possibilities, right?
And that we can think of it as a distribution.
Like at a technical level,
we think of what these models do
is they learn a probability distribution.
And so it kind of learns over all possible answers.
How likely is this to be like the correct answer to it?
And it puts more probability mass on the answers that are better.
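The "probability mass over answers" Greg describes can be made concrete with a softmax over candidate answers; sampling from that distribution is why asking the same question twice can give different replies. The candidates and scores below are invented purely for illustration:

```python
import math
import random

def softmax(logits):
    """Turn raw scores into a probability distribution that sums to 1."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores a model might assign to candidate answers.
candidates = ["Paris", "Lyon", "Marseille"]
probs = softmax([4.0, 1.0, 0.5])

# Sampling rather than always taking the top answer: mostly "Paris",
# occasionally something else, so repeated questions yield varied replies.
random.seed(0)
draws = [random.choices(candidates, weights=probs)[0] for _ in range(20)]
print(sorted(set(draws)))
```

The model "puts more probability mass on the answers that are better" in exactly this sense: the highest-scoring candidate gets most, but not all, of the mass.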
Would you say AI is the biggest disruptor of the status quo that we've ever seen?
I think it could be.
I think that AI will be in many ways like the printing press,
that it enables and unleashes creativity, productivity, expression
in ways that are sometimes heretical, right?
Sometimes unthinkable, sometimes very disruptive to how people have expected things to be.
But no one would want to take away the printing press.
Have you gotten much pushback, though, given that it can be viewed as heretical?
Well, I think that there is a moment right now where people are trying to decide and understand how AI fits in.
Like, are they benefiting from AI? And to me, this is one of the most important things that we can do this year, is to really show people that AI is something that helps them, that they can build businesses, they can make money off of it.
It's something that helps them in their personal life and sometimes financially, like helping them save money, helps with their health, helps with education.
All of these things, I think, are the reason that you should have AI and want AI. And I think that that is one of our objectives: for people to really feel it.
Tell me about your experience at MIT.
So I started at Harvard and I...
Undergraduate.
Undergraduate.
And when I was choosing where to go, I wasn't into programming at the time. I thought I was going to do mathematics, I thought I was going to do philosophy, I thought I would do maybe chemistry. And for these interests, Harvard seemed like a pretty good place.
But by the time I got there, I was already addicted to building and to building for other people. And so I just started spending my time at the Computer
Society, was working on some startups. And I remember that my first year at the Computer Society,
there were these two seniors who would spend every meeting having an obscure technical debate.
And the rest of us would listen in thinking like, one day, that will be us. And then they graduated.
And suddenly I was running the club. And I was supposed to have the obscure technical debates.
And I was like, I'm a sophomore. I'm not ready for this yet.
I still want to learn.
Yeah.
And I kept looking down the street at MIT, where I knew a number of people who were far better at programming than I was, and who I knew I could learn from.
And so I started spending all of my time there.
I realized I'm probably in the wrong place.
So I transferred.
After how many years was this?
Year and a half.
Year and a half.
Yes.
And that's a tough decision.
It was very, very difficult.
Absolutely.
Like, my college roommates started to maintain this weather report of how likely Greg was to transfer to MIT on any given day. And so over the course of a month, it went from 99% to 1% to 50%. And in the end, I decided to do it. Yeah. And the reason I did, and looking back,
I've realized that many of my life decisions are made by me dreaming of the upside, of thinking
of, well, what if this really works, right? If it actually is as great as I think at MIT,
I'd be able to learn all these things. And then think about, well, what is the downside? What's the worst case scenario? I mean, the worst case is that I come back to Harvard. Harvard, actually,
when I went to tell them I was dropping out, they said, sounds like a leave of absence to me.
Like, they do not let you drop out of Harvard. And so it felt like, okay, this is actually a great
opportunity. I can try it and see what happens. And it was as good as I hoped. It was even better
perhaps. Wow. Because I really got to spend that time just constantly in their computing club.
I built all these different services, learned a lot.
I interned at this startup that was built by people from that computing club.
And from all of them, I just learned all these deep technical ideas.
And there's this culture at MIT that is hard to describe, and I'll do my best.
But it's like, imagine a place that experienced computers and internet technologies
20 years before everyone else did.
And then there are like a lot of little cultural decisions
and ways that people talk and operate that fall from it.
And even very minor things you wouldn't think about.
For example, if you're sitting over someone's shoulder,
you know, doing something together
and they have to type in their password, you look away.
Right? Just like little things like that
that you just notice culturally, this is how people do it.
No one says that. It's just what you do.
And there was a system called Zephyr,
which was a chat system that the whole university was on.
And this had existed, this is far before Facebook, right?
This had existed for probably two decades, maybe more.
And that often you'd go into a room, you see a bunch of people out with their laptops,
they all start laughing.
You don't ask them what's so funny.
You pull out your laptop, you go on Zephyr.
Right.
So there was this kind of mixed like virtual and physical reality that people existed in.
And I remember that the IT staff were deeply technical, that there was all sorts of funny things
like, for example, that MIT, because they were so early on the internet, they actually own one 256th of all of the IP addresses.
And so then the rest of the world, you don't experience what it means to be IP rich, right,
to have far more IPs than you could ever use.
And so every single light had its own IP.
So you could actually program every single light bulb, right?
It's like these little things that you could just do there that you couldn't do anywhere else.
Yeah.
I've never experienced quite such a hack as MIT.
That's great.
It's great.
You have the opportunity to do it, and you made the choice,
because it does seem like a bold choice.
It was not an easy one at all, and it was one that...
But clearly the right one.
Thank you, yes.
Yeah.
I remember talking to my parents, and no matter what you tell your parents you're going to go do, if it involves leaving Harvard, it's got to be tough on them.
Were they supportive?
I think that at the time, it was hard for them to understand.
And they said that, but they said, it is your decision. You have more information than us. Sitting where we're sitting, the idea of leaving Harvard, that seems like a tough pill, but we do trust you.
And so in the end, I'd say that they were supportive, but they didn't make it easy.
Yeah, yeah, yeah.
But in retrospect, I think that having that forcing function to really think it through and really
understand what it was I was signing up for is something I think helped me even get more out
of the opportunity.
Tell me about your parents.
So they are both doctors.
My mom is a psychiatrist.
My dad is an ophthalmologist.
And I grew up in North Dakota.
So North Dakota, many people have heard of Fargo because of the movie, the TV show.
Yep.
If you go an hour north, you get to a town called Grand Forks, which when I was growing up was about 60,000 people.
If you then go eight miles outside of that, approximately, you're at Thompson, which is a town of about 1,000 when I was growing up.
And then you go about a mile, half mile outside of that, and you're at my house.
So would you say rural?
It was rural, yes.
And the thousand person town was the closest.
That's right, yes.
And so I went to elementary school through fourth grade in Thompson.
And I was lucky in that my parents worked in Grand Forks and that we were eligible for open enrollment into the Grand Forks public schools, because Thompson was very small.
There weren't that many different opportunities academically.
And so for fifth grade onwards, I was in Grand Forks.
And one of the big things that unlocked a lot of my future learning and development was in sixth grade,
my dad taught me some algebra.
And seventh grade is the first time that you have two tracks.
You have the advanced math pre-algebra in seventh grade and then the normal seventh grade math.
And my mom and I went to the math teacher ahead of the school year and said, hey, can he just skip into algebra?
And the teacher looked at us with this most condescending look.
And it was like, every parent believes that their child is special.
I can guarantee you that your child will be plenty challenged in my class.
A couple weeks later, I had been paying no attention just playing calculator games in the back
of the room.
The teacher would try to trip me up and call on me and say, what's the answer?
I'd look at the board and be like, 2X.
And I'd go back to the games.
And she was like, all right, there's nothing for me to teach your child; you should go into the algebra class. And so seventh grade, I was that one year advanced.
But then eighth grade, no more math at my middle school. I didn't have a car so I couldn't go to
the high school. So instead I did an online independent study. And in that year, I just got super
into it. And I did three years' worth of math in that one year. So I did geometry, precalc, algebra two,
all in that one year. And so the next year, I'm at the high school, and I was in calculus. So I was this scrawny freshman, you know, with the seniors. But I was cool, don't worry. They were actually really nice. It was fun. And I also asked the school, hey, could I also just skip the ninth grade science and do chemistry? And they were accommodating. That was very nice of them. So I just really got to explore my own interests. And then 10th grade, I was considering, what do I do now? I didn't have any more math at the high school. I did have a car now, because it's North Dakota, you can get a car pretty early. And I had the University of North Dakota right there. And so I worked out a deal with my high school where I could take any classes I wanted at the university, as long as I
was taking just three classes a semester at the high school. And that was my high school education.
Awesome. It was amazing. I did research with a professor named Ryan Zier. He was fantastic.
We got a published paper out of it. I studied a bunch of chemistry, did philosophy, just really got to
explore and just take my academic interest in whatever direction I wanted.
That's amazing. So lucky that the school system was supportive of letting you do your thing.
That's right. And the interesting thing is, it was unusual. It was very unusual. And if I was just slightly
to the east in Minnesota, right? So we're right on the border with Minnesota. I would have had a
very different experience because there they actually have a standardized program for how high
schoolers go to college. And you wouldn't be able to take so many classes because if you're a junior,
you got like one class or something like that, senior maybe you have two classes.
But this idea of like really just being embedded into university because you have the standard system, it would have been impossible.
And so there's something about this: because there was no program, but you had supportive educators, I was able to really do it.
Yeah, that's a great argument against standardized systems in general.
Yes, yes, because then you stop the exception.
And for me, it's always been, like, I remember in elementary school or so, we had to write a little essay, as you would in third grade, not the most sophisticated essay, but, you know, on the Frost poem of two paths in the woods,
and the question that you're supposed to answer was, which of these paths would you take, the one that's
trodden or not? And I was like, I wouldn't take either of them. I would blaze my own path through the
woods. That seems way better. Yeah. And I think that's very much how I've lived in my life.
I read that you were interested in acting when you were a kid. That is true. So first of all,
I was the male lead in my middle school play. So I was very good. Thank you. And the thing I really liked about acting, like, I really loved improv, because it's about seeing the connection between
things. Yeah. And you have this interplay with another person or people and you're just imagining
and you're really just constrained not by reality in any way, but just by your wits and how quickly you can make things happen. And so I really enjoy humor. My wife would say that I enjoy
making dumb jokes a lot. I appreciate that she puts up with it. And for me, acting was an expression of this value of really feeling the imagination turning into reality
while connecting across different concepts.
And so I, in ninth grade, faced what I felt was a choice between going the more math
science route or going the acting route because I wanted to be great.
I wanted to really dedicate myself to the craft.
And I ended up going the math route, but I always kind of wondered.
And my first paid job was actually an acting gig.
So this was ninth grade.
I was a small high schooler,
and Mannheim Steamroller was coming to Grand Forks, North Dakota, for a concert.
And we were told that they're looking for extras to act as tin soldiers for the show.
And I was like, this sounds great.
You get paid like $100 for the whole night.
That'd be, like, the most money I could have imagined making in one sitting.
Like, that would be so great.
And I show up, and it's all these college students.
And I'm like, oh, man, like, I'm going to be in trouble.
Like, clearly they are all better typecast for this role.
And so the way that they had the audition go was they had everyone stand at attention,
walk in a direction, and then the casting people would all compare notes.
And in the meanwhile, all the college kids would hang out with each other and talk.
But I was like, the one job is to stand for the whole concert at attention.
And so I spent the whole audition just standing at attention when everyone else was relaxing.
And afterwards, they called up the people that they were selecting.
I didn't get called.
I was about to leave.
They said, hey, actually, we thought you were just so amazing.
You're just too small.
We can't give you that role.
So we're going to create something new for you.
And so they instead gave me the part of being a gingerbread man.
And so I got the job.
I got paid the $100.
And it was really through that different audition approach.
Do you think that there's a relationship between improv and AI?
I do.
Like the yes and?
Yes.
Yeah.
I think that AI in many ways is the ultimate improv partner.
And if you take away the post-training and you look at just the pre-trained model, that is its job.
Its job is to say, I'm in this situation.
I need to get oriented.
Here's some text.
Here's some scenario.
Here's some images.
Here's some DNA sequence.
whatever the input data is.
And its job is just to figure out what comes next.
And it's never seen this before.
It's a total mystery.
And it needs to orient itself and figure out what's a plausible thing
and then do the right thing.
And come on.
Like, that's improv.
Part two of the conversation with Greg Brockman
will be available this Friday
on the Tetragrammaton podcast network.
Tetragrammaton is a podcast.
Tetragrammaton is a website.
Tetragrammaton is a whole world of knowledge.
What may fall within the sphere of Tetragrammaton?
Counterculture? Tetragrammaton.
Sacred geometry? Tetragrammaton.
The avant-garde? Tetragrammaton.
Generative art? Tetragrammaton.
The tarot? Tetragrammaton.
Out-of-print music? Tetragrammaton.
Biodynamics? Tetragrammaton.
Graphic design? Tetragrammaton.
Mythology and magic? Tetragrammaton.
Obscure film? Tetragrammaton.
Beach culture? Tetragrammaton.
Esoteric lectures? Tetragrammaton.
Off-the-grid living? Tetragrammaton.
Alt. spirituality? Tetragrammaton.
The canon of fine objects? Tetragrammaton.
Muscle cars? Tetragrammaton.
Ancient wisdom for a new age.
Upon entering, experience the artwork of the day.
Take a breath and see where you are drawn.
