Hard Fork - Hard Fork Live, Part 1: Sam Altman and Brad Lightcap of OpenAI
Episode Date: June 27, 2025. The first Hard Fork Live is officially in the books, and for those who couldn’t attend, we’re playing highlights from the event in this episode and the next. This week, Mayor Daniel Lurie of San Francisco makes a surprise appearance to discuss the advice he’s receiving from tech executives during the early days of his administration, as well as how he built a social media presence that’s got Kevin wondering: Could we do that? Then, the conversation that had everyone talking: We’ll play our interview with OpenAI’s chief executive, Sam Altman, and chief operating officer, Brad Lightcap, and explain what was going on in our heads as the conversation unfolded in a way we did not expect. We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok. Unlock full access to New York Times podcasts and explore everything from politics to pop culture. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify.
Transcript
Little behind the scenes, so we're getting ready to go on.
And if you were at the live show,
you know that the show started with a marching band
coming in that Kevin and I were leading.
Kevin and I were marching down different staircases,
trailed by a band of marching musicians.
And so Kevin was sort of loaded into his position,
and I went down to the next door with my three musicians,
and I go to open the door, and then of course it's locked.
And they're already playing the cold open for the show.
We have a few seconds left,
and I'm like frantically trying to wave someone
from the SF Jazz Center, and he runs up with his keys.
But fortunately everything worked out.
The music started on time and yeah, that was that.
I'm so glad.
That was a near miss.
I would have had to go out and march the band in myself.
Yeah.
It's always interesting to me
when they lock the audience into the theater.
I wasn't sure exactly what was happening there.
Release the bees!
That's a way to look at it.
When you do a two hour podcast taping,
some people are gonna try to leave
and you're gonna wanna have a plan for that.
And we had a plan, and the plan was you can't leave.
The best audience is a captive audience.
Absolutely.
I'm Kevin Roose, a tech columnist at the New York Times.
I'm Casey Newton from Platformer.
And this is Hard Fork.
This week, it's Hard Fork Live.
You'll hear our first ever podcast taping in front of a live audience in San Francisco.
We've got a special appearance from San Francisco Mayor Daniel Lurie and the conversation that
had everyone talking this week.
It's our extended interview with OpenAI CEO Sam Altman and COO Brad Lightcap.
Got a little spicy.
Well, Casey, Hard Fork Live is in the books. How are you feeling? Are you recovered?
I am floating. I think everyone should have the experience of starting a podcast and then
having 700 people come to watch it. It really makes you feel good.
Yeah, we had such a great time.
Thank you to everyone who came out.
We had a packed house and I gotta say, it was so much fun.
It was so much fun.
And you know, we had a cocktail hour after where
we got to meet everybody and take selfies.
And I was meeting folks who had flown down from Seattle,
who had flown in from New York.
We had one guest who came in from Switzerland.
So, I mean, the resources that people put into
coming to hang out with us, it just meant so much to us.
Yeah, it was, like, people would tell me that they flew in
from some other place, and I would think, like, really?
You're like, you have, like, a conference here this week,
or what actually brought you here?
It's like a restaurant you wanted to go to?
No, but people were so lovely, and we got to meet and talk
with so many of our wonderful listeners after the show,
and it was a really fun experience.
And we also got to have some really great conversations on stage.
Yes, so for those people who couldn't make it,
the great news is that we recorded the entire show,
and we're going to be bringing it to you on the podcast in two parts.
Half of it we're going to post in this week's episode,
and the other half we'll post in next week's episode.
And if you can't wait
and you wanna watch the full thing right now,
you can go over to our YouTube channel,
youtube.com slash hard fork and find the full show there.
And let's say there's probably never been
a better hard fork episode to watch on YouTube
than this one, because it was visual as heck.
Yes, yes, it was visual.
There were costume changes, there were props.
We took our pants off.
More than once.
Yes.
So we can't wait to bring you snippets from the show
this week and next.
And in the meantime, we're gonna take a little vacation.
Yeah, Kevin, it's been a great six months,
but you know, a couple of times a year,
we like to shut down the operation,
give everybody a chance to rest, and now is that moment.
So thanks again.
We hope you enjoy excerpts from Hard Fork Live
this week and next, and we will have a special episode
of a different show coming the week after that.
We'll be back to our normal programming on July 18th.
See you then.
Have a great summer.
Well, have a good few weeks, and then we'll
be back for most of the summer.
That's true.
Yeah. Early summer.
Early summer.
Enjoy stone fruit season.
I'm Kevin Roose, a tech columnist at the New York Times.
I'm Casey Newton from Platformer.
And this is Hard Fork Live!
Oh my goodness.
Wow.
Should we take off our band leader jackets?
Let's take off the jackets.
Okay.
Give us a second.
They've served their purpose.
It's very hot.
These are not, these are Amazon's finest.
Yeah.
Can I, can I, here.
Can I, can I leave this with you?
Thank you.
Thank you.
Will you guard this with your life?
Yeah, with your life.
Yes.
Maybe just.
Oh, the baton too probably.
The ribbon, yeah.
I don't think we'll do that for our interviews.
Tripping hazard.
There you go.
Thank you.
Wow.
Thank you to Brass Animals.
That's the band you just heard.
They're incredible.
We love Brass Animals.
They will be back later.
And thank you all for coming.
What a surreal thing this is.
I mean, we record this podcast in a booth that's about two feet by two feet, and we
just send it out there.
We think people listen, and we hope that people listen, but it's really surreal to see it
all in person.
Yeah, it's so much fun to have all the energy in this building.
We've been talking as the hours have been counting down to this.
In 2021, Kevin and I just started texting each other all the time.
We felt like there was one era of tech that was ending and another that was about to begin.
And we just wanted to talk about it to a bunch of people.
And it's a really long way from there to this moment right now.
Yeah.
And one of the questions I get asked most frequently is what is a hard fork?
Casey, what is a hard fork?
Okay.
So something a little embarrassing about Kevin and I is that we had a crypto phase.
It happens.
It happens. Some people go goth.
Yeah. Talk to your teens about crypto.
And at the time, we thought, well, you know, it's 2021. Our show will probably be about crypto for the rest of time.
Hard Fork is a really important concept in crypto. Let's build a show around it.
Yes.
But we're so excited for tonight.
And Casey, you look great, by the way.
Oh, thank you, Kevin.
Doesn't he look great?
Kevin.
Very nice of you.
Now, a couple of weeks ago, you told me
you were getting a new outfit for this show.
And I thought, shit, I have to get a new outfit, too.
So I went on a little bit of an adventure,
trying to figure out what to wear tonight.
And as I do when I'm in times of crisis, I turn to AI.
So I made a little slideshow about my adventure.
I hope it's okay if I show it.
A slideshow? Okay, great, yes.
Yeah, I love a slideshow, you know that.
So I started with ChatGPT, where I put in the prompt,
give me some ideas for a glow up, I said, do not
change my face, just give me a glow up, make me look a little better.
ChatGPT, with its infinite wisdom, came back with this.
Okay.
Very stylish, but as you'll notice if you look closely, not me.
Can I just ask, why did you choose the angle of the floor looking up at you?
I don't know.
It was in the office.
All right.
Then I asked, I got another example and it said, that, okay.
Great hoodie, still not me.
Very cool hoodie.
One more example I asked for, it put me in a nicer room, but again, if you zoom in, not
my face.
So ChatGPT really saw my request for a glow up and said, I can't help you there.
We're going to need to involve plastic surgery.
The tech is only so powerful at this point.
So then I thought, okay, maybe it's just ChatGPT, maybe Gemini will do a better job.
So I said to Gemini the same prompt, don't change my face.
Gemini said, what if you looked like David Beckham?
That would be good. But I didn't
give up there because there's an app out there called Doji. It's sort of like a high fashion
thing where you can sort of scan your body and your face and you'll kind of render this
3D image of you and then tell you what to wear. So I put my photos into Doji and gave
it these photos and I said, create my likeness and tell me what to wear
and so it came back with a suggestion that looked like this.
Thanks, Doji.
What do you think?
Like, I would have worn that.
Yeah, I mean, you could pull that off.
I'm not sure I could.
Okay, so having failed at having AI dress me for tonight,
I did what every straight man does in times of distress,
I went to Uniqlo, so that's what I'm wearing tonight.
That's a good job.
Smart man, smart man.
Well, I think it turned out great, Kevin.
Thank you.
And we have a really great show for you.
I want to say, you guys sold out this building
before we announced the guest.
I want to say thank you for that.
Thank you for trusting us.
And we wanted to reward your trust in us with a really special show.
And before we get started in Grand Hard Fork tradition, we should do our disclosures because
we're going to be talking a little bit about AI tonight. So, Casey, you want to take it
away?
Who's excited to hear the disclosures?
Amazing. Well, I'm proud to say my boyfriend works at Anthropic. He may even be here tonight.
Love you, sweetheart.
And I work for the New York Times Company,
which is suing OpenAI and Microsoft
for copyright violations related to the training
of large language models.
Did I get that right?
Yeah, period.
Well, you know, the last thing I would say
before we get started, Kevin,
is it's also just a dream to be doing the show
here in San Francisco.
San Francisco is my home.
It's where we make the show every week.
It's where so many of the changes we talk about every week are happening.
You know, if I have any regrets, it's just I feel like San Francisco has been changing a lot all around us,
and we've been so heads down, it's kind of hard to keep track of all the changes.
Yeah, it's a really good point.
And if I could change one thing about tonight, I think we should have invited someone with a little bit of relevant expertise here,
someone who really understands San Francisco politics.
Who would that be?
Kevin, I told you to put the door on Do Not Disturb.
Let's see who it is.
It's the mayor of San Francisco, Daniel Lurie!
Hi, how are you? Nice to meet you.
Thanks for coming.
Oh my goodness, please have a seat.
What a fun surprise.
Thanks for stopping by, Mr. Mayor.
Thanks for having me.
Thanks for bringing everybody to San Francisco.
Absolutely, I think they're happy to be here.
You happy to be here?
Yeah.
Well, you know, since you're here,
let's toss a couple of questions at you.
You've been in office for just about six months now, and the tech community, I would say,
has generally been very supportive of what you're doing.
And you've even formed a council of tech advisors that includes our guest from later tonight,
Sam Altman.
So what kind of advice are they giving you, and are you taking it?
Well, we...
Sam was on our transition committee, and now we have something that we started called
the Partnership for San Francisco, where we have
leaders from across business and arts and culture
giving us advice and helping to, you know, cheerlead
for our city.
You all are seeing the revolution happening.
There is no better place in the world
in terms of an ecosystem than San Francisco.
And there was a lot of talk for a number of years
about how San Francisco was done.
That was a bad bet.
As everybody knows, I mean, the guests that you have tonight,
you'd have to fly them in,
but they're living right here in San Francisco.
It's all happening right here.
Like, you know, you talked about Anthropic.
You got Dario, you got OpenAI, you got Salesforce,
you got Databricks.
I mean, cities across the globe would die
to have one of those companies.
And they're all home-based right here in San Francisco.
So I'm talking to them.
I'm talking to arts and culture leaders.
And we're doing everything in our power to create the conditions
for success, and we're off to a good start. Now, we're gonna be talking a lot about AI tonight.
I'm curious, is AI helping you in your job at City Hall?
We are absolutely talking to all the companies, saying, how can we get their help
on synthesizing all the data that comes in. We have 58 different departments at City Hall.
And they don't always talk to each other.
And so we have great intellectual horsepower here.
We got great universities.
We got these great companies and we are engaging with them
and they are already starting to help
and you'll see more in the coming years.
I feel like governments don't have a reputation always
for having state of the art technology.
Is there anything you wish that you had that you don't have
or that the sort of tech could do for citizens here
that it can't yet?
Well, I think just making sure that we're crunching the data.
We have over 34,000 city employees.
And getting them to talk to each other,
understand what they're going through.
I'll tell you a quick story.
There were not staff meetings going on
between our large agencies.
And I instituted something every Tuesday morning,
so this morning, 9 a.m., we had the 20 large agencies.
They get around the table.
This is not tech, this is old school.
But it starts with old school.
Yeah, this technology is called the table.
Yes. That is right.
By the way, tech doesn't work if you don't communicate with each other.
That's why I think everybody's got to be back in the office. A lot of these AI companies,
they're in the office five, six days a week.
I went to OpenAI's new office by Chase.
They are in the office because they know it doesn't work unless you're communicating.
And so our department heads are meeting with each other once a week, gaining knowledge
from each other, seeing how they can help each other.
And the tech then follows that.
And so we're hoping to lean in with all these great companies.
Now I have to ask you, Mayor, about your social media presence.
You are very active on short-form video apps
such as Instagram Reels and TikTok.
You post more Instagram Reels than a Gen Z clout chaser.
And honestly, they're pretty good.
I would say, grading on the curve of politicians,
they're great.
And I'm curious, like, who does your social media? What's the strategy there? Is it working?
I was told, there was a review of my Instagram in the local, in the Chronicle, and it said that I had not yet made
the camera my lover.
So I have work to do.
That's not OnlyFans.
Yeah, yeah.
So listen, I'm having fun with it.
We know there's so much noise out there.
And to break through and to communicate with people,
like you all do so well with your show,
we felt like we had to reach people directly.
And it is taking off.
You all can check it out.
And you can learn
what it's like to be mayor and to see all the small
businesses that are amazing in San Francisco,
the restaurants, the bars.
I just went in honor of Pride, I just stopped by before,
to a bar that I've passed so many times,
it's called the Cinch Gay Bar.
On, shout out.
With Pride Week, we went by there just now, and it's been there for years, and it's amazing.
And like, I want to highlight what is so special about San Francisco, and that's what we're trying to do.
And I usually just give the mic over to the restaurant owner, the arts leader, and say, what do you do? And the city gets to see it.
And that's what it's all about.
It's awesome. Well, we've got to let you go,
but before we go, I wanted to ask,
could we make a reel with you? Would you make one with us?
Yeah, absolutely.
Right now? Okay, let's stand up. Let's do that.
Oh, but I don't have my phone. Oh, you got your phone.
Oh, I got it. I got it. Yeah. Okay. Oh, boy.
I'll do it, so we can just turn it into selfie mode right here,
and we can just go. So you are the maestro here,
so you got to direct it, okay?
All right, I'm on hard fork right now.
Hey, you two, tell us what's going on here.
This is your first live audience.
First live audience.
We're here for Hard Fork Live at SF Jazz,
having a great time here with Mayor Lurie.
Let's go.
And what I got to tell everybody,
got to tell your audience that San Francisco, we
are on the rise.
When we are at our best,
there is no better city on the planet than San Francisco.
Let's go San Francisco.
And that's on period.
Thank you very much Mayor Lurie for stopping by.
Thank you.
Oh, my goodness.
Kevin, we've already had so much show
and it's literally just begun.
I didn't even tell him that the most relevant thing was that you didn't get your permit
for your hot tub.
Mayor Lurie, you need to raise revenue.
I know a guy.
When we come back, we'll bring you our Hard Fork Live interview with Sam Altman and Brad
Lightcap from OpenAI. So Casey, that was a really fun short interview with Mayor Daniel Lurie of San Francisco.
Very grateful that he stopped by.
And now we are going to bring you something different.
Yeah.
And we want to give you the behind the scenes because whether you were there at the show
or you're just about to listen,
a lot kind of happened backstage
that you're gonna wanna know before you hear this.
Yes, so this was our interview with Sam Altman
and Brad Lightcap from OpenAI.
We had invited them to come and talk with us
about AI and the future topics
we talk about in this show all the time.
Yeah, and so the way that this show works
is we don't see these folks before the show.
The show just sort of starts and they show up.
Kevin and I are backstage.
The most recent thing that has happened
is that we did this amazing demo
wearing these exoskeleton pants
that help people who have mobility issues,
and we need to remove the pants.
And so we go backstage and we do remove our pants
in front of Sam Altman and Brad Lightcap. And they were very cool about that, I would say.
You know, they didn't make any comments or anything.
Yes, they were pointing and laughing.
Yes, as we feared that they might.
And so we're getting ready to go out on stage.
And the thing that is supposed to happen is we're going to have five or six minutes where two things happen.
One is we shout out our families to thank them for being there.
And then we kind of want to set up the story of OpenAI
in this moment.
The company has had a lot going on,
and you and I just want to banter back and forth
a little bit to kind of set up what
was going to happen before Sam and Brad come on stage.
So I go up to Sam and Brad right before this happens,
and I say to Brad, hey, thanks for being here.
Shake his hand.
Go up to Sam.
Say thanks for doing this.
Shake his hand.
And then Sam says something like,
you know, hey, only ask interesting questions tonight.
Like basically like come at me a little bit.
And I say, okay, yeah, sure.
And I say, you know, if you want to troll me a little bit,
like make fun of me, go for it.
And what Sam says is, well, I don't strike first,
but I do strike back.
And I was kind of like, okay, well,
I don't think a lot of striking is going to be coming
from me during the show, but like, okay, sure, whatever.
And so then you and I head out on stage.
And we had just started the bit that we had planned,
sort of saying, okay, you know, how's everybody doing?
Did you enjoy the first half of the show?
And I turned to my right and I see,
walking out on stage, Brad and Sam.
Yes, before, like minutes before
they were supposed to arrive on stage,
we had our whole intro music, we were gonna tee it up.
They just kind of barged onto stage.
Yeah, and so when this happened, my thought was,
this is probably just a production mistake.
Someone backstage told them,
this is your moment, and pushed them on stage.
It's a live show. These things happen.
I think my first impulse was to say,
hey, you guys wanna give us a minute?
But they just kind of advanced on us and sat down,
and were basically like, okay, cool.
What do you guys want to talk about?
And we were like, well, we kind of
want to set up your segment.
And in hindsight, though, Kevin, I
think we realized that actually no one backstage
had told them to come.
This was kind of a power move that they
were trying to do to get us flustered heading
into what would happen next.
Yes, they were trolling us.
And specifically,
they were trolling us about the lawsuit
between the New York Times and OpenAI,
which they know that you and I are not involved in, right?
They are not under any impression
that we are part of the litigation team
at the New York Times,
but it is clear that they had something
they wanted to get off their chest about that lawsuit,
and I think just have a little fun with us.
Yeah, and so they came at us pretty hard.
You will hear it in the interview.
And you and I are trying to steer the conversation
to stuff we can actually talk about.
In fact, one of the things that was going to happen
in the bit that we didn't do was you saying,
hey, by the way, about this whole lawsuit thing,
I'm not involved in it and I can't talk about it.
And by walking out on stage,
they sort of prevented that moment from happening. Yeah, I will say, I learned a lot in this interview,
only some of it from the questions we asked.
I learned a lot more about Sam Altman
from this brief interaction
before the segment actually was supposed to start.
Yeah, I mean, look, I think we got a lot
out of just kind of the questions that we asked,
and we got into so many things that we wanted to talk to him
about the business, about the risks of job loss,
about the risks of people using ChatGPT
and having mental health breaks.
But I do think you're right, Kevin.
You just learn something about people
by observing them in public settings,
how they behave, how they engage.
And so I think there's just kind of a lot
for everyone to chew on.
I was reflecting last night that the first time we had Sam Altman on the show,
two days later, he gets fired. And it sort of, in many ways, kicks off the moment that we're
living in now. And that was extremely surreal. The evening that we had at Hard Fork Live was
kind of a perfect sequel to that. Because now you have a person who is fully in control,
who wants to bend reality to his will. And
if there's a couple of journalists, he can just kind of kick in the shins on his way
toward building God, he's going to be happy to do it.
Yeah. And we should also say Sam did send us an email after the show apologizing for
his behavior. He said he was, quote, such an asshole and that he felt bad about it.
So that tells you something. But yeah, what you'll hear in this segment
is the two of us being somewhat flustered
that our planned introduction is just being interrupted
by these two guys wandering on the stage,
and we'll take it from there.
And I will also say, I have no idea what's gonna happen
in Sam Altman's third appearance on Hard Fork,
but the bar has been set really high.
Yes.
All right, when we come back, we'll have the interview
with Sam Altman and Brad Lightcap from OpenAI. We're about halfway through the show.
I thought I would just kind of check in.
Are you guys having a good time?
Okay.
Okay.
Good.
We are having a lot of fun.
Oh boy.
They told us to come out.
They just pushed us out.
There's no way. Okay, great.
Do you want us to go back?
No, come on out.
Yeah.
Hi, Brad.
Wow. Good to see you.
I love it. We're doing it live today, family.
You guys are learning a very important thing about our show,
which is that it's edited.
Yeah.
Um, Kevin, do you have maybe one more thing you want to say
before we get in the interview?
Yes.
So we're here with Sam Altman and Brad Lightcap from OpenAI.
We'll just hang out.
Do your thing.
You can, like, check your email.
Go for it.
Yeah.
Well, what we were going to do is tee up your appearance
a little bit by just giving a little lay of the land of what's been going on at OpenAI,
which is a very busy company.
Casey and I have been...
This is more fun than what we're out here for, though.
Yeah. No, jump in.
You guys can be Statler and Waldorf over there.
We'll do the color commentary.
I had a list of headlines from the past couple of weeks,
and if there's anything that just makes you want to roll your eyes,
you can roll your eyes.
Okay. So, you...
Are you going to talk about where you sue us
because you don't like user privacy?
Okay.
Woo!
The last thing Sam said to me before he came on stage was,
I don't strike first...
I did say that, that's true.
Well, I teed it up with the headlines.
Yeah, do you want to say something, Kevin,
about the New York Times?
Oh yes, we should also give our disclosures,
which is that we are just journalists.
We are not involved in the lawsuit.
And I don't know.
We don't represent the company's views on the matter.
What do you think of the company's views?
What do I think of the company's views?
Are you trying to get me fired, sir?
No, I just want it out.
Kevin needs this job.
I don't have any other skills.
It seems like you got a podcast.
I mean, a lot of things.
Well, OK, great.
So you said that.
I'm going to just pretend I didn't hear that.
You're pro the lawsuit.
What's that?
You're pro the lawsuit.
I think people should read the relevant filings
and make up their own mind.
Yes, democracy.
Democracy.
We love it. We love it.
We love it.
We love it.
And what about your mind?
It feels like you have something you want to say
about the lawsuit.
Well, look, I do think,
I like user privacy.
I don't think companies-
You want to explain what you're talking about here?
Oh.
Well, you guys are suing us.
And- I'm an independent contractor. I write a newsletter. It's called Platformer.
Yeah, don't drag him into this.
And one of the things that's happening is you all are... Sorry.
Your employer is... I don't know what you call it, an independent contractor.
The New York Times. Let's just say the New York Times. One of the great institutions, truly, for a long time,
is taking a position that we should have to preserve
our users' logs, even if they're chatting in private mode,
even if they've asked us to delete them.
And the lawsuit we're happy to fight out,
but that thing we really, we think that privacy and AI
is just this like extremely important concept
to get right for the future
and we care a lot about the precedent.
Still love you guys, still love the New York Times, but that one we feel strongly about.
Well, thank you for your views and I'll just say it must be really hard when someone does something with their data
that you don't really want them to.
I don't know what that's like personally, but maybe someone else does.
Okay, let's get started with the questions.
I don't, that's alright.
I was recently told by a guest on stage
that the singularity would be gentle.
So I just wanted to point that out to you.
Casey, read your headlines.
Speaking of, do we still want the headlines?
Let's go into the question.
These people know what's been happening with OpenAI.
OK.
But I think it's important to give a sense of the sheer volume
of stuff you all are doing.
Like we've been covering tech for a long time.
I don't think either of us have ever seen a company that makes this much news this regularly on this many areas.
And you've got hardware stuff going on with Jony Ive.
You've got obviously ChatGPT continues to grow.
You're doing this defense contract, this $200 million defense contract, a deal with Mattel to make
toys.
I think you were the first company to sign a deal with Mattel and the military in the
same week.
I like that.
Stargate, your big data center project, your attempted conversion to a for-profit company.
So there's just a lot going on in your worlds.
Casey?
Well, but we wanted to start with something, I don't know, that I thought you might have
fun talking about, which is that rascal Mark Zuckerberg keeps coming after your employees.
And I'm sure this is happening to you guys on some level all of the time, but I wondered
if there's been any particularly funny or crazy moment over the past few weeks as they've really stepped this up
Any that you think have been particularly funny?
Many.
Yeah, I don't know. I haven't slept in four years.
Nothing fazes me.
One of the strangest things of the job is the amount of things that can go wrong
by like 11 o'clock on a Monday morning.
It's just an astonishing diversity of stuff.
And so it's like, okay, Zuckerberg is doing
some new insane thing, what's next?
Like, you just kind of-
I want to gossip for just one minute more.
Do you think-
Only one?
We're here for a lot of that.
We can do more gossip.
Do you think Mark Zuckerberg actually believes
in super intelligence, or do you just think
he's saying that as a recruiting tactic?
I think he believes he's super intelligent.
Oh.
Lightcap off the ropes.
Very good. Off the ropes! All right, but it sounds like your confidence has not been shaken by the recent raid on your employees.
We're feeling good.
Yeah.
All right.
All right.
So you recently wrote this essay I just mentioned, The Gentle Singularity. And you wrote, we're past the event horizon.
The takeoff has started.
And people, I think, read that and thought,
do these guys have a super intelligence
that they're keeping in the basement?
I assume that that's not true.
But tell us a little bit about why you wrote that essay
and when, in your mind, we hit this point of no return.
We don't have a super intelligence in the basement, but we have shipped a model
that any of you can use.
And I hope any of you do, that is
quite smart relative to what you might have expected five years ago,
where the world would be with AI. And we have all adjusted to this.
We've all just sort of said, oh, you know,
this is the new world.
We have like PhD level intelligence in our pocket.
We can use it, we can talk to it all day.
We can do all the stuff for us.
But it is kind of remarkable that this has happened
and this is the world now.
And when you are like living through moving history,
you adapt so quickly that I think it's hard
to get the perspective of like, man, you know,
five years ago, most of the experts made fun of anyone
who said AI, AGI might be even a plausible thing
to work towards and now here we are with this like,
this thing that has come quite a long way that we can use in all these ways.
And we have always, so we used to try to just say like, hey, this AGI thing is
coming, it might be a really big deal, it might be really important, you all should
pay attention. No one cared. And we shipped the product and then
people cared. And a thing we've learned again and again is, you know, talking
about it doesn't seem to break through, but if people can use it and feel it and, you know,
see where it's good and where it's bad
and integrate it with their lives, then they do.
And so now we see many years ahead of us
of extreme progress that we feel is like pretty much on lock
and models that will get to the point
where they are capable of doing meaningful science,
meaningful AI research.
And we continue to feel a responsibility
to tell the world about that.
Most people won't listen.
Maybe some more people will listen this time.
But we'll ship products that expose these higher levels
of intelligence that we'll build.
And that is how I think people will really get their hands
around what's happening.
Now, Brad, it's your job to manage the business of OpenAI.
What does being past the event horizon,
towards superintelligence, mean for OpenAI as a company?
I imagine it makes lots of different kinds of decisions different, but like, how do you plan for a world
like the one that Sam is describing, as a person who runs a business? Yeah, it's the fun part of what we do.
We debate this internally a lot.
We will kind of wake up one day with this incredibly powerful thing, and will the world be different that day? And I think what we've all kind of agreed now is it probably won't.
Kind of to Sam's point, I think these things really have to be kind of integrated into people's lives.
They have to be felt, and that change is more gradual.
And so we work really closely with companies, as much as we do with users, to figure out
what that process will look like.
I do think businesses will look very different in the future.
So my kind of personal metric for what
business in the superintelligence age means
is you've got one person who has a lot of agency
and a lot of willpower who has the capacity
to start a company that can do billions of dollars in revenue.
And it's hard to imagine now, like you think, okay, I need salespeople and I need product
people and engineers and accountants and so on.
But all of that stuff now can kind of just be managed, right?
It can be kind of built into the system.
And that just gives incredible agency to individual people.
So I want to get into the nitty gritty of building this future.
Right now, the agents that you all have built for coding are really extremely good.
People can build a lot of amazing stuff with them.
Outside that domain, we've seen less progress.
Talk to me about what's going to happen the next year that makes you feel like you can
start knocking off one or more of these other domains.
Well first of all, coding is pretty general purpose.
If you can write code, you can do
a system that can write code just like a person that
can write code can make a lot of other things happen.
But we are beginning to see scientists
be much more productive with this.
We're seeing companies really change a lot of their workflows.
The thing, though, that I am excited for most: the way people use AI today
is sort of like send a request, get a response.
You send a request.
It might think for a second.
It might not.
It sends you something back.
You are in one of those like vibe coding things and you do something
and you get something back.
Um, I think I'm very excited for a world where each of us has,
you know, a copy of o3 or many copies of o3
that are just constantly running,
constantly trying to like say,
oh, I see this happening now this,
and I'm reading Slack and I read an email
and I see this and this and this,
and you asked about this yesterday, here's a new idea.
And starts to just, we have this like team of agents, assistants, companions,
whatever you want to call them, that are doing stuff
in the background all of the time, and that,
that I think will really transform what people can do
and how we work and kind of maybe to some extent
how we just sort of like live our lives.
So I use o3 all the time, it is helpful to me
as a journalist, it can fact check stuff for me.
It can edit stuff for me.
When is the moment when like,
it just kind of knows what I do
and in the morning it actually just kind of starts
doing that stuff without me telling it?
Well that's kind of what I'm talking about,
except I don't think it should be without you telling it.
But I would love, if I woke up every morning
and there was a drafted response
to every email that had come in overnight
and I could click and I could say,
I wanna send this one, I wanna edit this one,
I wanna send that one.
If I could open ChatGPT and say, hey,
here was the stuff you were working on yesterday
that you didn't finish on your to-do list.
Here's my attempts at that.
Do you want me to take this action on that one?
And by the way, here are these other things
that happened overnight with a customer
or in the world or whatever.
And, you know, here's a set of stuff I could do for you.
And I have like all of this ready to go.
And I could just sort of go through and say like, OK, do that.
Don't do that. You know, here's what should have been different here that I'm very excited about.
But I don't want to like go to sleep and have o3 just start taking actions for me.
I use o3 a lot, too. And I find it very useful.
The thing I will say is that it lies.
More than previous models, I feel like it is a crafty, shifty assistant
that will just once in a while make stuff up.
And actually, it seems like the hallucination rates on these newer models
are staying about the same or maybe even getting worse.
So do you have a theory on why that might be?
I think it did get a little bit worse from o1 to o3,
and we'll make it much better in the next version.
I think we're earlier in learning
how to align reasoning models, and also how people are using
them in different ways, but I think we've now learned a lot.
And I would suspect we'll be very happy
with the next generation there.
So you made your largest acquisition to date this year
with Jony Ive's io.
The first crop of AI hardware that we've seen
has not been particularly successful.
Brad and Sam, what do you guys feel like you're seeing
that makes you feel like you can do something different here?
Well, every time you kind of re-platform technology,
there tends to be kind of a corresponding set of things
that get built that change how we interface
with that technology.
So I think the question here is, is that going to happen again?
You know, all of a sudden, right,
like you kind of miniaturize the PC
and you have the mobile phone.
The PC itself was a miniaturization
of the mainframe and so on and so forth.
I think this one has a different direction.
I think this one is really going to be about this very kind
of aware, very contextual, almost companion-like system
that is going to be less about a dependency on a screen.
I think there's a place for a screen in that world,
but it's going to be really about an awareness of the ambient
environment, what's going on.
I mean, Sam mentioned the trivial example of something
that is looking at your email.
You can build something that's really bad that does that today.
But to get to the version of that that's
transcendently good, there's a ton of context
and a ton of awareness that you have to have of what
each situational thing is that helps you craft
exactly the right response.
And imagine that now in any arbitrary situation and wanting to have that with you
all the time.
And so I think that's a very compelling direction for this type of hardware.
It sounds a lot like Alexa.
Is it going to feel a lot different than Alexa?
Don't you just want to wait and be surprised and get some joy?
Like it's been a long time since the world has gotten
a fundamentally new kind of computer.
Like, let us try.
OK.
If it's Alexa, we're going to be really mad.
I'm just saying that right now.
So will I.
And these people will remember this.
Sam?
I know.
I think we can do, I think we have a chance
to do something truly great.
But hardware is really hard, and it takes a while.
And I've always wanted to try to do a new kind of computer,
but that hasn't worked most of the time.
So we're really going to take our time and try to get it right.
Sam, a few years ago, you described your relationship with Microsoft
and its CEO, Satya Nadella, as, quote,
the best bromance in tech.
The bromance has been feeling a little wobbly recently.
OpenAI needs Microsoft's blessing for this for-profit
conversion, and Microsoft is reportedly peeved at you
about a bunch of things, including the terms of a planned
acquisition.
Last week, the Wall Street Journal reported that things
had gotten so tense that OpenAI executives were
considering reporting Microsoft to the government
for anti-competitive behavior.
Do you believe that?
When you read those things and say, like, do you think?
Are you saying it's not true?
You know what I always think when I read that?
I hope I get to ask Sam Altman about it.
So what is going on, and are you caught in a bad bromance?
I had a super nice call with Satya yesterday
about many topics,
including our hopefully very long and productive future working together.
And obviously in any deep partnership,
there are points of tension and we've certainly had those, but on the whole,
it's been like really wonderfully good for both companies. Um,
we're both ambitious companies, so we do find some flash points, but I would expect that it is something that we find deep value in for both sides for a very long time to come.
And I do read these articles sometimes, like, OpenAI and Microsoft about to, you know, that, and then like, my calls are like,
how do we figure out what the next decade together
looks like, and it just, it doesn't, yeah.
Again, not to pretend like there's no tension, there is,
but there's like so much good stuff there,
and I think there's like such a long horizon.
That's what we in the business call a non-denial denial.
I don't know. Just kidding.
Let's move into some policy stuff.
You've talked to President Trump.
What does he think about AI?
What were those conversations like?
That was not intended to be a laugh line.
You want to take it first?
I'll do it.
No, that's fine.
That was not intended to be a laugh line either.
I think he really gets it. I think he gets the technology.
I couldn't say that about all presidents. I think he really understands the importance of
leadership in this technology, the potential for economic transformation,
the sort of geopolitical importance, the need to build out a lot of infrastructure.
They're like very productive conversations.
And he has done stuff that has really
helped the whole industry.
It is easier to permit data centers and new energy
to run those data centers than it has, I think,
ever been before.
And that could have gone the other way.
Dario Amodei of Anthropic recently said that he thinks
50% of entry-level white collar jobs could disappear
due to AI in the next one to five years.
Do you agree?
No, no, I don't.
I just, no.
Why not Brad?
We have no evidence of this.
And Dario is a scientist.
And I would hope he takes like an evidence-based approach to these types of things.
But like we work with every business under the sun.
We look at the problem and the opportunity of deploying AI into every company on earth.
We, you know, have yet to see any evidence that people are kind of wholesale replacing entry-level jobs.
I think that there is going to be some sort of change in the job market.
I think it's inevitable.
I think every time you get a platform shift, you get a change in the job market.
I mean, in 1900, 40% of people worked in agriculture.
It's 2% today.
Microsoft Excel has probably been the greatest job
displacer of the 20th century.
And if we knew a priori that Microsoft Excel was coming, you know, and everyone was kind
of like fretting about it, I think in retrospect we would have thought that was dumb.
But so I think like there will be change, of course.
But I think like, A, there's no evidence of it today, and B, I think like we will manage
through it.
We have a lot of empathy for the problem.
I think like we work with businesses every day to try and enable people to be able to use the tools at
the level of like the 20 year olds that come into companies and use them with a
level of fluency that far transcends anyone else at those organizations, but
we see it as our mission to make sure that people know how to use these tools
and to drive people forward. You know, I have to say, we've had some
listeners write into the show and say, hey, I'm a junior coder, I just got laid off,
I'm not feeling really good about my prospects here.
So that's like, you know, pretty small sliver of the economy
but you know, I hear you talk about
what you want to do with 03, I think,
if it gets as good as you're saying,
it's not just gonna be the junior coders
who are gonna be affected by that, right?
So I guess what I'm saying is,
I feel like I'm seeing like slivers of it now
and I'm curious what you make of those.
I do think there will be areas where some jobs go away, or maybe there will be some whole categories of jobs that go away, and any job that goes away, even if it's like good for society and the economy as a whole, is painful, very painful, extremely painful in that moment. And so I do totally get, not just the anxiety,
but that there is gonna be real pain here
in many cases.
In many more cases though, I think we will find that
the world is significantly underemployed.
The world wants way more code
than can get written right now.
I think we are already seeing companies who said, Oh, I'm going to need
less coders, to now saying, paradoxically,
I need more coders.
They're going to work differently, but I'm just going to make a hundred times as
much code, a hundred times as much product with 10 times as much people.
And we'll still make 30 times as much money, even if the price comes down.
Um, I, I think all of human history suggests that if you give people better tools, if technology keeps going,
although there are always people who say, you know, we're going to be working three hours a day and sitting on the beach
and we're going to have run out of things to do, like human demand seems limitless.
Our ability to imagine new things to do for each other seems limitless.
We always seem to want more stuff to play, you know, increasingly silly status games.
Our jobs would not have seemed like real jobs to people
in the not very distant past, you know,
like, like you're sitting around, like talking on stage
and you're trying to like make a piece of software
and you're trying to like do a podcast
and you're trying to make people laugh.
Like that's great, but that's like play.
That's not a job.
You have plenty of food to eat.
You have all the stuff to do. You have this unimaginable luxury. And because I think human imagination and
desire, demand, whatever you want to call it, is limitless, we will find incredible new things to
do. Society will get way richer. I think generally society gets richer. Unemployment goes down, not up.
And I'd expect to keep seeing that even though people, I think, don't talk about that very much.
And the entry level people, I think, will be the people that do the best here.
They're the most fluent with the tools.
They're the most like able to think of things in very new ways.
They have this sort of
largest canvas. So we,
there's going to be real downside here. There's going to be real negative impact. And again,
like any single job lost really matters to that person.
And the hard part about this is I think it will happen faster than previous
technological changes,
but I think the new jobs will be better and people will have better stuff.
And the kind of like take that half the jobs are going to be gone in a year or two years
or five years or whatever, I think that's just, I think that's not how society really works.
Even if the technology were ready for that,
the inertia of society, which will be helpful in this case,
is like, there's a lot of mass there.
The thing we actually see empirically,
if we wanna talk about kind of what we observe,
is somewhat what Sam's describing.
It's actually, there's a class of worker
that I think is more tenured,
is more oriented toward a routine
and a certain way of doing things in a certain role
that is not actually sophisticated
in use of these tools.
They're not adopting them.
They tend to think that it's not worth their time
or whatever it may be
and I think there's a lot of fear there.
And a lot of what's driving that fear is
like 20 somethings that are actually coming
into the workforce who have been using these tools
for years and years and who've mastered them in a way
that they kind of look at these other jobs
and they're like, why would you waste your time
doing that thing?
I can do that much faster.
And so the thing that companies actually worry about deeply
is not the entry level job.
It's really the job of the person that
has been at the company for 30 years who's
done something in a very kind of rote and routine way,
where there's an urge on the side of management
of wanting to really kind of modernize the toolset.
And what do you do with that?
And that's, I think, actually the kind
of addressable problem for us.
Sam, two years ago, you testified to Congress
about the need for more AI regulation.
More recently, you went back to Washington
and testified, again, that you supported a light touch
regulatory regime.
And earlier this year, you said you supported a federal preemption on state level AI regulations, a version of which is now part of the Republican budget bill.
What changed? Did you see the regulations that people were writing and thought we wanted regulation, but not like that?
No, I still think we need some regulation. But I would say I have, I think like a patchwork across the states
would probably be a real mess and very difficult to offer services under. And I also think
that I have become more jaded, that's not quite the right word, but something in that direction
about the ability of policymakers to grapple with the speed of technology. And I worry
that if people write, you know, if we kick off like a three-year process
to write something that's like very detailed and, you know,
covers a lot of cases, the technology will just move very quickly.
On the other hand, as these systems get quite powerful,
I think we clearly need something.
And I think something around the sort of like,
the really risky capabilities,
and ideally something that can sort of be quite adaptive
and not like a law that survives 100 years
and sort of says here's exactly the things you can do
and not do would be good,
but yeah, it's like impossible for me to imagine
a world where society doesn't decide
we need some framework here.
Earlier this year, you adjusted GPT-4o after it inadvertently became more sycophantic than
you intended.
Since then, we've read more stories about how ChatGPT and other chatbots can destabilize
people by sending them down conspiratorial rabbit holes, making them feel like they're
having mystical experiences.
Can that be stopped?
Do you want it to stop?
Of course we want it to stop.
I mean, we do a lot of things to try to mitigate that.
If people are having like a crisis,
which they talk to ChatGPT about,
we try to convince them to,
we try to suggest that they get help from
a professional, that they talk to their family.
If conversations are going down a sort of rabbit hole in this direction, we try to
cut them off or suggest to the user to, you know, maybe think about something
differently, but there are, I think the broader topic of mental health and the
way that that interacts with over-reliance on
AI models is something we're trying to take extremely seriously and rapidly.
We don't want to slide into the mistakes that I think previous generation of tech companies
made by not reacting quickly enough as a new thing sort of, like, had a psychological interaction. Have you ever thought about just like,
literally putting a warning on that says,
this is ChatGPT, you are not talking to God,
you are not having a religious experience?
I mean, the model will tell you things like that,
and then users will write us and say like,
you modified this, you know,
and they changed their like custom instructions.
But yes, there need to be a lot of warnings like that.
However, to the users that are in a fragile enough mental place that are like on the edge of a psychotic break,
we haven't yet figured out how a warning gets through there.
We also have to be careful because there are an incredible number of use cases that I think probably
by sheer volume outweigh some of the use cases you're describing where people are really relying on these systems for pretty critical parts of their life
These are things like you know almost kind of borderline therapeutic or I mean, you know
I get stories of people who have rehabilitated marriages have
rehabilitated relationships with estranged loved ones, things like that,
where it's highly net positive,
and there's not a dependency,
but it's the first time in their life
that they've had something that they feel
like they can confide in,
and it doesn't cost them $1,000 an hour, right?
And I was surfing Costa Rica the other day,
and someone paddled up to me,
and I was chatting with him, a local Costa Rican guy,
and he's like, where do you work?
I said, OpenAI.
He's like, oh, you make ChatGPT.
And he started crying.
He's like, ChatGPT saved my marriage.
I didn't know how to talk to my wife,
and it gave me tips to talk to my wife,
and I've learned that, and we're on a much better path.
And it sounds like a dumb and stupid story,
but it's not.
I mean, I was there.
That's great.
We're back to even,
because the chatbot tried to break up my marriage.
Well, not our chatbot though.
Well, it was your chatbot, but it was inside Bing. So, spread the blame around there.
Now, Sam, you just had a kid. Congratulations.
Thank you.
Do you think over the course of their lifetime your kid will have more human friends or more AI friends?
More human friends, but AI will be, if not a friend, at least an important kind of companion of some sort.
Is that okay with you? Like, if your kid at one point when they're a little older came home and said,
I've got an AI friend, how would that make you feel?
If my kid felt like that was playing,
that was like replacing human friends,
I would have concerns about that,
at least with what, again, there are edge cases.
Any person who talks to hundreds of millions of people a day is gonna meet
a lot of edge cases.
And since ChatGPT is talking to hundreds
of millions of people a day, there are gonna be
some real edge cases in there.
But most people, much more than I was concerned,
seem to really understand the difference
between talking to a person and talking to ChatGPT.
And I still do have a lot of concerns
about the impact
on mental health and the social impacts from the deep relationships that people are going
to have with AI. But I think at least so far, it has surprised me on the upside of how much
people really differentiate between like, that's an AI and I talk to an AI in some way
and I get something out of it and that's a friend and I talk to a friend, a person in
this other way and get a very different thing out of that.
All right.
Here's something I've always wanted to ask you.
AI Twitter is still really active even though Twitter doesn't exist anymore.
You actively post there, share a lot of news there, and that's extremely helpful and good
for Elon Musk, a man who is trying to destroy your company.
Have you ever thought of just moving your posts somewhere else?
Where should I move them?
Well, you could create your own social app.
Don't go to Blue Sky.
They don't like AI there.
They're not going to be nice to you there.
It's a rough neighborhood.
Maybe as a last thing, we wanted to know this, too.
We'd invite you both to answer this.
Is there any part of your life that you feel like,
I wanna wall this off from AI a little bit?
It's fun to talk about AI, we think it's all very useful,
we're excited to keep building it,
but this particular thing, we're going analog.
I gotta think about that one.
Surfing, presumably.
Although maybe you asked for tips.
Yeah, we're roboticizing that now too.
Unfortunate, but let me think about it.
I'm big on the analog stuff.
I put my phone away and go for hikes every weekend
and hanging out with my family, I put my phone away
and I'm very happy not to have technology in the way
for that.
All right.
Brad and Sam, thank you so much for joining us.
Thank you.
Thank you.
Thank you.
Thank you, Brad.
Thank you. We're edited by Jen Poyant.
We're fact-checked by Caitlin Love.
Today's show was engineered by Katie McMurray.
Original music by Elisheba Ittoop, Marion Lozano, Rowan Niemisto, and Dan Powell.
Video production by Sawyer Roquet, Pat Gunther, and Chris Schott.
You can watch this full episode on YouTube at youtube.com slash hard fork. Special thanks to the New York Times live event team who helped us put together Hard Fork Live: Hillary Kuhn, Beth Weinstein,
Caitlin Roper, Kate Carrington, Chantal Renier, Melissa Tripoli, Natalie Green, Angela Austin, Kirsten Birmingham,
Marissa Farina, Jennifer Feeney, and Morgan Singer.
Thanks to everyone at SF Jazz, the venue for our live show,
as well as the band Brass Animals
that played with us live on stage.
Special thanks also to Matt Collette, Paula Schumann,
Pui-Wing Tam, Dahlia Haddad, and Jeffrey Miranda.
You can email us, as always, at hardfork@nytimes.com.