Today, Explained - Chaos at OpenAI
Episode Date: November 22, 2023
It’s been a wild few days at OpenAI, whose board fired CEO Sam Altman on Friday only to rehire him late Tuesday. Vox reporter Sigal Samuel explains what happened at the company behind ChatGPT — and what it tells us about the future of AI. This episode was produced by Avishay Artsy and Hady Mawajdeh, edited by Matt Collette, fact-checked by Laura Bullard and Isabel Angell, engineered by David Herman and Rob Byers, and hosted by Noel King. Transcript at vox.com/todayexplained Support Today, Explained by making a financial contribution to Vox! bit.ly/givepodcasts Learn more about your ad choices. Visit podcastchoices.com/adchoices
Transcript
Breaking news overnight, a stunning new twist in the power struggle over the future of the company behind ChatGPT.
Sam Altman, the co-founder and ousted leader, will now return to OpenAI as CEO.
We know that these negotiations have been going on in the last 24 hours.
The soap opera, I don't know if it's officially over, but I think we're into extra innings at this point.
Sam Altman's shocking ouster from his company on
Friday became a stunning reinstatement late yesterday. Now the man who some people feared had too much power
in the world of artificial intelligence has even more power and the backing of a friendlier board.
Coming up on Today Explained, five days that shook the tech world. A November to remember.
You can't keep a good Sam down. You know what? I bet ChatGPT
could write something better. Hang on.
Get groceries delivered across the GTA from Real Canadian Superstore with PC Express.
Shop online for super prices and super savings.
Try it today and get up to $75 in PC Optimum Points.
Visit superstore.ca to get started.
You're listening to Today Explained.
I'm Sigal Samuel. I'm a senior reporter for Vox's Future Perfect.
Sigal, what is OpenAI? What does this company do?
So OpenAI is a really unusual tech company. It was initially founded in 2015 by Sam Altman together with
Elon Musk, a name you may or may not have ever heard of before. And like the goal was to really focus on researching AI safety.
So how to make advanced AI,
but how to make sure that it's going to be safe and beneficial to all humanity.
That's the mission.
My worst fears are that we cause significant,
we the field, the technology, the industry,
cause significant harm to the world.
I think that could happen in a lot of different ways.
It's why we started the company.
Okay, so founded in 2015, and then where does it go from there?
So from there, you know, at the beginning, it's a pretty low-key thing.
You know, it's doing its own research like so many other groups,
but it progressively makes more and more advances,
especially in the realm of large language models, the kind of AI
that eventually culminated in what we now know as ChatGPT. But, you know, at first it was like
pretty quiet. They were having very cautious launches. And then a year ago, almost exactly
a year ago, they ended up launching ChatGPT, which really sort of rocketed OpenAI to becoming a household name.
ChatGPT is that application that lets you type in a question or the beginning of a prompt,
and then it will fill in the rest of the text for you. So you can write the first line of a poem,
say, and ask it to complete the rest. It'll complete the rest of the poem for you. So it
does an amazing job with language. GPT-4 takes what you prompt it with and just runs with it. From one perspective,
it's a tool, a thing you can use to get useful tasks done in language. From another perspective,
it's a system that can make dreams, thoughts, ideas flourish in text in front of you.
The funny thing is that when they initially launched, they said in-house, this is just going to be a low-key research preview. And lo and behold,
it becomes like the fastest growing, you know, consumer app in history. And OpenAI becomes incredibly
famous overnight. And this kind of propels OpenAI into a situation where it's pushing more and more for
commercialization. It's pushing out products aggressively and quickly. And it's starting to
look less and less like the company it was originally founded to be, which was supposed to be
very careful, very sober, very safety conscious. Tell me about ex-ex-CEO Sam
Altman. Who's he? So Sam Altman is in many ways a classic kind of Silicon Valley entrepreneur,
pretty famous in the startup world from his work at Y Combinator. He's really known as,
at this point, the poster boy for AI.
He has kind of become synonymous with AI, I think, like for much of the public,
because of his role launching products like ChatGPT and because he's at the helm of OpenAI.
He's not just a big deal in Silicon Valley.
He has become the face of AI, or at least one of the main faces of AI.
He's testified before Congress. He's met with CEOs.
And, you know, ChatGPT has really been the AI product in generative AI that has changed the
perception of its abilities earlier this year. And he is intertwined with all of that.
He's a fantastic salesman. I'll say that. He's really been able
to kind of push OpenAI's vision for AI into the mainstream. Is he good at his job? What's his
reputation like in Silicon Valley? Do people like this guy? Is he expedient? Is he high-functioning?
I think that he's actually really well-liked in the Valley. You know, as we've seen recently, you know, he has a lot of backing from the staff of OpenAI.
There's a lot of people who are really, really in his corner.
They're Team Sam, ride or die.
In terms of Sam, I mean, look, this was his baby.
He brought in these people.
It seems to be quite a culture of love, literally, this weekend.
They are all tweeting hearts at each other,
different color hearts. So yeah, he is well-liked. There's also a reputation in some quarters,
though, for alleged manipulation and deceitfulness. So there are some people who are not overwhelmingly happy on personal grounds. And there are also critics, on an intellectual level, of his approach to AI, which in recent months and years has become seemingly less focused on keeping AI safe and mitigating the risks and more focused on let's rush to market.
Let's make a profit.
Let's commercialize. Sigal, can you walk us through what has happened over the past five days, five and a half days, starting late Friday?
Absolutely. It's been a complete roller coaster.
Friday, late afternoon, we're all getting ready to wind up for a nice, quiet weekend.
Little did we know, news was about to drop that would basically be an earthquake in
the tech world. So OpenAI announces that they're firing Sam Altman, the CEO. He's being fired by
the board. The board releases a very vague kind of cryptic statement that they're firing him
because he has not been consistently candid in his
communications with the board. They go on to say the board no longer has confidence in his ability
to continue leading OpenAI. So it seems like there's some alleged lack of transparency.
And the board has decided in a kind of boardroom coup to suddenly get rid of him.
Altman was notified via video meeting on Friday that he had been fired.
And Greg Brockman, who was not part of that meeting even though he was the chair of the board,
was stripped of that role.
He was told he could stay on as president of the company, but he quit in protest on
Friday night.
Several other executives and senior researchers also resigned.
This was done by just a small board who, it should be noted, is the board of the nonprofit
of OpenAI.
And this is a bit of a funny detail, but it's important for the story.
OpenAI was initially founded as a nonprofit.
It later spun up a for-profit arm.
But the nonprofit board is really in control of the bigger picture,
and it has control of hiring and firing. And the board's mission is not to protect,
you know, investors. It's not to protect employees. It's to protect humanity. It is
to safeguard the best interests of humanity. So it has to do what it thinks is
best in terms of like keeping AI safe. So, for whatever reason (its reasons weren't made clear initially),
it said it's firing Sam. The company president, Greg Brockman, gets super upset about this and
he quits in protest. All right. On Saturday, suddenly, the pair is trying to get the board to reinstate them.
They're going to OpenAI. They're trying to negotiate. Negotiations don't go their way.
Sam Altman posts a funny picture on Twitter, on X, where he has to wear like a visitor badge going
to OpenAI. He's like, this is the first and last time I'm ever wearing one of these.
And meanwhile, the whole staff of OpenAI, not the whole, but a huge majority,
starts to revolt in Sam's favor. They're really upset that he's been fired.
By Sunday, both Sam and Greg have accepted jobs with Microsoft, which is the major investor in OpenAI.
In a cryptic post this morning, Altman said of his new role, the mission continues.
And Microsoft is like super happy. They're saying, yeah, this is fine. This is great.
Sam and Greg can continue working on their cutting edge AI in-house here at Microsoft.
We chose to explicitly partner with OpenAI and we want to continue to do so.
And obviously that depends on the people at OpenAI staying there or coming to Microsoft.
So I'm open to both options. But this is like turning into a huge problem for OpenAI because more and more OpenAI employees are now threatening to leave for Microsoft as well.
Get this, more than 700 employees, that's 95% of the company, say they
are ready to follow Altman to Microsoft, where he's set to build the new AI venture. They've
signed this letter calling for OpenAI's board of directors to resign in full and then reinstate
Altman. And the letter states, we are unable to work for or with people that lack competence, judgment, and care for our mission and employees.
Okay.
By Tuesday, new reports are indicating, hey, guess what?
Altman and Brockman are back in talks.
They're still in talks about a possible return to OpenAI, even though when we went to sleep the night before, we all thought they were safely at Microsoft.
So this was, like, total chaos. And then just when you thought it couldn't get any wilder, late, late, late Tuesday night,
when we were all contentedly about to finally go to sleep, it is announced that actually OpenAI
has reached an agreement, at least in principle, to reinstate Sam Altman. He and Greg are going to be back at OpenAI. Greg is posting, like,
pictures on Twitter, on X, now of, like, what seems like a huge proportion of the staff
celebrating, cheering, and OpenAI folks are saying, we're back. Like, we are so back.
Okay. They are back. And what about the board that fired them?
So the board has been shaken up. Helen Toner, for one: really, it seems like she was castigated by Sam for writing
an academic paper that appeared to be critical of OpenAI's approach to AI safety.
She's now off the board.
And instead, we've now got Bret Taylor, who is an early Facebook officer and was also
helming Salesforce.
And we've got Larry Summers, who's a former Treasury secretary. The only one who was on the board previously and is still on the board is Adam
D'Angelo, who's in charge of Quora. So it's really a big change. There's also talk that Microsoft
may later insist on having some kind of place on the board, whether that's an
active role or at least an observer role on the board so they don't get caught out unawares like
they did, you know, in the past few days. Does it strike you as weird that the two women are
off the board, replaced by two men? Yeah. This is an industry that's not necessarily friendly to women. And what Helen Toner did took a lot of guts. And
that's sadly not always rewarded in these contexts. And like, we're talking about the end
of the world. I mean, that's what strikes me. It's like, okay, the dynamic seems to be the women
were holding us back. And now we can go and we can have Terminator time. Okay, so it has been five days, five days of roller coaster.
Is Sam Altman coming back to the same company he left
or has OpenAI been fundamentally changed by this drama?
I think that OpenAI has been fundamentally changed
by this nutty drama in Sam's favor.
I think that he is a winner of all of this because he's back and he's now going to have a board
that's much more friendly to his approach. The people on the board who were, you know,
from his perspective, maybe reining him in, trying to hold him back, they're gone. And now he's got
on the board a bunch of dudes who, you know, I think it's fair to say AI safety is maybe not
their top concern here. So I think Sam has freer rein now to do, you know, what he thinks is the
best approach. And what with the staff having mutinied in his favor and then Sam being reinstated on the
back of that, I think it shows Sam to be in a position of huge power. I think this is basically
like a major consolidation of his power at the company. Coming up, a fault line runs through it,
the profound existential fears that led to this week's chaos at OpenAI.
Sigal Samuel will be back with us.
Support for Today Explained comes from Aura.
Aura believes that sharing pictures is a great way to keep up with family,
and Aura says it's never been easier thanks to their digital picture frames.
They were named the number one digital photo frame by Wirecutter.
Aura frames make it easy to share unlimited photos and videos
directly from your phone to the frame.
When you give an Aura frame as a gift, you can personalize it,
you can preload it with a thoughtful message, maybe your favorite photos.
Our colleague Andrew tried an Aura frame for himself.
So setup was super simple. In my case, we were celebrating my grandmother's birthday.
And she's very fortunate. She's got 10 grandkids.
And so we wanted to surprise her with the Aura frame.
And because she's a little bit older, it was just easier for us to source all the images together and have them uploaded to the frame itself.
And because we're all connected over text message, it was just so easy to send a link to everybody.
You can save on the perfect gift by visiting AuraFrames.com to get $35 off Aura's best-selling Carver Mat frames with promo code EXPLAINED at checkout.
That's A-U-R-A-Frames.com, promo code EXPLAINED.
This deal is exclusive to listeners and available just in time for the holidays.
Terms and conditions do apply.
BetMGM, authorized gaming partner of the NBA, has your back all season long.
From tip-off to the final buzzer, you're always taken care of with a sportsbook born in Vegas.
That's a feeling you can only get with BetMGM.
And no matter your team, your favorite player, or your style, there's something every NBA fan will love about BetMGM.
Download the app today and discover why BetMGM is your basketball home for the season.
Raise your game to the next level this year with BetMGM, a sportsbook worth a slam dunk,
an authorized gaming partner of the NBA. BetMGM.com for terms and conditions. Must be 19 years of age
or older to wager. Ontario only. Please play responsibly. If you have any questions or
concerns about your gambling or someone close to you,
please contact Connex Ontario
at 1-866-531-2600
to speak to an advisor
free of charge. BetMGM
operates pursuant to an operating agreement
with iGaming Ontario.
Support for this show comes
from the ACLU. The
ACLU knows exactly what
threats a second Donald Trump term presents.
And they are ready with a battle-tested playbook.
The ACLU took legal action
against the first Trump administration 434 times.
And they will do it again to protect immigrants' rights,
defend reproductive freedom, fight discrimination,
and fight for all of our fundamental rights and freedoms.
This Giving Tuesday, you can support the ACLU. With your help, they can stop the extreme
Project 2025 agenda. Join the ACLU at aclu.org today.
Play Today Explained, HAL.
I'm sorry, Dave. I'm afraid I can't do that.
It's Today Explained. We're back with Sigal Samuel.
She's a senior reporter at Vox, and she has been covering the chaos at OpenAI.
Sigal, let's start this way.
You wrote for Vox recently that OpenAI is a company built on a fault line.
What did you mean by that? Yeah, so the fault line was OpenAI's dual mission, which is simultaneously to build AI that's as smart as or smarter than humanity,
while also making sure that that AI will be safe and beneficial to all of humanity. That's part of our mission is to really make sure that we have an answer for like how humanity
continues to fit in and continues to be the end beneficiary of all of these systems, kind
of almost no matter how smart they get.
There's like an inherent tension over there because, you know, if you've got super advanced
AI, it could theoretically harm people in a variety of ways.
And I don't just mean like
Terminator scenarios. Hasta la vista, baby. I mean, everything from entrenching bias against
minority groups to enabling bioterrorism or manmade pandemics. And I think inevitably,
that was going to lead to some fracturing. Okay, so there are people on this board who were more
cautious about the future of AI. And there are people on the board who were more incautious, who were like,
yeah, probably the worst won't happen. But I wonder if you can talk us through
the perspective of people in Silicon Valley who are afraid of what AI is capable of. Because
we see AI scientists being polled, and some of them saying, oh, yeah, almost casually,
it seems, yeah, this could bring
about the end of mankind, the extinction of humanity. And I think what we really want to
understand is, are those concerns legitimate? Yeah, I think like these, the people who really
worry about this are kind of sometimes classed in one way as like they're called the doomers
in this world. But there's really a spectrum of people, right? There's some people who
really think, wow, this is for sure going to kill all of us. And it's some kind of Terminator
scenario or really sci-fi sounding scenario. And they're imagining maybe an AI that gets
smarter than humans and all of that. But you don't even need to go that far to be really genuinely worried. You could actually
just have AIs that are pretty dumb, which is to say just only as smart as me or something,
an average person. You're very smart. But the thing is, if you have enough of those AIs that
are not super smart, but they're working in tandem. And especially if they're, you know,
working with a human actor who's a malicious actor, that could do a lot, a lot of damage.
So, you know, like you don't need to be a way out there doomer to be like legitimately concerned
about what AI could do, even before it gets to a hypothetical smarter than human stage,
just like a reasonably smart stage, there could be a lot of
risk. There could be a lot of risk. And with Sam Altman coming back and the composition of the
board being changed to favor the risk-takers, do we get the sense that there is now a broad
agreement that perhaps the cautious people were being over-cautious or overstating the case?
Yeah, I think there is a sense like that. And it's funny because you hear
a lot of people saying like, oh, the cautious people were overstating the case. But regardless
of your opinion on that, I think just having Sam and Greg back and having a board that now is not
very safety conscious should maybe make you feel at least a little concerned, even if you're not worried about
like real doomsday scenarios, but you're just worried about like other risks of AI, like
entrenching or perpetuating bias against women or people of color or misinformation. Like even if
you're just worried about things like that, this is all on the continuum of safety and risk. And maybe you want leaders who are concerned about protecting
safety and mitigating risk. What OpenAI has said in recent months is like, they've made it explicit
that their real number one focus is getting to AGI, artificial general intelligence, which is as smart as or smarter than humans. And that
has increasingly become kind of like the holy grail that the leaders are pushing towards.
And the concern is like, are they pushing towards that dream at any cost? And over the past few
months, it really seems like they're more and more determined to get to AGI, even if there's
like warnings along the way that maybe this isn't great for safety. And by the way, did we ever poll
humanity and agree that we want AGI? Is that even something that should exist? Like, who knows?
But they're charging full steam ahead. Artificial intelligence is a race. There are a lot of
companies that want to win this race. OpenAI seemed to be winning because of ChatGPT. It was the coolest thing ever. It was really performing. And with Sam Altman out, I think you could speculate that another company could win the race or at least be in the running in a new kind of way.
Sam's now back at OpenAI. Does that mean OpenAI has won the race, is winning the race,
is sure to win the race? Where does this leave the race for artificial intelligence?
First of all, like, for humanity, if this is treated as a race, that's a really bad thing.
Like, we shouldn't want it to be treated as a race because that's going to be a race to the bottom.
Unfortunately, the race continues. I think that Sam would have continued
that whether he was at Microsoft or at OpenAI. I think it should be noted that OpenAI has done
more than any other company to catalyze the arms race kind of mentality, especially with its release
of ChatGPT a year ago, and with all that it's been kind of pushing to market quickly since.
So I think this just basically continues the race, and OpenAI continues to be at the front of the pack.
So at the end of the day, you know, if you ask the question, did OpenAI's board make the right
decision when it fired Sam? Or did it make
the right decision when it rehired him? Paradoxically, I think the answer might be yes to both.
I think if they saw something that made them concerned that Sam was taking the company in
a direction that was not super safety conscious, and remember, their mandate is to protect the best interests of humanity,
not of OpenAI, then it was right to get rid of him. But at the same time, when the conditions
then changed such that it looked like there might be a mass exodus of OpenAI's top talent to
Microsoft, a company that seems significantly less safety conscious, it might have actually been the wisest move to rehire Sam and Greg
and keep in-house all of that top talent so that at least they're at a company that is,
we could say, at least nominally committed to AI safety and has some, you know, kind
of direction on them that's going to keep them caring about safety, as opposed to pushing
them straight into the arms of Microsoft, where they could potentially develop AGI in a less safety-conscious environment.
So paradoxically, the OpenAI board might have been right to fire Sam
and also might have been right to rehire him.
That was Sigal Samuel.
She's a senior reporter for Vox's Future Perfect.
Today's episode was produced by Avishay Artsy and Hady Mawajdeh.
It was edited by Matthew Collette.
Laura Bullard is our fact checker.
And Isabel is an angel for helping us out with facts today.
David Herman and Rob Byers are our engineers.
And I'm Noel King.
Today Explained, the podcast, is off Thursday and Friday of this week.
New episodes drop on Monday. And if you listen to us on your local NPR station,
you'll get to hear some of our greatest hits. Happy Thanksgiving, everyone. Thank you.