Big Technology Podcast - What We Know About Sam Altman's Ouster, The Morning After
Episode Date: November 18, 2023
A quick solo podcast on what we know about OpenAI firing CEO Sam Altman with Big Technology host Alex Kantrowitz. Tune in for a discussion of what might've happened, how it happened, why it happened, ...how it impacts OpenAI moving forward, and how it shuffles the competitive balance in the AI field moving forward. --- You can subscribe to Big Technology Premium for 25% off at https://bit.ly/bigtechnology Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/ Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
Transcript
Welcome to Big Technology Podcast, special Saturday edition, special emergency edition,
where we're going to try to break down what exactly happened at OpenAI, what is happening at OpenAI,
and where OpenAI and the rest of the AI field go from here.
Because the firing of Sam Altman by the OpenAI board is one of the more remarkable,
one of the more shocking, one of the more surprising tech stories that I've ever seen.
And it's going to have plenty of implications, not just for OpenAI, but for Microsoft, for Amazon, for Google, and basically for the entire tech industry.
That's why this is so important.
And that's why we're going to spend a little bit of time here going through exactly what happened and what comes next.
This is a solo episode, no guest.
I'm just going to run through a little bit of my notes, a little bit of what I understand has happened here.
And I thought it was just important to get something quick up on the feed to give you some perspective about this developing story.
One note before we go ahead: this is obviously a developing story with a lot of fast-moving parts.
So bear with me.
We're going to continue to cover it on the Big Technology Podcast feed as we go on, as we learn more.
I want to be very careful about the information that I share.
But we are starting to learn what's coming out.
And the first thing that I'll say is what happened.
So we have a post on Twitter from Greg Brockman, who was, until yesterday, the president and chairman of the board at OpenAI,
and a co-founder of that company,
who shared a little bit of a play-by-play
of how he and Sam Altman found out
that both Sam was out
and that he was going to be removed from the board.
Okay, here's what he says on Twitter.
He goes, and this is from early this morning:
Last night, Sam got a text from Ilya,
and Ilya is Ilya Sutskever,
who is the chief scientist
and a co-founder of OpenAI.
Sam got a text from Ilya asking to talk at noon Friday.
Sam joined a Google Meet
with the whole board,
except Greg, who was not there.
Ilya told Sam that he was being fired, and the news was going out very soon.
At 12:19 p.m., Greg got a text from Ilya asking for a quick call.
At 12:23 p.m., Ilya sent a Google Meet link.
Greg was told that he was being removed from the board, but was vital to the company and would
retain his role, and that Sam had been fired.
Around that time, OpenAI published a blog post.
As far as we know, the management team was made aware of this shortly after,
other than Mira, who's the new interim CEO, Mira Murati, who found out the night prior.
The outpouring of support has been really nice.
Thank you, but please don't spend any time being concerned.
We will be fine. Greater things coming soon.
Okay, so basically what this looks like is a complete coup from the OpenAI board.
And the board, of course, includes Sutskever.
And then independent directors like Quora's CEO, Adam D'Angelo, tech entrepreneur
Tasha McCauley, and Helen Toner of Georgetown's Center for Security and Emerging
Technology. And that's basically it. So they decided that it was time for both Greg Brockman to
be removed from the board and Sam Altman to go. Now, initially it sounded like this was a conduct thing.
I mean, to have the CEO removed last minute and, you know, the news basically announced while
the market was still open, blindsiding Microsoft, is totally unprecedented, at least in recent
memory. But we're starting to learn more, you know, in terms of what that conduct might have
looked like. And initially it sort of seemed like, okay, maybe it was bad behavior, and
there might be some repercussions for the OpenAI board as we go forward in terms of the
accusations that they made. But then it starts to come out that it might have just been a little
bit of a difference of opinion in terms of the future and the direction of this company.
So I want to bring your attention to a memo from the chief operating officer of OpenAI, Brad Lightcap, which Axios published today.
So he says, and this is going to OpenAI's employees: Team, after yesterday's announcement, which took us all by surprise, we've had multiple conversations with the board to try to better understand the reasons and process behind their decision.
These discussions and options regarding our path forward are ongoing this morning.
We can say definitively that the board's decision was not made in response to malfeasance
or anything related to our financial, business, safety, or security and privacy practices.
This was a breakdown in communication between the board and Sam.
Our position as a company remains extremely strong and Microsoft remains fully committed to our partnership.
Interim CEO Mira Murati has our full support as CEO.
We still share your concerns about how this process has been handled and are working to resolve the situation, and we'll provide updates as we are able.
I'm sure you're all feeling confusion, sadness, and perhaps some fear.
We are fully focused on handling this, pushing toward resolution and clarity and getting back to work.
Our collective responsibility now is to our teammates, partners, users, customers, and broader world,
who share our vision of broadly beneficial AGI, or artificial general intelligence.
Hang in there, we are behind you, 1,000%.
Okay.
So if it's not malfeasance, if it's not financial mismanagement, what is it?
And the likeliest theory right now is safety.
Simply that OpenAI was moving too fast, that it was pushing this technology, the cutting edge of this technology, forward too quickly.
And it got uncomfortable, both for Sutskever, who seems to be the person who spearheaded this move, but also for the rest of the board.
Now, here is a reminder, coming from Nick Thompson, who's the CEO of The Atlantic,
that Sutskever was a protégé of Geoff Hinton, who's now perhaps the most passionate critic of the risks of out-of-control AI.
Geoff Hinton, of course, helped invent the modern era of AI.
He was part of this group called the Deep Learning Conspiracy, along with Yann LeCun and Yoshua Bengio,
and as a professor in Canada he basically helped make this current wave of AI possible with their deep learning research.
And Hinton, who left Google, has come out passionately talking about the risks of this AI.
And it's a good point to talk a little bit about the corporate structure of OpenAI.
It's very confusing.
They effectively started as a nonprofit, and they moved to a capped-profit model.
But they always kept the power in the hands of the nonprofit board to decide whether or not to remove the CEO if things got, you know, a little bit too hairy, shall we say, in the area of safety.
And that seems to be exactly what they did.
So why, where did these safety concerns come from? I think that we can go back to two moments.
The first is just earlier this month, at Developer Day inside OpenAI. So if you remember
some of the coverage that we've done here about Developer Day, what OpenAI did was it made
it possible for people to build their own chatbots based on its technology with no coding
experience. They opened it up so you could basically create your own GPT. And what that
did is it basically took this extremely powerful technology that the company had been very careful about
building safeguards around, and opened it up to everyone. And I don't think it's a complete coincidence
that this is happening right after that Developer Day, because, you know, rather than having some of those
safeguards, now everybody has it. And there have been some rumors, you know, going around talking about how
this is a product of what happened at Developer Day. I think that is, you know, a potentially crucial moment,
where all of a sudden Sam Altman took the powerful technology that OpenAI had and democratized it almost too much, you know, potentially for the liking of people within the company.
Also, maybe something that played a role is this Copyright Shield, although I doubt it. Basically, OpenAI said, if you get sued for copyright, you know, we're going to back you up legally.
Maybe that played some part.
But this had also been happening in concert with some really unbelievable developments
within the company.
And I'm going to point you to this interview that Laurene Powell Jobs did just a couple days
before Altman's firing, just talking about the models that are under development, right?
So Laurene Powell Jobs says to Altman, what is the most remarkable surprise that you believe
will have happened in your field or in your company in 2024?
And Altman says the model capability will have taken such a leap forward that no one expected.
Jobs says, wait, say it again. And Altman says, the model capability, like what the systems can do, will have taken such a leap forward that no one expected that much progress. Here's Jobs: and why is that a remarkable thing? Why is it brilliant? And Altman says, well, it's just different to expectation. I think people have, like, in their mind how much better the model will be next year, and it's remarkable how different it is. Okay, so clearly this technology had been advancing dramatically under Altman, and
he had his foot on the gas pedal, taking funding, working to productize it as a company.
And the company effectively had these nonprofit roots that were always going to be in tension
with this model of saying, let's go ahead and push forward, you know, take funding from
Microsoft, productize it, let anyone build their own GPT.
And I'm not saying this is exactly what happened, but it sure points to the fact that the board
structure and the ambition of Altman just came to a head.
And that's why this firing is important, because it's not just the deposing of a leader. It's a reorientation of direction,
right? If this is actually the way that this is going to move, it's not going to be the same
OpenAI anymore. Because, you know, they may have been researching these incredible breakthroughs,
but the way that they're going to move into the public's hands is going to be
quite different. And, you know, the way that they're going to productize this is going to be quite
different. It's almost like this cycle: OpenAI needed money to function, of course, because
this stuff is extremely expensive to run, so it goes to Microsoft. Now, to justify this money, it builds
products and goes on these big, splashy PR tours that Altman has been doing. And then,
you know, you use that money, and then you research, and then you have to go through another cycle, and it
just builds and builds and builds. And, you know, right now there was also this moment of competition in
AI, where you had OpenAI, but you had all these other companies, like Google with Gemini, and Amazon
and Google funding Anthropic, and maybe even Elon Musk's Grok, coming in and being like,
we can do the same thing too. And the pressure goes on and it builds. And there's a tension there
between a company that's basically founded on the principle of developing AI safely and a company
that wants to lead and be part of the game. And it's almost as if there's this go-fast and go-slow
tension that a lot of people have been pointing out. And if indeed OpenAI
has decided that it's choosing go slow, then it is a very, very different world for OpenAI going
forward. And that has serious implications for Microsoft, which we're going to cover in a second.
But let me just quote Aaron Levie, the CEO of Box and a friend of this show. He says,
this is not your standard startup leadership shakeup. Tens of thousands of startups are building on
OpenAI and have assumed a certain degree of technical velocity and commercial stability.
This instantly changes the structure of the industry. I mean, Levie puts
it pretty much better than I can, which is effectively saying, right now there are startups
building on top of OpenAI's technology. And this potentially, if the go-slow part of this company
has won out, this potentially changes everything about their roadmaps, the way they're going to build,
and, you know, maybe gives rise to others. But we're going to start to see this very, very big
change. Now, one caveat here is that Mira Murati, who is the now-interim CEO, has been somebody
that's been on board with this mission, right?
She was the chief technology officer.
She was involved in chat GPT.
She was involved in DALL-E.
She was involved in the Microsoft deal.
So putting her in charge, you know, and not Ilya Sutskever,
indicates that in some way maybe they're continuing to go forward in a similar direction,
although not exactly the same way.
But it does seem right now, and of course this is subject to change, that safety was a huge
part of it.
So where does this leave Microsoft?
I mean, this is not a good thing for Microsoft. You know, Microsoft put almost all of its eggs in this OpenAI basket. Of course,
it's developing a little bit on its own, but it will be impacted strategically here. And not
surprisingly, the executives at Microsoft are not happy. This is coming from a Bloomberg story.
It says executives at Microsoft Corp., the largest investor in OpenAI (by the way, they own 49%
of the company, and still do), were also taken by surprise. Microsoft CEO Satya Nadella was,
quote unquote, blindsided by the news and was furious, according to someone with direct
knowledge of his thinking.
I mean, of course, Satya Nadella is going to be mad.
He's going to be really mad.
Selling OpenAI's services through Azure has been a crucial component of Microsoft's strategic
rise over the past couple of months.
I mean, they're already in good shape, but they've really benefited from this AI.
In fact, the AI narrative, AI and OpenAI in particular, has been central to their growth in
2023. It seems like Satya Nadella does not waste a moment, or an opportunity, to
appear on stage with Sam Altman. They announced on their earnings call recently that they've gone
from 11,000 to 18,000 clients of their OpenAI service within Azure in one quarter. So clearly
this was where the momentum was for Microsoft. And I'm going to talk about it in a moment,
but let me just preview it. If Sam Altman is not at OpenAI and decides to compete, to build a
competing company, right? Just think about that. And if OpenAI's velocity gets a little
bit slower, then it could potentially be a real strategic liability for Microsoft. Okay,
before we get there, let's just talk a little bit about the AI safety crowd. So this is from
Louise Matsakis, who's a friend of the program and is at Semafor. She says, if Altman was
fired over AI safety concerns, that's way crazier in a lot of ways than a sex or financial
scandal. It shows how deeply committed effective altruists are to the idea, essentially
borrowed from science fiction, that AGI could kill all of humanity. If true, I think this could
completely radicalize people working on AI in one way or the other. Either you say, wow,
this threat was so extreme that they got rid of Altman, or these people are so crazy in their
beliefs that they got rid of Altman. And honestly, I think that AI doomerism, you know, this question of whether AI can
wipe us out, is about to go on trial. I mean,
it's really going to be public. If this was the case, that they actually decided to push Altman
out because of safety concerns, we're about to see these doom arguments really get a public
hearing. I mean, they could have potentially incinerated billions of dollars of value over these
concerns. And the question is like, are they so prominent? Are they so present that it was worth
doing this right now? You know, maybe they were a front for more interpersonal problems. Who knows?
I mean, we have Sutskever also tweeting a couple of months ago that ego is the barrier to growth. Who knows? Maybe that was a subtweet. But we are going to start to
see a deeper and harsher interrogation of whether the doomers' belief that we're going to get
killed by AI really stands up to scrutiny. And I want to say it's already begun, because
Eliezer Yudkowsky, who's like the high priest of AI doomerism, who talks about our potential
to be killed by AI, posted what I thought was a pretty innocuous tweet yesterday responding to the Altman news.
He goes, Mira Murati reached out to me in 2022 for a one-hour Zoom call.
Sam Altman never essayed any such contact.
Also, I don't think Murati has made any jokes about how funny it would be if the world ended.
I'm tentatively 8.5% more cheerful about OpenAI going forward.
Now, of course, lots of ego, lots of me, me, me in this statement by Yudkowsky.
But he was piled on by so many people after this tweet.
And I think that's just anger coming out, saying, we don't know
whether the AI is going to kill us. I mean, and I've said this before, we seriously don't have any
proof or even any rational arguments about how this will happen. And now you've basically
kneecapped the world's most popular startup because of it. So I do think the AI doom narrative is about
to go on trial in a big way. And now let's get to what Sam Altman does next. So Altman is out,
Greg Brockman was pushed off the board and then resigned his position within the company.
And then there have been three more prominent OpenAI researchers who have left as well. You have Jakub Pachocki, the company's
director of research; this is according to The Information. Aleksander Madry, the head of a team
evaluating the potential risks from AI, and Szymon Sidor, a seven-year researcher. They all resigned.
Look, I do think that they all believe in researching this. They are in an amazing position to push
the category forward. What happens next? It seems to me like they are going to start their own company,
that they're going to continue to press forward.
They're going to continue this mission.
And by the way, again,
they weren't exactly building on proprietary technology.
I mean, some of this, a lot of this,
almost all of this, is built on the transformer model
that was researched within Google.
You know, you take a couple million to train up a new model.
You could potentially get right back in the game
and maybe do it from scratch without any tech debt,
any legacy risk.
And that's clearly what a lot of people think is going to happen.
People in the know.
So here's Jim Fan from Nvidia.
He's their senior AI scientist, writing on LinkedIn.
And by the way, he was an intern at OpenAI.
He says, zooming out, we will inevitably see the birth of a new heavyweight competitor or new heavyweight competitors.
And that isn't necessarily a bad thing for the community.
AI will be a bit more decentralized.
New capital will pour in.
Every party will act with more urgency.
And new PhD grads will have at least one extra job offer to consider.
So it certainly seems like we're about
to see the formation of another AI startup, perhaps,
well, I would say likely, with some of these OpenAI exiles.
I don't think they've given up on the mission.
And they're certainly going to have plenty of money thrown at them.
I mean, you're starting to see lots of even joke letters coming in, you know, on Twitter
of people who are talking about how they want to fund this company.
But, I mean, if you have Sam Altman, Greg Brockman, and just three of these researchers
raise their hands and say, hey, we want to do this, then
think about how much money they could raise.
I mean, it's definitely in the billions.
The question is just what number of billions that's going to look like.
And if that's the case, it really does put this doomer strategy to the test, right?
Because if they really were so concerned about AI safety, well, maybe you do what you can do.
But effectively, they're going to unleash, you know, one more startup.
They're going to open up basically a huge gaping hole in the AI world for more startups to come in,
maybe some that don't end up being so dedicated to safety.
And the question is, what will happen to OpenAI next?
It's a relatively small company, given its number of employees compared to other big tech companies.
And losing this type of talent has to have an impact.
We talked about it on yesterday's emergency podcast.
There will be an impact there.
There will be definitely an impact to Microsoft.
And there will be a great shuffling of AI competition.
And I said this on CNBC yesterday, but the AI
Darwinism, I mean, it's about to go into another gear. And an already intense competition is just
going to be pushed up another notch. All right, before we go, I want to talk about a couple of
other possibilities that might end up, you know, showing up. I don't think the end of the story
has come out yet. I do think we're going to hear more. But I do have a couple of other
thoughts of something that might have happened. This is, of course, total speculation. So take it
for what it's worth. But I want to cover some of these other possibilities. One
is that they could have been potentially running out of money. I know the board said there was
nothing financial here. Let's see what happens. I mean, this stuff is super expensive to run. And on
the Big Technology Podcast here, Ranjan and I have talked about how the business model is just vulnerable. And so
that's potentially something that might have happened. Now, the other possibility is the
breakthrough possibility, right? Which is that, you know, there's all these jokes about OpenAI
achieving artificial general intelligence, or human-level intelligence. But there is a possibility
that some massive breakthrough came out, which made the board say, okay, right now, we really need
to deal with safety. And, you know, either you're in or you're out. Maybe there was a dispute there. Maybe
that was the communication breakdown. In which case, I'd be thrilled to see, like, what this
breakthrough is. I mean, these comments with Altman and Laurene Powell Jobs, you know, are something
that I'm going to start paying way more attention to, because we have to figure out what was brewing
inside of there. You know, maybe it was something else, like that other possibility. Who knows? Maybe there
was some kind of, you know, some actual behavior issue. You know, I don't think that's what it was,
but you can never rule that out completely. And then maybe it was some combination of all of this. Okay,
obviously a live-wire story, lots breaking here. I'm going to get back to reporting, calling my
sources. We'll try to have another podcast on the feed on Wednesday to go deeper into what
happened. I don't think we'll do it before then. In the meantime, just stay tuned, and check out
Big Technology, the newsletter. Sign up for Premium if you can, and stay tuned to the podcast feed. But that is,
again, what I think is going to happen. Two big takeaways here: a reshuffling of competition in the
AI world is absolutely here. And again, for OpenAI, it's not just the deposing of a leader, but a reorientation
of direction for the company. That's exactly where it seems like we're heading, which is going to
make this space, like Jim Fan from Nvidia says, way more alive,
way more intense, way more competitive, and perhaps it will make it innovate even faster.
All right, that will do it for me here.
Thank you so much for watching if you're here on our live stream.
And thank you for listening.
We'll see you next time on Big Technology Podcast.